Radicale-2.1.11/.gitignore
*~
.*.swp
*.pyc
__pycache__
/MANIFEST
/build
/dist
/Radicale.egg-info
.cache
.coverage
.eggs
.project
.pydevproject
.settings
.tox

Radicale-2.1.11/.travis.yml
language: python
sudo: false
matrix:
  include:
    - os: linux
      python: 3.4
    - os: linux
      python: 3.5
    - os: linux
      python: 3.6
    - os: osx
      language: generic
before_install:
  - |
    if [ "${TRAVIS_OS_NAME}" == osx ]; then
      rm '/usr/local/include/c++'
      brew install python3 ||
        brew upgrade python3
    fi
  - pip3 install --upgrade six
install:
  - pip3 install --upgrade --editable .[test,md5,bcrypt]
script:
  - python3 setup.py test
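# The same test run can be reproduced locally (a sketch assuming a Python 3
# environment; the extras are the same ones installed above):
#   pip3 install --upgrade --editable .[test,md5,bcrypt]
#   python3 setup.py test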

Radicale-2.1.11/COPYING
GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc.
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The GNU General Public License is a free, copyleft license for
software and other kinds of works.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
the GNU General Public License is intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users. We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to
any other work released this way by its authors. You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights. Therefore, you have
certain responsibilities if you distribute copies of the software, or if
you modify it: responsibilities to respect the freedom of others.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received. You must make sure that they, too, receive
or can get the source code. And you must show them these terms so they
know their rights.
Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
giving you legal permission to copy, distribute and/or modify it.
For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software. For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.
Some devices are designed to deny users access to install or run
modified versions of the software inside them, although the manufacturer
can do so. This is fundamentally incompatible with the aim of
protecting users' freedom to change the software. The systematic
pattern of such abuse occurs in the area of products for individuals to
use, which is precisely where it is most unacceptable. Therefore, we
have designed this version of the GPL to prohibit the practice for those
products. If such problems arise substantially in other domains, we
stand ready to extend this provision to those domains in future versions
of the GPL, as needed to protect the freedom of users.
Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of
software on general-purpose computers, but in those that do, we wish to
avoid the special danger that patents applied to a free program could
make it effectively proprietary. To prevent this, the GPL assures that
patents cannot be used to render the program non-free.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Use with the GNU Affero General Public License.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU Affero General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the special requirements of the GNU Affero General Public License,
section 13, concerning interaction through a network will apply to the
combination as such.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year>  <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If the program does terminal interaction, make it output a short
notice like this when it starts in an interactive mode:
<program>  Copyright (C) <year>  <name of author>
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, your program's commands
might be different; for a GUI interface, you would use an "about box".
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU GPL, see
<https://www.gnu.org/licenses/>.
The GNU General Public License does not permit incorporating your program
into proprietary programs. If your program is a subroutine library, you
may consider it more useful to permit linking proprietary applications with
the library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License. But first, please read
<https://www.gnu.org/licenses/why-not-lgpl.html>.

Radicale-2.1.11/Dockerfile
FROM alpine:latest
# Version of Radicale (e.g. 2.0.0)
ARG VERSION=master
# Install dependencies
RUN apk add --no-cache \
    python3 \
    python3-dev \
    build-base \
    libffi-dev \
    ca-certificates \
    openssl
# Install Radicale
RUN wget --quiet https://github.com/Kozea/Radicale/archive/${VERSION}.tar.gz --output-document=radicale.tar.gz && \
    tar xzf radicale.tar.gz && \
    pip3 install ./Radicale-${VERSION}[md5,bcrypt] && \
    rm -r radicale.tar.gz Radicale-${VERSION}
# Install dependencies for Radicale<2.1.9
RUN pip3 install passlib[bcrypt]
# Remove build dependencies
RUN apk del \
    python3-dev \
    build-base \
    libffi-dev
# Persistent storage for data (Mount it somewhere on the host!)
VOLUME /var/lib/radicale
# Configuration data (Put the "config" file here!)
VOLUME /etc/radicale
# TCP port of Radicale (Publish it on a host interface!)
EXPOSE 5232
# Run Radicale (Configure it here or provide a "config" file!)
CMD ["radicale", "--hosts", "0.0.0.0:5232"]

Radicale-2.1.11/MANIFEST.in
include COPYING NEWS.md README.md
include config logging rights
include radicale.py radicale.fcgi radicale.wsgi

Radicale-2.1.11/NEWS.md
News
====
2.1.11 - Wild Radish
--------------------
This release is compatible with version 2.0.0.
* Fix moving items between collections
2.1.10 - Wild Radish
--------------------
This release is compatible with version 2.0.0.
* Update required versions for dependencies
* Get ``RADICALE_CONFIG`` from WSGI environ
* Improve HTTP status codes
* Fix race condition in storage lock creation
* Raise default limits for content length and timeout
* Log output from hook
2.1.9 - Wild Radish
-------------------
This release is compatible with version 2.0.0.
* Specify versions for dependencies
* Move WSGI initialization into module
* Check if ``REPORT`` method is actually supported
* Include ``rights`` file in source distribution
* Specify ``md5`` and ``bcrypt`` as extras
* Improve logging messages
* Windows: Fix crash when item path is a directory
2.1.8 - Wild Radish
-------------------
This release is compatible with version 2.0.0.
* Flush files before fsync'ing
2.1.7 - Wild Radish
-------------------
This release is compatible with version 2.0.0.
* Don't print warning when cache format changes
* Add documentation for ``BaseAuth``
* Add ``is_authenticated2(login, user, password)`` to ``BaseAuth``
* Fix names of custom properties in PROPFIND requests with
``D:propname`` or ``D:allprop``
* Return all properties in PROPFIND requests with ``D:propname`` or
``D:allprop``
* Allow ``D:displayname`` property on all collections
* Answer with ``D:unauthenticated`` for ``D:current-user-principal`` property
when not logged in
* Remove non-existing ``ICAL:calendar-color`` and ``C:calendar-timezone``
properties from PROPFIND requests with ``D:propname`` or ``D:allprop``
* Add ``D:owner`` property to calendar and address book objects
* Remove ``D:getetag`` and ``D:getlastmodified`` properties from regular
collections
2.1.6 - Wild Radish
-------------------
This release is compatible with version 2.0.0.
* Fix content-type of VLIST
* Specify correct COMPONENT in content-type of VCALENDAR
* Cache COMPONENT of calendar objects (improves speed with some clients)
* Stricter parsing of filters
* Improve support for CardDAV filter
* Fix some smaller bugs in CalDAV filter
* Add X-WR-CALNAME and X-WR-CALDESC to calendars downloaded via HTTP/WebDAV
* Use X-WR-CALNAME and X-WR-CALDESC from calendars published via WebDAV
2.1.5 - Wild Radish
-------------------
This release is compatible with version 2.0.0.
* Add ``--verify-storage`` command-line argument
* Allow comments in the htpasswd file
* Don't strip whitespaces from user names and passwords in the htpasswd file
* Remove cookies from logging output
* Allow uploads of whole collections with many components
* Show warning message if server.timeout is used with Python < 3.5.2
2.1.4 - Wild Radish
-------------------
This release is compatible with version 2.0.0.
* Fix incorrect time range matching and calculation for some edge-cases with
rescheduled recurrences
* Fix owner property
2.1.3 - Wild Radish
-------------------
This release is compatible with version 2.0.0.
* Enable timeout for SSL handshakes and move them out of the main thread
* Create cache entries during upload of items
* Stop built-in server on Windows when Ctrl+C is pressed
* Prevent slow down when multiple requests hit a collection during cache warm-up
2.1.2 - Wild Radish
-------------------
This release is compatible with version 2.0.0.
* Remove workarounds for bugs in VObject < 0.9.5
* Error checking of collection tags and associated components
* Improve error checking of uploaded collections and components
* Don't delete empty collection properties implicitly
* Improve logging of VObject serialization
2.1.1 - Wild Radish Again
-------------------
This release is compatible with version 2.0.0.
* Add missing UIDs instead of failing
* Improve error checking of calendar and address book objects
* Fix upload of whole address books
2.1.0 - Wild Radish
-------------------
This release is compatible with version 2.0.0.
* Built-in web interface for creating and managing address books and calendars
* can be extended with web plugins
* Much faster storage backend
* Significant reduction in memory usage
* Improved logging
* Include paths (of invalid items / requests) in log messages
* Include configuration values causing problems in log messages
* Log warning message for invalid requests by clients
* Log error message for invalid files in the storage backend
* No stack traces unless debugging is enabled
* Time range filter also regards overwritten recurrences
* Items that couldn't be filtered because of bugs in VObject are always
returned (and a warning message is logged)
* Basic error checking of configuration files
* File system locking isn't disabled implicitly anymore; instead, a new
configuration option was introduced
* The permissions of the lock file are not changed anymore
* Support for sync-token
* Support for client-side SSL certificates
* Rights plugins can decide if access to an item is granted explicitly
* Respond with 403 instead of 404 for principal collections of non-existing
users when ``owner_only`` plugin is used (information leakage)
* Authentication plugins can provide the login and password from the
environment
* New ``remote_user`` plugin that gets the login from the ``REMOTE_USER``
environment variable (for WSGI servers)
* New ``http_x_remote_user`` plugin that gets the login from the
``X-Remote-User`` HTTP header (for reverse proxies)
2.0.0 - Little Big Radish
-------------------------
This release is not compatible with the 1.x.x versions. See
http://radicale.org/1to2/ if you want to switch from 1.x.x to
2.0.0.
* Support Python 3.3+ only, Python 2 is not supported anymore
* Keep only one simple filesystem-based storage system
* Remove built-in Git support
* Remove built-in authentication modules
* Keep the WSGI interface, use Python HTTP server by default
* Use a real iCal parser, rely on the "vobject" external module
* Add a solid calendar discovery
* Respect the difference between "files" and "folders", don't rely on slashes
* Remove the calendar creation with GET requests
* Be stateless
* Use a file locker
* Add threading
* Get atomic writes
* Support new filters
* Support read-only permissions
* Allow External plugins for authentication, rights management, storage and
version control
1.1.4 - Fifth Law of Nature
---------------------------
* Use ``shutil.move`` for ``--export-storage``
1.1.3 - Fourth Law of Nature
----------------------------
* Add a ``--export-storage=FOLDER`` command-line argument (by Unrud, see #606)
1.1.2 - Third Law of Nature
---------------------------
* **Security fix**: Add a random timer to avoid timing oracles and simple
bruteforce attacks when using the htpasswd authentication method.
* Various minor fixes.
1.1.1 - Second Law of Nature
----------------------------
* Fix the owner_write rights rule
1.1 - Law of Nature
-------------------
One feature in this release is **not backward compatible**:
* Use the first matching section for rights (inspired from daald)
Now, the first section matching the path and current user in your custom rights
file is used. In the previous versions, the most permissive rights of all the
matching sections were applied. This new behaviour gives a simple way to make
specific rules at the top of the file independent from the generic ones.
Many **improvements in this release are related to security**; you should
upgrade Radicale as soon as possible:
* Improve the regex used for well-known URIs (by Unrud)
* Prevent regex injection in rights management (by Unrud)
* Prevent crafted HTTP request from calling arbitrary functions (by Unrud)
* Improve URI sanitation and conversion to filesystem path (by Unrud)
* Decouple the daemon from its parent environment (by Unrud)
Some bugs have been fixed and little enhancements have been added:
* Assign new items to the correct key (by Unrud)
* Avoid race condition in PID file creation (by Unrud)
* Improve the docker version (by cdpb)
* Encode message and committer for git commits
* Test with Python 3.5
1.0.1 - Sunflower Again
-----------------------
* Update the version because of a **stupid** "feature"™ of PyPI
1.0 - Sunflower
---------------
* Enhanced performances (by Mathieu Dupuy)
* Add MD5-APR1 and BCRYPT for htpasswd-based authentication (by Jan-Philip Gehrcke)
* Use PAM service (by Stephen Paul Weber)
* Don't discard PROPPATCH on empty collections (by Markus Unterwaditzer)
* Write the path of the collection in the git message (by Matthew Monaco)
* Tests launched on Travis
0.10 - Lovely Endless Grass
---------------------------
* Support well-known URLs (by Mathieu Dupuy)
* Fix collection discovery (by Markus Unterwaditzer)
* Reload logger config on SIGHUP (by Élie Bouttier)
* Remove props files when deleting a collection (by Vincent Untz)
* Support salted SHA1 passwords (by Marc Kleine-Budde)
* Don't spam the logs about non-SSL IMAP connections to localhost (by Giel van Schijndel)
0.9 - Rivers
------------
* Custom handlers for auth, storage and rights (by Sergey Fursov)
* 1-file-per-event storage (by Jean-Marc Martins)
* Git support for filesystem storages (by Jean-Marc Martins)
* DB storage working with PostgreSQL, MariaDB and SQLite (by Jean-Marc Martins)
* Clean rights manager based on regular expressions (by Sweil)
* Support of contacts for Apple's clients
* Support colors (by Jochen Sprickerhof)
* Decode URLs in XML (by Jean-Marc Martins)
* Fix PAM authentication (by Stepan Henek)
* Use consistent etags (by 9m66p93w)
* Use consistent sorting order (by Daniel Danner)
* Return 401 on unauthorized DELETE requests (by Eduard Braun)
* Move pid file creation in child process (by Mathieu Dupuy)
* Allow requests without base_prefix (by jheidemann)
0.8 - Rainbow
-------------
* New authentication and rights management modules (by Matthias Jordan)
* Experimental database storage
* Command-line option for custom configuration file (by Mark Adams)
* Root URL not at the root of a domain (by Clint Adams, Fabrice Bellet, Vincent Untz)
* Improved support for iCal, CalDAVSync, CardDAVSync, CalDavZAP and CardDavMATE
* Empty PROPFIND requests handled (by Christoph Polcin)
* Colon allowed in passwords
* Configurable realm message
0.7.1 - Waterfalls
------------------
* Many address books fixes
* New IMAP ACL (by Daniel Aleksandersen)
* PAM ACL fixed (by Daniel Aleksandersen)
* Courier ACL fixed (by Benjamin Frank)
* Always set display name to collections (by Oskari Timperi)
* Various DELETE responses fixed
0.7 - Eternal Sunshine
----------------------
* Repeating events
* Collection deletion
* Courier and PAM authentication methods
* CardDAV support
* Custom LDAP filters supported
0.6.4 - Tulips
--------------
* Fix the installation with Python 3.1
0.6.3 - Red Roses
-----------------
* MOVE requests fixed
* Faster REPORT answers
* Executable script moved into the package
0.6.2 - Seeds
-------------
* iPhone and iPad support fixed
* Backslashes replaced by slashes in PROPFIND answers on Windows
* PyPI archive set as default download URL
0.6.1 - Growing Up
------------------
* Example files included in the tarball
* htpasswd support fixed
* Redirection loop bug fixed
* Testing message on GET requests
0.6 - Sapling
-------------
* WSGI support
* IPv6 support
* Smart, verbose and configurable logs
* Apple iCal 4 and iPhone support (by Łukasz Langa)
* KDE KOrganizer support
* LDAP auth backend (by Corentin Le Bail)
* Public and private calendars (by René Neumann)
* PID file
* MOVE requests management
* Journal entries support
* Drop Python 2.5 support
0.5 - Historical Artifacts
--------------------------
* Calendar depth
* MacOS and Windows support
* HEAD requests management
* htpasswd user from calendar path
0.4 - Hot Days Back
-------------------
* Personal calendars
* Last-Modified HTTP header
* ``no-ssl`` and ``foreground`` options
* Default configuration file
0.3 - Dancing Flowers
---------------------
* Evolution support
* Version management
0.2 - Snowflakes
----------------
* Sunbird pre-1.0 support
* SSL connection
* Htpasswd authentication
* Daemon mode
* User configuration
* Twisted dependency removed
* Python 3 support
* Real URLs for PUT and DELETE
* Concurrent modification reported to users
* Many bugs fixed (by Roger Wenham)
0.1 - Crazy Vegetables
----------------------
* First release
* Lightning/Sunbird 0.9 compatibility
* Easy installer

Radicale-2.1.11/README.md
Read Me
=======
Radicale is a free and open-source CalDAV and CardDAV server.
For complete documentation, please visit the
[Radicale online documentation](http://www.radicale.org/documentation)

Radicale-2.1.11/config
# -*- mode: conf -*-
# vim:ft=cfg
# Config file for Radicale - A simple calendar server
#
# Place it into /etc/radicale/config (global)
# or ~/.config/radicale/config (user)
#
# The current values are the default ones
[server]
# CalDAV server hostnames separated by a comma
# IPv4 syntax: address:port
# IPv6 syntax: [address]:port
# For example: 0.0.0.0:9999, [::]:9999
#hosts = 127.0.0.1:5232
# Daemon flag
#daemon = False
# File storing the PID in daemon mode
#pid =
# Max parallel connections
#max_connections = 20
# Max size of request body (bytes)
#max_content_length = 100000000
# Socket timeout (seconds)
#timeout = 30
# SSL flag, enable HTTPS protocol
#ssl = False
# SSL certificate path
#certificate = /etc/ssl/radicale.cert.pem
# SSL private key
#key = /etc/ssl/radicale.key.pem
# CA certificate for validating clients. This can be used to secure
# TCP traffic between Radicale and a reverse proxy
#certificate_authority =
# SSL Protocol used. See python's ssl module for available values
#protocol = PROTOCOL_TLSv1_2
# Available ciphers. See python's ssl module for available ciphers
#ciphers =
# Reverse DNS to resolve client address in logs
#dns_lookup = True
# Message displayed in the client when a password is needed
#realm = Radicale - Password Required
[encoding]
# Encoding for responding requests
#request = utf-8
# Encoding for storing local collections
#stock = utf-8
[auth]
# Authentication method
# Value: none | htpasswd | remote_user | http_x_remote_user
#type = none
# Htpasswd filename
#htpasswd_filename = /etc/radicale/users
# Htpasswd encryption method
# Value: plain | sha1 | ssha | crypt | bcrypt | md5
# Only bcrypt can be considered secure.
# bcrypt and md5 require the passlib library to be installed.
#htpasswd_encryption = bcrypt
# Incorrect authentication delay (seconds)
#delay = 1
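# A sketch of one way to create the htpasswd file named above, assuming the
# Apache "htpasswd" utility is available (-B selects bcrypt, -c creates the
# file; drop -c when adding further users):
#   htpasswd -B -c /etc/radicale/users alice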
[rights]
# Rights backend
# Value: none | authenticated | owner_only | owner_write | from_file
#type = owner_only
# File for rights management from_file
#file = /etc/radicale/rights
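# A hedged sketch of what such a rights file can look like (section names are
# arbitrary; user and collection are regular expressions, and %(login)s is
# replaced by the authenticated user):
#   [owner]
#   user = .+
#   collection = %(login)s(/.*)?
#   permission = rw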
[storage]
# Storage backend
# Value: multifilesystem
#type = multifilesystem
# Folder for storing local collections, created if not present
#filesystem_folder = /var/lib/radicale/collections
# Lock the storage. Never start multiple instances of Radicale or edit the
# storage externally while Radicale is running if disabled.
#filesystem_locking = True
# Sync all changes to disk during requests. (This can impair performance.)
# Disabling it increases the risk of data loss when the system crashes or
# power fails!
#filesystem_fsync = True
# Delete sync tokens that are older (seconds)
#max_sync_token_age = 2592000
# Close the lock file when no more clients are waiting.
# This option is not very useful in general, but on Windows files that are
# opened cannot be deleted.
#filesystem_close_lock_file = False
# Command that is run after changes to storage
# Example: ([ -d .git ] || git init) && git add -A && (git diff --cached --quiet || git commit -m "Changes by "%(user)s)
#hook =
[web]
# Web interface backend
# Value: none | internal
#type = internal
[logging]
# Logging configuration file
# If no config is given, simple information is printed on the standard output
# For more information about the syntax of the configuration file, see:
# http://docs.python.org/library/logging.config.html
#config =
# Set the default logging level to debug
#debug = False
# Store all environment variables (including those set in the shell)
#full_environment = False
# Don't include passwords in logs
#mask_passwords = True
[headers]
# Additional HTTP headers
#Access-Control-Allow-Origin = *

Radicale-2.1.11/logging
# -*- mode: conf -*-
# vim:ft=cfg
# Logging config file for Radicale - A simple calendar server
#
# The recommended path for this file is /etc/radicale/logging
# The path must be specified in the logging section of the configuration file
#
# Some examples are included in Radicale's documentation, see:
# http://radicale.org/logging/
#
# Other handlers are available. For more information, see:
# http://docs.python.org/library/logging.config.html
# Loggers, handlers and formatters keys
[loggers]
# Loggers names, main configuration slots
keys = root
[handlers]
# Logging handlers, defining logging output methods
keys = console
[formatters]
# Logging formatters
keys = simple
# Loggers
[logger_root]
# Root logger
level = WARNING
handlers = console
# Handlers
[handler_console]
# Console handler
class = StreamHandler
args = (sys.stderr,)
formatter = simple
# Formatters
[formatter_simple]
# Simple output format
format = [%(thread)x] %(levelname)s: %(message)s
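# A sketch of an additional file handler (the path below is an example, not a
# default). It would be declared alongside the console handler and referenced
# from the root logger:
#   [handlers]
#   keys = console,file
#   [logger_root]
#   handlers = console,file
#   [handler_file]
#   class = FileHandler
#   args = ('/var/log/radicale/log',)
#   formatter = simple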

Radicale-2.1.11/radicale.fcgi
#!/usr/bin/env python3
#
# This file is part of Radicale Server - Calendar Server
# Copyright © 2011-2017 Guillaume Ayoub
#
# This library is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Radicale. If not, see <http://www.gnu.org/licenses/>.
"""
Radicale FastCGI Example.
Launch a Radicale FastCGI server according to configuration.
This script relies on flup but can be easily adapted to use another
WSGI-to-FastCGI mapper.
"""
from flup.server.fcgi import WSGIServer
from radicale import application
WSGIServer(application).run()

Radicale-2.1.11/radicale.py
#!/usr/bin/env python3
#
# This file is part of Radicale Server - Calendar Server
# Copyright © 2008 Nicolas Kandel
# Copyright © 2008 Pascal Halter
# Copyright © 2008-2017 Guillaume Ayoub
#
# This library is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Radicale. If not, see <http://www.gnu.org/licenses/>.
"""
Radicale CalDAV Server.
Launch the server according to configuration and command-line options.
"""
import radicale.__main__
radicale.__main__.run()

Radicale-2.1.11/radicale.wsgi
#!/usr/bin/env python3
#
# This file is part of Radicale Server - Calendar Server
# Copyright © 2011-2017 Guillaume Ayoub
#
# This library is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Radicale. If not, see <http://www.gnu.org/licenses/>.
"""
Radicale WSGI file (mod_wsgi and uWSGI compliant).
"""
from radicale import application
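# A minimal usage sketch (an assumption, not from this repository): the file
# can be served directly with uWSGI, e.g.
#     uwsgi --http 127.0.0.1:5232 --wsgi-file radicale.wsgi
# or referenced by mod_wsgi's WSGIScriptAlias directive.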
Radicale-2.1.11/radicale/ 0000775 0000000 0000000 00000000000 13370020422 0015035 5 ustar 00root root 0000000 0000000 Radicale-2.1.11/radicale/__init__.py 0000664 0000000 0000000 00000123655 13370020422 0017162 0 ustar 00root root 0000000 0000000 # This file is part of Radicale Server - Calendar Server
# Copyright © 2008 Nicolas Kandel
# Copyright © 2008 Pascal Halter
# Copyright © 2008-2017 Guillaume Ayoub
#
# This library is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Radicale. If not, see <http://www.gnu.org/licenses/>.
"""
Radicale Server module.
This module offers a WSGI application class.
To use this module, you should take a look at the file ``radicale.py`` that
should have been included in this package.
"""
import base64
import contextlib
import datetime
import io
import itertools
import logging
import os
import pkg_resources
import posixpath
import pprint
import random
import socket
import socketserver
import ssl
import sys
import threading
import time
import wsgiref.simple_server
import zlib
from http import client
from urllib.parse import unquote, urlparse
from xml.etree import ElementTree as ET
import vobject
from radicale import auth, config, log, rights, storage, web, xmlutils
VERSION = pkg_resources.get_distribution('radicale').version
NOT_ALLOWED = (
client.FORBIDDEN, (("Content-Type", "text/plain"),),
"Access to the requested resource forbidden.")
FORBIDDEN = (
client.FORBIDDEN, (("Content-Type", "text/plain"),),
"Action on the requested resource refused.")
BAD_REQUEST = (
client.BAD_REQUEST, (("Content-Type", "text/plain"),), "Bad Request")
NOT_FOUND = (
client.NOT_FOUND, (("Content-Type", "text/plain"),),
"The requested resource could not be found.")
CONFLICT = (
client.CONFLICT, (("Content-Type", "text/plain"),),
"Conflict in the request.")
WEBDAV_PRECONDITION_FAILED = (
client.CONFLICT, (("Content-Type", "text/plain"),),
"WebDAV precondition failed.")
METHOD_NOT_ALLOWED = (
client.METHOD_NOT_ALLOWED, (("Content-Type", "text/plain"),),
"The method is not allowed on the requested resource.")
PRECONDITION_FAILED = (
client.PRECONDITION_FAILED,
(("Content-Type", "text/plain"),), "Precondition failed.")
REQUEST_TIMEOUT = (
client.REQUEST_TIMEOUT, (("Content-Type", "text/plain"),),
"Connection timed out.")
REQUEST_ENTITY_TOO_LARGE = (
client.REQUEST_ENTITY_TOO_LARGE, (("Content-Type", "text/plain"),),
"Request body too large.")
REMOTE_DESTINATION = (
client.BAD_GATEWAY, (("Content-Type", "text/plain"),),
"Remote destination not supported.")
DIRECTORY_LISTING = (
client.FORBIDDEN, (("Content-Type", "text/plain"),),
"Directory listings are not supported.")
INTERNAL_SERVER_ERROR = (
client.INTERNAL_SERVER_ERROR, (("Content-Type", "text/plain"),),
"A server error occurred. Please contact the administrator.")
DAV_HEADERS = "1, 2, 3, calendar-access, addressbook, extended-mkcol"
class HTTPServer(wsgiref.simple_server.WSGIServer):
"""HTTP server."""
# These class attributes must be set before creating instance
client_timeout = None
max_connections = None
logger = None
def __init__(self, address, handler, bind_and_activate=True):
"""Create server."""
ipv6 = ":" in address[0]
if ipv6:
self.address_family = socket.AF_INET6
# Do not bind and activate, as we might change socket options
super().__init__(address, handler, False)
if ipv6:
# Only allow IPv6 connections to the IPv6 socket
self.socket.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 1)
if self.max_connections:
self.connections_guard = threading.BoundedSemaphore(
self.max_connections)
else:
# use dummy context manager
self.connections_guard = contextlib.ExitStack()
if bind_and_activate:
try:
self.server_bind()
self.server_activate()
except BaseException:
self.server_close()
raise
if self.client_timeout and sys.version_info < (3, 5, 2):
self.logger.warning("Using server.timeout with Python < 3.5.2 "
"can cause network connection failures")
def get_request(self):
# Set timeout for client
_socket, address = super().get_request()
if self.client_timeout:
_socket.settimeout(self.client_timeout)
return _socket, address
def handle_error(self, request, client_address):
if issubclass(sys.exc_info()[0], socket.timeout):
self.logger.info("client timed out", exc_info=True)
else:
self.logger.error("An exception occurred during request: %s",
sys.exc_info()[1], exc_info=True)
class HTTPSServer(HTTPServer):
"""HTTPS server."""
# These class attributes must be set before creating instance
certificate = None
key = None
protocol = None
ciphers = None
certificate_authority = None
def __init__(self, address, handler):
"""Create server by wrapping HTTP socket in an SSL socket."""
super().__init__(address, handler, bind_and_activate=False)
self.socket = ssl.wrap_socket(
self.socket, self.key, self.certificate, server_side=True,
cert_reqs=ssl.CERT_REQUIRED if self.certificate_authority else
ssl.CERT_NONE,
ca_certs=self.certificate_authority or None,
ssl_version=self.protocol, ciphers=self.ciphers,
do_handshake_on_connect=False)
self.server_bind()
self.server_activate()
class ThreadedHTTPServer(socketserver.ThreadingMixIn, HTTPServer):
def process_request_thread(self, request, client_address):
with self.connections_guard:
return super().process_request_thread(request, client_address)
class ThreadedHTTPSServer(socketserver.ThreadingMixIn, HTTPSServer):
def process_request_thread(self, request, client_address):
try:
try:
request.do_handshake()
except socket.timeout:
raise
except Exception as e:
raise RuntimeError("SSL handshake failed: %s" % e) from e
except Exception:
try:
self.handle_error(request, client_address)
finally:
self.shutdown_request(request)
return
with self.connections_guard:
return super().process_request_thread(request, client_address)
class RequestHandler(wsgiref.simple_server.WSGIRequestHandler):
"""HTTP requests handler."""
# These class attributes must be set before creating instance
logger = None
def __init__(self, *args, **kwargs):
# Store exception for logging
self.error_stream = io.StringIO()
super().__init__(*args, **kwargs)
def get_stderr(self):
return self.error_stream
def log_message(self, *args, **kwargs):
"""Disable inner logging management."""
def get_environ(self):
env = super().get_environ()
if hasattr(self.connection, "getpeercert"):
# The certificate can be evaluated by the auth module
env["REMOTE_CERTIFICATE"] = self.connection.getpeercert()
# Parent class only tries latin1 encoding
env["PATH_INFO"] = unquote(self.path.split("?", 1)[0])
return env
def handle(self):
super().handle()
# Log exception
error = self.error_stream.getvalue().strip("\n")
if error:
self.logger.error(
"An unhandled exception occurred during request:\n%s" % error)
class Application:
"""WSGI application managing collections."""
def __init__(self, configuration, logger):
"""Initialize application."""
super().__init__()
self.configuration = configuration
self.logger = logger
self.Auth = auth.load(configuration, logger)
self.Collection = storage.load(configuration, logger)
self.Rights = rights.load(configuration, logger)
self.Web = web.load(configuration, logger)
self.encoding = configuration.get("encoding", "request")
def headers_log(self, environ):
"""Sanitize headers for logging."""
request_environ = dict(environ)
# Remove environment variables
if not self.configuration.getboolean("logging", "full_environment"):
for shell_variable in os.environ:
request_environ.pop(shell_variable, None)
# Mask passwords
mask_passwords = self.configuration.getboolean(
"logging", "mask_passwords")
authorization = request_environ.get("HTTP_AUTHORIZATION", "")
if mask_passwords and authorization.startswith("Basic"):
request_environ["HTTP_AUTHORIZATION"] = "Basic **masked**"
if request_environ.get("HTTP_COOKIE"):
request_environ["HTTP_COOKIE"] = "**masked**"
return request_environ
def decode(self, text, environ):
"""Try to magically decode ``text`` according to given ``environ``."""
# List of charsets to try
charsets = []
# First append content charset given in the request
content_type = environ.get("CONTENT_TYPE")
if content_type and "charset=" in content_type:
charsets.append(
content_type.split("charset=")[1].split(";")[0].strip())
# Then append default Radicale charset
charsets.append(self.encoding)
# Then append various fallbacks
charsets.append("utf-8")
charsets.append("iso8859-1")
# Try to decode
for charset in charsets:
try:
return text.decode(charset)
except UnicodeDecodeError:
pass
raise UnicodeDecodeError
def collect_allowed_items(self, items, user):
"""Get items from request that user is allowed to access."""
read_allowed_items = []
write_allowed_items = []
for item in items:
if isinstance(item, storage.BaseCollection):
path = storage.sanitize_path("/%s/" % item.path)
can_read = self.Rights.authorized(user, path, "r")
can_write = self.Rights.authorized(user, path, "w")
target = "collection %r" % item.path
else:
path = storage.sanitize_path("/%s/%s" % (item.collection.path,
item.href))
can_read = self.Rights.authorized_item(user, path, "r")
can_write = self.Rights.authorized_item(user, path, "w")
target = "item %r from %r" % (item.href, item.collection.path)
text_status = []
if can_read:
text_status.append("read")
read_allowed_items.append(item)
if can_write:
text_status.append("write")
write_allowed_items.append(item)
self.logger.debug(
"%s has %s access to %s",
repr(user) if user else "anonymous user",
" and ".join(text_status) if text_status else "NO", target)
return read_allowed_items, write_allowed_items
def __call__(self, environ, start_response):
try:
status, headers, answers = self._handle_request(environ)
except Exception as e:
try:
method = str(environ["REQUEST_METHOD"])
except Exception:
method = "unknown"
try:
path = str(environ.get("PATH_INFO", ""))
except Exception:
path = ""
self.logger.error("An exception occurred during %s request on %r: "
"%s", method, path, e, exc_info=True)
status, headers, answer = INTERNAL_SERVER_ERROR
answer = answer.encode("ascii")
status = "%d %s" % (
status, client.responses.get(status, "Unknown"))
headers = [("Content-Length", str(len(answer)))] + list(headers)
answers = [answer]
start_response(status, headers)
return answers
def _handle_request(self, environ):
"""Manage a request."""
def response(status, headers=(), answer=None):
headers = dict(headers)
# Set content length
if answer:
if hasattr(answer, "encode"):
self.logger.debug("Response content:\n%s", answer)
headers["Content-Type"] += "; charset=%s" % self.encoding
answer = answer.encode(self.encoding)
accept_encoding = [
encoding.strip() for encoding in
environ.get("HTTP_ACCEPT_ENCODING", "").split(",")
if encoding.strip()]
if "gzip" in accept_encoding:
zcomp = zlib.compressobj(wbits=16 + zlib.MAX_WBITS)
answer = zcomp.compress(answer) + zcomp.flush()
headers["Content-Encoding"] = "gzip"
headers["Content-Length"] = str(len(answer))
# Add extra headers set in configuration
if self.configuration.has_section("headers"):
for key in self.configuration.options("headers"):
headers[key] = self.configuration.get("headers", key)
# Start response
time_end = datetime.datetime.now()
status = "%d %s" % (
status, client.responses.get(status, "Unknown"))
self.logger.info(
"%s response status for %r%s in %.3f seconds: %s",
environ["REQUEST_METHOD"], environ.get("PATH_INFO", ""),
depthinfo, (time_end - time_begin).total_seconds(), status)
# Return response content
return status, list(headers.items()), [answer] if answer else []
remote_host = "unknown"
if environ.get("REMOTE_HOST"):
remote_host = repr(environ["REMOTE_HOST"])
elif environ.get("REMOTE_ADDR"):
remote_host = environ["REMOTE_ADDR"]
if environ.get("HTTP_X_FORWARDED_FOR"):
remote_host = "%r (forwarded by %s)" % (
environ["HTTP_X_FORWARDED_FOR"], remote_host)
remote_useragent = ""
if environ.get("HTTP_USER_AGENT"):
remote_useragent = " using %r" % environ["HTTP_USER_AGENT"]
depthinfo = ""
if environ.get("HTTP_DEPTH"):
depthinfo = " with depth %r" % environ["HTTP_DEPTH"]
time_begin = datetime.datetime.now()
self.logger.info(
"%s request for %r%s received from %s%s",
environ["REQUEST_METHOD"], environ.get("PATH_INFO", ""), depthinfo,
remote_host, remote_useragent)
headers = pprint.pformat(self.headers_log(environ))
self.logger.debug("Request headers:\n%s", headers)
# Let reverse proxies overwrite SCRIPT_NAME
if "HTTP_X_SCRIPT_NAME" in environ:
# script_name must be removed from PATH_INFO by the client.
unsafe_base_prefix = environ["HTTP_X_SCRIPT_NAME"]
self.logger.debug("Script name overwritten by client: %r",
unsafe_base_prefix)
else:
# SCRIPT_NAME is already removed from PATH_INFO, according to the
# WSGI specification.
unsafe_base_prefix = environ.get("SCRIPT_NAME", "")
# Sanitize base prefix
base_prefix = storage.sanitize_path(unsafe_base_prefix).rstrip("/")
self.logger.debug("Sanitized script name: %r", base_prefix)
# Sanitize request URI (a WSGI server uses an empty path to indicate
# that the URL targets the application root without a trailing slash)
path = storage.sanitize_path(environ.get("PATH_INFO", ""))
self.logger.debug("Sanitized path: %r", path)
# Get function corresponding to method
function = getattr(self, "do_%s" % environ["REQUEST_METHOD"].upper())
# If "/.well-known" is not available, clients query "/"
if path == "/.well-known" or path.startswith("/.well-known/"):
return response(*NOT_FOUND)
# Ask authentication backend to check rights
external_login = self.Auth.get_external_login(environ)
authorization = environ.get("HTTP_AUTHORIZATION", "")
if external_login:
login, password = external_login
elif authorization.startswith("Basic"):
authorization = authorization[len("Basic"):].strip()
login, password = self.decode(base64.b64decode(
authorization.encode("ascii")), environ).split(":", 1)
else:
# DEPRECATED: use remote_user backend instead
login = environ.get("REMOTE_USER", "")
password = ""
user = self.Auth.map_login_to_user(login)
if not user:
is_authenticated = True
elif not storage.is_safe_path_component(user):
# Prevent usernames like "user/calendar.ics"
self.logger.info("Refused unsafe username: %r", user)
is_authenticated = False
else:
is_authenticated = self.Auth.is_authenticated2(login, user,
password)
if not is_authenticated:
self.logger.info("Failed login attempt: %r", user)
# Random delay to avoid timing oracles and bruteforce attacks
delay = self.configuration.getfloat("auth", "delay")
if delay > 0:
random_delay = delay * (0.5 + random.random())
self.logger.debug("Sleeping %.3f seconds", random_delay)
time.sleep(random_delay)
else:
self.logger.info("Successful login: %r", user)
# Create principal collection
if user and is_authenticated:
principal_path = "/%s/" % user
if self.Rights.authorized(user, principal_path, "w"):
with self.Collection.acquire_lock("r", user):
principal = next(
self.Collection.discover(principal_path, depth="1"),
None)
if not principal:
with self.Collection.acquire_lock("w", user):
try:
self.Collection.create_collection(principal_path)
except ValueError as e:
self.logger.warning("Failed to create principal "
"collection %r: %s", user, e)
is_authenticated = False
else:
self.logger.warning("Access to principal path %r denied by "
"rights backend", principal_path)
# Verify content length
content_length = int(environ.get("CONTENT_LENGTH") or 0)
if content_length:
max_content_length = self.configuration.getint(
"server", "max_content_length")
if max_content_length and content_length > max_content_length:
self.logger.info(
"Request body too large: %d", content_length)
return response(*REQUEST_ENTITY_TOO_LARGE)
if is_authenticated:
status, headers, answer = function(
environ, base_prefix, path, user)
if (status, headers, answer) == NOT_ALLOWED:
self.logger.info("Access to %r denied for %s", path,
repr(user) if user else "anonymous user")
else:
status, headers, answer = NOT_ALLOWED
if (status, headers, answer) == NOT_ALLOWED and not (
user and is_authenticated) and not external_login:
# Unknown or unauthorized user
self.logger.debug("Asking client for authentication")
status = client.UNAUTHORIZED
realm = self.configuration.get("server", "realm")
headers = dict(headers)
headers.update({
"WWW-Authenticate":
"Basic realm=\"%s\"" % realm})
return response(status, headers, answer)
def _access(self, user, path, permission, item=None):
"""Check if ``user`` can access ``path`` or the parent collection.
``permission`` must either be "r" or "w".
If ``item`` is given, only access to that class of item is checked.
"""
allowed = False
if not item or isinstance(item, storage.BaseCollection):
allowed |= self.Rights.authorized(user, path, permission)
if not item or not isinstance(item, storage.BaseCollection):
allowed |= self.Rights.authorized_item(user, path, permission)
return allowed
def _read_raw_content(self, environ):
content_length = int(environ.get("CONTENT_LENGTH") or 0)
if not content_length:
return b""
content = environ["wsgi.input"].read(content_length)
if len(content) < content_length:
raise RuntimeError("Request body too short: %d" % len(content))
return content
def _read_content(self, environ):
content = self.decode(self._read_raw_content(environ), environ)
self.logger.debug("Request content:\n%s", content)
return content
def _read_xml_content(self, environ):
content = self.decode(self._read_raw_content(environ), environ)
if not content:
return None
try:
xml_content = ET.fromstring(content)
except ET.ParseError as e:
self.logger.debug("Request content (Invalid XML):\n%s", content)
raise RuntimeError("Failed to parse XML: %s" % e) from e
if self.logger.isEnabledFor(logging.DEBUG):
self.logger.debug("Request content:\n%s",
xmlutils.pretty_xml(xml_content))
return xml_content
def _write_xml_content(self, xml_content):
if self.logger.isEnabledFor(logging.DEBUG):
self.logger.debug("Response content:\n%s",
xmlutils.pretty_xml(xml_content))
f = io.BytesIO()
ET.ElementTree(xml_content).write(f, encoding=self.encoding,
xml_declaration=True)
return f.getvalue()
def _webdav_error_response(self, namespace, name,
status=WEBDAV_PRECONDITION_FAILED[0]):
"""Generate XML error response."""
headers = {"Content-Type": "text/xml; charset=%s" % self.encoding}
content = self._write_xml_content(
xmlutils.webdav_error(namespace, name))
return status, headers, content
def do_DELETE(self, environ, base_prefix, path, user):
"""Manage DELETE request."""
if not self._access(user, path, "w"):
return NOT_ALLOWED
with self.Collection.acquire_lock("w", user):
item = next(self.Collection.discover(path), None)
if not self._access(user, path, "w", item):
return NOT_ALLOWED
if not item:
return NOT_FOUND
if_match = environ.get("HTTP_IF_MATCH", "*")
if if_match not in ("*", item.etag):
# ETag precondition not verified, do not delete item
return PRECONDITION_FAILED
if isinstance(item, storage.BaseCollection):
xml_answer = xmlutils.delete(base_prefix, path, item)
else:
xml_answer = xmlutils.delete(
base_prefix, path, item.collection, item.href)
headers = {"Content-Type": "text/xml; charset=%s" % self.encoding}
return client.OK, headers, self._write_xml_content(xml_answer)
def do_GET(self, environ, base_prefix, path, user):
"""Manage GET request."""
# Redirect to .web if the root URL is requested
if not path.strip("/"):
web_path = ".web"
if not environ.get("PATH_INFO"):
web_path = posixpath.join(posixpath.basename(base_prefix),
web_path)
return (client.FOUND,
{"Location": web_path, "Content-Type": "text/plain"},
"Redirected to %s" % web_path)
# Dispatch .web URL to web module
if path == "/.web" or path.startswith("/.web/"):
return self.Web.get(environ, base_prefix, path, user)
if not self._access(user, path, "r"):
return NOT_ALLOWED
with self.Collection.acquire_lock("r", user):
item = next(self.Collection.discover(path), None)
if not self._access(user, path, "r", item):
return NOT_ALLOWED
if not item:
return NOT_FOUND
if isinstance(item, storage.BaseCollection):
tag = item.get_meta("tag")
if not tag:
return DIRECTORY_LISTING
content_type = xmlutils.MIMETYPES[tag]
else:
content_type = xmlutils.OBJECT_MIMETYPES[item.name]
headers = {
"Content-Type": content_type,
"Last-Modified": item.last_modified,
"ETag": item.etag}
answer = item.serialize()
return client.OK, headers, answer
def do_HEAD(self, environ, base_prefix, path, user):
"""Manage HEAD request."""
status, headers, answer = self.do_GET(
environ, base_prefix, path, user)
return status, headers, None
def do_MKCALENDAR(self, environ, base_prefix, path, user):
"""Manage MKCALENDAR request."""
if not self.Rights.authorized(user, path, "w"):
return NOT_ALLOWED
try:
xml_content = self._read_xml_content(environ)
except RuntimeError as e:
self.logger.warning(
"Bad MKCALENDAR request on %r: %s", path, e, exc_info=True)
return BAD_REQUEST
except socket.timeout:
self.logger.debug("client timed out", exc_info=True)
return REQUEST_TIMEOUT
with self.Collection.acquire_lock("w", user):
item = next(self.Collection.discover(path), None)
if item:
return self._webdav_error_response(
"D", "resource-must-be-null")
parent_path = storage.sanitize_path(
"/%s/" % posixpath.dirname(path.strip("/")))
parent_item = next(self.Collection.discover(parent_path), None)
if not parent_item:
return CONFLICT
if (not isinstance(parent_item, storage.BaseCollection) or
parent_item.get_meta("tag")):
return FORBIDDEN
props = xmlutils.props_from_request(xml_content)
props["tag"] = "VCALENDAR"
# TODO: use this?
# timezone = props.get("C:calendar-timezone")
try:
storage.check_and_sanitize_props(props)
self.Collection.create_collection(path, props=props)
except ValueError as e:
self.logger.warning(
"Bad MKCALENDAR request on %r: %s", path, e, exc_info=True)
return BAD_REQUEST
return client.CREATED, {}, None
def do_MKCOL(self, environ, base_prefix, path, user):
"""Manage MKCOL request."""
if not self.Rights.authorized(user, path, "w"):
return NOT_ALLOWED
try:
xml_content = self._read_xml_content(environ)
except RuntimeError as e:
self.logger.warning(
"Bad MKCOL request on %r: %s", path, e, exc_info=True)
return BAD_REQUEST
except socket.timeout:
self.logger.debug("client timed out", exc_info=True)
return REQUEST_TIMEOUT
with self.Collection.acquire_lock("w", user):
item = next(self.Collection.discover(path), None)
if item:
return METHOD_NOT_ALLOWED
parent_path = storage.sanitize_path(
"/%s/" % posixpath.dirname(path.strip("/")))
parent_item = next(self.Collection.discover(parent_path), None)
if not parent_item:
return CONFLICT
if (not isinstance(parent_item, storage.BaseCollection) or
parent_item.get_meta("tag")):
return FORBIDDEN
props = xmlutils.props_from_request(xml_content)
try:
storage.check_and_sanitize_props(props)
self.Collection.create_collection(path, props=props)
except ValueError as e:
self.logger.warning(
"Bad MKCOL request on %r: %s", path, e, exc_info=True)
return BAD_REQUEST
return client.CREATED, {}, None
def do_MOVE(self, environ, base_prefix, path, user):
"""Manage MOVE request."""
raw_dest = environ.get("HTTP_DESTINATION", "")
to_url = urlparse(raw_dest)
if to_url.netloc != environ["HTTP_HOST"]:
self.logger.info("Unsupported destination address: %r", raw_dest)
# Remote destination server, not supported
return REMOTE_DESTINATION
if not self._access(user, path, "w"):
return NOT_ALLOWED
to_path = storage.sanitize_path(to_url.path)
if not (to_path + "/").startswith(base_prefix + "/"):
self.logger.warning("Destination %r from MOVE request on %r does"
"n't start with base prefix", to_path, path)
return NOT_ALLOWED
to_path = to_path[len(base_prefix):]
if not self._access(user, to_path, "w"):
return NOT_ALLOWED
with self.Collection.acquire_lock("w", user):
item = next(self.Collection.discover(path), None)
if not self._access(user, path, "w", item):
return NOT_ALLOWED
if not self._access(user, to_path, "w", item):
return NOT_ALLOWED
if not item:
return NOT_FOUND
if isinstance(item, storage.BaseCollection):
# TODO: support moving collections
return METHOD_NOT_ALLOWED
to_item = next(self.Collection.discover(to_path), None)
if isinstance(to_item, storage.BaseCollection):
return FORBIDDEN
to_parent_path = storage.sanitize_path(
"/%s/" % posixpath.dirname(to_path.strip("/")))
to_collection = next(
self.Collection.discover(to_parent_path), None)
if not to_collection:
return CONFLICT
tag = item.collection.get_meta("tag")
if not tag or tag != to_collection.get_meta("tag"):
return FORBIDDEN
if to_item and environ.get("HTTP_OVERWRITE", "F") != "T":
return PRECONDITION_FAILED
to_href = posixpath.basename(to_path.strip("/"))
try:
self.Collection.move(item, to_collection, to_href)
except ValueError as e:
self.logger.warning(
"Bad MOVE request on %r: %s", path, e, exc_info=True)
return BAD_REQUEST
return client.NO_CONTENT if to_item else client.CREATED, {}, None
def do_OPTIONS(self, environ, base_prefix, path, user):
"""Manage OPTIONS request."""
headers = {
"Allow": ", ".join(
name[3:] for name in dir(self) if name.startswith("do_")),
"DAV": DAV_HEADERS}
return client.OK, headers, None
def do_PROPFIND(self, environ, base_prefix, path, user):
"""Manage PROPFIND request."""
if not self._access(user, path, "r"):
return NOT_ALLOWED
try:
xml_content = self._read_xml_content(environ)
except RuntimeError as e:
self.logger.warning(
"Bad PROPFIND request on %r: %s", path, e, exc_info=True)
return BAD_REQUEST
except socket.timeout:
self.logger.debug("client timed out", exc_info=True)
return REQUEST_TIMEOUT
with self.Collection.acquire_lock("r", user):
items = self.Collection.discover(
path, environ.get("HTTP_DEPTH", "0"))
# take root item for rights checking
item = next(items, None)
if not self._access(user, path, "r", item):
return NOT_ALLOWED
if not item:
return NOT_FOUND
# put item back
items = itertools.chain([item], items)
read_items, write_items = self.collect_allowed_items(items, user)
headers = {"DAV": DAV_HEADERS,
"Content-Type": "text/xml; charset=%s" % self.encoding}
status, xml_answer = xmlutils.propfind(
base_prefix, path, xml_content, read_items, write_items, user)
if status == client.FORBIDDEN:
return NOT_ALLOWED
return status, headers, self._write_xml_content(xml_answer)
def do_PROPPATCH(self, environ, base_prefix, path, user):
"""Manage PROPPATCH request."""
if not self.Rights.authorized(user, path, "w"):
return NOT_ALLOWED
try:
xml_content = self._read_xml_content(environ)
except RuntimeError as e:
self.logger.warning(
"Bad PROPPATCH request on %r: %s", path, e, exc_info=True)
return BAD_REQUEST
except socket.timeout:
self.logger.debug("client timed out", exc_info=True)
return REQUEST_TIMEOUT
with self.Collection.acquire_lock("w", user):
item = next(self.Collection.discover(path), None)
if not item:
return NOT_FOUND
if not isinstance(item, storage.BaseCollection):
return FORBIDDEN
headers = {"DAV": DAV_HEADERS,
"Content-Type": "text/xml; charset=%s" % self.encoding}
try:
xml_answer = xmlutils.proppatch(base_prefix, path, xml_content,
item)
except ValueError as e:
self.logger.warning(
"Bad PROPPATCH request on %r: %s", path, e, exc_info=True)
return BAD_REQUEST
return (client.MULTI_STATUS, headers,
self._write_xml_content(xml_answer))
def do_PUT(self, environ, base_prefix, path, user):
"""Manage PUT request."""
if not self._access(user, path, "w"):
return NOT_ALLOWED
try:
content = self._read_content(environ)
except RuntimeError as e:
self.logger.warning(
"Bad PUT request on %r: %s", path, e, exc_info=True)
return BAD_REQUEST
except socket.timeout:
self.logger.debug("client timed out", exc_info=True)
return REQUEST_TIMEOUT
with self.Collection.acquire_lock("w", user):
parent_path = storage.sanitize_path(
"/%s/" % posixpath.dirname(path.strip("/")))
item = next(self.Collection.discover(path), None)
parent_item = next(self.Collection.discover(parent_path), None)
if not parent_item:
return CONFLICT
write_whole_collection = (
isinstance(item, storage.BaseCollection) or
not parent_item.get_meta("tag"))
if write_whole_collection:
if not self.Rights.authorized(user, path, "w"):
return NOT_ALLOWED
elif not self.Rights.authorized_item(user, path, "w"):
return NOT_ALLOWED
etag = environ.get("HTTP_IF_MATCH", "")
if not item and etag:
# Etag asked but no item found: item has been removed
return PRECONDITION_FAILED
if item and etag and item.etag != etag:
# Etag asked but item not matching: item has changed
return PRECONDITION_FAILED
match = environ.get("HTTP_IF_NONE_MATCH", "") == "*"
if item and match:
# Creation asked but item found: item can't be replaced
return PRECONDITION_FAILED
try:
items = tuple(vobject.readComponents(content or ""))
if write_whole_collection:
content_type = environ.get("CONTENT_TYPE",
"").split(";")[0]
tags = {value: key
for key, value in xmlutils.MIMETYPES.items()}
tag = tags.get(content_type)
if items and items[0].name == "VCALENDAR":
tag = "VCALENDAR"
elif items and items[0].name in ("VCARD", "VLIST"):
tag = "VADDRESSBOOK"
else:
tag = parent_item.get_meta("tag")
if tag == "VCALENDAR" and len(items) > 1:
raise RuntimeError("VCALENDAR collection contains %d "
"components" % len(items))
for i in items:
storage.check_and_sanitize_item(
i, is_collection=write_whole_collection, uid=item.uid
if not write_whole_collection and item else None,
tag=tag)
except Exception as e:
self.logger.warning(
"Bad PUT request on %r: %s", path, e, exc_info=True)
return BAD_REQUEST
if write_whole_collection:
props = {}
if tag:
props["tag"] = tag
if tag == "VCALENDAR" and items:
if hasattr(items[0], "x_wr_calname"):
calname = items[0].x_wr_calname.value
if calname:
props["D:displayname"] = calname
if hasattr(items[0], "x_wr_caldesc"):
caldesc = items[0].x_wr_caldesc.value
if caldesc:
props["C:calendar-description"] = caldesc
try:
storage.check_and_sanitize_props(props)
new_item = self.Collection.create_collection(
path, items, props)
except ValueError as e:
self.logger.warning(
"Bad PUT request on %r: %s", path, e, exc_info=True)
return BAD_REQUEST
else:
href = posixpath.basename(path.strip("/"))
try:
if tag and not parent_item.get_meta("tag"):
new_props = parent_item.get_meta()
new_props["tag"] = tag
storage.check_and_sanitize_props(new_props)
parent_item.set_meta_all(new_props)
new_item = parent_item.upload(href, items[0])
except ValueError as e:
self.logger.warning(
"Bad PUT request on %r: %s", path, e, exc_info=True)
return BAD_REQUEST
headers = {"ETag": new_item.etag}
return client.CREATED, headers, None
def do_REPORT(self, environ, base_prefix, path, user):
"""Manage REPORT request."""
if not self._access(user, path, "r"):
return NOT_ALLOWED
try:
xml_content = self._read_xml_content(environ)
except RuntimeError as e:
self.logger.warning(
"Bad REPORT request on %r: %s", path, e, exc_info=True)
return BAD_REQUEST
except socket.timeout:
self.logger.debug("client timed out", exc_info=True)
return REQUEST_TIMEOUT
with self.Collection.acquire_lock("r", user):
item = next(self.Collection.discover(path), None)
if not self._access(user, path, "r", item):
return NOT_ALLOWED
if not item:
return NOT_FOUND
if isinstance(item, storage.BaseCollection):
collection = item
else:
collection = item.collection
headers = {"Content-Type": "text/xml; charset=%s" % self.encoding}
try:
status, xml_answer = xmlutils.report(
base_prefix, path, xml_content, collection)
except ValueError as e:
self.logger.warning(
"Bad REPORT request on %r: %s", path, e, exc_info=True)
return BAD_REQUEST
return (status, headers, self._write_xml_content(xml_answer))
_application = None
_application_config_path = None
_application_lock = threading.Lock()
def _init_application(config_path):
global _application, _application_config_path
with _application_lock:
if _application is not None:
return
_application_config_path = config_path
configuration = config.load([config_path] if config_path else [],
ignore_missing_paths=False)
filename = os.path.expanduser(configuration.get("logging", "config"))
debug = configuration.getboolean("logging", "debug")
logger = log.start("radicale", filename, debug)
_application = Application(configuration, logger)
def application(environ, start_response):
config_path = environ.get("RADICALE_CONFIG",
os.environ.get("RADICALE_CONFIG"))
if _application is None:
_init_application(config_path)
if _application_config_path != config_path:
raise ValueError("RADICALE_CONFIG must not change: %s != %s" %
(repr(config_path), repr(_application_config_path)))
return _application(environ, start_response)
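# A minimal usage sketch (not part of Radicale): the module-level
# ``application`` object defined above can be served by any WSGI server,
# for example wsgiref for local testing:
#
#     from wsgiref.simple_server import make_server
#     import radicale
#     make_server("127.0.0.1", 5232, radicale.application).serve_forever()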
Radicale-2.1.11/radicale/__main__.py 0000664 0000000 0000000 00000025503 13370020422 0017134 0 ustar 00root root 0000000 0000000 # This file is part of Radicale Server - Calendar Server
# Copyright © 2011-2017 Guillaume Ayoub
#
# This library is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Radicale. If not, see <http://www.gnu.org/licenses/>.
"""
Radicale executable module.
This module can be executed from a command line with ``$ python -m radicale``
or from a Python program with ``radicale.__main__.run()``.
"""
import argparse
import atexit
import os
import select
import signal
import socket
import ssl
import sys
from wsgiref.simple_server import make_server
from radicale import (VERSION, Application, RequestHandler, ThreadedHTTPServer,
ThreadedHTTPSServer, config, log, storage)
def run():
"""Run Radicale as a standalone server."""
# Get command-line arguments
parser = argparse.ArgumentParser(usage="radicale [OPTIONS]")
parser.add_argument("--version", action="version", version=VERSION)
parser.add_argument("--verify-storage", action="store_true",
help="check the storage for errors and exit")
parser.add_argument(
"-C", "--config", help="use a specific configuration file")
groups = {}
for section, values in config.INITIAL_CONFIG.items():
group = parser.add_argument_group(section)
groups[group] = []
for option, data in values.items():
kwargs = data.copy()
long_name = "--{0}-{1}".format(
section, option.replace("_", "-"))
args = kwargs.pop("aliases", [])
args.append(long_name)
kwargs["dest"] = "{0}_{1}".format(section, option)
groups[group].append(kwargs["dest"])
del kwargs["value"]
if "internal" in kwargs:
del kwargs["internal"]
if kwargs["type"] == bool:
del kwargs["type"]
kwargs["action"] = "store_const"
kwargs["const"] = "True"
opposite_args = kwargs.pop("opposite", [])
opposite_args.append("--no{0}".format(long_name[1:]))
group.add_argument(*args, **kwargs)
kwargs["const"] = "False"
kwargs["help"] = "do not {0} (opposite of {1})".format(
kwargs["help"], long_name)
group.add_argument(*opposite_args, **kwargs)
else:
group.add_argument(*args, **kwargs)
args = parser.parse_args()
if args.config is not None:
config_paths = [args.config] if args.config else []
ignore_missing_paths = False
else:
config_paths = ["/etc/radicale/config",
os.path.expanduser("~/.config/radicale/config")]
if "RADICALE_CONFIG" in os.environ:
config_paths.append(os.environ["RADICALE_CONFIG"])
ignore_missing_paths = True
try:
configuration = config.load(config_paths,
ignore_missing_paths=ignore_missing_paths)
except Exception as e:
print("ERROR: Invalid configuration: %s" % e, file=sys.stderr)
if args.logging_debug:
raise
exit(1)
# Update Radicale configuration according to arguments
for group, actions in groups.items():
section = group.title
for action in actions:
value = getattr(args, action)
if value is not None:
configuration.set(section, action.split('_', 1)[1], value)
if args.verify_storage:
# Write to stderr when storage verification is requested
configuration["logging"]["config"] = ""
# Start logging
filename = os.path.expanduser(configuration.get("logging", "config"))
debug = configuration.getboolean("logging", "debug")
try:
logger = log.start("radicale", filename, debug)
except Exception as e:
print("ERROR: Failed to start logger: %s" % e, file=sys.stderr)
if debug:
raise
exit(1)
if args.verify_storage:
logger.info("Verifying storage")
try:
Collection = storage.load(configuration, logger)
with Collection.acquire_lock("r"):
if not Collection.verify():
logger.error("Storage verifcation failed")
exit(1)
except Exception as e:
logger.error("An exception occurred during storage verification: "
"%s", e, exc_info=True)
exit(1)
return
try:
serve(configuration, logger)
except Exception as e:
logger.error("An exception occurred during server startup: %s", e,
exc_info=True)
exit(1)
def daemonize(configuration, logger):
"""Fork and decouple if Radicale is configured as daemon."""
# Check and create PID file in a race-free manner
if configuration.get("server", "pid"):
try:
pid_path = os.path.abspath(os.path.expanduser(
configuration.get("server", "pid")))
pid_fd = os.open(
pid_path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
except OSError as e:
raise OSError("PID file exists: %r" %
configuration.get("server", "pid")) from e
pid = os.fork()
if pid:
# Write PID
if configuration.get("server", "pid"):
with os.fdopen(pid_fd, "w") as pid_file:
pid_file.write(str(pid))
sys.exit()
if configuration.get("server", "pid"):
os.close(pid_fd)
# Register exit function
def cleanup():
"""Remove the PID files."""
logger.debug("Cleaning up")
# Remove PID file
os.unlink(pid_path)
atexit.register(cleanup)
# Decouple environment
os.chdir("/")
os.setsid()
with open(os.devnull, "r") as null_in:
os.dup2(null_in.fileno(), sys.stdin.fileno())
with open(os.devnull, "w") as null_out:
os.dup2(null_out.fileno(), sys.stdout.fileno())
os.dup2(null_out.fileno(), sys.stderr.fileno())
def serve(configuration, logger):
"""Serve radicale from configuration."""
logger.info("Starting Radicale")
# Create collection servers
servers = {}
if configuration.getboolean("server", "ssl"):
server_class = ThreadedHTTPSServer
server_class.certificate = configuration.get("server", "certificate")
server_class.key = configuration.get("server", "key")
server_class.certificate_authority = configuration.get(
"server", "certificate_authority")
server_class.ciphers = configuration.get("server", "ciphers")
server_class.protocol = getattr(
ssl, configuration.get("server", "protocol"), ssl.PROTOCOL_SSLv23)
# Test if the SSL files can be read
for name in ["certificate", "key"] + (
["certificate_authority"]
if server_class.certificate_authority else []):
filename = getattr(server_class, name)
try:
open(filename, "r").close()
except OSError as e:
raise RuntimeError("Failed to read SSL %s %r: %s" %
(name, filename, e)) from e
else:
server_class = ThreadedHTTPServer
server_class.client_timeout = configuration.getint("server", "timeout")
server_class.max_connections = configuration.getint(
"server", "max_connections")
server_class.logger = logger
RequestHandler.logger = logger
if not configuration.getboolean("server", "dns_lookup"):
RequestHandler.address_string = lambda self: self.client_address[0]
shutdown_program = False
for host in configuration.get("server", "hosts").split(","):
try:
address, port = host.strip().rsplit(":", 1)
address, port = address.strip("[] "), int(port)
except ValueError as e:
raise RuntimeError(
"Failed to parse address %r: %s" % (host, e)) from e
application = Application(configuration, logger)
try:
server = make_server(
address, port, application, server_class, RequestHandler)
except OSError as e:
raise RuntimeError(
"Failed to start server %r: %s" % (host, e)) from e
servers[server.socket] = server
logger.info("Listening to %r on port %d%s",
server.server_name, server.server_port, " using SSL"
if configuration.getboolean("server", "ssl") else "")
# Create a socket pair to notify the select syscall of program shutdown
# This is not available in python < 3.5 on Windows
if hasattr(socket, "socketpair"):
shutdown_program_socket_in, shutdown_program_socket_out = (
socket.socketpair())
else:
shutdown_program_socket_in, shutdown_program_socket_out = None, None
# SIGTERM and SIGINT (aka KeyboardInterrupt) should just mark this for
# shutdown
def shutdown(*args):
nonlocal shutdown_program
if shutdown_program:
# Ignore following signals
return
logger.info("Stopping Radicale")
shutdown_program = True
if shutdown_program_socket_in:
shutdown_program_socket_in.sendall(b"goodbye")
signal.signal(signal.SIGTERM, shutdown)
signal.signal(signal.SIGINT, shutdown)
# Main loop: wait for requests on any of the servers or program shutdown
sockets = list(servers.keys())
if shutdown_program_socket_out:
# Use socket pair to get notified of program shutdown
sockets.append(shutdown_program_socket_out)
select_timeout = None
if not shutdown_program_socket_out or os.name == "nt":
# Fallback to busy waiting. (select.select blocks SIGINT on Windows.)
select_timeout = 1.0
if configuration.getboolean("server", "daemon"):
daemonize(configuration, logger)
logger.info("Radicale server ready")
while not shutdown_program:
try:
rlist, _, xlist = select.select(
sockets, [], sockets, select_timeout)
except (KeyboardInterrupt, select.error):
# SIGINT is handled by signal handler above
rlist, xlist = [], []
if xlist:
raise RuntimeError("unhandled socket error")
if rlist:
server = servers.get(rlist[0])
if server:
server.handle_request()
if __name__ == "__main__":
run()
Radicale-2.1.11/radicale/auth.py 0000664 0000000 0000000 00000024417 13370020422 0016360 0 ustar 00root root 0000000 0000000 # This file is part of Radicale Server - Calendar Server
# Copyright © 2008 Nicolas Kandel
# Copyright © 2008 Pascal Halter
# Copyright © 2008-2017 Guillaume Ayoub
#
# This library is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Radicale. If not, see <http://www.gnu.org/licenses/>.
"""
Authentication management.
Default is htpasswd authentication.
Apache's htpasswd command (httpd.apache.org/docs/programs/htpasswd.html)
manages a file for storing user credentials. It can encrypt passwords using
different methods, e.g. BCRYPT, MD5-APR1 (a version of MD5 modified for
Apache), SHA1, or by using the system's CRYPT routine. The CRYPT and SHA1
encryption methods implemented by htpasswd are considered insecure. MD5-APR1
provides medium security as of 2015. Only BCRYPT can be considered secure by
current standards.
MD5-APR1-encrypted credentials can be written by all versions of htpasswd (it
is the default, in fact), whereas BCRYPT requires htpasswd 2.4.x or newer.
The `is_authenticated(user, password)` function provided by this module
verifies the user-given credentials by parsing the htpasswd credential file
pointed to by the ``htpasswd_filename`` configuration value while assuming
the password encryption method specified via the ``htpasswd_encryption``
configuration value.
The following htpasswd password encryption methods are supported by Radicale
out-of-the-box:
- plain-text (created by htpasswd -p...) -- INSECURE
- CRYPT (created by htpasswd -d...) -- INSECURE
- SHA1 (created by htpasswd -s...) -- INSECURE
When passlib (https://pypi.python.org/pypi/passlib) is importable, the
following significantly more secure schemes are parsable by Radicale:
- MD5-APR1 (htpasswd -m...) -- htpasswd's default method
- BCRYPT (htpasswd -B...) -- Requires htpasswd 2.4.x
"""
import base64
import functools
import hashlib
import hmac
import os
from importlib import import_module
INTERNAL_TYPES = ("None", "none", "remote_user", "http_x_remote_user",
"htpasswd")
def load(configuration, logger):
"""Load the authentication manager chosen in configuration."""
auth_type = configuration.get("auth", "type")
if auth_type in ("None", "none"): # DEPRECATED: use "none"
class_ = NoneAuth
elif auth_type == "remote_user":
class_ = RemoteUserAuth
elif auth_type == "http_x_remote_user":
class_ = HttpXRemoteUserAuth
elif auth_type == "htpasswd":
class_ = Auth
else:
try:
class_ = import_module(auth_type).Auth
except Exception as e:
raise RuntimeError("Failed to load authentication module %r: %s" %
(auth_type, e)) from e
logger.info("Authentication type is %r", auth_type)
return class_(configuration, logger)
class BaseAuth:
def __init__(self, configuration, logger):
self.configuration = configuration
self.logger = logger
def get_external_login(self, environ):
"""Optionally provide the login and password externally.
``environ`` a dict with the WSGI environment
If ``()`` is returned, Radicale handles HTTP authentication.
Otherwise, returns a tuple ``(login, password)``. For anonymous users
``login`` must be ``""``.
"""
return ()
def is_authenticated2(self, login, user, password):
"""Validate credentials.
``login`` the login name
``user`` the user from ``map_login_to_user(login)``.
``password`` the login password
"""
return self.is_authenticated(user, password)
def is_authenticated(self, user, password):
"""Validate credentials.
DEPRECATED: use ``is_authenticated2`` instead
"""
raise NotImplementedError
def map_login_to_user(self, login):
"""Map login name to internal user.
``login`` the login name, ``""`` for anonymous users
Returns a string with the user name.
If a login can't be mapped to a user, return ``login`` and
return ``False`` in ``is_authenticated2(...)``.
"""
return login
class NoneAuth(BaseAuth):
def is_authenticated(self, user, password):
return True
class Auth(BaseAuth):
def __init__(self, configuration, logger):
super().__init__(configuration, logger)
self.filename = os.path.expanduser(
configuration.get("auth", "htpasswd_filename"))
self.encryption = configuration.get("auth", "htpasswd_encryption")
if self.encryption == "ssha":
self.verify = self._ssha
elif self.encryption == "sha1":
self.verify = self._sha1
elif self.encryption == "plain":
self.verify = self._plain
elif self.encryption == "md5":
try:
from passlib.hash import apr_md5_crypt
except ImportError as e:
raise RuntimeError(
"The htpasswd encryption method 'md5' requires "
"the passlib module.") from e
self.verify = functools.partial(self._md5apr1, apr_md5_crypt)
elif self.encryption == "bcrypt":
try:
from passlib.hash import bcrypt
except ImportError as e:
raise RuntimeError(
"The htpasswd encryption method 'bcrypt' requires "
"the passlib module with bcrypt support.") from e
# A call to `encrypt` raises passlib.exc.MissingBackendError with a
# good error message if bcrypt backend is not available. Trigger
# this here.
bcrypt.encrypt("test-bcrypt-backend")
self.verify = functools.partial(self._bcrypt, bcrypt)
elif self.encryption == "crypt":
try:
import crypt
except ImportError as e:
raise RuntimeError(
"The htpasswd encryption method 'crypt' requires "
"the crypt() system support.") from e
self.verify = functools.partial(self._crypt, crypt)
else:
raise RuntimeError(
"The htpasswd encryption method %r is not "
"supported." % self.encryption)
def _plain(self, hash_value, password):
"""Check if ``hash_value`` and ``password`` match, plain method."""
return hmac.compare_digest(hash_value, password)
def _crypt(self, crypt, hash_value, password):
"""Check if ``hash_value`` and ``password`` match, crypt method."""
hash_value = hash_value.strip()
return hmac.compare_digest(crypt.crypt(password, hash_value),
hash_value)
def _sha1(self, hash_value, password):
"""Check if ``hash_value`` and ``password`` match, sha1 method."""
hash_value = base64.b64decode(hash_value.strip().replace(
"{SHA}", "").encode("ascii"))
password = password.encode(self.configuration.get("encoding", "stock"))
sha1 = hashlib.sha1()
sha1.update(password)
return hmac.compare_digest(sha1.digest(), hash_value)
def _ssha(self, hash_value, password):
"""Check if ``hash_value`` and ``password`` match, salted sha1 method.
This method is not directly supported by htpasswd, but it can be
written with e.g. openssl, and nginx can parse it.
"""
hash_value = base64.b64decode(hash_value.strip().replace(
"{SSHA}", "").encode("ascii"))
password = password.encode(self.configuration.get("encoding", "stock"))
salt_value = hash_value[20:]
hash_value = hash_value[:20]
sha1 = hashlib.sha1()
sha1.update(password)
sha1.update(salt_value)
return hmac.compare_digest(sha1.digest(), hash_value)
def _bcrypt(self, bcrypt, hash_value, password):
hash_value = hash_value.strip()
return bcrypt.verify(password, hash_value)
def _md5apr1(self, md5_apr1, hash_value, password):
hash_value = hash_value.strip()
return md5_apr1.verify(password, hash_value)
def is_authenticated(self, user, password):
"""Validate credentials.
Iterate through htpasswd credential file until user matches, extract
hash (encrypted password) and check hash against user-given password,
using the method specified in the Radicale config.
The content of the file is not cached because reading is generally a
very cheap operation, and it's useful to get live updates of the
htpasswd file.
"""
try:
with open(self.filename) as f:
for line in f:
line = line.rstrip("\n")
if line.lstrip() and not line.lstrip().startswith("#"):
try:
login, hash_value = line.split(":", maxsplit=1)
# Always compare both login and password to avoid
# timing attacks, see #591.
login_ok = hmac.compare_digest(login, user)
password_ok = self.verify(hash_value, password)
if login_ok and password_ok:
return True
except ValueError as e:
raise RuntimeError("Invalid htpasswd file %r: %s" %
(self.filename, e)) from e
except OSError as e:
raise RuntimeError("Failed to load htpasswd file %r: %s" %
(self.filename, e)) from e
return False
class RemoteUserAuth(NoneAuth):
def get_external_login(self, environ):
return environ.get("REMOTE_USER", ""), ""
class HttpXRemoteUserAuth(NoneAuth):
def get_external_login(self, environ):
return environ.get("HTTP_X_REMOTE_USER", ""), ""
Radicale-2.1.11/radicale/config.py 0000664 0000000 0000000 00000022272 13370020422 0016661 0 ustar 00root root 0000000 0000000 # This file is part of Radicale Server - Calendar Server
# Copyright © 2008-2017 Guillaume Ayoub
# Copyright © 2008 Nicolas Kandel
# Copyright © 2008 Pascal Halter
#
# This library is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Radicale. If not, see <http://www.gnu.org/licenses/>.
"""
Radicale configuration module.
Give a configparser-like interface to read and write configuration.
"""
import math
import os
from collections import OrderedDict
from configparser import RawConfigParser as ConfigParser
from radicale import auth, rights, storage, web
def positive_int(value):
value = int(value)
if value < 0:
raise ValueError("value is negative: %d" % value)
return value
def positive_float(value):
value = float(value)
if not math.isfinite(value):
raise ValueError("value is infinite")
if math.isnan(value):
raise ValueError("value is not a number")
if value < 0:
raise ValueError("value is negative: %f" % value)
return value
# Default configuration
INITIAL_CONFIG = OrderedDict([
("server", OrderedDict([
("hosts", {
"value": "127.0.0.1:5232",
"help": "set server hostnames including ports",
"aliases": ["-H", "--hosts"],
"type": str}),
("daemon", {
"value": "False",
"help": "launch as daemon",
"aliases": ["-d", "--daemon"],
"opposite": ["-f", "--foreground"],
"type": bool}),
("pid", {
"value": "",
"help": "set PID filename for daemon mode",
"aliases": ["-p", "--pid"],
"type": str}),
("max_connections", {
"value": "20",
"help": "maximum number of parallel connections",
"type": positive_int}),
("max_content_length", {
"value": "100000000",
"help": "maximum size of request body in bytes",
"type": positive_int}),
("timeout", {
"value": "30",
"help": "socket timeout",
"type": positive_int}),
("ssl", {
"value": "False",
"help": "use SSL connection",
"aliases": ["-s", "--ssl"],
"opposite": ["-S", "--no-ssl"],
"type": bool}),
("certificate", {
"value": "/etc/ssl/radicale.cert.pem",
"help": "set certificate file",
"aliases": ["-c", "--certificate"],
"type": str}),
("key", {
"value": "/etc/ssl/radicale.key.pem",
"help": "set private key file",
"aliases": ["-k", "--key"],
"type": str}),
("certificate_authority", {
"value": "",
"help": "set CA certificate for validating clients",
"aliases": ["--certificate-authority"],
"type": str}),
("protocol", {
"value": "PROTOCOL_TLSv1_2",
"help": "SSL protocol used",
"type": str}),
("ciphers", {
"value": "",
"help": "available ciphers",
"type": str}),
("dns_lookup", {
"value": "True",
"help": "use reverse DNS to resolve client address in logs",
"type": bool}),
("realm", {
"value": "Radicale - Password Required",
"help": "message displayed when a password is needed",
"type": str})])),
("encoding", OrderedDict([
("request", {
"value": "utf-8",
"help": "encoding for responding requests",
"type": str}),
("stock", {
"value": "utf-8",
"help": "encoding for storing local collections",
"type": str})])),
("auth", OrderedDict([
("type", {
"value": "none",
"help": "authentication method",
"type": str,
"internal": auth.INTERNAL_TYPES}),
("htpasswd_filename", {
"value": "/etc/radicale/users",
"help": "htpasswd filename",
"type": str}),
("htpasswd_encryption", {
"value": "bcrypt",
"help": "htpasswd encryption method",
"type": str}),
("delay", {
"value": "1",
"help": "incorrect authentication delay",
"type": positive_float})])),
("rights", OrderedDict([
("type", {
"value": "owner_only",
"help": "rights backend",
"type": str,
"internal": rights.INTERNAL_TYPES}),
("file", {
"value": "/etc/radicale/rights",
"help": "file for rights management from_file",
"type": str})])),
("storage", OrderedDict([
("type", {
"value": "multifilesystem",
"help": "storage backend",
"type": str,
"internal": storage.INTERNAL_TYPES}),
("filesystem_folder", {
"value": os.path.expanduser(
"/var/lib/radicale/collections"),
"help": "path where collections are stored",
"type": str}),
("max_sync_token_age", {
"value": 2592000, # 30 days
"help": "delete sync token that are older",
"type": int}),
("filesystem_fsync", {
"value": "True",
"help": "sync all changes to filesystem during requests",
"type": bool}),
("filesystem_locking", {
"value": "True",
"help": "lock the storage while accessing it",
"type": bool}),
("filesystem_close_lock_file", {
"value": "False",
"help": "close the lock file when no more clients are waiting",
"type": bool}),
("hook", {
"value": "",
"help": "command that is run after changes to storage",
"type": str})])),
("web", OrderedDict([
("type", {
"value": "internal",
"help": "web interface backend",
"type": str,
"internal": web.INTERNAL_TYPES})])),
("logging", OrderedDict([
("config", {
"value": "",
"help": "logging configuration file",
"type": str}),
("debug", {
"value": "False",
"help": "print debug information",
"aliases": ["-D", "--debug"],
"type": bool}),
("full_environment", {
"value": "False",
"help": "store all environment variables",
"type": bool}),
("mask_passwords", {
"value": "True",
"help": "mask passwords in logs",
"type": bool})]))])
def load(paths=(), extra_config=None, ignore_missing_paths=True):
config = ConfigParser()
for section, values in INITIAL_CONFIG.items():
config.add_section(section)
for key, data in values.items():
config.set(section, key, data["value"])
if extra_config:
for section, values in extra_config.items():
for key, value in values.items():
config.set(section, key, value)
for path in paths:
if path or not ignore_missing_paths:
try:
if not config.read(path) and not ignore_missing_paths:
raise RuntimeError("No such file: %r" % path)
except Exception as e:
raise RuntimeError(
"Failed to load config file %r: %s" % (path, e)) from e
# Check the configuration
for section in config.sections():
if section == "headers":
continue
if section not in INITIAL_CONFIG:
raise RuntimeError("Invalid section %r in config" % section)
allow_extra_options = ("type" in INITIAL_CONFIG[section] and
config.get(section, "type") not in
INITIAL_CONFIG[section]["type"].get("internal",
()))
for option in config[section]:
if option not in INITIAL_CONFIG[section]:
if allow_extra_options:
continue
raise RuntimeError("Invalid option %r in section %r in "
"config" % (option, section))
type_ = INITIAL_CONFIG[section][option]["type"]
try:
if type_ == bool:
config.getboolean(section, option)
else:
type_(config.get(section, option))
except Exception as e:
raise RuntimeError(
"Invalid %s value for option %r in section %r in config: "
"%r" % (type_.__name__, option, section,
config.get(section, option))) from e
return config
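# A minimal usage sketch (not part of the original module; values are
# illustrative). Without ``paths`` only the built-in defaults and
# ``extra_config`` are used:
#
#     configuration = load(extra_config={
#         "storage": {"filesystem_fsync": "False"}})
#     configuration.get("auth", "type")                        # -> "none"
#     configuration.getboolean("storage", "filesystem_fsync")  # -> False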
Radicale-2.1.11/radicale/log.py 0000664 0000000 0000000 00000005120 13370020422 0016166 0 ustar 00root root 0000000 0000000 # This file is part of Radicale Server - Calendar Server
# Copyright © 2011-2017 Guillaume Ayoub
#
# This library is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Radicale.  If not, see <http://www.gnu.org/licenses/>.
"""
Radicale logging module.
Manage logging from a configuration file. For more information, see:
http://docs.python.org/library/logging.config.html
"""
import logging
import logging.config
import signal
import sys
def configure_from_file(logger, filename, debug):
logging.config.fileConfig(filename, disable_existing_loggers=False)
if debug:
logger.setLevel(logging.DEBUG)
for handler in logger.handlers:
handler.setLevel(logging.DEBUG)
return logger
class RemoveTracebackFilter(logging.Filter):
def filter(self, record):
record.exc_info = None
return True
def start(name="radicale", filename=None, debug=False):
"""Start the logging according to the configuration."""
logger = logging.getLogger(name)
if debug:
logger.setLevel(logging.DEBUG)
else:
logger.addFilter(RemoveTracebackFilter())
if filename:
# Configuration taken from file
try:
configure_from_file(logger, filename, debug)
except Exception as e:
raise RuntimeError("Failed to load logging configuration file %r: "
"%s" % (filename, e)) from e
# Reload config on SIGHUP (UNIX only)
if hasattr(signal, "SIGHUP"):
def handler(signum, frame):
try:
configure_from_file(logger, filename, debug)
except Exception as e:
logger.error("Failed to reload logging configuration file "
"%r: %s", filename, e, exc_info=True)
signal.signal(signal.SIGHUP, handler)
else:
# Default configuration, standard output
handler = logging.StreamHandler(sys.stderr)
handler.setFormatter(
logging.Formatter("[%(thread)x] %(levelname)s: %(message)s"))
logger.addHandler(handler)
return logger
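# A minimal usage sketch (illustrative, not part of the original module):
#
#     logger = start("radicale", filename=None, debug=True)
#     logger.info("server ready")
#
# When ``filename`` points to a logging configuration file in the
# ``logging.config.fileConfig`` format, that file is loaded instead and
# reloaded on SIGHUP where the signal is available.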
Radicale-2.1.11/radicale/rights.py 0000664 0000000 0000000 00000015157 13370020422 0016720 0 ustar 00root root 0000000 0000000 # This file is part of Radicale Server - Calendar Server
# Copyright © 2012-2017 Guillaume Ayoub
#
# This library is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Radicale.  If not, see <http://www.gnu.org/licenses/>.
"""
Rights backends.
This module loads the rights backend, according to the rights
configuration.
Default rights are based on a regex-based file whose name is specified in the
config (section "right", key "file").
Authentication login is matched against the "user" key, and collection's path
is matched against the "collection" key. You can use Python's ConfigParser
interpolation values %(login)s and %(path)s. You can also get groups from the
user regex in the collection with {0}, {1}, etc.
For example, for the "user" key, ".+" means "authenticated user" and ".*"
means "anybody" (including anonymous users).
Section names are only used for naming the rule.
Leading or ending slashes are trimmed from collection's path.
"""
import configparser
import os.path
import posixpath
import re
from importlib import import_module
from radicale import storage
INTERNAL_TYPES = ("None", "none", "authenticated", "owner_write", "owner_only",
"from_file")
def load(configuration, logger):
"""Load the rights manager chosen in configuration."""
rights_type = configuration.get("rights", "type")
if configuration.get("auth", "type") in ("None", "none"): # DEPRECATED
rights_type = "None"
if rights_type in ("None", "none"): # DEPRECATED: use "none"
rights_class = NoneRights
elif rights_type == "authenticated":
rights_class = AuthenticatedRights
elif rights_type == "owner_write":
rights_class = OwnerWriteRights
elif rights_type == "owner_only":
rights_class = OwnerOnlyRights
elif rights_type == "from_file":
rights_class = Rights
else:
try:
rights_class = import_module(rights_type).Rights
except Exception as e:
raise RuntimeError("Failed to load rights module %r: %s" %
(rights_type, e)) from e
logger.info("Rights type is %r", rights_type)
return rights_class(configuration, logger)
class BaseRights:
def __init__(self, configuration, logger):
self.configuration = configuration
self.logger = logger
def authorized(self, user, path, permission):
"""Check if the user is allowed to read or write the collection.
If ``user`` is empty, check for anonymous rights.
``path`` is sanitized.
``permission`` is "r" or "w".
"""
raise NotImplementedError
def authorized_item(self, user, path, permission):
"""Check if the user is allowed to read or write the item."""
path = storage.sanitize_path(path)
parent_path = storage.sanitize_path(
"/%s/" % posixpath.dirname(path.strip("/")))
return self.authorized(user, parent_path, permission)
class NoneRights(BaseRights):
def authorized(self, user, path, permission):
return True
class AuthenticatedRights(BaseRights):
def authorized(self, user, path, permission):
return bool(user)
class OwnerWriteRights(BaseRights):
def authorized(self, user, path, permission):
sane_path = storage.sanitize_path(path).strip("/")
return bool(user) and (permission == "r" or
user == sane_path.split("/", maxsplit=1)[0])
class OwnerOnlyRights(BaseRights):
def authorized(self, user, path, permission):
sane_path = storage.sanitize_path(path).strip("/")
return bool(user) and (
permission == "r" and not sane_path or
user == sane_path.split("/", maxsplit=1)[0])
def authorized_item(self, user, path, permission):
sane_path = storage.sanitize_path(path).strip("/")
if "/" not in sane_path:
return False
return super().authorized_item(user, path, permission)
class Rights(BaseRights):
def __init__(self, configuration, logger):
super().__init__(configuration, logger)
self.filename = os.path.expanduser(configuration.get("rights", "file"))
def authorized(self, user, path, permission):
user = user or ""
sane_path = storage.sanitize_path(path).strip("/")
# Prevent "regex injection"
user_escaped = re.escape(user)
sane_path_escaped = re.escape(sane_path)
regex = configparser.ConfigParser(
{"login": user_escaped, "path": sane_path_escaped})
try:
if not regex.read(self.filename):
raise RuntimeError("No such file: %r" %
self.filename)
except Exception as e:
raise RuntimeError("Failed to load rights file %r: %s" %
(self.filename, e)) from e
for section in regex.sections():
try:
re_user_pattern = regex.get(section, "user")
re_collection_pattern = regex.get(section, "collection")
# Emulate fullmatch
user_match = re.match(r"(?:%s)\Z" % re_user_pattern, user)
collection_match = user_match and re.match(
r"(?:%s)\Z" % re_collection_pattern.format(
*map(re.escape, user_match.groups())), sane_path)
except Exception as e:
raise RuntimeError("Error in section %r of rights file %r: "
"%s" % (section, self.filename, e)) from e
if user_match and collection_match:
self.logger.debug("Rule %r:%r matches %r:%r from section %r",
user, sane_path, re_user_pattern,
re_collection_pattern, section)
return permission in regex.get(section, "permission")
else:
self.logger.debug("Rule %r:%r doesn't match %r:%r from section"
" %r", user, sane_path, re_user_pattern,
re_collection_pattern, section)
self.logger.info(
"Rights: %r:%r doesn't match any section", user, sane_path)
return False
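# Behaviour sketch of the built-in "owner_only" backend (illustrative;
# ``configuration`` and ``logger`` stand for real objects):
#
#     backend = OwnerOnlyRights(configuration, logger)
#     backend.authorized("alice", "/alice/calendar/", "w")  # True
#     backend.authorized("alice", "/bob/calendar/", "r")    # False
#     backend.authorized("alice", "/", "r")                 # True (root is readable)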
Radicale-2.1.11/radicale/storage.py 0000664 0000000 0000000 00000202547 13370020422 0017065 0 ustar 00root root 0000000 0000000 # This file is part of Radicale Server - Calendar Server
# Copyright © 2014 Jean-Marc Martins
# Copyright © 2012-2017 Guillaume Ayoub
#
# This library is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Radicale.  If not, see <http://www.gnu.org/licenses/>.
"""
Storage backends.
This module loads the storage backend, according to the storage configuration.
Default storage uses one folder per collection and one file per collection
entry.
"""
import binascii
import contextlib
import json
import logging
import os
import pickle
import posixpath
import shlex
import subprocess
import sys
import threading
import time
from contextlib import contextmanager
from hashlib import md5
from importlib import import_module
from itertools import chain, groupby
from math import log
from random import getrandbits
from tempfile import NamedTemporaryFile, TemporaryDirectory
import vobject
if sys.version_info >= (3, 5):
# HACK: Avoid import cycle for Python < 3.5
from radicale import xmlutils
if os.name == "nt":
import ctypes
import ctypes.wintypes
import msvcrt
LOCKFILE_EXCLUSIVE_LOCK = 2
if ctypes.sizeof(ctypes.c_void_p) == 4:
ULONG_PTR = ctypes.c_uint32
else:
ULONG_PTR = ctypes.c_uint64
class Overlapped(ctypes.Structure):
_fields_ = [
("internal", ULONG_PTR),
("internal_high", ULONG_PTR),
("offset", ctypes.wintypes.DWORD),
("offset_high", ctypes.wintypes.DWORD),
("h_event", ctypes.wintypes.HANDLE)]
lock_file_ex = ctypes.windll.kernel32.LockFileEx
lock_file_ex.argtypes = [
ctypes.wintypes.HANDLE,
ctypes.wintypes.DWORD,
ctypes.wintypes.DWORD,
ctypes.wintypes.DWORD,
ctypes.wintypes.DWORD,
ctypes.POINTER(Overlapped)]
lock_file_ex.restype = ctypes.wintypes.BOOL
unlock_file_ex = ctypes.windll.kernel32.UnlockFileEx
unlock_file_ex.argtypes = [
ctypes.wintypes.HANDLE,
ctypes.wintypes.DWORD,
ctypes.wintypes.DWORD,
ctypes.wintypes.DWORD,
ctypes.POINTER(Overlapped)]
unlock_file_ex.restype = ctypes.wintypes.BOOL
elif os.name == "posix":
import fcntl
INTERNAL_TYPES = ("multifilesystem",)
def load(configuration, logger):
"""Load the storage manager chosen in configuration."""
if sys.version_info < (3, 5):
# HACK: Avoid import cycle for Python < 3.5
global xmlutils
from radicale import xmlutils
storage_type = configuration.get("storage", "type")
if storage_type == "multifilesystem":
collection_class = Collection
else:
try:
collection_class = import_module(storage_type).Collection
except Exception as e:
raise RuntimeError("Failed to load storage module %r: %s" %
(storage_type, e)) from e
logger.info("Storage type is %r", storage_type)
class CollectionCopy(collection_class):
"""Collection copy, avoids overriding the original class attributes."""
CollectionCopy.configuration = configuration
CollectionCopy.logger = logger
CollectionCopy.static_init()
return CollectionCopy
def check_and_sanitize_item(vobject_item, is_collection=False, uid=None,
tag=None):
"""Check vobject items for common errors and add missing UIDs.
``is_collection`` indicates that vobject_item contains unrelated
components.
If ``uid`` is not set, the UID is generated randomly.
The ``tag`` of the collection.
"""
if tag and tag not in ("VCALENDAR", "VADDRESSBOOK"):
raise ValueError("Unsupported collection tag: %r" % tag)
if vobject_item.name == "VCALENDAR" and tag == "VCALENDAR":
component_name = None
object_uid = None
object_uid_set = False
for component in vobject_item.components():
# https://tools.ietf.org/html/rfc4791#section-4.1
if component.name == "VTIMEZONE":
continue
if component_name is None or is_collection:
component_name = component.name
elif component_name != component.name:
raise ValueError("Multiple component types in object: %r, %r" %
(component_name, component.name))
if component_name not in ("VTODO", "VEVENT", "VJOURNAL"):
continue
component_uid = get_uid(component)
if not object_uid_set or is_collection:
object_uid_set = True
object_uid = component_uid
if component_uid is None:
component.add("UID").value = uid or random_uuid4()
elif not component_uid:
component.uid.value = uid or random_uuid4()
elif not object_uid or not component_uid:
raise ValueError("Multiple %s components without UID in "
"object" % component_name)
elif object_uid != component_uid:
raise ValueError(
"Multiple %s components with different UIDs in object: "
"%r, %r" % (component_name, object_uid, component_uid))
# vobject interprets recurrence rules on demand
try:
component.rruleset
except Exception as e:
raise ValueError("invalid recurrence rules in %s" %
component.name) from e
elif vobject_item.name == "VCARD" and tag == "VADDRESSBOOK":
# https://tools.ietf.org/html/rfc6352#section-5.1
object_uid = get_uid(vobject_item)
if object_uid is None:
vobject_item.add("UID").value = uid or random_uuid4()
elif not object_uid:
vobject_item.uid.value = uid or random_uuid4()
elif vobject_item.name == "VLIST" and tag == "VADDRESSBOOK":
# Custom format used by SOGo Connector to store lists of contacts
pass
else:
raise ValueError("Item type %r not supported in %s collection" %
(vobject_item.name, repr(tag) if tag else "generic"))
def check_and_sanitize_props(props):
"""Check collection properties for common errors."""
tag = props.get("tag")
if tag and tag not in ("VCALENDAR", "VADDRESSBOOK"):
raise ValueError("Unsupported collection tag: %r" % tag)
def random_uuid4():
"""Generate a pseudo-random UUID"""
r = "%016x" % getrandbits(128)
return "%s-%s-%s-%s-%s" % (r[:8], r[8:12], r[12:16], r[16:20], r[20:])
def scandir(path, only_dirs=False, only_files=False):
"""Iterator for directory elements. (For compatibility with Python < 3.5)
``only_dirs`` only return directories
``only_files`` only return files
"""
if sys.version_info >= (3, 5):
for entry in os.scandir(path):
if ((not only_files or entry.is_file()) and
(not only_dirs or entry.is_dir())):
yield entry.name
else:
for name in os.listdir(path):
p = os.path.join(path, name)
if ((not only_files or os.path.isfile(p)) and
(not only_dirs or os.path.isdir(p))):
yield name
def get_etag(text):
"""Etag from collection or item.
Encoded as quoted-string (see RFC 2616).
"""
etag = md5()
etag.update(text.encode("utf-8"))
return '"%s"' % etag.hexdigest()
def get_uid(vobject_component):
"""UID value of an item if defined."""
return (vobject_component.uid.value
if hasattr(vobject_component, "uid") else None)
def get_uid_from_object(vobject_item):
"""UID value of an calendar/addressbook object."""
if vobject_item.name == "VCALENDAR":
if hasattr(vobject_item, "vevent"):
return get_uid(vobject_item.vevent)
if hasattr(vobject_item, "vjournal"):
return get_uid(vobject_item.vjournal)
if hasattr(vobject_item, "vtodo"):
return get_uid(vobject_item.vtodo)
elif vobject_item.name == "VCARD":
return get_uid(vobject_item)
return None
def sanitize_path(path):
"""Make path absolute with leading slash to prevent access to other data.
Preserve a potential trailing slash.
"""
trailing_slash = "/" if path.endswith("/") else ""
path = posixpath.normpath(path)
new_path = "/"
for part in path.split("/"):
if not is_safe_path_component(part):
continue
new_path = posixpath.join(new_path, part)
trailing_slash = "" if new_path.endswith("/") else trailing_slash
return new_path + trailing_slash
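# Illustrative examples (a sketch, not part of the original module):
#
#     sanitize_path("a/./b//../c")      # -> "/a/c"
#     sanitize_path("/user/calendar/")  # -> "/user/calendar/"
#     sanitize_path("..")               # -> "/"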
def is_safe_path_component(path):
"""Check if path is a single component of a path.
Check that the path is safe to join too.
"""
return path and "/" not in path and path not in (".", "..")
def is_safe_filesystem_path_component(path):
"""Check if path is a single component of a local and posix filesystem
path.
Check that the path is safe to join too.
"""
return (
path and not os.path.splitdrive(path)[0] and
not os.path.split(path)[0] and path not in (os.curdir, os.pardir) and
not path.startswith(".") and not path.endswith("~") and
is_safe_path_component(path))
def path_to_filesystem(root, *paths):
"""Convert path to a local filesystem path relative to base_folder.
`root` must be a secure filesystem path, it will be prepend to the path.
Conversion of `paths` is done in a secure manner, or raises ``ValueError``.
"""
paths = [sanitize_path(path).strip("/") for path in paths]
safe_path = root
for path in paths:
if not path:
continue
for part in path.split("/"):
if not is_safe_filesystem_path_component(part):
raise UnsafePathError(part)
safe_path_parent = safe_path
safe_path = os.path.join(safe_path, part)
# Check for conflicting files (e.g. case-insensitive file systems
# or short names on Windows file systems)
if (os.path.lexists(safe_path) and
part not in scandir(safe_path_parent)):
raise CollidingPathError(part)
return safe_path
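# Illustrative examples (assuming a POSIX system, the hypothetical root
# "/srv/collections" and no conflicting files on disk):
#
#     path_to_filesystem("/srv/collections", "alice/calendar")
#     # -> "/srv/collections/alice/calendar"
#     path_to_filesystem("/srv/collections", "alice/.hidden")
#     # raises UnsafePathError (hidden names are rejected)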
def left_encode_int(v):
length = int(log(v, 256)) + 1 if v != 0 else 1
return bytes((length,)) + v.to_bytes(length, 'little')
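# Length-prefixed little-endian encoding; illustrative values:
#
#     left_encode_int(0)    # -> b"\x01\x00"
#     left_encode_int(255)  # -> b"\x01\xff"
#     left_encode_int(256)  # -> b"\x02\x00\x01"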
class UnsafePathError(ValueError):
def __init__(self, path):
message = "Can't translate name safely to filesystem: %r" % path
super().__init__(message)
class CollidingPathError(ValueError):
def __init__(self, path):
message = "File name collision: %r" % path
super().__init__(message)
class ComponentExistsError(ValueError):
def __init__(self, path):
message = "Component already exists: %r" % path
super().__init__(message)
class ComponentNotFoundError(ValueError):
def __init__(self, path):
message = "Component doesn't exist: %r" % path
super().__init__(message)
class Item:
def __init__(self, collection, item=None, href=None, last_modified=None,
text=None, etag=None, uid=None, name=None,
component_name=None):
"""Initialize an item.
``collection`` the parent collection.
``href`` the href of the item.
``last_modified`` the HTTP-datetime of when the item was modified.
``text`` the text representation of the item (optional if ``item`` is
set).
``item`` the vobject item (optional if ``text`` is set).
``etag`` the etag of the item (optional). See ``get_etag``.
``uid`` the UID of the object (optional). See ``get_uid_from_object``.
"""
if text is None and item is None:
raise ValueError("at least one of 'text' or 'item' must be set")
self.collection = collection
self.href = href
self.last_modified = last_modified
self._text = text
self._item = item
self._etag = etag
self._uid = uid
self._name = name
self._component_name = component_name
def __getattr__(self, attr):
return getattr(self.item, attr)
def serialize(self):
if self._text is None:
try:
self._text = self.item.serialize()
except Exception as e:
raise RuntimeError("Failed to serialize item %r from %r: %s" %
(self.href, self.collection.path, e)) from e
return self._text
@property
def item(self):
if self._item is None:
try:
self._item = vobject.readOne(self._text)
except Exception as e:
raise RuntimeError("Failed to parse item %r from %r: %s" %
(self.href, self.collection.path, e)) from e
return self._item
@property
def etag(self):
"""Encoded as quoted-string (see RFC 2616)."""
if self._etag is None:
self._etag = get_etag(self.serialize())
return self._etag
@property
def uid(self):
if self._uid is None:
self._uid = get_uid_from_object(self.item)
return self._uid
@property
def name(self):
if self._name is not None:
return self._name
return self.item.name
@property
def component_name(self):
if self._component_name is not None:
return self._component_name
return xmlutils.find_tag(self.item)
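# A minimal construction sketch (illustrative; ``collection`` and ``ICS_TEXT``
# stand for a real parent collection and an iCalendar string):
#
#     item = Item(collection, href="event.ics", text=ICS_TEXT)
#     item.etag  # computed lazily from the serialized text
#     item.uid   # extracted lazily from the parsed vobject item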
class BaseCollection:
# Overriden on copy by the "load" function
configuration = None
logger = None
# Properties of instance
"""The sanitized path of the collection without leading or trailing ``/``.
"""
path = ""
@classmethod
    def static_init(cls):
"""init collection copy"""
pass
@property
def owner(self):
"""The owner of the collection."""
return self.path.split("/", maxsplit=1)[0]
@property
def is_principal(self):
"""Collection is a principal."""
return bool(self.path) and "/" not in self.path
@owner.setter
def owner(self, value):
# DEPRECATED: Included for compatibility reasons
pass
@is_principal.setter
def is_principal(self, value):
# DEPRECATED: Included for compatibility reasons
pass
@classmethod
def discover(cls, path, depth="0"):
"""Discover a list of collections under the given ``path``.
``path`` is sanitized.
If ``depth`` is "0", only the actual object under ``path`` is
returned.
If ``depth`` is anything but "0", it is considered as "1" and direct
children are included in the result.
The root collection "/" must always exist.
"""
raise NotImplementedError
@classmethod
def move(cls, item, to_collection, to_href):
"""Move an object.
``item`` is the item to move.
``to_collection`` is the target collection.
``to_href`` is the target name in ``to_collection``. An item with the
same name might already exist.
"""
if item.collection.path == to_collection.path and item.href == to_href:
return
to_collection.upload(to_href, item.item)
item.collection.delete(item.href)
@property
def etag(self):
"""Encoded as quoted-string (see RFC 2616)."""
etag = md5()
for item in self.get_all():
etag.update((item.href + "/" + item.etag).encode("utf-8"))
etag.update(json.dumps(self.get_meta(), sort_keys=True).encode())
return '"%s"' % etag.hexdigest()
@classmethod
def create_collection(cls, href, collection=None, props=None):
"""Create a collection.
``href`` is the sanitized path.
If the collection already exists and neither ``collection`` nor
``props`` are set, this method shouldn't do anything. Otherwise the
existing collection must be replaced.
``collection`` is a list of vobject components.
``props`` are metadata values for the collection.
``props["tag"]`` is the type of collection (VCALENDAR or
VADDRESSBOOK). If the key ``tag`` is missing, it is guessed from the
collection.
"""
raise NotImplementedError
def sync(self, old_token=None):
"""Get the current sync token and changed items for synchronization.
``old_token`` an old sync token which is used as the base of the
delta update. If sync token is missing, all items are returned.
ValueError is raised for invalid or old tokens.
        WARNING: This simple default implementation treats all sync tokens as
invalid. It adheres to the specification but some clients
(e.g. InfCloud) don't like it. Subclasses should provide a
more sophisticated implementation.
"""
token = "http://radicale.org/ns/sync/%s" % self.etag.strip("\"")
if old_token:
raise ValueError("Sync token are not supported")
return token, self.list()
def list(self):
"""List collection items."""
raise NotImplementedError
def get(self, href):
"""Fetch a single item."""
raise NotImplementedError
def get_multi(self, hrefs):
"""Fetch multiple items. Duplicate hrefs must be ignored.
DEPRECATED: use ``get_multi2`` instead
"""
return (self.get(href) for href in set(hrefs))
def get_multi2(self, hrefs):
"""Fetch multiple items.
Functionally similar to ``get``, but might bring performance benefits
on some storages when used cleverly. It's not required to return the
requested items in the correct order. Duplicated hrefs can be ignored.
Returns tuples with the href and the item or None if the item doesn't
exist.
"""
return ((href, self.get(href)) for href in hrefs)
def get_all(self):
"""Fetch all items.
Functionally similar to ``get``, but might bring performance benefits
on some storages when used cleverly.
"""
return map(self.get, self.list())
def get_all_filtered(self, filters):
"""Fetch all items with optional filtering.
        This can largely improve the performance of reports, depending on
        the filters and the implementation.
Returns tuples in the form ``(item, filters_matched)``.
``filters_matched`` is a bool that indicates if ``filters`` are fully
matched.
This returns all events by default
"""
return ((item, False) for item in self.get_all())
def pre_filtered_list(self, filters):
"""List collection items with optional pre filtering.
DEPRECATED: use ``get_all_filtered`` instead
"""
return self.get_all()
def has(self, href):
"""Check if an item exists by its href.
Functionally similar to ``get``, but might bring performance benefits
on some storages when used cleverly.
"""
return self.get(href) is not None
def upload(self, href, vobject_item):
"""Upload a new or replace an existing item."""
raise NotImplementedError
def delete(self, href=None):
"""Delete an item.
When ``href`` is ``None``, delete the collection.
"""
raise NotImplementedError
def get_meta(self, key=None):
"""Get metadata value for collection.
Return the value of the property ``key``. If ``key`` is ``None`` return
a dict with all properties
"""
raise NotImplementedError
def set_meta(self, props):
"""Set metadata values for collection.
``props`` a dict with updates for properties. If a value is empty, the
property must be deleted.
DEPRECATED: use ``set_meta_all`` instead
"""
raise NotImplementedError
def set_meta_all(self, props):
"""Set metadata values for collection.
``props`` a dict with values for properties.
"""
delta_props = self.get_meta()
for key in delta_props.keys():
if key not in props:
delta_props[key] = None
delta_props.update(props)
        self.set_meta(delta_props)
@property
def last_modified(self):
"""Get the HTTP-datetime of when the collection was modified."""
raise NotImplementedError
def serialize(self):
"""Get the unicode string representing the whole collection."""
if self.get_meta("tag") == "VCALENDAR":
in_vcalendar = False
vtimezones = ""
included_tzids = set()
vtimezone = []
tzid = None
components = ""
# Concatenate all child elements of VCALENDAR from all items
# together, while preventing duplicated VTIMEZONE entries.
            # VTIMEZONEs are only distinguished by their TZID; if different
            # timezones share the same TZID this produces erroneous output.
# VObject fails at this too.
for item in self.get_all():
depth = 0
for line in item.serialize().split("\r\n"):
if line.startswith("BEGIN:"):
depth += 1
if depth == 1 and line == "BEGIN:VCALENDAR":
in_vcalendar = True
elif in_vcalendar:
if depth == 1 and line.startswith("END:"):
in_vcalendar = False
if depth == 2 and line == "BEGIN:VTIMEZONE":
vtimezone.append(line + "\r\n")
elif vtimezone:
vtimezone.append(line + "\r\n")
if depth == 2 and line.startswith("TZID:"):
tzid = line[len("TZID:"):]
elif depth == 2 and line.startswith("END:"):
if tzid is None or tzid not in included_tzids:
vtimezones += "".join(vtimezone)
included_tzids.add(tzid)
vtimezone.clear()
tzid = None
elif depth >= 2:
components += line + "\r\n"
if line.startswith("END:"):
depth -= 1
template = vobject.iCalendar()
displayname = self.get_meta("D:displayname")
if displayname:
template.add("X-WR-CALNAME")
template.x_wr_calname.value_param = "TEXT"
template.x_wr_calname.value = displayname
description = self.get_meta("C:calendar-description")
if description:
template.add("X-WR-CALDESC")
template.x_wr_caldesc.value_param = "TEXT"
template.x_wr_caldesc.value = description
template = template.serialize()
            template_insert_pos = template.find("\r\nEND:VCALENDAR\r\n")
            assert template_insert_pos != -1
            template_insert_pos += 2
return (template[:template_insert_pos] +
vtimezones + components +
template[template_insert_pos:])
elif self.get_meta("tag") == "VADDRESSBOOK":
return "".join((item.serialize() for item in self.get_all()))
return ""
@classmethod
@contextmanager
def acquire_lock(cls, mode, user=None):
"""Set a context manager to lock the whole storage.
``mode`` must either be "r" for shared access or "w" for exclusive
access.
``user`` is the name of the logged in user or empty.
"""
raise NotImplementedError
@classmethod
def verify(cls):
"""Check the storage for errors."""
return True
ITEM_CACHE_VERSION = 1
class Collection(BaseCollection):
"""Collection stored in several files per calendar."""
@classmethod
def static_init(cls):
# init storage lock
folder = os.path.expanduser(cls.configuration.get(
"storage", "filesystem_folder"))
cls._makedirs_synced(folder)
lock_path = None
if cls.configuration.getboolean("storage", "filesystem_locking"):
lock_path = os.path.join(folder, ".Radicale.lock")
close_lock_file = cls.configuration.getboolean(
"storage", "filesystem_close_lock_file")
cls._lock = FileBackedRwLock(lock_path, close_lock_file)
# init cache lock
cls._cache_locks = {}
cls._cache_locks_lock = threading.Lock()
def __init__(self, path, principal=None, folder=None,
filesystem_path=None):
# DEPRECATED: Remove principal and folder attributes
if folder is None:
folder = self._get_collection_root_folder()
# Path should already be sanitized
self.path = sanitize_path(path).strip("/")
self._encoding = self.configuration.get("encoding", "stock")
# DEPRECATED: Use ``self._encoding`` instead
self.encoding = self._encoding
if filesystem_path is None:
filesystem_path = path_to_filesystem(folder, self.path)
self._filesystem_path = filesystem_path
self._props_path = os.path.join(
self._filesystem_path, ".Radicale.props")
self._meta_cache = None
self._etag_cache = None
self._item_cache_cleaned = False
@classmethod
def _get_collection_root_folder(cls):
filesystem_folder = os.path.expanduser(
cls.configuration.get("storage", "filesystem_folder"))
return os.path.join(filesystem_folder, "collection-root")
@contextmanager
def _atomic_write(self, path, mode="w", newline=None, sync_directory=True):
directory = os.path.dirname(path)
tmp = NamedTemporaryFile(
mode=mode, dir=directory, delete=False, prefix=".Radicale.tmp-",
newline=newline, encoding=None if "b" in mode else self._encoding)
try:
yield tmp
tmp.flush()
try:
self._fsync(tmp.fileno())
except OSError as e:
raise RuntimeError("Fsync'ing file %r failed: %s" %
(path, e)) from e
tmp.close()
os.replace(tmp.name, path)
except BaseException:
tmp.close()
os.remove(tmp.name)
raise
if sync_directory:
self._sync_directory(directory)
@staticmethod
def _find_available_file_name(exists_fn, suffix=""):
# Prevent infinite loop
for _ in range(1000):
file_name = random_uuid4() + suffix
if not exists_fn(file_name):
return file_name
# something is wrong with the PRNG
raise RuntimeError("No unique random sequence found")
@classmethod
def _fsync(cls, fd):
if cls.configuration.getboolean("storage", "filesystem_fsync"):
if os.name == "posix" and hasattr(fcntl, "F_FULLFSYNC"):
fcntl.fcntl(fd, fcntl.F_FULLFSYNC)
else:
os.fsync(fd)
@classmethod
def _sync_directory(cls, path):
"""Sync directory to disk.
This only works on POSIX and does nothing on other systems.
"""
if not cls.configuration.getboolean("storage", "filesystem_fsync"):
return
if os.name == "posix":
try:
fd = os.open(path, 0)
try:
cls._fsync(fd)
finally:
os.close(fd)
except OSError as e:
raise RuntimeError("Fsync'ing directory %r failed: %s" %
(path, e)) from e
@classmethod
def _makedirs_synced(cls, filesystem_path):
"""Recursively create a directory and its parents in a sync'ed way.
This method acts silently when the folder already exists.
"""
if os.path.isdir(filesystem_path):
return
parent_filesystem_path = os.path.dirname(filesystem_path)
# Prevent infinite loop
if filesystem_path != parent_filesystem_path:
# Create parent dirs recursively
cls._makedirs_synced(parent_filesystem_path)
# Possible race!
os.makedirs(filesystem_path, exist_ok=True)
cls._sync_directory(parent_filesystem_path)
@classmethod
def discover(cls, path, depth="0", child_context_manager=(
lambda path, href=None: contextlib.ExitStack())):
# Path should already be sanitized
sane_path = sanitize_path(path).strip("/")
attributes = sane_path.split("/") if sane_path else []
folder = cls._get_collection_root_folder()
# Create the root collection
cls._makedirs_synced(folder)
try:
filesystem_path = path_to_filesystem(folder, sane_path)
except ValueError as e:
# Path is unsafe
cls.logger.debug("Unsafe path %r requested from storage: %s",
sane_path, e, exc_info=True)
return
# Check if the path exists and if it leads to a collection or an item
if not os.path.isdir(filesystem_path):
if attributes and os.path.isfile(filesystem_path):
href = attributes.pop()
else:
return
else:
href = None
sane_path = "/".join(attributes)
collection = cls(sane_path)
if href:
yield collection.get(href)
return
yield collection
if depth == "0":
return
for href in collection.list():
with child_context_manager(sane_path, href):
yield collection.get(href)
for href in scandir(filesystem_path, only_dirs=True):
if not is_safe_filesystem_path_component(href):
if not href.startswith(".Radicale"):
cls.logger.debug("Skipping collection %r in %r", href,
sane_path)
continue
child_path = posixpath.join(sane_path, href)
with child_context_manager(child_path):
yield cls(child_path)
@classmethod
def verify(cls):
item_errors = collection_errors = 0
@contextlib.contextmanager
def exception_cm(path, href=None):
nonlocal item_errors, collection_errors
try:
yield
except Exception as e:
if href:
item_errors += 1
name = "item %r in %r" % (href, path.strip("/"))
else:
collection_errors += 1
name = "collection %r" % path.strip("/")
cls.logger.error("Invalid %s: %s", name, e, exc_info=True)
remaining_paths = [""]
while remaining_paths:
path = remaining_paths.pop(0)
cls.logger.debug("Verifying collection %r", path)
with exception_cm(path):
saved_item_errors = item_errors
collection = None
for item in cls.discover(path, "1", exception_cm):
if not collection:
collection = item
collection.get_meta()
continue
if isinstance(item, BaseCollection):
remaining_paths.append(item.path)
else:
cls.logger.debug("Verified item %r in %r",
item.href, path)
if item_errors == saved_item_errors:
collection.sync()
return item_errors == 0 and collection_errors == 0
@classmethod
def create_collection(cls, href, collection=None, props=None):
folder = cls._get_collection_root_folder()
# Path should already be sanitized
sane_path = sanitize_path(href).strip("/")
filesystem_path = path_to_filesystem(folder, sane_path)
if not props:
cls._makedirs_synced(filesystem_path)
return cls(sane_path)
parent_dir = os.path.dirname(filesystem_path)
cls._makedirs_synced(parent_dir)
# Create a temporary directory with an unsafe name
with TemporaryDirectory(
prefix=".Radicale.tmp-", dir=parent_dir) as tmp_dir:
# The temporary directory itself can't be renamed
tmp_filesystem_path = os.path.join(tmp_dir, "collection")
os.makedirs(tmp_filesystem_path)
self = cls(sane_path, filesystem_path=tmp_filesystem_path)
self.set_meta_all(props)
if collection:
if props.get("tag") == "VCALENDAR":
collection, = collection
items = []
for content in ("vevent", "vtodo", "vjournal"):
items.extend(
getattr(collection, "%s_list" % content, []))
items_by_uid = groupby(sorted(items, key=get_uid), get_uid)
vobject_items = {}
for uid, items in items_by_uid:
new_collection = vobject.iCalendar()
for item in items:
new_collection.add(item)
# href must comply to is_safe_filesystem_path_component
# and no file name collisions must exist between hrefs
href = self._find_available_file_name(
vobject_items.get, suffix=".ics")
vobject_items[href] = new_collection
self._upload_all_nonatomic(vobject_items)
elif props.get("tag") == "VADDRESSBOOK":
vobject_items = {}
for card in collection:
# href must comply to is_safe_filesystem_path_component
# and no file name collisions must exist between hrefs
href = self._find_available_file_name(
vobject_items.get, suffix=".vcf")
vobject_items[href] = card
self._upload_all_nonatomic(vobject_items)
# This operation is not atomic on the filesystem level but it's
# very unlikely that one rename operations succeeds while the
# other fails or that only one gets written to disk.
if os.path.exists(filesystem_path):
os.rename(filesystem_path, os.path.join(tmp_dir, "delete"))
os.rename(tmp_filesystem_path, filesystem_path)
cls._sync_directory(parent_dir)
return cls(sane_path)
def upload_all_nonatomic(self, vobject_items):
"""DEPRECATED: Use ``_upload_all_nonatomic``"""
return self._upload_all_nonatomic(vobject_items)
def _upload_all_nonatomic(self, vobject_items):
"""Upload a new set of items.
This takes a mapping of href and vobject items and
        uploads them non-atomically and without existence checks.
"""
cache_folder = os.path.join(self._filesystem_path,
".Radicale.cache", "item")
self._makedirs_synced(cache_folder)
for href, vobject_item in vobject_items.items():
if not is_safe_filesystem_path_component(href):
raise UnsafePathError(href)
try:
cache_content = self._item_cache_content(vobject_item)
_, _, _, text, _, _, _, _ = cache_content
except Exception as e:
raise ValueError(
"Failed to store item %r in temporary collection %r: %s" %
(href, self.path, e)) from e
with self._atomic_write(os.path.join(cache_folder, href), "wb",
sync_directory=False) as f:
pickle.dump(cache_content, f)
path = path_to_filesystem(self._filesystem_path, href)
with self._atomic_write(
path, newline="", sync_directory=False) as f:
f.write(text)
self._sync_directory(cache_folder)
self._sync_directory(self._filesystem_path)
@classmethod
def move(cls, item, to_collection, to_href):
if not is_safe_filesystem_path_component(to_href):
raise UnsafePathError(to_href)
os.replace(
path_to_filesystem(item.collection._filesystem_path, item.href),
path_to_filesystem(to_collection._filesystem_path, to_href))
cls._sync_directory(to_collection._filesystem_path)
if item.collection._filesystem_path != to_collection._filesystem_path:
cls._sync_directory(item.collection._filesystem_path)
# Move the item cache entry
cache_folder = os.path.join(item.collection._filesystem_path,
".Radicale.cache", "item")
to_cache_folder = os.path.join(to_collection._filesystem_path,
".Radicale.cache", "item")
cls._makedirs_synced(to_cache_folder)
try:
os.replace(os.path.join(cache_folder, item.href),
os.path.join(to_cache_folder, to_href))
except FileNotFoundError:
pass
        else:
            cls._sync_directory(to_cache_folder)
            if cache_folder != to_cache_folder:
                cls._sync_directory(cache_folder)
# Track the change
to_collection._update_history_etag(to_href, item)
item.collection._update_history_etag(item.href, None)
to_collection._clean_history_cache()
if item.collection._filesystem_path != to_collection._filesystem_path:
item.collection._clean_history_cache()
@classmethod
def _clean_cache(cls, folder, names, max_age=None):
"""Delete all ``names`` in ``folder`` that are older than ``max_age``.
"""
age_limit = time.time() - max_age if max_age is not None else None
modified = False
for name in names:
if not is_safe_filesystem_path_component(name):
continue
if age_limit is not None:
try:
# Race: Another process might have deleted the file.
mtime = os.path.getmtime(os.path.join(folder, name))
except FileNotFoundError:
continue
if mtime > age_limit:
continue
cls.logger.debug("Found expired item in cache: %r", name)
# Race: Another process might have deleted or locked the
# file.
try:
os.remove(os.path.join(folder, name))
except (FileNotFoundError, PermissionError):
continue
modified = True
if modified:
cls._sync_directory(folder)
def _update_history_etag(self, href, item):
"""Updates and retrieves the history etag from the history cache.
The history cache contains a file for each current and deleted item
of the collection. These files contain the etag of the item (empty
string for deleted items) and a history etag, which is a hash over
the previous history etag and the etag separated by "/".
"""
history_folder = os.path.join(self._filesystem_path,
".Radicale.cache", "history")
try:
with open(os.path.join(history_folder, href), "rb") as f:
cache_etag, history_etag = pickle.load(f)
except (FileNotFoundError, pickle.UnpicklingError, ValueError) as e:
if isinstance(e, (pickle.UnpicklingError, ValueError)):
self.logger.warning(
"Failed to load history cache entry %r in %r: %s",
href, self.path, e, exc_info=True)
cache_etag = ""
# Initialize with random data to prevent collisions with cleaned
# expired items.
history_etag = binascii.hexlify(os.urandom(16)).decode("ascii")
etag = item.etag if item else ""
if etag != cache_etag:
self._makedirs_synced(history_folder)
history_etag = get_etag(history_etag + "/" + etag).strip("\"")
try:
# Race: Other processes might have created and locked the file.
with self._atomic_write(os.path.join(history_folder, href),
"wb") as f:
pickle.dump([etag, history_etag], f)
except PermissionError:
pass
return history_etag
def _get_deleted_history_hrefs(self):
"""Returns the hrefs of all deleted items that are still in the
history cache."""
history_folder = os.path.join(self._filesystem_path,
".Radicale.cache", "history")
try:
for href in scandir(history_folder):
if not is_safe_filesystem_path_component(href):
continue
if os.path.isfile(os.path.join(self._filesystem_path, href)):
continue
yield href
except FileNotFoundError:
pass
def _clean_history_cache(self):
# Delete all expired cache entries of deleted items.
history_folder = os.path.join(self._filesystem_path,
".Radicale.cache", "history")
self._clean_cache(history_folder, self._get_deleted_history_hrefs(),
max_age=self.configuration.getint(
"storage", "max_sync_token_age"))
def sync(self, old_token=None):
# The sync token has the form http://radicale.org/ns/sync/TOKEN_NAME
# where TOKEN_NAME is the md5 hash of all history etags of present and
# past items of the collection.
def check_token_name(token_name):
if len(token_name) != 32:
return False
for c in token_name:
if c not in "0123456789abcdef":
return False
return True
old_token_name = None
if old_token:
# Extract the token name from the sync token
if not old_token.startswith("http://radicale.org/ns/sync/"):
raise ValueError("Malformed token: %r" % old_token)
old_token_name = old_token[len("http://radicale.org/ns/sync/"):]
if not check_token_name(old_token_name):
raise ValueError("Malformed token: %r" % old_token)
# Get the current state and sync-token of the collection.
state = {}
token_name_hash = md5()
# Find the history of all existing and deleted items
for href, item in chain(
((item.href, item) for item in self.get_all()),
((href, None) for href in self._get_deleted_history_hrefs())):
history_etag = self._update_history_etag(href, item)
state[href] = history_etag
token_name_hash.update((href + "/" + history_etag).encode("utf-8"))
token_name = token_name_hash.hexdigest()
token = "http://radicale.org/ns/sync/%s" % token_name
if token_name == old_token_name:
# Nothing changed
return token, ()
token_folder = os.path.join(self._filesystem_path,
".Radicale.cache", "sync-token")
token_path = os.path.join(token_folder, token_name)
old_state = {}
if old_token_name:
# load the old token state
old_token_path = os.path.join(token_folder, old_token_name)
try:
# Race: Another process might have deleted the file.
with open(old_token_path, "rb") as f:
old_state = pickle.load(f)
except (FileNotFoundError, pickle.UnpicklingError,
ValueError) as e:
if isinstance(e, (pickle.UnpicklingError, ValueError)):
self.logger.warning(
"Failed to load stored sync token %r in %r: %s",
old_token_name, self.path, e, exc_info=True)
# Delete the damaged file
try:
os.remove(old_token_path)
except (FileNotFoundError, PermissionError):
pass
raise ValueError("Token not found: %r" % old_token)
# write the new token state or update the modification time of
# existing token state
if not os.path.exists(token_path):
self._makedirs_synced(token_folder)
try:
# Race: Other processes might have created and locked the file.
with self._atomic_write(token_path, "wb") as f:
pickle.dump(state, f)
except PermissionError:
pass
else:
# clean up old sync tokens and item cache
self._clean_cache(token_folder, os.listdir(token_folder),
max_age=self.configuration.getint(
"storage", "max_sync_token_age"))
self._clean_history_cache()
else:
# Try to update the modification time
try:
# Race: Another process might have deleted the file.
os.utime(token_path)
except FileNotFoundError:
pass
changes = []
# Find all new, changed and deleted (that are still in the item cache)
# items
for href, history_etag in state.items():
if history_etag != old_state.get(href):
changes.append(href)
# Find all deleted items that are no longer in the item cache
for href, history_etag in old_state.items():
if href not in state:
changes.append(href)
return token, changes
def list(self):
for href in scandir(self._filesystem_path, only_files=True):
if not is_safe_filesystem_path_component(href):
if not href.startswith(".Radicale"):
self.logger.debug(
"Skipping item %r in %r", href, self.path)
continue
yield href
def get(self, href, verify_href=True):
item, metadata = self._get_with_metadata(href, verify_href=verify_href)
return item
def _item_cache_hash(self, raw_text):
_hash = md5()
_hash.update(left_encode_int(ITEM_CACHE_VERSION))
_hash.update(raw_text)
return _hash.hexdigest()
def _item_cache_content(self, vobject_item, cache_hash=None):
text = vobject_item.serialize()
if cache_hash is None:
cache_hash = self._item_cache_hash(text.encode(self._encoding))
etag = get_etag(text)
uid = get_uid_from_object(vobject_item)
name = vobject_item.name
tag, start, end = xmlutils.find_tag_and_time_range(vobject_item)
return cache_hash, uid, etag, text, name, tag, start, end
def _store_item_cache(self, href, vobject_item, cache_hash=None):
cache_folder = os.path.join(self._filesystem_path, ".Radicale.cache",
"item")
content = self._item_cache_content(vobject_item, cache_hash)
self._makedirs_synced(cache_folder)
try:
# Race: Other processes might have created and locked the
# file.
with self._atomic_write(os.path.join(cache_folder, href),
"wb") as f:
pickle.dump(content, f)
except PermissionError:
pass
return content
@contextmanager
def _acquire_cache_lock(self, ns=""):
if "/" in ns:
raise ValueError("ns must not include '/'")
with contextlib.ExitStack() as lock_stack:
with contextlib.ExitStack() as locks_lock_stack:
locks_lock_stack.enter_context(self._cache_locks_lock)
lock_id = ns + "/" + self.path
lock = self._cache_locks.get(lock_id)
if not lock:
cache_folder = os.path.join(self._filesystem_path,
".Radicale.cache")
self._makedirs_synced(cache_folder)
lock_path = None
if self.configuration.getboolean(
"storage", "filesystem_locking"):
lock_path = os.path.join(
cache_folder,
".Radicale.lock" + (".%s" % ns if ns else ""))
lock = FileBackedRwLock(lock_path)
self._cache_locks[lock_id] = lock
lock_stack.enter_context(lock.acquire_lock(
"w", lambda: locks_lock_stack.pop_all().close()))
try:
yield
finally:
with self._cache_locks_lock:
lock_stack.pop_all().close()
if not lock.in_use():
del self._cache_locks[lock_id]
def _load_item_cache(self, href, input_hash):
cache_folder = os.path.join(self._filesystem_path, ".Radicale.cache",
"item")
cache_hash = uid = etag = text = name = tag = start = end = None
try:
with open(os.path.join(cache_folder, href), "rb") as f:
cache_hash, *content = pickle.load(f)
if cache_hash == input_hash:
uid, etag, text, name, tag, start, end = content
except FileNotFoundError:
pass
except (pickle.UnpicklingError, ValueError) as e:
self.logger.warning(
"Failed to load item cache entry %r in %r: %s",
href, self.path, e, exc_info=True)
return cache_hash, uid, etag, text, name, tag, start, end
def _clean_item_cache(self):
cache_folder = os.path.join(self._filesystem_path, ".Radicale.cache",
"item")
self._clean_cache(cache_folder, (
href for href in scandir(cache_folder) if not
os.path.isfile(os.path.join(self._filesystem_path, href))))
def _get_with_metadata(self, href, verify_href=True):
"""Like ``get`` but additonally returns the following metadata:
tag, start, end: see ``xmlutils.find_tag_and_time_range``. If
extraction of the metadata failed, the values are all ``None``."""
if verify_href:
try:
if not is_safe_filesystem_path_component(href):
raise UnsafePathError(href)
path = path_to_filesystem(self._filesystem_path, href)
except ValueError as e:
self.logger.debug(
"Can't translate name %r safely to filesystem in %r: %s",
href, self.path, e, exc_info=True)
return None, None
else:
path = os.path.join(self._filesystem_path, href)
try:
with open(path, "rb") as f:
raw_text = f.read()
except (FileNotFoundError, IsADirectoryError):
return None, None
except PermissionError:
# Windows raises ``PermissionError`` when ``path`` is a directory
if (os.name == "nt" and
os.path.isdir(path) and os.access(path, os.R_OK)):
return None, None
raise
        # The hash of the component in the file system. This is used to check
        # if the entry in the cache is still valid.
input_hash = self._item_cache_hash(raw_text)
cache_hash, uid, etag, text, name, tag, start, end = \
self._load_item_cache(href, input_hash)
vobject_item = None
if input_hash != cache_hash:
with contextlib.ExitStack() as lock_stack:
                # Lock the item cache to prevent multiple processes from
# generating the same data in parallel.
# This improves the performance for multiple requests.
if self._lock.locked() == "r":
lock_stack.enter_context(self._acquire_cache_lock("item"))
# Check if another process created the file in the meantime
cache_hash, uid, etag, text, name, tag, start, end = \
self._load_item_cache(href, input_hash)
if input_hash != cache_hash:
try:
vobject_items = tuple(vobject.readComponents(
raw_text.decode(self._encoding)))
if len(vobject_items) != 1:
raise RuntimeError("Content contains %d components"
% len(vobject_items))
vobject_item = vobject_items[0]
check_and_sanitize_item(vobject_item, uid=uid,
tag=self.get_meta("tag"))
cache_hash, uid, etag, text, name, tag, start, end = \
self._store_item_cache(
href, vobject_item, input_hash)
except Exception as e:
raise RuntimeError("Failed to load item %r in %r: %s" %
(href, self.path, e)) from e
# Clean cache entries once after the data in the file
# system was edited externally.
if not self._item_cache_cleaned:
self._item_cache_cleaned = True
self._clean_item_cache()
last_modified = time.strftime(
"%a, %d %b %Y %H:%M:%S GMT",
time.gmtime(os.path.getmtime(path)))
return Item(
self, href=href, last_modified=last_modified, etag=etag,
text=text, item=vobject_item, uid=uid, name=name,
component_name=tag), (tag, start, end)
def get_multi2(self, hrefs):
        # It's faster to check for file name collisions here, because
# we only need to call os.listdir once.
files = None
for href in hrefs:
if files is None:
                # List the directory only after hrefs has yielded an item;
                # the iterator may be empty and then the loop never runs.
files = os.listdir(self._filesystem_path)
path = os.path.join(self._filesystem_path, href)
if (not is_safe_filesystem_path_component(href) or
href not in files and os.path.lexists(path)):
self.logger.debug(
"Can't translate name safely to filesystem: %r", href)
yield (href, None)
else:
yield (href, self.get(href, verify_href=False))
def get_all(self):
        # We don't need to check for collisions, because the file names
        # are from os.listdir.
return (self.get(href, verify_href=False) for href in self.list())
def get_all_filtered(self, filters):
tag, start, end, simple = xmlutils.simplify_prefilters(
filters, collection_tag=self.get_meta("tag"))
if not tag:
# no filter
yield from ((item, simple) for item in self.get_all())
return
for item, (itag, istart, iend) in (
self._get_with_metadata(href, verify_href=False)
for href in self.list()):
if tag == itag and istart < end and iend > start:
yield item, simple and (start <= istart or iend <= end)
def upload(self, href, vobject_item):
if not is_safe_filesystem_path_component(href):
raise UnsafePathError(href)
try:
cache_hash, uid, etag, text, name, tag, _, _ = \
self._store_item_cache(href, vobject_item)
except Exception as e:
raise ValueError("Failed to store item %r in collection %r: %s" %
(href, self.path, e)) from e
path = path_to_filesystem(self._filesystem_path, href)
with self._atomic_write(path, newline="") as fd:
fd.write(text)
# Clean the cache after the actual item is stored, or the cache entry
# will be removed again.
self._clean_item_cache()
item = Item(self, href=href, etag=etag, text=text, item=vobject_item,
uid=uid, name=name, component_name=tag)
# Track the change
self._update_history_etag(href, item)
self._clean_history_cache()
return item
def delete(self, href=None):
if href is None:
# Delete the collection
parent_dir = os.path.dirname(self._filesystem_path)
try:
os.rmdir(self._filesystem_path)
except OSError:
with TemporaryDirectory(
prefix=".Radicale.tmp-", dir=parent_dir) as tmp:
os.rename(self._filesystem_path, os.path.join(
tmp, os.path.basename(self._filesystem_path)))
self._sync_directory(parent_dir)
else:
self._sync_directory(parent_dir)
else:
# Delete an item
if not is_safe_filesystem_path_component(href):
raise UnsafePathError(href)
path = path_to_filesystem(self._filesystem_path, href)
if not os.path.isfile(path):
raise ComponentNotFoundError(href)
os.remove(path)
self._sync_directory(os.path.dirname(path))
# Track the change
self._update_history_etag(href, None)
self._clean_history_cache()
def get_meta(self, key=None):
# reuse cached value if the storage is read-only
if self._lock.locked() == "w" or self._meta_cache is None:
try:
try:
with open(self._props_path, encoding=self._encoding) as f:
self._meta_cache = json.load(f)
except FileNotFoundError:
self._meta_cache = {}
check_and_sanitize_props(self._meta_cache)
except ValueError as e:
raise RuntimeError("Failed to load properties of collection "
"%r: %s" % (self.path, e)) from e
return self._meta_cache.get(key) if key else self._meta_cache
def set_meta_all(self, props):
with self._atomic_write(self._props_path, "w") as f:
json.dump(props, f, sort_keys=True)
@property
def last_modified(self):
relevant_files = chain(
(self._filesystem_path,),
(self._props_path,) if os.path.exists(self._props_path) else (),
(os.path.join(self._filesystem_path, h) for h in self.list()))
last = max(map(os.path.getmtime, relevant_files))
return time.strftime("%a, %d %b %Y %H:%M:%S GMT", time.gmtime(last))
@property
def etag(self):
# reuse cached value if the storage is read-only
if self._lock.locked() == "w" or self._etag_cache is None:
self._etag_cache = super().etag
return self._etag_cache
@classmethod
@contextmanager
def acquire_lock(cls, mode, user=None):
with cls._lock.acquire_lock(mode):
yield
# execute hook
hook = cls.configuration.get("storage", "hook")
if mode == "w" and hook:
folder = os.path.expanduser(cls.configuration.get(
"storage", "filesystem_folder"))
cls.logger.debug("Running hook")
debug = cls.logger.isEnabledFor(logging.DEBUG)
p = subprocess.Popen(
hook % {"user": shlex.quote(user or "Anonymous")},
stdin=subprocess.DEVNULL,
stdout=subprocess.PIPE if debug else subprocess.DEVNULL,
stderr=subprocess.PIPE if debug else subprocess.DEVNULL,
shell=True, universal_newlines=True, cwd=folder)
stdout_data, stderr_data = p.communicate()
if stdout_data:
cls.logger.debug("Captured stdout hook:\n%s", stdout_data)
if stderr_data:
cls.logger.debug("Captured stderr hook:\n%s", stderr_data)
if p.returncode != 0:
raise subprocess.CalledProcessError(p.returncode, p.args)
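# Illustration of how the hook command is expanded before being run with the
# storage folder as working directory (hypothetical command and user name,
# not a shipped default):
#
#     hook = ('git add -A && (git diff --cached --quiet || '
#             'git commit -m "Changes by "%(user)s)')
#     hook % {"user": shlex.quote("alice")}
#     # -> 'git add -A && (git diff --cached --quiet || '
#     #    'git commit -m "Changes by "alice)'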
class FileBackedRwLock:
"""A readers-Writer lock that can additionally lock a file.
All requests are processed in FIFO order.
"""
def __init__(self, path=None, close_lock_file=True):
"""Initilize a lock.
``path`` the file that is used for locking (optional)
``close_lock_file`` close the lock file, when unlocked and no requests
are pending
"""
self._path = path
self._close_lock_file = close_lock_file
self._lock = threading.Lock()
self._waiters = []
self._lock_file = None
self._lock_file_locked = False
self._readers = 0
self._writer = False
def locked(self):
if self._writer:
return "w"
if self._readers:
return "r"
return ""
def in_use(self):
with self._lock:
return self._waiters or self._readers or self._writer
@contextmanager
def acquire_lock(self, mode, sync_callback=None):
def condition():
if mode == "r":
return not self._writer
else:
return not self._writer and self._readers == 0
# Use a primitive lock which only works within one process as a
# precondition for inter-process file-based locking
with self._lock:
if sync_callback:
sync_callback()
if self._waiters or not condition():
# Use FIFO for access requests
waiter = threading.Condition(lock=self._lock)
self._waiters.append(waiter)
while True:
waiter.wait()
if condition():
break
self._waiters.pop(0)
if mode == "r":
self._readers += 1
# Notify additional potential readers
if self._waiters:
self._waiters[0].notify()
else:
self._writer = True
if self._path and not self._lock_file_locked:
if not self._lock_file:
self._lock_file = open(self._path, "w+")
if os.name == "nt":
handle = msvcrt.get_osfhandle(self._lock_file.fileno())
flags = LOCKFILE_EXCLUSIVE_LOCK if mode == "w" else 0
overlapped = Overlapped()
if not lock_file_ex(handle, flags, 0, 1, 0, overlapped):
raise RuntimeError("Locking the storage failed "
"(can be disabled in the config): "
"%s" % ctypes.FormatError())
elif os.name == "posix":
_cmd = fcntl.LOCK_EX if mode == "w" else fcntl.LOCK_SH
try:
fcntl.flock(self._lock_file.fileno(), _cmd)
except OSError as e:
raise RuntimeError("Locking the storage failed "
"(can be disabled in the config): "
"%s" % e) from e
else:
raise RuntimeError("Locking the storage failed "
"(can be disabled in the config): "
"Unsupported operating system")
self._lock_file_locked = True
try:
yield
finally:
with self._lock:
if mode == "r":
self._readers -= 1
else:
self._writer = False
if self._lock_file_locked and self._readers == 0:
if os.name == "nt":
handle = msvcrt.get_osfhandle(self._lock_file.fileno())
overlapped = Overlapped()
if not unlock_file_ex(handle, 0, 1, 0, overlapped):
raise RuntimeError("Unlocking the storage failed: "
"%s" % ctypes.FormatError())
elif os.name == "posix":
try:
fcntl.flock(self._lock_file.fileno(),
fcntl.LOCK_UN)
except OSError as e:
raise RuntimeError("Unlocking the storage failed: "
"%s" % e) from e
else:
raise RuntimeError("Unlocking the storage failed: "
"Unsupported operating system")
if self._close_lock_file and not self._waiters:
self._lock_file.close()
self._lock_file = None
self._lock_file_locked = False
if self._waiters:
self._waiters[0].notify()
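# Usage sketch (illustrative, not part of the original module): callers wrap
# storage access in the context manager returned by acquire_lock(), e.g.
#     _lock = FileBackedRwLock("/path/to/.Radicale.lock")  # path is an example
#     with _lock.acquire_lock("r"):
#         pass  # any number of readers may hold the lock concurrently
#     with _lock.acquire_lock("w"):
#         pass  # a single writer gets exclusive access
# "r" requests are only delayed by an active writer; "w" requests wait until
# all readers and any previous writer have released the lock.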
Radicale-2.1.11/radicale/tests/ 0000775 0000000 0000000 00000000000 13370020422 0016177 5 ustar 00root root 0000000 0000000 Radicale-2.1.11/radicale/tests/__init__.py 0000664 0000000 0000000 00000004266 13370020422 0020320 0 ustar 00root root 0000000 0000000 # This file is part of Radicale Server - Calendar Server
# Copyright © 2012-2017 Guillaume Ayoub
#
# This library is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Radicale. If not, see <http://www.gnu.org/licenses/>.
"""
Tests for Radicale.
"""
import logging
import os
import sys
from io import BytesIO
sys.path.insert(0, os.path.dirname(os.path.dirname(__file__)))
logger = logging.getLogger("radicale_test")
if not logger.hasHandlers():
handler = logging.StreamHandler(sys.stderr)
handler.setFormatter(logging.Formatter("%(levelname)s: %(message)s"))
logger.addHandler(handler)
logger.setLevel(logging.DEBUG)
class BaseTest:
"""Base class for tests."""
logger = logger
def request(self, method, path, data=None, **args):
"""Send a request."""
self.application._status = None
self.application._headers = None
self.application._answer = None
for key in list(args):  # iterate over a copy: the dict is modified below
args[key.upper()] = args[key]
args["REQUEST_METHOD"] = method.upper()
args["PATH_INFO"] = path
if data:
data = data.encode("utf-8")
args["wsgi.input"] = BytesIO(data)
args["CONTENT_LENGTH"] = str(len(data))
self.application._answer = self.application(args, self.start_response)
return (
int(self.application._status.split()[0]),
dict(self.application._headers),
self.application._answer[0].decode("utf-8")
if self.application._answer else None)
def start_response(self, status, headers):
"""Put the response values into the current application."""
self.application._status = status
self.application._headers = headers
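# Usage sketch (hypothetical, not part of the original module): concrete test
# classes set self.application to a Radicale WSGI application and then drive
# it through request(), e.g.
#     status, headers, answer = self.request("PUT", "/calendar.ics/", ical_data)
#     assert status == 201
# The request path and expected status code are illustrative only.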
Radicale-2.1.11/radicale/tests/custom/ 0000775 0000000 0000000 00000000000 13370020422 0017511 5 ustar 00root root 0000000 0000000 Radicale-2.1.11/radicale/tests/custom/__init__.py 0000664 0000000 0000000 00000000000 13370020422 0021610 0 ustar 00root root 0000000 0000000 Radicale-2.1.11/radicale/tests/custom/auth.py 0000664 0000000 0000000 00000001746 13370020422 0021034 0 ustar 00root root 0000000 0000000 # This file is part of Radicale Server - Calendar Server
# Copyright © 2008 Nicolas Kandel
# Copyright © 2008 Pascal Halter
# Copyright © 2008-2017 Guillaume Ayoub
#
# This library is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Radicale. If not, see <http://www.gnu.org/licenses/>.
"""
Custom authentication.
Just checks the username; for testing only.
"""
from radicale import auth
class Auth(auth.BaseAuth):
def is_authenticated(self, user, password):
return user == "tmp"
Radicale-2.1.11/radicale/tests/custom/rights.py 0000664 0000000 0000000 00000001651 13370020422 0021366 0 ustar 00root root 0000000 0000000 # This file is part of Radicale Server - Calendar Server
# Copyright (C) 2017 Unrud
#
# This library is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Radicale. If not, see <http://www.gnu.org/licenses/>.
"""
Custom rights management.
"""
from radicale import rights
class Rights(rights.BaseRights):
def authorized(self, user, path, permission):
return path.strip("/") in ("tmp", "other")
Radicale-2.1.11/radicale/tests/custom/storage.py 0000664 0000000 0000000 00000002053 13370020422 0021527 0 ustar 00root root 0000000 0000000 # This file is part of Radicale Server - Calendar Server
# Copyright © 2012-2017 Guillaume Ayoub
#
# This library is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Radicale. If not, see <http://www.gnu.org/licenses/>.
"""
Custom storage backend.
Copy of filesystem storage backend for testing
"""
from radicale import storage
# TODO: add more functionality to this collection (and test it)
class Collection(storage.Collection):
"""Collection stored in a folder."""
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
Radicale-2.1.11/radicale/tests/helpers.py 0000664 0000000 0000000 00000002272 13370020422 0020216 0 ustar 00root root 0000000 0000000 # This file is part of Radicale Server - Calendar Server
# Copyright © 2008 Nicolas Kandel
# Copyright © 2008 Pascal Halter
# Copyright © 2008-2017 Guillaume Ayoub
#
# This library is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Radicale. If not, see <http://www.gnu.org/licenses/>.
"""
Radicale Helpers module.
This module offers helpers to use in tests.
"""
import os
EXAMPLES_FOLDER = os.path.join(os.path.dirname(__file__), "static")
def get_file_content(file_name):
try:
with open(os.path.join(EXAMPLES_FOLDER, file_name),
encoding="utf-8") as fd:
return fd.read()
except IOError:
print("Couldn't open the file %s" % file_name)
Radicale-2.1.11/radicale/tests/static/ 0000775 0000000 0000000 00000000000 13370020422 0017466 5 ustar 00root root 0000000 0000000 Radicale-2.1.11/radicale/tests/static/allprop.xml 0000664 0000000 0000000 00000000141 13370020422 0021655 0 ustar 00root root 0000000 0000000
Radicale-2.1.11/radicale/tests/static/broken-vcard.vcf 0000664 0000000 0000000 00000000374 13370020422 0022547 0 ustar 00root root 0000000 0000000 BEGIN:VCARD
VERSION:3.0
PRODID:-//Inverse inc.//SOGo Connector 1.0//EN
UID:C68582D2-2E60-0001-C2C0-000000000000.vcf
X-MOZILLA-HTML:FALSE
EMAIL;TYPE=work:test-misses-N-or-FN@example.com
X-RADICALE-NAME:C68582D2-2E60-0001-C2C0-000000000000.vcf
END:VCARD
Radicale-2.1.11/radicale/tests/static/broken-vevent.ics 0000664 0000000 0000000 00000000747 13370020422 0022763 0 ustar 00root root 0000000 0000000 BEGIN:VCALENDAR
PRODID:-//Radicale//NONSGML Radicale Server//EN
VERSION:2.0
BEGIN:VEVENT
CREATED:20160725T060147Z
LAST-MODIFIED:20160727T193435Z
DTSTAMP:20160727T193435Z
UID:040000008200E00074C5B7101A82E00800000000
SUMMARY:Broken ICS END of VEVENT missing by accident
STATUS:CONFIRMED
X-MOZ-LASTACK:20160727T193435Z
DTSTART;TZID=Europe/Budapest:20160727T170000
DTEND;TZID=Europe/Budapest:20160727T223000
CLASS:PUBLIC
X-LIC-ERROR:No value for LOCATION property. Removing entire property:
Radicale-2.1.11/radicale/tests/static/contact1.vcf 0000664 0000000 0000000 00000000126 13370020422 0021701 0 ustar 00root root 0000000 0000000 BEGIN:VCARD
VERSION:3.0
UID:contact1
N:Contact;;;;
FN:Contact
NICKNAME:test
END:VCARD
Radicale-2.1.11/radicale/tests/static/contact_multiple.vcf 0000664 0000000 0000000 00000000224 13370020422 0023532 0 ustar 00root root 0000000 0000000 BEGIN:VCARD
VERSION:3.0
UID:contact1
N:Contact1;;;;
FN:Contact1
END:VCARD
BEGIN:VCARD
VERSION:3.0
UID:contact2
N:Contact2;;;;
FN:Contact2
END:VCARD
Radicale-2.1.11/radicale/tests/static/event1-prime.ics 0000664 0000000 0000000 00000001622 13370020422 0022503 0 ustar 00root root 0000000 0000000 BEGIN:VCALENDAR
PRODID:-//Mozilla.org/NONSGML Mozilla Calendar V1.1//EN
VERSION:2.0
BEGIN:VTIMEZONE
TZID:Europe/Paris
X-LIC-LOCATION:Europe/Paris
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:19700329T020000
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=3
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:19701025T030000
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=10
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
CREATED:20130902T150157Z
LAST-MODIFIED:20130902T150158Z
DTSTAMP:20130902T150158Z
UID:event1
SUMMARY:Event
ORGANIZER:mailto:unclesam@example.com
ATTENDEE;ROLE=REQ-PARTICIPANT;PARTSTAT=TENTATIVE;CN=Jane Doe:MAILTO:janedoe@example.com
ATTENDEE;ROLE=REQ-PARTICIPANT;DELEGATED-FROM="MAILTO:bob@host.com";PARTSTAT=ACCEPTED;CN=John Doe:MAILTO:johndoe@example.com
DTSTART;TZID=Europe/Paris:20140901T180000
DTEND;TZID=Europe/Paris:20140901T210000
END:VEVENT
END:VCALENDAR
Radicale-2.1.11/radicale/tests/static/event1.ics 0000664 0000000 0000000 00000001622 13370020422 0021371 0 ustar 00root root 0000000 0000000 BEGIN:VCALENDAR
PRODID:-//Mozilla.org/NONSGML Mozilla Calendar V1.1//EN
VERSION:2.0
BEGIN:VTIMEZONE
TZID:Europe/Paris
X-LIC-LOCATION:Europe/Paris
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:19700329T020000
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=3
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:19701025T030000
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=10
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
CREATED:20130902T150157Z
LAST-MODIFIED:20130902T150158Z
DTSTAMP:20130902T150158Z
UID:event1
SUMMARY:Event
ORGANIZER:mailto:unclesam@example.com
ATTENDEE;ROLE=REQ-PARTICIPANT;PARTSTAT=TENTATIVE;CN=Jane Doe:MAILTO:janedoe@example.com
ATTENDEE;ROLE=REQ-PARTICIPANT;DELEGATED-FROM="MAILTO:bob@host.com";PARTSTAT=ACCEPTED;CN=John Doe:MAILTO:johndoe@example.com
DTSTART;TZID=Europe/Paris:20130901T180000
DTEND;TZID=Europe/Paris:20130901T190000
END:VEVENT
END:VCALENDAR
Radicale-2.1.11/radicale/tests/static/event2.ics 0000664 0000000 0000000 00000001616 13370020422 0021375 0 ustar 00root root 0000000 0000000 BEGIN:VCALENDAR
PRODID:-//Mozilla.org/NONSGML Mozilla Calendar V1.1//EN
VERSION:2.0
BEGIN:VTIMEZONE
TZID:Europe/Paris
X-LIC-LOCATION:Europe/Paris
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:19700329T020000
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=3
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:19701025T030000
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=10
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
CREATED:20130902T150157Z
LAST-MODIFIED:20130902T150158Z
DTSTAMP:20130902T150158Z
UID:event2
SUMMARY:Event2
DTSTART;TZID=Europe/Paris:20130902T180000
DTEND;TZID=Europe/Paris:20130902T190000
RRULE:FREQ=WEEKLY
SEQUENCE:1
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Paris:20130910T170000
DTEND;TZID=Europe/Paris:20130910T180000
DTSTAMP:20140902T150158Z
SUMMARY:Event2
UID:event2
RECURRENCE-ID;TZID=Europe/Paris:20130909T180000
SEQUENCE:2
END:VEVENT
END:VCALENDAR
Radicale-2.1.11/radicale/tests/static/event3.ics 0000664 0000000 0000000 00000001170 13370020422 0021371 0 ustar 00root root 0000000 0000000 BEGIN:VCALENDAR
PRODID:-//Mozilla.org/NONSGML Mozilla Calendar V1.1//EN
VERSION:2.0
BEGIN:VTIMEZONE
TZID:Europe/Paris
X-LIC-LOCATION:Europe/Paris
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:19700329T020000
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=3
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:19701025T030000
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=10
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
CREATED:20130902T150157Z
LAST-MODIFIED:20130902T150158Z
DTSTAMP:20130902T150158Z
UID:event3
SUMMARY:Event3
DTSTART;TZID=Europe/Paris:20130903
DURATION:PT1H
END:VEVENT
END:VCALENDAR
Radicale-2.1.11/radicale/tests/static/event4.ics 0000664 0000000 0000000 00000001161 13370020422 0021372 0 ustar 00root root 0000000 0000000 BEGIN:VCALENDAR
PRODID:-//Mozilla.org/NONSGML Mozilla Calendar V1.1//EN
VERSION:2.0
BEGIN:VTIMEZONE
TZID:Europe/Paris
X-LIC-LOCATION:Europe/Paris
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:19700329T020000
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=3
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:19701025T030000
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=10
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
CREATED:20130902T150157Z
LAST-MODIFIED:20130902T150158Z
DTSTAMP:20130902T150158Z
UID:event4
SUMMARY:Event4
DTSTART;TZID=Europe/Paris:20130904T180000
END:VEVENT
END:VCALENDAR
Radicale-2.1.11/radicale/tests/static/event5.ics 0000664 0000000 0000000 00000001152 13370020422 0021373 0 ustar 00root root 0000000 0000000 BEGIN:VCALENDAR
PRODID:-//Mozilla.org/NONSGML Mozilla Calendar V1.1//EN
VERSION:2.0
BEGIN:VTIMEZONE
TZID:Europe/Paris
X-LIC-LOCATION:Europe/Paris
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:19700329T020000
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=3
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:19701025T030000
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=10
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
CREATED:20130902T150157Z
LAST-MODIFIED:20130902T150158Z
DTSTAMP:20130902T150158Z
UID:event5
SUMMARY:Event5
DTSTART;TZID=Europe/Paris:20130905
END:VEVENT
END:VCALENDAR
Radicale-2.1.11/radicale/tests/static/event6.ics 0000664 0000000 0000000 00000001767 13370020422 0021410 0 ustar 00root root 0000000 0000000 BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Mozilla.org/NONSGML Mozilla Calendar V1.1//EN
BEGIN:VTIMEZONE
TZID:Europe/Paris
BEGIN:STANDARD
DTSTART:19701025T030000
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=10
TZNAME:CET
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:19700329T020000
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=3
TZNAME:CEST
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:event6
DTSTART;TZID=Europe/Paris:20170601T080000
DTEND;TZID=Europe/Paris:20170601T090000
CREATED:20170601T060000Z
DTSTAMP:20170601T060000Z
LAST-MODIFIED:20170601T060000Z
RRULE:FREQ=DAILY;UNTIL=20170602T060000Z
SUMMARY:event6
TRANSP:OPAQUE
X-MOZ-GENERATION:1
END:VEVENT
BEGIN:VEVENT
UID:event6
RECURRENCE-ID;TZID=Europe/Paris:20170602T080000
DTSTART;TZID=Europe/Paris:20170701T080000
DTEND;TZID=Europe/Paris:20170701T090000
CREATED:20170601T060000Z
DTSTAMP:20170601T060000Z
LAST-MODIFIED:20170601T060000Z
SEQUENCE:1
SUMMARY:event6
TRANSP:OPAQUE
X-MOZ-GENERATION:1
END:VEVENT
END:VCALENDAR
Radicale-2.1.11/radicale/tests/static/event7.ics 0000664 0000000 0000000 00000002421 13370020422 0021375 0 ustar 00root root 0000000 0000000 BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Mozilla.org/NONSGML Mozilla Calendar V1.1//EN
BEGIN:VTIMEZONE
TZID:Europe/Paris
BEGIN:STANDARD
DTSTART:19701025T030000
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=10
TZNAME:CET
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:19700329T020000
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=3
TZNAME:CEST
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:event7
DTSTART;TZID=Europe/Paris:20170701T080000
DTEND;TZID=Europe/Paris:20170701T090000
CREATED:20170601T060000Z
DTSTAMP:20170601T060000Z
LAST-MODIFIED:20170601T060000Z
RRULE:FREQ=DAILY
SUMMARY:event7
TRANSP:OPAQUE
X-MOZ-GENERATION:1
END:VEVENT
BEGIN:VEVENT
UID:event7
RECURRENCE-ID;TZID=Europe/Paris:20170702T080000
DTSTART;TZID=Europe/Paris:20170702T080000
DTEND;TZID=Europe/Paris:20170702T090000
CREATED:20170601T060000Z
DTSTAMP:20170601T060000Z
LAST-MODIFIED:20170601T060000Z
SEQUENCE:1
SUMMARY:event7
TRANSP:OPAQUE
X-MOZ-GENERATION:1
END:VEVENT
BEGIN:VEVENT
UID:event7
RECURRENCE-ID;TZID=Europe/Paris:20170703T080000
DTSTART;TZID=Europe/Paris:20170601T080000
DTEND;TZID=Europe/Paris:20170601T090000
CREATED:20170601T060000Z
DTSTAMP:20170601T060000Z
LAST-MODIFIED:20170601T060000Z
SEQUENCE:1
SUMMARY:event7
TRANSP:OPAQUE
X-MOZ-GENERATION:1
END:VEVENT
END:VCALENDAR
Radicale-2.1.11/radicale/tests/static/event8.ics 0000664 0000000 0000000 00000001306 13370020422 0021377 0 ustar 00root root 0000000 0000000 BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Mozilla.org/NONSGML Mozilla Calendar V1.1//EN
BEGIN:VTIMEZONE
TZID:Europe/Paris
BEGIN:STANDARD
DTSTART:19701025T030000
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=10
TZNAME:CET
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:19700329T020000
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=3
TZNAME:CEST
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:event8
DTSTART;TZID=Europe/Paris:20170601T080000
DTEND;TZID=Europe/Paris:20170601T090000
CREATED:20170601T060000Z
DTSTAMP:20170601T060000Z
LAST-MODIFIED:20170601T060000Z
RDATE;TZID=Europe/Paris:20170701T080000
SUMMARY:event8
TRANSP:OPAQUE
X-MOZ-GENERATION:1
END:VEVENT
END:VCALENDAR
Radicale-2.1.11/radicale/tests/static/event9.ics 0000664 0000000 0000000 00000001234 13370020422 0021400 0 ustar 00root root 0000000 0000000 BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Mozilla.org/NONSGML Mozilla Calendar V1.1//EN
BEGIN:VTIMEZONE
TZID:Europe/Paris
BEGIN:STANDARD
DTSTART;VALUE=DATE-TIME:19701025T030000
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=10
TZNAME:CET
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
END:STANDARD
BEGIN:DAYLIGHT
DTSTART;VALUE=DATE-TIME:19700329T020000
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=3
TZNAME:CEST
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20170510T072956Z
UID:event9
SUMMARY:event9
DTSTART;VALUE=DATE-TIME;TZID=Europe/Paris:20170601T080000
DTEND;VALUE=DATE-TIME:20170601T080000Z
RRULE:FREQ=DAILY;UNTIL=20170602T060000Z
END:VEVENT
END:VCALENDAR
Radicale-2.1.11/radicale/tests/static/event_multiple.ics 0000664 0000000 0000000 00000001252 13370020422 0023222 0 ustar 00root root 0000000 0000000 BEGIN:VCALENDAR
PRODID:-//Mozilla.org/NONSGML Mozilla Calendar V1.1//EN
VERSION:2.0
BEGIN:VTIMEZONE
TZID:Europe/Paris
X-LIC-LOCATION:Europe/Paris
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:19700329T020000
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=3
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:19701025T030000
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=10
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
UID:event
SUMMARY:Event
DTSTART;TZID=Europe/Paris:20130901T190000
DTEND;TZID=Europe/Paris:20130901T200000
END:VEVENT
BEGIN:VTODO
UID:todo
DTSTART;TZID=Europe/Paris:20130901T220000
DURATION:PT1H
SUMMARY:Todo
END:VTODO
END:VCALENDAR
Radicale-2.1.11/radicale/tests/static/event_timezone_seconds.ics 0000664 0000000 0000000 00000001311 13370020422 0024733 0 ustar 00root root 0000000 0000000 BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//Mac OS X 10.13.4//EN
CALSCALE:GREGORIAN
BEGIN:VTIMEZONE
TZID:Europe/Moscow
BEGIN:STANDARD
TZOFFSETFROM:+023017
DTSTART:20010101T000000
TZNAME:GMT+3
TZOFFSETTO:+023017
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
CREATED:20180420T193555Z
UID:E96B9F38-8F70-4F1D-AAAC-2CD0BAC40551
DTEND;TZID=Europe/Moscow:20180420T130000
TRANSP:OPAQUE
X-APPLE-TRAVEL-ADVISORY-BEHAVIOR:AUTOMATIC
SUMMARY:ня — 2
DTSTART;TZID=Europe/Moscow:20180420T120000
DTSTAMP:20180420T200353Z
SEQUENCE:0
BEGIN:VALARM
X-WR-ALARMUID:06071073-A112-40CA-83AA-C05F54736B36
UID:06071073-A112-40CA-83AA-C05F54736B36
TRIGGER;VALUE=DATE-TIME:19760401T005545Z
ACTION:NONE
END:VALARM
END:VEVENT
END:VCALENDAR
Radicale-2.1.11/radicale/tests/static/journal1.ics 0000664 0000000 0000000 00000001123 13370020422 0021716 0 ustar 00root root 0000000 0000000 BEGIN:VCALENDAR
PRODID:-//Mozilla.org/NONSGML Mozilla Calendar V1.1//EN
VERSION:2.0
BEGIN:VTIMEZONE
TZID:Europe/Paris
X-LIC-LOCATION:Europe/Paris
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:19700101T000000
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=3
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:19700101T000000
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=10
END:STANDARD
END:VTIMEZONE
BEGIN:VJOURNAL
UID:journal1
DTSTAMP;TZID=Europe/Paris:19940817T000000
SUMMARY:happy new year
DESCRIPTION: Happy new year 2000 !
END:VJOURNAL
END:VCALENDAR
Radicale-2.1.11/radicale/tests/static/journal2.ics 0000664 0000000 0000000 00000001170 13370020422 0021721 0 ustar 00root root 0000000 0000000 BEGIN:VCALENDAR
PRODID:-//Mozilla.org/NONSGML Mozilla Calendar V1.1//EN
VERSION:2.0
BEGIN:VTIMEZONE
TZID:Europe/Paris
X-LIC-LOCATION:Europe/Paris
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:19700329T020000
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=3
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:19701025T030000
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=10
END:STANDARD
END:VTIMEZONE
BEGIN:VJOURNAL
UID:journal2
DTSTAMP:19950817T000000
DTSTART;TZID=Europe/Paris:20000101T100000
SUMMARY:happy new year
DESCRIPTION: Happy new year !
RRULE:FREQ=YEARLY
END:VJOURNAL
END:VCALENDAR
Radicale-2.1.11/radicale/tests/static/journal3.ics 0000664 0000000 0000000 00000001135 13370020422 0021723 0 ustar 00root root 0000000 0000000 BEGIN:VCALENDAR
PRODID:-//Mozilla.org/NONSGML Mozilla Calendar V1.1//EN
VERSION:2.0
BEGIN:VTIMEZONE
TZID:Europe/Paris
X-LIC-LOCATION:Europe/Paris
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:19700329T020000
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=3
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:19701025T030000
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=10
END:STANDARD
END:VTIMEZONE
BEGIN:VJOURNAL
UID:journal3
DTSTAMP:19950817T000000
DTSTART;VALUE=DATE:20000101
SUMMARY:happy new year
DESCRIPTION: Happy new year 2001 !
END:VJOURNAL
END:VCALENDAR
Radicale-2.1.11/radicale/tests/static/journal4.ics 0000664 0000000 0000000 00000000705 13370020422 0021726 0 ustar 00root root 0000000 0000000 BEGIN:VCALENDAR
PRODID:-//Mozilla.org/NONSGML Mozilla Calendar V1.1//EN
VERSION:2.0
BEGIN:VTIMEZONE
TZID:Europe/Paris
X-LIC-LOCATION:Europe/Paris
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:19700329T020000
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=3
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:19701025T030000
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=10
END:STANDARD
END:VTIMEZONE
END:VCALENDAR
Radicale-2.1.11/radicale/tests/static/journal5.ics 0000664 0000000 0000000 00000000705 13370020422 0021727 0 ustar 00root root 0000000 0000000 BEGIN:VCALENDAR
PRODID:-//Mozilla.org/NONSGML Mozilla Calendar V1.1//EN
VERSION:2.0
BEGIN:VTIMEZONE
TZID:Europe/Paris
X-LIC-LOCATION:Europe/Paris
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:19700329T020000
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=3
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:19701025T030000
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=10
END:STANDARD
END:VTIMEZONE
END:VCALENDAR
Radicale-2.1.11/radicale/tests/static/propfind1.xml 0000664 0000000 0000000 00000000244 13370020422 0022112 0 ustar 00root root 0000000 0000000