zsnapd-0.8.11h/.gitignore

# Ignore these pythonisms
__pycache__/
*.pyc
# Vim backup file
*.swp
.idea/*
pkg/*
src/*
packages/*
/debian/*
*~

zsnapd-0.8.11h/LICENSE

Copyright (c) 2014 Kenneth Henderick

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.

zsnapd-0.8.11h/README.md

zsnapd
======

ZFS Snapshot Daemon

A rework of ZFS Snapshot Manager by Kenneth Henderick.

The ZFS dataset configuration file /etc/zfssnapmanager.cfg should be upwards compatible with /etc/zsnapd/dataset.conf.

Usage
-----

All the functional code is in the scripts folder. Execute zsnapd; it daemonizes itself.

Features
--------

* Fully Python3 based
* Laptop friendly, as it has a built-in connectivity test to check remote port reachability
* Remote mode - snapshotting, script execution, and snapshot aging from a central backup server
* Native systemd daemon compatibility via the py-magcode-core python daemon and logging support library
* Debug command line switch and stderr logging
* Systemd journalctl logging
* Full standard Unix daemon support via py-magcode-core, with logging to syslog or a logfile
* Configuration is stored in configuration files with the ini file format. There is a template file, and a dataset file.
* Triggers the configured actions based on time or a '.trigger' file present in the dataset's mountpoint, the location of which is read from /proc/mounts
* Can take snapshots (with a yyyymmdd timestamp format)
* Can replicate snapshots to/from other nodes
    * Push based when the replication source has access to the replication target
    * Pull based when the replication source has no access to the replication target. Typically used when you don't want to give all nodes access to the backup/replication target
* Full clone replication mode, to copy sub folders and all ZFS properties
* Cleans all snapshots with the yyyymmdd timestamp format based on a GFS schema (Grandfather, Father, Son)
* Supports pre and post commands
    * The pre command is executed before any action is executed
    * The post command is executed after the actions are executed, but before cleaning
* Has the sshd remote execution filter script zsnapd-rcmd.
  See etc/zsnapd/zsnapd-rcmd.conf for configuration.

Configuration
-------------

The daemon configuration file is /etc/zsnapd/process.conf, in ini format; a sample is as follows:

    [DEFAULT]
    run_as_user = root

    [zsnapd]
    # Use following setting to check daemon reconfiguring
    daemon_canary = blau
    debug_mark = True
    # Both below in seconds
    sleep_time = 300
    debug_sleep_time = 15
    # dataset configuration file
    # dataset_config_file = /etc/zsnapd/datasets.conf
    # dataset_config_file = /etc/zfssnapmanager.cfg
    # Uncomment to set up syslog logging
    # see pydoc3 syslog and man 3 syslog for value names with 'LOG_'
    # prefix stripped
    #syslog_facility = DAEMON
    #syslog_level = INFO
    # Uncomment to set up file logging
    #log_file = /var/log/zsnapd.log
    #log_file_max_size_kbytes = 1024
    #log_file_backup_count = 7

    [zsnapd-cfgtest]
    log_level = DEBUG

    [zsnapd-trigger]
    log_level = DEBUG

Adjust sleep_time (in seconds) to set the interval at which zsnapd runs. For 30 minute intervals, set it to 1800 seconds.

The command line arguments to zsnapd are:

    Usage: zsnapd [-dhv] [-c config_file]

           ZFS Snap Management Daemon

    -c, --config-file     set configuration file
    -d, --debug           set debug level {0-3|none|normal|verbose|extreme}
    -h, --help            help message
    -b, --memory-debug    memory debug output
    -S, --systemd         Run as a systemd daemon, no fork
    -v, --verbose         verbose output
    -r, --rpdb2-wait      Wait for rpdb2 (seconds)

Note that the default configuration file is /etc/zsnapd/process.conf, and systemd native mode is selected via the --systemd switch.

The dataset configuration file is located in /etc/zsnapd and is called dataset.conf. It is an .ini file containing a section per dataset/volume that needs to be managed. There is also a template .ini file, called template.conf, in the same directory.

zsnapd-cfgtest tests the dataset configuration file, and zsnapd-trigger writes out .trigger files based on the dataset configuration. It takes either the mount point of the target dataset as an argument, or the full dataset name including the storage pool. zsnapd-trigger can optionally do a connectivity test before writing out the .trigger file. The same connectivity test is done before zsnapd attempts replication, and it uses the replicate_endpoint_host and replicate_endpoint_port settings for the dataset.

Examples:

/etc/zsnapd/template.conf:

    [DEFAULT]
    replicate_endpoint_host = nas.local

    [backup-local]
    replicate_endpoint_port = 2345
    replicate_endpoint_command = ssh -l backup -p {port} {host}
    compression = gzip
    time = 17:00 - 21:00 /2

    [backup-other]
    replicate_endpoint_host = other.remote.server.org

/etc/zsnapd/dataset.conf:

    [DEFAULT]
    time = trigger

    [zroot]
    template = backup-local
    mountpoint = /
    time = {template}, trigger
    snapshot = True
    schema = 7d3w11m5y

    [zpool/data]
    mountpoint = /mnt/data
    time = trigger
    snapshot = True
    schema = 5d0w0m0y
    preexec = echo "starting" | mail somebody@example.com
    postexec = echo "finished" | mail somebody@example.com

    [zpool/myzvol]
    mountpoint = None
    time = 21:00
    snapshot = True
    schema = 7d3w0m0y

    [zpool/backups/data]
    template = backup-other
    mountpoint = /mnt/backups/data
    time = 08:00, 19:00-23:00/00:30
    snapshot = False
    replicate_source = zpool/data
    schema = 7d3w11m4y
    replicate_full_clone = True
    buffer_size = 128M

A summary of the different options:

* mountpoint: Points to the location where the dataset is mounted, None for volumes. Used only for triggers. Defaults to the value in /proc/mounts if available.
* do_trigger: Dataset is a candidate for devops triggers with 'zsnapd-trigger -t'.
* time: Can be either a timestamp in 24h hh:mm notation after which a snapshot needs to be taken, a time range, 'trigger', or a comma separated list of such items. A time range consists of a start time, then a dash, then an end time, with an optional interval separated by a '/'. The interval can be given in hours, or in HH:MM format; if not given, it is 1 hour. Along with the timestamps, 'trigger' indicates that a snapshot will be taken as soon as a file with the name '.trigger' is found in the dataset's mountpoint. This can be used when, for example, data is rsynced to the dataset. As shown above, '{template}' is substituted with the time specification string from the template for that dataset in the dataset file. Thus the time setting for an individual dataset using a template can be augmented in its definition.
* snapshot: Indicates whether a snapshot should be taken or not. It might be possible that only cleaning needs to be executed if this dataset is actually a replication target for another machine.
* replicate_append_basename: Append last part of dataset name (after last '/') to target dataset name and receive mount point, joining with a '/'.
* replicate2_append_basename: Append last part of dataset name (after last '/') to target dataset name and receive mount point, joining with a '/'.
* replicate_append_fullname: Append dataset name without pool name to target dataset name and receive mount point, joining with a '/'.
* replicate2_append_fullname: Append dataset name without pool name to target dataset name and receive mount point, joining with a '/'.
* replicate_endpoint: Deprecated. Can be left empty if replicating on localhost (e.g. copying snapshots to another pool). Should be omitted if no replication is required.
* replicate_endpoint_host: Can be left empty if replicating on localhost (e.g. copying snapshots to another pool). Should be omitted if no replication is required.
* replicate2_endpoint_host: Can be left empty if replicating on localhost (e.g. copying snapshots to another pool). Should be omitted if no replication is required.
* replicate_endpoint_port: Port that has to be remotely accessed.
* replicate2_endpoint_port: Port that has to be remotely accessed.
* replicate_endpoint_command: Command template for remote access. Takes two keys, {port} and {host}.
* replicate2_endpoint_command: Command template for remote access. Takes two keys, {port} and {host}.
* replicate_target: The target to which the snapshots should be sent. Should be omitted if no replication is required or a replicate_source is specified.
* replicate2_target: The target to which the snapshots should be sent. Should be omitted if no replication is required or a replicate_source is specified.
* replicate_source: The source from which to pull the snapshots to receive onto the local dataset. Should be omitted if no replication is required or a replicate_target is specified.
* replicate_full_clone: Full clone of dataset and all subordinate datasets and properties.
* replicate2_full_clone: Full clone of dataset and all subordinate datasets and properties.
* replicate_receive_save: If a transfer fails, create a save point for resuming the transfer.
* replicate2_receive_save: If a transfer fails, create a save point for resuming the transfer.
* replicate_receive_mountpoint: Specify mount point value for the received dataset (on the remote if pushed).
* replicate2_receive_mountpoint: Specify mount point value for the received remote dataset.
* replicate_receive_no_mountpoint: Remove mountpoint from received properties. Defaults to True if replicate_send_properties or replicate_full_clone is set.
* replicate2_receive_no_mountpoint: Remove mountpoint from received properties. Defaults to True if replicate2_send_properties or replicate2_full_clone is set.
* replicate_receive_umount: Don't mount the received dataset. Defaults to True if replicate_send_properties or replicate_full_clone is set.
* replicate2_receive_umount: Don't mount the received dataset. Defaults to True if replicate2_send_properties or replicate2_full_clone is set.
* replicate_send_compression: zfs send using compressed data from disk.
* replicate2_send_compression: zfs send using compressed data from disk.
* replicate_send_raw: zfs send using raw data from disk.
* replicate2_send_raw: zfs send using raw data from disk.
* replicate_send_properties: zfs send sends all properties of the dataset.
* replicate2_send_properties: zfs send sends all properties of the dataset.
* buffer_size: Give mbuffer buffer size in units of k, M, and G - kilobytes, Megabytes, and Gigabytes respectively.
* buffer2_size: Give mbuffer buffer size in units of k, M, and G - kilobytes, Megabytes, and Gigabytes respectively.
* compression: Indicates the compression program to pipe remote replicated snapshots through (for use in low-bandwidth setups). The compression utility should accept standard compression flags (`-c` for standard output, `-d` for decompress).
* compression2: Indicates the compression program to pipe remote replicated snapshots through (for use in low-bandwidth setups). The compression utility should accept standard compression flags (`-c` for standard output, `-d` for decompress).
* schema: In case the snapshots should be cleaned, this is the schema the manager will use to clean.
* local_schema: For local snapshot cleaning/aging when the dataset is the receptacle for a remote source whose snapshots are pulled.
* remote_schema: For remote snapshot cleaning/aging when the remote target is the receptacle for a backup whose snapshots are pushed.
* remote2_schema: For remote snapshot cleaning/aging when the remote target is the receptacle for a backup whose snapshots are pushed.
* preexec: A command that will be executed before snapshot/replication. Should be omitted if nothing should be executed.
* postexec: A command that will be executed after snapshot/replication, but before the cleanup. Should be omitted if nothing should be executed.
* clean_all: Clean/age all snapshots in the dataset - default is False, i.e. zsnapd snapshots only.
* local_clean_all: Setting for the local dataset when the replicating source is remote.
* remote_clean_all: Setting for the remote dataset when the replicating source is local.
* remote2_clean_all: Setting for the remote dataset when the replicating source is local.
* all_snapshots: Replicate all snapshots in the dataset - default is True, i.e. all snapshots in the dataset.
* replicate_all: Deprecated. Use all_snapshots.
* log_commands: Per dataset, log all commands executed for the dataset at DEBUG level. Useful for checking exactly what the program is doing, and helpful for auditing and security.

Naming convention
-----------------

Snapshots taken by this script are always given a timestamp (format yyyymmddhhmm) as their name. For pool/tank, an example snapshot name could be pool/tank@201312311323; a short sketch of the convention is shown below.
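For illustration, a minimal Python sketch of this naming convention. The two patterns mirror the SNAPSHOTNAME_FMTSPEC and SNAPSHOTNAME_REGEX definitions in scripts/globals_.py; the candidate names are made-up examples:

    import re
    import time

    # Same patterns as scripts/globals_.py
    SNAPSHOTNAME_FMTSPEC = '%Y%m%d%H%M'
    SNAPSHOTNAME_REGEX = r'^(\d{4})(1[0-2]|0[1-9])(0[1-9]|[1-2]\d|3[0-1])(([0-1]\d|2[0-3])([0-5]\d)){0,1}$'

    # A new snapshot name is simply the current local time
    name = time.strftime(SNAPSHOTNAME_FMTSPEC, time.localtime())  # e.g. '201312311323'
    print('pool/tank@' + name)

    # Only names matching the regex are candidates for cleaning;
    # both the 'yyyymmdd' and 'yyyymmddhhmm' forms match
    for candidate in ('20131231', '201312311323', 'manual-backup'):
        print(candidate, bool(re.match(SNAPSHOTNAME_REGEX, candidate)))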
The daemon is still compatible with the older yyyymmdd snapshot aging convention, and will replicate and age such snapshots. (Internally, the snapshot 'handles' are now created from the snapshot creation time, using Unix timestamp seconds - this means that manually named snapshots are covered too.)

All snapshots are currently used for replication - both snapshots taken by the script and snapshots taken by other means (other scripts or manually) - regardless of their name. However, the script will only clean snapshots with the timestamp naming convention. In case you don't want a snapshot to be cleaned by the script, just make sure it has any other name, not matching this convention.

Buckets
-------

The system will use "buckets" to apply the GFS schema. From every bucket, the oldest snapshot will be kept. At any given time the script is executed, it will place the snapshots in their buckets, and then clean out all buckets.

Bucket schema
-------------

For example, the schema '7d3w11m4y' means:

* 7 daily buckets (starting from today)
* 3 weekly buckets (7 days a week)
* 11 monthly buckets (30 days a month)
* 4 yearly buckets (12 * 30 days a year)

This adds up to 5 years (where a year is 12 * 30 days - so not mapping to a real year).

Other schemas are possible. One could, for example, only be interested in keeping the snapshots of the last week, in which case the schema '7d0w0m0y' would be given. Any combination is possible. Since the oldest snapshot from each bucket is kept, snapshots will seem to "roll" through the buckets.

The number of 'keep' days before aging can be given, as well as hourly buckets before the days kick in, e.g. '2k24h7d3w11m4y':

* 2 keep days starting from midnight, no snapshots deleted
* 24 hourly buckets starting from midnight in 2 days
* 7 daily buckets
* 3 weekly buckets (7 days a week)
* 11 monthly buckets (30 days a month)
* 4 yearly buckets (12 * 30 days a year)

Remote Execution Security
-------------------------

Sudo with a backup user on the remote was considered, but after reviewing the sshd ForceCommand mechanism for remote execution, that was chosen as the far easier and superior option. The result is zsnapd-rcmd, the remote sshd command checker for the ZFS Snapshot Daemon.

zsnapd-rcmd is the security plugin command for sshd that implements ForceCommand functionality, or the command functionality in the .ssh/authorized_keys file (see the sshd_config(8) and sshd(8) man pages respectively). It executes commands from the SSH_ORIGINAL_COMMAND variable after checking them against a list of configured regular expressions; a minimal sketch of this check appears below.

Edit the zsnapd-rcmd.conf file in /etc/zsnapd to set up the check regexps for the remote preexec, postexec, and replicate_postexec commands. Settings are also available for 10 extra remote commands, labeled rcmd_aux0 - rcmd_aux9.

Read the sshd_config(8) manpage on the ForceCommand setting, and the sshd(8) manpage on the /root/.ssh/authorized_keys file command entry, for the remote pub key used for zsnapd access.
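To make the mechanism concrete, here is a minimal sketch of a ForceCommand-style checker. This is an illustration, not the actual zsnapd-rcmd code: the ALLOWED patterns are made-up examples in the spirit of the rcmd_* settings, while the use of SSH_ORIGINAL_COMMAND and the restricted shell /bin/rbash match the behaviour described above:

    import os
    import re
    import subprocess
    import sys

    # Anchored patterns, as in the rcmd_* settings in zsnapd-rcmd.conf
    ALLOWED = [
        r'^zfs list -H$',
        r'^zfs snapshot [-_:./a-zA-Z0-9]+@[-_:.a-zA-Z0-9]+$',
    ]

    # sshd puts the client's requested command in SSH_ORIGINAL_COMMAND
    command = os.environ.get('SSH_ORIGINAL_COMMAND', '')
    if not any(re.match(pattern, command) for pattern in ALLOWED):
        sys.exit('zsnapd-rcmd: command refused')
    # Run the vetted command under a restricted shell
    sys.exit(subprocess.call(['/bin/rbash', '-c', command]))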
Example .ssh/authorized_keys entry (a single line - shown wrapped here, unwrap it):

    no-pty,no-agent-forwarding,no-X11-forwarding,no-port-forwarding,
    command="/usr/sbin/zsnapd-rcmd" ssh-rsa AAAABBBBBBBBCCCCCCCCCCCCC
    DDDDDDDD== root@blah.org

Hint: command line arguments can be given, such as a different config file and debug level:

    no-pty,no-agent-forwarding,no-X11-forwarding,no-port-forwarding,
    command="/usr/sbin/zsnapd-rcmd -c /etc/zsnapd/my-rcmd.conf --debug 3" ssh-rsa
    AAAABBBBBBBBCCCCCCCCCCCCCDDDDDDDD== root@blah.org

Examples
--------

The examples directory contains 3 example configuration files, almost identical to my own 'production' setup:

* A non-ZFS device (router), rsyncing its filesystem to an NFS shared dataset.
* A laptop, having a single root ZFS setup, containing 2 normal filesystems and a ZFS dataset.
* A local NAS with lots of data, the replication target of most systems.
* A remote NAS (used as a normal NAS by its owners) used with two-way replication as an offsite backup setup.

Dependencies
------------

This python program/script has a few dependencies. When using the Archlinux AUR, these will be installed automatically.

* zfs
* python3
* openssh
* mbuffer
* python3-magcode-core >= 1.5.4 - on pypi.org
* python3-psutil
* python3-setproctitle

Logging
-------

The script logs to the systemd journal, and to /var/log/syslog.

License
-------

This program/script is licensed under MIT, which basically means you can do anything you want with it. You can find the license text in the 'LICENSE' file.

If you like the software or if you're using it, feel free to leave a star as a token of appreciation.

Warning
-------

As with any script that deletes snapshots, use with caution. Make sure to test the script on a dummy dataset first when you use it directly from the repo, to ensure no unexpected things happen. The releases should work fine, as I use these in my own environment, and for customers.

In case you find a bug, feel free to create a bug report, and/or fork and send a pull request in case you fixed the bug yourself.

Packages
--------

ZFS Snapshot Manager is available in the following distributions:

* ArchLinux: https://aur.archlinux.org/packages/zfs-snap-manager (AUR)
    * The PKGBUILD and install scripts are now available through the AUR git repo

zsnapd is available in the following distributions:

* Debian: http://packages.debian.org as part of the main repository
* Ubuntu (eventually)

ZFS
---

From Wikipedia: ZFS is a combined file system and logical volume manager designed by Sun Microsystems. The features of ZFS include protection against data corruption, support for high storage capacities, efficient data compression, integration of the concepts of filesystem and volume management, snapshots and copy-on-write clones, continuous integrity checking and automatic repair, RAID-Z and native NFSv4 ACLs. ZFS was originally implemented as open-source software, licensed under the Common Development and Distribution License (CDDL). The ZFS name is registered as a trademark of Oracle Corporation.

ZFS Snapshot Manager and zsnapd are standalone projects, and are not affiliated with ZFS or Oracle Corporation.

zsnapd-0.8.11h/TODO

TODO list for zsnapd
====================

* independent time events from main loop
* event driven snapshotting
* event driven cleaning
* event driven replication/cleaning
* use DBM file triggers/global loop for above to prevent serialisation issues and a load pile up?
zsnapd-0.8.11h/etc/000077500000000000000000000000001363062134400140255ustar00rootroot00000000000000zsnapd-0.8.11h/etc/zsnapd/000077500000000000000000000000001363062134400153245ustar00rootroot00000000000000zsnapd-0.8.11h/etc/zsnapd/dataset.conf000066400000000000000000000016251363062134400176240ustar00rootroot00000000000000; ; Example configuration for a laptop. Please edit and update as needed ; ; See /usr/share/doc/zfs-snap-manager/README.md.gz for the documentation ; on this configuration file. ; Situation: ; This laptop is the main working computer. It's trusted and has access to ; every other node (since it's used to manage them anyway. Therefore (and ; because it's not always online), it's allowed to push its snapshots. The ; laptop takes snapshots of 3 datasets and replicate all of them (push-based) ; Normal snapshots: Snapshot at a certain time, replicate them (push-based) ; and cleans them out [DEFAULT] # Default settings for all datasets go here ;time = trigger ;[zroot] ;template = laptop-client ;mountpoint = / ; ;[zroot/home] ;template = laptop-client ;mountpoint = /home ; ;; The dataset zroot/windows is a ZFS volume (it has no mountpoint) ; ;[zroot/windows] ;template = laptop-client ;mountpoint = zsnapd-0.8.11h/etc/zsnapd/process.conf000066400000000000000000000012731363062134400176540ustar00rootroot00000000000000[DEFAULT] run_as_user = root [zsnapd] # Use following setting to check daemon reconfiguring daemon_canary = blau debug_mark = True # Both below in seconds sleep_time = 300 debug_sleep_time = 15 # # dataset configuration file # dataset_config_file = /etc/zsnapd/datasets.conf # dataset_config_file = /etc/zfssnapmanager.cfg # # Uncomment to set up syslog logging # see pydoc3 syslog and man 3 syslog for value names with 'LOG_' # prefix stripped #syslog_facility = DAEMON #log_level = INFO # # Uncomment to set up file logging #log_file = /var/log/zsnapd/zsnapd.log #log_file_max_size_kbytes = 1024 #log_file_backup_count = 7 [zsnapd-cfgtest] log_level = DEBUG [zsnapd-trigger] log_level = DEBUG zsnapd-0.8.11h/etc/zsnapd/template.conf000066400000000000000000000004261363062134400200100ustar00rootroot00000000000000[DEFAULT] # Add default template settings here ;snapshot = True ;[laptop-client] ;replicate_endpoint_host = nas.local ;replicate_endpoint_command = /usr/bin/ssh -i /root/.ssh/id_zfs_rsa -l zfs-backup -p {port} {host} ;replicate_target = zraid/backups/laptop ;schema = 7d0w0m0y zsnapd-0.8.11h/etc/zsnapd/zsnapd-rcmd.conf000066400000000000000000000072241363062134400204220ustar00rootroot00000000000000[DEFAULT] [zsnapd-rcmd] # Uncomment to set up syslog logging # see pydoc3 syslog and man 3 syslog for value names with 'LOG_' # prefix stripped syslog_facility = AUTH log_level = INFO # # Uncomment to set up file logging #log_file = /var/log/zsnapd/zsnapd-rcmd.log #log_file_max_size_kbytes = 1024 #log_file_backup_count = 3 # Shell used to execute commands rshell = /bin/rbash #rshell_path = /usr/local/bin:/usr/bin:/bin:/usr/local/sbin:/usr/sbin:/sbin # Regex processing settings regex_comp_prog = (gzip|bzip2|xz|lzma|lz4) regex_compress = (\s*\|\s*%(regex_comp_prog)s -c\s*){0,1} regex_decompress = (\s*%(regex_comp_prog)s -cd\s*\|\s*){0,1} regex_dataset = [-_:./a-zA-Z0-9]+ regex_mountpoint = [-_./ a-zA-Z0-9]+ regex_snapshot = %(regex_dataset)s@[-_:.a-zA-Z0-9]+ regex_send_args = (-[LecpwR]{1,6} ){0,1} regex_receive_args = (-[su]{1,2} ){0,1}(-x mountpoint ){0,1}(-o "mountpoint=%(regex_mountpoint)s" ){0,1} regex_incr_delta = (-[iI] %(regex_snapshot)s ){0,1} regex_resume_args = (-t 
1-[0-9a-f]+-[0-9a-f]+-[0-9a-f]+){0,1} regex_mbuffer_common = -s [0-9]{1,7}[bBkM] -m [0-9]{1,12}[kMG] regex_mbuffer_push = (\s*mbuffer -q -v 0 %(regex_mbuffer_common)s\s*\|\s*){0,1} regex_mbuffer_pull = (\s*\|\s*mbuffer -q -v 0 %(regex_mbuffer_common)s\s*){0,1} regex_grep_filter_dataset = (\|\s*grep \^%(regex_dataset)s@\s*){0,1} # SECURITY Make SURE each rcmd_ filter starts with ^ and ends with $ to make sure of absolute matches. # SECURITY Also don't use .* in a regex, asthat matches everything! # The following flags can be used to turn off/on checks for above # regex_error_on_^ = True # regex_error_on_.* = True # regex_error_on_$ = True # Commands cannot be absolute pathed because of use of rbash. Add directory to rshell_path above # Commenting out setting will turn off permission for that command rcmd_zfs_get_snapshots = ^zfs list -pH -s creation -o name,creation -t snapshot\s*%(regex_grep_filter_dataset)s\|\|\s*true$ rcmd_zfs_get_snapshots2 = ^zfs list -pH -s creation -o name,creation -t snapshot\s*%(regex_dataset)s\s*\|\|\s*true$ rcmd_zfs_get_datasets = ^(zfs list -H|zfs list -pH -o name,mountpoint)$ rcmd_zfs_snapshot = ^zfs snapshot %(regex_snapshot)s$ rcmd_zfs_replicate_push = ^%(regex_mbuffer_push)s%(regex_decompress)szfs receive %(regex_receive_args)s-F %(regex_dataset)s$ rcmd_zfs_replicate_pull = ^zfs send %(regex_send_args)s%(regex_incr_delta)s%(regex_snapshot)s%(regex_compress)s%(regex_mbuffer_pull)s$ rcmd_zfs_replicate_pull2 = ^zfs send %(regex_send_args)s%(regex_resume_args)s%(regex_compress)s%(regex_mbuffer_pull)s$ rcmd_zfs_holds = ^zfs list -H -r -d 1 -t snapshot -o name %(regex_dataset)s \| xargs -d \"\\n\" zfs holds -H$ rcmd_zfs_is_held = ^zfs holds %(regex_snapshot)s$ rcmd_zfs_hold = ^zfs hold zsm %(regex_snapshot)s$ rcmd_zfs_release = ^zfs release zsm %(regex_snapshot)s\s*\|\|\s*true$ rcmd_zfs_get_size = ^zfs send -nv %(regex_send_args)s%(regex_incr_delta)s%(regex_snapshot)s$ rcmd_zfs_get_size2 = ^zfs send -nv %(regex_send_args)s%(regex_resume_args)s$ rcmd_zfs_destroy = ^zfs destroy %(regex_snapshot)s$ rcmd_zfs_receive_abort = ^zfs receive -A %(regex_dataset)s$ rcmd_zfs_get_receive_resume_token = ^zfs get receive_resume_token -pHo value %(regex_dataset)s\s*\|\|\s*true$ # Uncomment below when you want to run one of these on this host # Commands cannot be absolute pathed because of use of rbash. Add directory to rshell_path above # Commenting out setting will turn off permission for that command #rcmd_preexec = ^true$ #rcmd_postexec = ^true$ #rcmd_replicate_postexec = ^true$ #rcmd_aux0 = ^true$ #rcmd_aux1 = ^true$ #rcmd_aux2 = ^true$ #rcmd_aux3 = ^true$ #rcmd_aux4 = ^true$ #rcmd_aux5 = ^true$ #rcmd_aux6 = ^true$ #rcmd_aux7 = ^true$ #rcmd_aux8 = ^true$ #rcmd_aux9 = ^true$ zsnapd-0.8.11h/examples/000077500000000000000000000000001363062134400150705ustar00rootroot00000000000000zsnapd-0.8.11h/examples/laptop.cfg000066400000000000000000000016661363062134400170610ustar00rootroot00000000000000; Situation: ; This laptop is the main working computer. It's trusted and has access to every other node (since it's used to ; manage them anyway. Therefore (and because it's not always online), it's allowed to push its snapshots. 
; The laptop takes snapshots of 3 datasets and replicate all of them (push-based) ; Normal snapshots: Snapshot at a certain time, replicate them (push-based) and cleans them out [zroot] mountpoint = / time = 21:00 snapshot = True replicate_endpoint = ssh nas.local replicate_target = zraid/backups/laptop schema = 7d0w0m0y [zroot/home] mountpoint = /home time = 21:00 snapshot = True replicate_endpoint = ssh nas.local replicate_target = zraid/backups/laptop/home schema = 7d0w0m0y ; The dataset zroot/windows is a ZFS volume (it has no mountpoint) [zroot/windows] mountpoint = time = 21:00 snapshot = True replicate_endpoint = ssh nas.local replicate_target = zraid/backups/laptop/windows schema = 7d0w0m0y zsnapd-0.8.11h/examples/nas.cfg000066400000000000000000000055311363062134400163360ustar00rootroot00000000000000; Situation: ; This is the local NAS. It stores a lot of data and is replication target for all other computers. However, some datasets ; are very important, and are replicated to one of the other computers (a remote NAS) ; This NAS takes snapshots from some datasets, receives incomming replication (from the laptop), pushes its own ; snapshots to the remote NAS (which has no access to this NAS) and pulls snapshots from the remote NAS to himself ; The zroot is locally replicated to this dataset, so this dataset is cleaning only (at a time after ; the replication took place) [zraid/backups/nas] mountpoint = /mnt/zraid/backups/nas time = 10:00 snapshot = False schema = 7d3w11m4y ; Normal snapshots: These datasets contain the pushed remote snapshots from the laptop and are thus cleaning only (at a ; time after the replication took place) [zraid/backups/laptop] mountpoint = /mnt/zraid/backups/laptop time = 10:00 snapshot = False schema = 7d3w11m4y [zraid/backups/laptop/home] mountpoint = /mnt/zraid/backups/laptop/home time = 10:00 snapshot = False schema = 7d3w11m4y [zraid/backups/laptop/windows] mountpoint = None time = 10:00 snapshot = False schema = 7d3w11m4y ; A remote NAS (the NAS of e.g. a friend) is replicated to this NAS (pull-based), so cleaning only [zraid/backups/remotenas/documents] mountpoint = /mnt/zraid/backups/remotenas/documents time = 10:00 replicate_endpoint = ssh remotenas.local replicate_source = zroot/data/documents snapshot = False schema = 7d3w11m4y [zraid/backups/remotenas/pictures] mountpoint = /mnt/zraid/backups/remotenas/pictures time = 10:00 replicate_endpoint = ssh remotenas.local replicate_source = zroot/data/pictures snapshot = False schema = 7d3w11m4y [zraid/backups/remotenas/zroot] mountpoint = /mnt/zraid/backups/remotenas/zroot time = 10:00 replicate_endpoint = ssh remotenas.local replicate_source = zroot snapshot = False schema = 7d3w11m4y ; The router is rsyncing its filesystem to an NFS shared dataset on this NAS because it's not running ZFS. 
It places
; a trigger file on the NFS share once completed

[zraid/backups/router]
mountpoint = /mnt/zraid/backups/router
time = trigger
snapshot = True
schema = 7d3w11m4y

; Local snapshots, replicated to remote NAS

[zraid/family]
mountpoint = /mnt/zraid/family
time = 05:00
replicate_endpoint = ssh remotenas.local
replicate_target = zroot/backups/nas/family
snapshot = True
schema = 7d3w11m4y

[zraid/private]
mountpoint = /mnt/zraid/private
time = 05:00
replicate_endpoint = ssh remotenas.local
replicate_target = zroot/backups/nas/private
snapshot = True
schema = 7d3w11m4y

; Local snapshots, not replicated

[zraid/varia]
mountpoint = /mnt/zraid/varia
time = 05:00
snapshot = True
schema = 7d3w11m4y

; Root filesystem, locally replicated

[zroot]
mountpoint = /
time = 05:00
snapshot = True
replicate_endpoint =
replicate_target = zraid/backups/nas
schema = 3d0w0m0y

zsnapd-0.8.11h/examples/remotenas.cfg

; Situation:
; This is a NAS stored at a remote location (e.g. with a friend). It's used as a NAS in that location as well, so it
; contains important data as well, which is locally snapshotted. Some datasets are pull-based replicated to the other
; NAS, so they take no extra configuration here except snapshotting and cleaning.

; Snapshots that will be replicated pull-based: Only snapshotting (before the pull-based replication takes place)
; and cleaning

[zroot]
mountpoint = /
time = 02:00
snapshot = True
schema = 7d0w0m0y

[zroot/data/documents]
mountpoint = /mnt/documents
time = 02:00
snapshot = True
schema = 7d0w0m0y

[zroot/data/pictures]
mountpoint = /mnt/pictures
time = 02:00
snapshot = True
schema = 7d0w0m0y

; Normal, local snapshots (which have identical configuration as above)

[zroot/data/music]
mountpoint = /mnt/music
time = 02:00
snapshot = True
schema = 7d0w0m0y

; Incoming snapshots, cleaning only (after the replication took place)

[zroot/backups/nas/family]
mountpoint = /mnt/backups/nas/family
time = 10:00
snapshot = False
schema = 7d3w5m0y

[zroot/backups/nas/private]
mountpoint = /mnt/backups/nas/private
time = 10:00
snapshot = False
schema = 7d3w5m0y

zsnapd-0.8.11h/packaging/
zsnapd-0.8.11h/packaging/archlinux/
zsnapd-0.8.11h/scripts/
zsnapd-0.8.11h/scripts/clean.py

# Copyright (c) 2014-2017 Kenneth Henderick
# Copyright (c) 2019 Matthew Grant
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.

"""
Provides functionality for cleaning up old ZFS snapshots
"""

import re
import time
from datetime import datetime
from collections import OrderedDict

from magcode.core.globals_ import log_info
from magcode.core.globals_ import log_debug
from magcode.core.globals_ import log_error

from scripts.zfs import ZFS
from scripts.globals_ import CLEANER_REGEX
from scripts.globals_ import SNAPSHOTNAME_REGEX


class Cleaner(object):
    """
    Cleaner class, containing all methods for cleaning up ZFS snapshots
    """

    logger = None  # The manager will fill this object

    @staticmethod
    def clean(dataset, snapshots, schema, endpoint='', local_dataset='', all_snapshots=False, return_no_keep=True, log_command=False):
        local_dataset = local_dataset if local_dataset else dataset
        now = time.localtime()
        midnight = time.mktime(time.strptime('{0}-{1}-{2}'.format(now.tm_year, now.tm_mon, now.tm_mday)
                    , '%Y-%m-%d'))

        # Parsing schema
        match = re.match(CLEANER_REGEX, schema)
        if not match:
            log_info('[{0}] - Got invalid schema for dataset {1}: {2}'.format(local_dataset, dataset, schema))
            return
        matchinfo = match.groupdict()
        settings = {}
        for key in list(matchinfo.keys()):
            settings[key] = int(matchinfo[key] if matchinfo[key] is not None else 0)
        settings['keep'] = settings.get('keep', 0)
        base_time = midnight - settings['keep']*86400

        # Loading snapshots
        snapshot_list = []
        for snapshot in snapshots:
            snapshotname = snapshots[snapshot]['name']
            if (not all_snapshots and re.match(SNAPSHOTNAME_REGEX, snapshotname) is None):
                # If required, only clean zsnapd snapshots
                continue
            held = False
            if ZFS.is_held(dataset, snapshotname, endpoint, log_command=log_command):
                log_debug('[{0}] - Keeping {1}@{2} - held snapshot'
                        .format(local_dataset, dataset, snapshotname))
                held = True
            snapshot_ctime = snapshots[snapshot]['creation']
            snapshot_age = (base_time - snapshot_ctime)/3600
            snapshot_age = int(snapshot_age) if snapshot_age >= 0 else -1
            snapshot_list.append({'name': snapshotname,
                'handle': snapshot,
                'time': datetime.fromtimestamp(snapshot_ctime),
                'age': snapshot_age,
                'held': held})

        buckets = {}
        counter = -1
        for i in range(settings['hours']):
            counter += 1
            buckets[counter] = []
        for i in range(settings['days']):
            counter += (1 * 24)
            buckets[counter] = []
        for i in range(settings['weeks']):
            counter += (7 * 24)
            buckets[counter] = []
        for i in range(settings['months']):
            counter += (30 * 24)
            buckets[counter] = []
        for i in range(settings['years']):
            counter += (30 * 12 * 24)
            buckets[counter] = []

        will_delete = False
        end_of_life_snapshots = []
        kept_flag = False
        for snapshot in snapshot_list:
            if snapshot['age'] <= 0:
                log_debug('[{0}] - Ignoring and keeping {1}@{2} - too fresh'
                        .format(local_dataset, dataset, snapshot['name']))
                kept_flag = True
                continue
            possible_keys = []
            for key in buckets:
                if snapshot['age'] <= key:
                    possible_keys.append(key)
            if possible_keys:
                buckets[min(possible_keys)].append(snapshot)
            else:
                will_delete = True
                end_of_life_snapshots.append(snapshot)

        # Return from procedure if no snapshots were found to keep
        if (return_no_keep and not kept_flag):
            return
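        # How the buckets work, using the schema '7d3w11m4y' as an example
        # (explanatory note; bucket keys are cumulative hour offsets from
        # base_time): the daily keys run 23, 47, ..., 167, the weekly keys
        # follow in steps of 168, the monthly keys in steps of 720 and the
        # yearly keys in steps of 8640. Each snapshot above was placed in
        # the smallest bucket key that is >= its age, and below only the
        # oldest snapshot in each bucket is kept (GFS style); the rest are
        # queued in to_delete.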
        to_delete = {}
        to_keep = {}
        for key in buckets:
            oldest = None
            if len(buckets[key]) == 1:
                oldest = buckets[key][0]
            else:
                for snapshot in buckets[key]:
                    if oldest is None:
                        oldest = snapshot
                    elif snapshot['age'] > oldest['age']:
                        oldest = snapshot
                    else:
                        will_delete = True
                        to_delete[key] = to_delete.get(key, []) + [snapshot]
            to_keep[key] = oldest
            to_delete[key] = to_delete.get(key, [])

        if will_delete is True:
            log_info('[{0}] - Cleaning {1}'.format(local_dataset, dataset))
        keys = list(to_delete.keys())
        keys.sort()
        for key in keys:
            for snapshot in to_delete[key]:
                if snapshot['held']:
                    log_info('[{0}] - Skipping held {1}@{2}'.format(local_dataset, dataset, snapshot['name']))
                    continue
                log_info('[{0}] - Destroying {1}@{2}'.format(local_dataset, dataset, snapshot['name']))
                ZFS.destroy(dataset, snapshot['name'], endpoint, log_command=log_command)
                snapshots.pop(snapshot['handle'])
        for snapshot in end_of_life_snapshots:
            if snapshot['held']:
                log_info('[{0}] - Skipping held {1}@{2}'.format(local_dataset, dataset, snapshot['name']))
                continue
            log_info('[{0}] - Destroying {1}@{2}'.format(local_dataset, dataset, snapshot['name']))
            ZFS.destroy(dataset, snapshot['name'], endpoint, log_command=log_command)
            snapshots.pop(snapshot['handle'])
        if will_delete is True:
            log_info('[{0}] - Cleaning {1} complete'.format(local_dataset, dataset))

zsnapd-0.8.11h/scripts/config.py

#!/usr/bin/python3
# Copyright (c) 2014-2017 Kenneth Henderick
# Copyright (c) 2019 Matthew Grant
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
""" Processes and reads in configuration """ import os import os.path import sys import re import errno import time import configparser from subprocess import SubprocessError from magcode.core.globals_ import * from magcode.core.utility import MagCodeConfigError from magcode.core.utility import get_numeric_setting from scripts.globals_ import CLEANER_REGEX from scripts.globals_ import DEFAULT_BUFFER_SIZE from scripts.globals_ import TRIGGER_FILENAME from scripts.zfs import ZFS TEMPLATE_KEY = r'{template}' TRIGGER_STR = r'trigger' TMP_STRVAL_REGEX = r'(' + TRIGGER_STR + '|' + TEMPLATE_KEY + r')' TMP_HR_REGEX = r'([0-1]*\d|2[0-3])' TMP_HRMIN_REGEX = TMP_HR_REGEX + r':([0-5]\d)' TM_STRVAL_REGEX = r'^' + TMP_STRVAL_REGEX + r'$' TM_HRMIN_REGEX = r'^' + TMP_HRMIN_REGEX + r'$' TMP_RANGE_REGEX = TMP_HRMIN_REGEX + r'\s*-\s*' + TMP_HRMIN_REGEX + r'(\s*/\s*(' + TMP_HRMIN_REGEX + r'|' + TMP_HR_REGEX + r')){0,1}' TM_RANGE_REGEX = r'^' + TMP_RANGE_REGEX + r'$' TMP_HRMINSTRVAL_REGEX = r'(' + TMP_STRVAL_REGEX + r'|' + TMP_HRMIN_REGEX + r')' TM_HRMINSTRVAL_REGEX = r'^' + TMP_HRMINSTRVAL_REGEX + r'$' TMP_HRMINRANGESTRVAL_REGEX = r'(' + TMP_STRVAL_REGEX + r'|' + TMP_HRMIN_REGEX + r'|' + TMP_RANGE_REGEX + r')' TM_HRMINRANGESTRVAL_REGEX = r'^' + TMP_HRMINRANGESTRVAL_REGEX + r'$' TMP_HRMINCOMMA_REGEX = r'(' + TMP_HRMIN_REGEX + r'\s*,\s*){1,}' + TMP_HRMIN_REGEX TM_HRMINCOMMA_REGEX = r'^' + TMP_HRMINCOMMA_REGEX + r'$' TMP_HRMINRANGESTRVALCOMMA_REGEX = r'(' + TMP_HRMINRANGESTRVAL_REGEX + r'\s*,\s*){1,}' + TMP_HRMINRANGESTRVAL_REGEX TM_HRMINRANGESTRVALCOMMA_REGEX = r'^' + TMP_HRMINRANGESTRVALCOMMA_REGEX + r'$' ds_name_syntax = r'^[-_:.a-zA-Z0-9][-_:./a-zA-Z0-9]*$' ds_name_reserved_regex = r'^(log|DEFAULT|(c[0-9]|log/|mirror|raidz|raidz1|raidz2|raidz3|spare).*)$' template_name_syntax = r'^[a-zA-Z0-9][-_:.a-zA-Z0-9]*$' BOOLEAN_REGEX = r'^([tT]rue|[fF]alse|[oO]n|[oO]ff|0|1)$' PATH_REGEX = r'[-_./ ~a-zA-Z0-9]+' MOUNTPOINT_REGEX = r'^(None|legacy|/|/' + PATH_REGEX + r')$' SHELLCMD_REGEX = r'^[-_./~a-zA-Z0-9 :@|=$"' + r"'" + r']+$' SHELLFORMAT_REGEX = r'^[-_./~a-zA-Z0-9 :@|{}]+$' NETCMD_REGEX = r'^[-_./~a-zA-Z0-9 :@|]*$' HOST_REGEX = r'^[0-9a-zA-Z\[][-_.:a-zA-Z0-9\]]*$' PORT_REGEX = r'^[0-9]{1,5}$' USER_REGEX = r'^[a-zA-Z][-_.a-zA-Z0-9]*$' BUFFER_SIZE_REGEX = r'[0-9]{1,12}[kMG]' ds_syntax_dict = {'snapshot': BOOLEAN_REGEX, 'replicate': BOOLEAN_REGEX, 'time': None, 'mountpoint': MOUNTPOINT_REGEX, 'do_trigger': BOOLEAN_REGEX, 'preexec': SHELLCMD_REGEX, 'postexec': SHELLCMD_REGEX, 'log_commands': BOOLEAN_REGEX, 'replicate_all': BOOLEAN_REGEX, 'all_snapshots': BOOLEAN_REGEX, 'replicate_append_basename': BOOLEAN_REGEX, 'replicate_append_fullname': BOOLEAN_REGEX, 'replicate_full_clone': BOOLEAN_REGEX, 'replicate_receive_save': BOOLEAN_REGEX, 'replicate_receive_no_mountpoint': BOOLEAN_REGEX, 'replicate_receive_mountpoint': MOUNTPOINT_REGEX, 'replicate_receive_umount': BOOLEAN_REGEX, 'replicate_send_compression': BOOLEAN_REGEX, 'replicate_send_properties': BOOLEAN_REGEX, 'replicate_send_raw': BOOLEAN_REGEX, 'replicate_postexec': SHELLCMD_REGEX, 'replicate_target': ds_name_syntax, 'replicate_source': ds_name_syntax, 'replicate_endpoint': NETCMD_REGEX, 'replicate_endpoint_host': HOST_REGEX, 'replicate_endpoint_port': PORT_REGEX, 'replicate_endpoint_command': SHELLFORMAT_REGEX, 'replicate2_append_fullname': BOOLEAN_REGEX, 'replicate2_append_basename': BOOLEAN_REGEX, 'replicate2_full_clone': BOOLEAN_REGEX, 'replicate2_receive_save': BOOLEAN_REGEX, 'replicate2_receive_no_mountpoint': BOOLEAN_REGEX, 'replicate2_receive_mountpoint': 
MOUNTPOINT_REGEX, 'replicate2_receive_umount': BOOLEAN_REGEX, 'replicate2_send_compression': BOOLEAN_REGEX, 'replicate2_send_properties': BOOLEAN_REGEX, 'replicate2_send_raw': BOOLEAN_REGEX, 'replicate2_target': ds_name_syntax, 'replicate2_endpoint': NETCMD_REGEX, 'replicate2_endpoint_host': HOST_REGEX, 'replicate2_endpoint_port': PORT_REGEX, 'replicate2_endpoint_command': SHELLFORMAT_REGEX, 'buffer_size': BUFFER_SIZE_REGEX, 'compression': PATH_REGEX, 'compression2': PATH_REGEX, 'schema': CLEANER_REGEX, 'local_schema': CLEANER_REGEX, 'remote_schema': CLEANER_REGEX, 'remote2_schema': CLEANER_REGEX, 'clean_all': BOOLEAN_REGEX, 'local_clean_all': BOOLEAN_REGEX, 'remote_clean_all': BOOLEAN_REGEX, 'remote2_clean_all': BOOLEAN_REGEX, 'template': template_name_syntax, } DEFAULT_ENDPOINT_PORT = 22 DEFAULT_ENDPOINT_CMD = 'ssh -p {port} {host}' DATE_SPEC = '%Y%m%d ' ZFS_MOUNTPOINT_NONE = ('legacy', 'none') def _check_time_syntax(section_name, item, time_spec, checking_template=False): """ Function called to check time spec syntax """ if (',' in time_spec): if (re.match(TM_HRMINRANGESTRVALCOMMA_REGEX, time_spec) is None): log_error("[{0}] {1} - value '{2}' invalid. Must be of form 'HH:MM, HH:MM, HH:MM-HH:MM/[HH:MM|HH|H], {3}, {4}, ...'." .format(section_name, item, time_spec, TEMPLATE_KEY, TRIGGER_STR)) return False else: if (re.match(TM_HRMINRANGESTRVAL_REGEX, time_spec) is None): log_error("[{0}] {1} - value '{2}' invalid. Must be of form 'HH:MM', 'HH:MM-HH:MM/[HH:MM|HH|H]', '{3}' or '{4}'." .format(section_name, item, time_spec, TEMPLATE_KEY, TRIGGER_STR)) return False if time_spec.find(TEMPLATE_KEY) != -1: if checking_template: log_error("[{0}] {1} - value '{2}' invalid. Templates can't have '{3}' as part of the time specifier." .format(section_name, item, time_spec, TEMPLATE_KEY)) return False else: lfind = time_spec.find(TEMPLATE_KEY) rfind = time_spec.rfind(TEMPLATE_KEY) if lfind != rfind: log_error("[{0}] {1} - value '{2}' invalid. More than one '{3}' found.".format(section_name, item, time_spec, TEMPLATE_KEY)) return False return True ds_syntax_dict['time'] = _check_time_syntax class MeterTime(object): """ Manages the passing of time on a daily cycle, and the parsing of time strings for that cycle """ def __init__(self, dataset='', time_spec='', mountpoint=''): """ Initialise class """ hysteresis_time = int(get_numeric_setting('startup_hysteresis_time', float)) self.prev_secs = int(time.time()) - hysteresis_time self.dataset = dataset self.mountpoint = mountpoint self.time_spec = time_spec self.date = self._midnight_date() # Do this before calling _parse_timespec(), as that routine sets it! 
        self.trigger_flag = False
        self.time_list = self._parse_timespec(self.time_spec) if self.time_spec else []

    def __repr__(self):
        return '{0}'.format(self.time_spec)

    def __iter__(self):
        yield from self.time_list

    def __call__(self, time_spec, section_name, item):
        return (self._parse_timespec(time_spec, section_name, item, syntax_check=True))

    def _midnight_date(self):
        date = time.strftime(DATE_SPEC, time.localtime())
        return(int(time.mktime(time.strptime(date + '00:00', DATE_SPEC + '%H:%M'))))

    def _parse_timespec(self, time_spec, section_name=None, item=None, syntax_check=False):
        """
        Parse a time spec
        """
        def parse_hrmin(time_spec):
            return(int(time.mktime(time.strptime(date + time_spec, DATE_SPEC + '%H:%M'))))

        def parse_range(time_spec):
            # nonlocal needed so a failure here is seen by the parse_flag
            # check at the end of _parse_timespec
            nonlocal parse_flag
            tm_list = []
            parse = time_spec.split('-')
            if ('/' in parse[1]):
                parse = [parse[0]] + parse[1].split('/')
            parse = [ts.strip() for ts in parse]
            tm_start = parse_hrmin(parse[0])
            tm_stop = parse_hrmin(parse[1])
            if (tm_stop < tm_start):
                if (section_name and item):
                    log_error("[{0}] {1} - '{2}' - '{3}' before '{4}', should be after."
                            .format(section_name, item, time_spec, parse[1], parse[0]))
                parse_flag = False
                return([])
            if (len(parse) > 2):
                int_parse = parse[2]
                if (':' in int_parse):
                    int_parse = int_parse.split(':')
                    tm_int = int(int_parse[0]) * 3600 + int(int_parse[1]) * 60
                else:
                    tm_int = int(int_parse) * 3600
            else:
                tm_int = 3600
            tm_next = tm_start
            while (tm_next < tm_stop):
                tm_list.append(tm_next)
                tm_next += tm_int
            tm_list.append(tm_stop)
            return(tm_list)

        def parse_spec(time_spec):
            if (time_spec == TRIGGER_STR):
                self.trigger_flag = True
                if syntax_check:
                    return [1,]
                return([])
            if (time_spec == TEMPLATE_KEY):
                if syntax_check:
                    return([1,])
                return([])
            if re.match(TM_HRMIN_REGEX, time_spec):
                return ([parse_hrmin(time_spec)])
            if re.match(TM_RANGE_REGEX, time_spec):
                return(parse_range(time_spec))
            raise Exception('Parsing time specs, should not have got here!')

        parse_flag = True
        date = time.strftime(DATE_SPEC, time.localtime())
        time_list = []
        spec_list = time_spec.split(',')
        spec_list = [ts.strip() for ts in spec_list]
        for ts in spec_list:
            time_list = time_list + parse_spec(ts)
        time_list.sort()
        if parse_flag:
            return(time_list)
        else:
            return([])

    def is_trigger(self):
        return self.trigger_flag

    def do_run(self, now):
        """
        Check if time has passed for a dataset, or for a .trigger file
        """
        # Reinitialise time_list
        now_date = self._midnight_date()
        if (now_date > self.date):
            # Now a new day, reinitialise time_list
            self.date = now_date
            self.time_list = self._parse_timespec(self.time_spec) if self.time_spec else []
        # Trigger file
        if self.is_trigger():
            # We wait until we find a trigger file in the filesystem
            trigger_filename = '{0}/{1}'.format(self.mountpoint, TRIGGER_FILENAME)
            if os.path.exists(trigger_filename):
                log_info("[{0}] - trigger file '{1}' found".format(self.dataset, trigger_filename))
                os.remove(trigger_filename)
                self.prev_secs = now
                return True
        # Check for time passed
        prev_secs = self.prev_secs
        for inst in self.time_list:
            if (prev_secs < inst <= now):
                log_info('[{0}] - time has passed'.format(self.dataset))
                self.prev_secs = now
                return True
        self.prev_secs = now
        return False


class Config(object):

    @staticmethod
    def _check_section_syntax(section, section_name, checking_template=False):
        result = True
        for item in section.keys():
            try:
                value_syntax = ds_syntax_dict[item]
            except KeyError as ex:
                log_error("[{0}] - item '{1}' is not a valid dataset keyword.".format(section_name, item))
                result = False
                continue
            if (not ds_syntax_dict[item]):
                continue
            value = section[item]
            if type(ds_syntax_dict[item]) == type(_check_time_syntax):
                if not ds_syntax_dict[item](section_name, item, value, checking_template):
                    result = False
                continue
            if (not re.match(ds_syntax_dict[item], value)):
                log_error("[{0}] {1} - value '{2}' invalid. Must match regex '{3}'.".format(section_name, item, value, ds_syntax_dict[item]))
                result = False
            if item in ('replicate_source', 'replicate_target'):
                if re.match(ds_name_reserved_regex, value):
                    log_error("[{0}] {1} - value '{2}' invalid. Must not start with a ZFS reserved keyword.".format(section_name, item, value))
                    result = False
        return result

    @staticmethod
    def _check_template_syntax(template_config):
        """
        Checks the syntax of the template file
        """
        result = True
        # Check syntax of DEFAULT section
        if not Config._check_section_syntax(template_config.defaults(), 'DEFAULT'):
            result = False
        for template in template_config.sections():
            # Check name syntax of each dataset group
            if not re.match(template_name_syntax, template):
                log_error("Template name '{0}' is invalid.".format(template))
                result = False
            # Check syntax of each dataset group
            if not Config._check_section_syntax(template_config[template], template, checking_template=True):
                result = False
        return result

    @staticmethod
    def _check_dataset_syntax (ds_config):
        """
        Checks the dataset syntax of read in items
        """
        result = True
        for dataset in ds_config.sections():
            if (not re.match(ds_name_syntax, dataset)
                    or re.match(ds_name_reserved_regex, dataset)):
                log_error("Dataset name '{0}' is invalid.".format(dataset))
                result = False
            if not Config._check_section_syntax(ds_config[dataset], dataset):
                result = False
            if (ds_config.has_option(dataset, 'replicate_endpoint_host')
                    or ds_config.has_option(dataset, 'replicate_endpoint')):
                if (not ds_config.has_option(dataset, 'replicate_target')
                        and not ds_config.has_option(dataset, 'replicate_source')):
                    log_error("Dataset '{0}' is configured for replication but no replicate_source or replicate_target is specified.".format(dataset))
                    result = False
        return result

    @staticmethod
    def read_ds_config():
        """
        Read dataset configuration
        """
        def read_config(filename, dirname=None, default_dict=None):
            file_ = open(filename)
            config = configparser.ConfigParser()
            if default_dict:
                config.read_dict(default_dict)
            config.read_file(file_)
            if dirname:
                for root, dirs, files in os.walk(dirname):
                    file_list = [os.path.join(root, name) for name in files]
                    config.read(file_list)
            file_.close()
            return config

        def check_ds_config_clash(setting_name1, setting_name2, fallback1=False, fallback2=False):
            nonlocal invalid_config
            try:
                setting1 = ds_config.get(dataset, setting_name1)
                setting1_set = True
            except configparser.NoOptionError:
                setting1 = fallback1
                setting1_set = False
            try:
                setting2 = ds_config.get(dataset, setting_name2)
                setting2_set = True
            except configparser.NoOptionError:
                setting2 = fallback2
                setting2_set = False
            if ((setting1_set and setting1) and (setting2_set and setting2)):
                log_error("[{0}] - '{1}' and '{2}' can't be set at the same time.".format(dataset, setting_name1, setting_name2))
                invalid_config = True
            return (setting1, setting2)

        ds_settings = {}
        ds_dict = {}
        template_dict = {}
        try:
            template_filename = settings['template_config_file']
            template_dirname = settings['template_config_dir']
            template_config = read_config(template_filename, template_dirname)
            if not Config._check_template_syntax(template_config):
                raise MagCodeConfigError("Invalid dataset syntax in config file/dir '{0}' or '{1}'"
                        .format(template_filename, template_dirname))

            def get_sect_dict(config, section):
                res_dict = {}
                for item in
config[section]: if item in ('snapshot', ): res_dict[item] = config.getboolean(section, item) else: res_dict[item] = config.get(section, item) return res_dict template_dict = {template_section:get_sect_dict(template_config, template_section) for template_section in template_config.sections()} ds_filename = settings['dataset_config_file'] ds_dirname = settings['dataset_config_dir'] ds_config = read_config(ds_filename, ds_dirname) invalid_config = not bool(Config._check_dataset_syntax(ds_config)) if invalid_config: raise MagCodeConfigError("Invalid dataset syntax in config file/dir '{0}' or '{1}'" .format(ds_filename, ds_dirname)) # Assemble default ds_dict ds_dict = {} for ds in ds_config.sections(): ds_template = ds_config.get(ds, 'template', fallback=None) if (ds_template and ds_template in template_dict): ds_dict[ds] = template_dict.get(ds_template, None) # Destroy ds_config and re read it del ds_config ds_config = read_config(ds_filename, ds_dirname, ds_dict) datasets = ZFS.get_datasets() for dataset in ds_config.sections(): # Calculate mountpoint zfs_mountpoint = None if dataset in datasets: zfs_mountpoint = settings['zfs_proc_mounts'].get(dataset, None) zfs_mountpoint = zfs_mountpoint if zfs_mountpoint not in ZFS_MOUNTPOINT_NONE else None mountpoint = ds_config.get(dataset, 'mountpoint', fallback=zfs_mountpoint) # Work out time_spec time_spec = ds_config.get(dataset, 'time') if time_spec.find(TEMPLATE_KEY) != -1: try: time_spec = time_spec.replace(TEMPLATE_KEY, ds_dict[dataset]['time']) except KeyError: log_error("[{0}] - template section or template time setting does not exist.".format(dataset)) invalid_config = True continue test_time = MeterTime() if not test_time(time_spec, dataset, 'time'): del test_time invalid_config = True continue del test_time # Deal with deprecated settings old_setting_repl_all = ds_config.getboolean(dataset, 'replicate_all', fallback=True) ds_settings[dataset] = {'mountpoint': mountpoint, 'time': MeterTime(dataset, time_spec, mountpoint), 'all_snapshots': ds_config.getboolean(dataset, 'all_snapshots', fallback=old_setting_repl_all), 'snapshot': ds_config.getboolean(dataset, 'snapshot'), 'do_trigger': ds_config.getboolean(dataset, 'do_trigger', fallback=False), 'replicate': None, 'replicate2': None, 'schema': ds_config.get(dataset, 'schema'), 'local_schema': ds_config.get(dataset, 'local_schema', fallback=None), 'remote_schema': ds_config.get(dataset, 'remote_schema', fallback=None), 'remote2_schema': ds_config.get(dataset, 'remote2_schema', fallback=None), 'clean_all': ds_config.get(dataset, 'clean_all', fallback=False), 'local_clean_all': ds_config.get(dataset, 'local_clean_all', fallback=None), 'remote_clean_all': ds_config.get(dataset, 'remote_clean_all', fallback=None), 'remote2_clean_all': ds_config.get(dataset, 'remote2_clean_all', fallback=None), 'preexec': ds_config.get(dataset, 'preexec', fallback=None), 'postexec': ds_config.get(dataset, 'postexec', fallback=None), 'replicate_postexec': ds_config.get(dataset, 'replicate_postexec', fallback=None), 'log_commands': ds_config.getboolean(dataset, 'log_commands', fallback=False)} if (ds_settings[dataset]['local_schema'] is None): ds_settings[dataset]['local_schema'] = ds_settings[dataset]['schema'] if (ds_settings[dataset]['local_clean_all'] is None): ds_settings[dataset]['local_clean_all'] = ds_settings[dataset]['clean_all'] if (ds_settings[dataset]['remote_clean_all'] is None): ds_settings[dataset]['remote_clean_all'] = ds_settings[dataset]['clean_all'] if (ds_settings[dataset]['remote2_clean_all'] 
is None): ds_settings[dataset]['remote2_clean_all'] = ds_settings[dataset]['clean_all'] if ((ds_config.has_option(dataset, 'replicate_endpoint_host') or ds_config.has_option(dataset, 'replicate_endpoint')) and (ds_config.has_option(dataset, 'replicate_target') or ds_config.has_option(dataset, 'replicate_source'))): host = ds_config.get(dataset, 'replicate_endpoint_host', fallback='') port = ds_config.get(dataset, 'replicate_endpoint_port', fallback=DEFAULT_ENDPOINT_PORT) if ds_config.has_option(dataset, 'replicate_endpoint_host'): command = ds_config.get(dataset, 'replicate_endpoint_command', fallback=DEFAULT_ENDPOINT_CMD) if host: endpoint = command.format(port=port, host=host) else: endpoint = '' else: endpoint = ds_config.get(dataset, 'replicate_endpoint') full_clone = ds_config.getboolean(dataset, 'replicate_full_clone', fallback=False) send_properties = ds_config.getboolean(dataset, 'replicate_send_properties', fallback=False) append_basename, append_fullname = check_ds_config_clash('replicate_append_basename', 'replicate_append_fullname') receive_no_mountpoint, receive_mountpoint = check_ds_config_clash('replicate_receive_no_mountpoint', 'replicate_receive_mountpoint', fallback1=(full_clone or send_properties), fallback2='') if (receive_no_mountpoint and receive_mountpoint): # Setting a receive_mountpoint overrides a full_clone receive_no_mountpoint receive_no_mountpoint = False append_name = '' if (append_fullname and dataset.find('/') != -1): append_name = dataset[dataset.find('/'):] if (append_basename and dataset.rfind('/') != -1): append_name = dataset[dataset.rfind('/'):] if (receive_mountpoint): receive_mountpoint = str(receive_mountpoint) + append_name target = ds_config.get(dataset, 'replicate_target', fallback=None) if target: target += append_name ds_settings[dataset]['replicate'] = {'endpoint': endpoint, 'target': target, 'source': ds_config.get(dataset, 'replicate_source', fallback=None), 'all_snapshots': ds_config.getboolean(dataset, 'all_snapshots', fallback=old_setting_repl_all), 'compression': ds_config.get(dataset, 'compression', fallback=None), 'full_clone': full_clone, 'receive_save': ds_config.getboolean(dataset, 'replicate_receive_save', fallback=False), 'receive_no_mountpoint': receive_no_mountpoint, 'receive_mountpoint': receive_mountpoint, 'receive_umount': ds_config.getboolean(dataset, 'replicate_receive_umount', fallback=(full_clone or send_properties)), 'send_compression': ds_config.getboolean(dataset, 'replicate_send_compression', fallback=False), 'send_properties': send_properties, 'send_raw': ds_config.getboolean(dataset, 'replicate_send_raw', fallback=False), 'buffer_size': ds_config.get(dataset, 'buffer_size', fallback=DEFAULT_BUFFER_SIZE), 'log_commands': ds_config.getboolean(dataset, 'log_commands', fallback=False), 'endpoint_host': host, 'endpoint_port': port} if ((ds_config.has_option(dataset, 'replicate2_endpoint_host') or ds_config.has_option(dataset, 'replicate2_endpoint')) and (ds_config.has_option(dataset, 'replicate2_target'))): host = ds_config.get(dataset, 'replicate2_endpoint_host', fallback='') port = ds_config.get(dataset, 'replicate2_endpoint_port', fallback=DEFAULT_ENDPOINT_PORT) if ds_config.has_option(dataset, 'replicate2_endpoint_host'): command = ds_config.get(dataset, 'replicate2_endpoint_command', fallback=ds_config.get(dataset, 'replicate_endpoint_command', fallback=DEFAULT_ENDPOINT_CMD)) if host: endpoint = command.format(port=port, host=host) else: endpoint = '' else: endpoint = ds_config.get(dataset, 'replicate2_endpoint') 
full_clone = ds_config.getboolean(dataset, 'replicate2_full_clone', fallback=False) send_properties = ds_config.getboolean(dataset, 'replicate2_send_properties', fallback=False) append_basename, append_fullname = check_ds_config_clash('replicate2_append_basename', 'replicate2_append_fullname') receive_no_mountpoint, receive_mountpoint = check_ds_config_clash('replicate2_receive_no_mountpoint', 'replicate2_receive_mountpoint', fallback1=(full_clone or send_properties), fallback2='') if (receive_no_mountpoint and receive_mountpoint): # Setting a receive_mountpoint overrides a full_clone receive_no_mountpoint receive_no_mountpoint = False append_name = '' if (append_fullname and dataset.find('/') != -1): append_name = dataset[dataset.find('/'):] if (append_basename and dataset.rfind('/') != -1): append_name = dataset[dataset.rfind('/'):] if (receive_mountpoint): receive_mountpoint = str(receive_mountpoint) + append_name target = ds_config.get(dataset, 'replicate2_target', fallback=None) if target: target += append_name ds_settings[dataset]['replicate2'] = {'endpoint': endpoint, 'target': target, 'source': None, 'all_snapshots': ds_config.getboolean(dataset, 'all_snapshots', fallback=old_setting_repl_all), 'compression': ds_config.get(dataset, 'compression2', fallback=None), 'full_clone': full_clone, 'append_basename': ds_config.get(dataset, 'replicate2_append_basename', fallback=False), 'append_fullname': ds_config.get(dataset, 'replicate2_append_fullname', fallback=False), 'receive_save': ds_config.getboolean(dataset, 'replicate2_receive_save', fallback=False), 'receive_no_mountpoint': receive_no_mountpoint, 'receive_mountpoint': receive_mountpoint, 'receive_umount': ds_config.getboolean(dataset, 'replicate2_receive_umount', fallback=(full_clone or send_properties)), 'send_compression': ds_config.getboolean(dataset, 'replicate2_send_compression', fallback=False), 'send_properties': send_properties, 'send_raw': ds_config.getboolean(dataset, 'replicate2_send_raw', fallback=False), 'buffer_size': ds_config.get(dataset, 'buffer2_size', fallback=DEFAULT_BUFFER_SIZE), 'log_commands': ds_config.getboolean(dataset, 'log_commands', fallback=False), 'endpoint_host': host, 'endpoint_port': port} if invalid_config: raise MagCodeConfigError("Invalid dataset syntax in config file/dir '{0}', '{1}', '{2}', or '{3}'" .format(template_filename, template_dirname, ds_filename, ds_dirname)) # Handle file opening and read errors except (IOError,OSError) as e: log_error('Exception while parsing configuration file: {0}'.format(str(e))) if (e.errno == errno.EPERM or e.errno == errno.EACCES): systemd_exit(os.EX_NOPERM, SDEX_NOPERM) else: systemd_exit(os.EX_IOERR, SDEX_GENERIC) # Handle all configuration file parsing errors except configparser.Error as e: log_error('Exception while parsing configuration file: {0}'.format(str(e))) systemd_exit(os.EX_CONFIG, SDEX_CONFIG) except MagCodeConfigError as e: log_error(str(e)) systemd_exit(os.EX_CONFIG, SDEX_CONFIG) # Handle errors running zfs list -pH etc except (RuntimeError, SubprocessError) as e: log_error(str(e)) systemd_exit(os.EX_SOFTWARE, SDEX_GENERIC) return ds_settings zsnapd-0.8.11h/scripts/globals_.py000066400000000000000000000044271363062134400171040ustar00rootroot00000000000000""" Globals file for zsnapd """ from magcode.core.globals_ import settings # Constants for use in program CLEANER_REGEX = r'^((?P[0-9]+)k){0,1}((?P[0-9]+)h){0,1}(?P[0-9]+)d(?P[0-9]+)w(?P[0-9]+)m(?P[0-9]+)y$' SNAPSHOTNAME_REGEX = 
r'^(\d{4})(1[0-2]|0[1-9])(0[1-9]|[1-2]\d|3[0-1])(([0-1]\d|2[0-3])([0-5]\d)){0,1}$' SNAPSHOTNAME_FMTSPEC = '%Y%m%d%H%M' TRIGGER_FILENAME = '.trigger' DEFAULT_BUFFER_SIZE = '512M' # settings for where files are settings['config_dir'] = '/etc/zsnapd' settings['log_dir'] = '/var/log/zsnapd' settings['run_dir'] = '/run' settings['config_file'] = settings['config_dir'] + '/' + 'process.conf' # Zsnapd only uses one daemon settings['pid_file'] = settings['run_dir'] + '/' + 'zsnapd.pid' #settings['log_file'] = settings['log_dir'] \ # + '/' + settings['process_name'] + '.log' settings['log_file'] = '' settings['panic_log'] = settings['log_dir'] \ + '/' + settings['process_name'] + '-panic.log' settings['syslog_facility'] = '' # zsnapd.py # Dataset config file settings['dataset_config_file'] = settings['config_dir'] \ + '/' + 'dataset.conf' settings['dataset_config_dir'] = settings['config_dir'] \ + '/' + 'dataset.conf.d' # Template config file settings['template_config_file'] = settings['config_dir'] \ + '/' + 'template.conf' settings['template_config_dir'] = settings['config_dir'] \ + '/' + 'template.conf.d' # Print debug mark settings['debug_mark'] = False # Number of seconds we wait while looping in main loop... settings['sleep_time'] = 300 # seconds settings['debug_sleep_time'] = 15 # seconds settings['startup_hysteresis_time'] = 15 # seconds settings['connect_retry_wait'] = 3 # seconds settings['zfs_proc_not_mounts'] = ('/var/lib/lxd/devices',) def read_proc_mounts(): zfs_mnts = {} try: for mnt in open('/proc/self/mounts'): mnt = mnt.split(' ') if (len(mnt) < 3): continue if (mnt[2] != 'zfs'): continue if zfs_mnts.get(mnt[0], None): continue if (mnt[1].startswith(settings['zfs_proc_not_mounts'])): continue zfs_mnts[mnt[0]] = mnt[1] except: pass return zfs_mnts settings['zfs_proc_mounts'] = read_proc_mounts() zsnapd-0.8.11h/scripts/globals_rcmd.py000066400000000000000000000041701363062134400177450ustar00rootroot00000000000000""" Globals file for zsnapd """ from magcode.core.globals_ import settings # settings for where files are settings['config_dir'] = '/etc/zsnapd' settings['log_dir'] = '/var/log' settings['run_dir'] = '/run' settings['config_file'] = settings['config_dir'] + '/' + 'zsnapd-rcmd.conf' #settings['log_file'] = settings['log_dir'] \ # + '/' + settings['process_name'] + '.log' settings['log_file'] = '' settings['syslog_facility'] = '' # Defaults for zsnapd-rcmd settings['rshell'] = '/bin/rbash' settings['rshell_path'] = '/usr/local/bin:/usr/bin:/bin:/usr/local/sbin:/usr/sbin:/sbin' settings['regex_error_on_^'] = True settings['regex_error_on_.*'] = True settings['regex_error_on_$'] = True settings['regex_comp_prog'] = '' settings['regex_compress'] = '' settings['regex_decompress'] = '' settings['regex_dataset'] = '' settings['regex_mountpoint'] = '' settings['regex_receive_args'] = '' settings['regex_snapshot'] = '' settings['regex_send_args'] = '' settings['regex_incr_delta'] = '' settings['regex_resume_args'] = '' settings['regex_mbuffer_common'] = '' settings['regex_mbuffer_push'] = '' settings['regex_mbuffer_pull'] = '' settings['regex_grep_filter_dataset'] = '' settings['rcmd_zfs_get_snapshots'] = '' settings['rcmd_zfs_get_snapshots2'] = '' settings['rcmd_zfs_get_datasets'] = '' settings['rcmd_zfs_snapshot'] = '' settings['rcmd_zfs_replicate_push'] = '' settings['rcmd_zfs_replicate_pull'] = '' settings['rcmd_zfs_replicate_pull2'] = '' settings['rcmd_zfs_holds'] = '' settings['rcmd_zfs_is_held'] = '' settings['rcmd_zfs_hold'] = '' settings['rcmd_zfs_release'] = '' 
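
# A hedged illustration (added, not a shipped default): zsnapd-rcmd rejects any
# non-empty rcmd_* regex that does not start with '^', that contains '.*', or
# that does not end with '$' (see the regex_error_on_* settings above), so a
# conforming entry in /etc/zsnapd/zsnapd-rcmd.conf could look like:
#
#   rcmd_zfs_snapshot = ^zfs snapshot [-_a-zA-Z0-9/]+@[0-9]{8,12}$
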
settings['rcmd_zfs_get_size'] = '' settings['rcmd_zfs_get_size2'] = '' settings['rcmd_zfs_destroy'] = '' settings['rcmd_zfs_receive_abort'] = '' settings['rcmd_zfs_get_receive_resume_token'] = '' settings['rcmd_preexec'] = '' settings['rcmd_postexec'] = '' settings['rcmd_replicate_postexec'] = '' settings['rcmd_aux0'] = '' settings['rcmd_aux1'] = '' settings['rcmd_aux2'] = '' settings['rcmd_aux3'] = '' settings['rcmd_aux4'] = '' settings['rcmd_aux5'] = '' settings['rcmd_aux6'] = '' settings['rcmd_aux7'] = '' settings['rcmd_aux8'] = '' settings['rcmd_aux9'] = '' zsnapd-0.8.11h/scripts/helper.py000066400000000000000000000052671363062134400166040ustar00rootroot00000000000000# Copyright (c) 2014-2017 Kenneth Henderick # Copyright (c) 2019 Matthew Grant # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN # THE SOFTWARE. """ Provides basic helper functionality """ import re import sys from subprocess import Popen, PIPE from magcode.core.globals_ import debug_extreme from magcode.core.globals_ import log_debug from magcode.core.globals_ import log_info from magcode.core.globals_ import log_error class Helper(object): """ Contains generic helper functionality """ @staticmethod def run_command(command, cwd, endpoint='', log_command=False, filter_error=''): """ Executes a command, returning the output. 
If the command fails, it raises """ if endpoint == '': command = command else: command = "{0} '{1}'".format(endpoint, command) if log_command: log_debug("Executing command: '{0}'".format(command)) elif debug_extreme(): log_debug("Executing command: '{0}'".format(command)) pattern = re.compile(r'[^\n\t@ a-zA-Z0-9_\\.:/\-]+') process = Popen(command, shell=True, cwd=cwd, stdout=PIPE, stderr=PIPE) out, err = process.communicate() # Clean up output if (sys.version_info.major >= 3): out = out.decode(encoding='utf-8') err = err.decode(encoding='utf-8') err = err.strip() return_code = process.poll() if return_code != 0: if (not filter_error or err.find(filter_error) == -1): raise RuntimeError('{0} failed with return value {1} and error message: {2}'.format(command, return_code, err)) return re.sub(pattern, '', out) zsnapd-0.8.11h/scripts/manager.py000077500000000000000000000660751363062134400167460ustar00rootroot00000000000000#!/usr/bin/python3 # Copyright (c) 2014-2017 Kenneth Henderick # Copyright (c) 2019 Matthew Grant # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN # THE SOFTWARE. 
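
# Usage sketch (added) for scripts.helper.Helper.run_command, defined above in
# helper.py -- a hedged illustration, not original source. When an endpoint is
# given it is prepended and the command is single-quoted for remote execution;
# a non-zero exit status raises RuntimeError unless filter_error matches:
#
#   out = Helper.run_command('zfs list -pH -o name', '/')
#   out = Helper.run_command('zfs list -pH -o name', '/',
#                            endpoint='ssh -l backup -p 2345 nas.example.org')
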
""" Provides the overall functionality """ import time import os import re from collections import OrderedDict from socket import gethostname from magcode.core.globals_ import * from magcode.core.utility import connect_test_address from magcode.core.utility import get_numeric_setting from scripts.zfs import ZFS from scripts.clean import Cleaner from scripts.helper import Helper from scripts.globals_ import SNAPSHOTNAME_FMTSPEC from scripts.globals_ import SNAPSHOTNAME_REGEX from scripts.globals_ import TRIGGER_FILENAME PROC_FAILURE = 0 PROC_EXECUTED = 1 PROC_CHANGED = 2 class IsConnected(object): """ Test object class for caching endpoint connectivity and testing for it as well """ def __init__(self): self.unconnected_list = [] self.connected_list = [] def _test_connected(self, host, port): connect_retry_wait = get_numeric_setting('connect_retry_wait', float) exc_msg = '' for t in range(3): try: # Transform any hostname to an IP address connect_test_address(host, port) break except(IOError, OSError) as exc: exc_msg = str(exc) time.sleep(connect_retry_wait) continue else: if self.local_dataset: log_info("[{0}] - Can't reach endpoint '{1}:{2}' - {3}" .format(self.local_dataset, host, port, exc_msg)) else: log_error("Can't reach endpoint '{0}:{1}' - {2}".format(host, port, exc_msg)) return False return True def test_unconnected(self, replicate_param, local_dataset=''): """ Check that endpoint is unconnected """ self.local_dataset = local_dataset if (replicate_param and replicate_param['endpoint_host']): host = replicate_param['endpoint_host'] port = replicate_param['endpoint_port'] if ((host, port) in self.unconnected_list): return(True) if ((host, port) not in self.connected_list): if self._test_connected(host, port): self.connected_list.append((host, port)) # Go and write trigger else: self.unconnected_list.append((host, port)) return(True) return(False) class Manager(object): """ Manages the ZFS snapshotting process """ @staticmethod def touch_trigger(ds_settings, test_reachable, do_trigger, *args): """ Runs around creating .trigger files for datasets with time = trigger """ result = True datasets = ZFS.get_datasets() ds_candidates = [ds.rstrip('/') for ds in args if ds[0] != '/'] mnt_candidates = [m.rstrip('/') for m in args if m[0] == '/'] do_trigger_candidates = [ds for ds in ds_settings if ds_settings[ds]['do_trigger']] trigger_mnts_dict = {ds_settings[ds]['mountpoint']:ds for ds in ds_settings if ds_settings[ds]['time'].is_trigger()} if len(ds_candidates): for candidate in ds_candidates: if candidate not in datasets: log_error("Dataset '{0}' does not exist.".format(candidate)) sys.exit(os.EX_DATAERR) if candidate not in ds_settings: log_error("Dataset '{0}' is not configured fo zsnapd.".format(candidate)) sys.exit(os.EX_DATAERR) if len(mnt_candidates): for candidate in mnt_candidates: if candidate not in trigger_mnts_dict: log_error("Trigger mount '{0}' not configured for zsnapd".format(candidate)) sys.exit(os.EX_DATAERR) if trigger_mnts_dict[candidate] not in datasets: log_error("Dataset '{0}' for trigger mount {1} does not exist.".format(candidate, trigger_mnts_dict[candidate])) sys.exit(os.EX_DATAERR) ds_candidates.append(trigger_mnts_dict[candidate]) # If no candidates given on comman line, process all those with do_trigger set if (do_trigger and not ds_candidates): ds_candidates = do_trigger_candidates # If do_trigger, only process those datasets with do_trigger set elif (do_trigger and ds_candidates): for ds in ds_candidates: if (settings['verbose'] and not 
ds_settings[ds]['do_trigger']): log_info("Dataset '{0}' 'do_trigger' not set - skipping.".format(ds)) ds_candidates = [ ds for ds in ds_candidates if ds_settings[ds]['do_trigger']] if (not len(ds_candidates)): log_error("No datasets configured for triggers or given on command line.") sys.exit(os.EX_NOINPUT) # Check ds_candidates for is_trigger and mnt point for ds in ds_candidates: if (not ds_settings[ds]['time'].is_trigger() and settings['verbose']): log_info("Dataset '{0}' is not configured for triggers - skipping.".format(ds)) if (not ds_settings[ds]['mountpoint'] and settings['verbose']): log_info("Dataset '{0}' does not have a mountpoint configured - skipping.".format(ds)) is_connected = IsConnected() for dataset in datasets: if dataset in ds_settings: if (len(ds_candidates) and dataset not in ds_candidates): continue try: dataset_settings = ds_settings[dataset] take_snapshot = dataset_settings['snapshot'] is True replicate = dataset_settings['replicate'] is not None clean = bool(dataset_settings['schema']) if take_snapshot is True or replicate is True or clean is True: if dataset_settings['time'].is_trigger() and dataset_settings['mountpoint']: # Check endpoint for trigger is connected if test_reachable and is_connected.test_unconnected(dataset_settings['replicate']): continue # Trigger file testing and creation trigger_filename = '{0}/{1}'.format(dataset_settings['mountpoint'], TRIGGER_FILENAME) if os.path.exists(trigger_filename): continue if (not os.path.isdir(dataset_settings['mountpoint'])): log_error("Directory '{0}' does not exist.".format(dataset_settings['mountpoint'])) result = False continue trigger_file = open(trigger_filename, 'wt') trigger_file.close() except Exception as ex: log_error('Exception: {0}'.format(str(ex))) del is_connected return result @staticmethod def snapshot(dataset, snapshots, now, local_dataset='', endpoint='', log_command=False): local_dataset = dataset if not local_dataset else local_dataset result = PROC_EXECUTED this_time = time.strftime(SNAPSHOTNAME_FMTSPEC, time.localtime(now)) # Take this_time's snapshot log_info('[{0}] - Taking snapshot {1}@{2}'.format(local_dataset, dataset, this_time)) try: ZFS.snapshot(dataset, this_time, endpoint=endpoint, log_command=log_command) except Exception as ex: # if snapshot fails, move on to the next one log_error('[{0}] - Exception: {1}'.format(local_dataset, str(ex))) return PROC_FAILURE else: snapshots.update({this_time:{'name': this_time, 'creation': now}}) log_info('[{0}] - Taking snapshot {1}@{2} complete'.format(local_dataset, dataset, this_time)) result = PROC_CHANGED return result @staticmethod def new_hold(dataset, snap_name, endpoint='', log_command=False): result = PROC_EXECUTED holds = ZFS.holds(dataset, endpoint=endpoint, log_command=log_command) ZFS.hold(dataset, snap_name, endpoint=endpoint, log_command=log_command, may_exist=True) if snap_name in holds: holds.remove(snap_name) for hold in holds: ZFS.release(dataset, hold, endpoint=endpoint, log_command=log_command) result = PROC_CHANGED return result @staticmethod def replicate(src_dataset, src_snapshots, dst_dataset, dst_snapshots, replicate_settings): result = PROC_EXECUTED push = replicate_settings['target'] is not None replicate_dirN = 'push' if push else 'pull' src_endpoint = '' if push else replicate_settings['endpoint'] dst_endpoint = replicate_settings['endpoint'] if push else '' src_host = gethostname().split('.')[0] if push else replicate_settings['endpoint_host'] dst_host = gethostname().split('.')[0] if not push else 
replicate_settings['endpoint_host'] local_dataset = src_dataset if push else dst_dataset full_clone = replicate_settings['full_clone'] receive_save = replicate_settings['receive_save'] receive_no_mountpoint = replicate_settings['receive_no_mountpoint'] receive_mountpoint = replicate_settings['receive_mountpoint'] receive_umount = replicate_settings['receive_umount'] send_compression = replicate_settings['send_compression'] send_properties = replicate_settings['send_properties'] send_raw = replicate_settings['send_raw'] all_snapshots = replicate_settings['all_snapshots'] buffer_size = replicate_settings['buffer_size'] compression = replicate_settings['compression'] log_command = replicate_settings['log_commands'] extra_args = {'full_clone': full_clone, 'all_snapshots': all_snapshots, 'receive_no_mountpoint': receive_no_mountpoint, 'receive_umount': receive_umount, 'receive_save': receive_save, 'receive_mountpoint': receive_mountpoint, 'send_compression': send_compression, 'send_properties': send_properties, 'buffer_size': buffer_size, 'compression': compression, 'send_raw': send_raw, 'log_command': log_command } # Get any receive_resume_tokens receive_resume_token = '' if push: receive_resume_token = ZFS.get_receive_resume_token(dst_dataset, endpoint=dst_endpoint, log_command=log_command) else: receive_resume_token = ZFS.get_receive_resume_token(src_dataset, endpoint=src_endpoint, log_command=log_command) if receive_resume_token: log_info('[{0}] - Resuming replicating [{1}]:{2} to [{3}]:{4}'.format(local_dataset, src_host, src_dataset, dst_host, dst_dataset)) size = ZFS.get_size(src_dataset, None, None, src_endpoint, receive_resume_token, **extra_args) log_info('[{0}] - {1}@??? > {1}@??? ({2})'.format(local_dataset, src_dataset, size)) ZFS.replicate(src_dataset, None, None, dst_dataset, replicate_settings['endpoint'], receive_resume_token, direction=replicate_dirN, **extra_args) # Recalculate dst data sets if push: dst_endpoint = replicate_settings['endpoint'] else: dst_endpoint = '' new_dst_snapshots = ZFS.get_snapshots2(dst_dataset, dst_endpoint, log_command=log_command, all_snapshots=all_snapshots) snapshot = list(new_dst_snapshots)[-1] snap_name = new_dst_snapshots[snapshot]['name'] Manager.new_hold(src_dataset, snap_name, endpoint=src_endpoint, log_command=log_command) Manager.new_hold(dst_dataset, snap_name, endpoint=dst_endpoint, log_command=log_command) dst_snapshots.clear() dst_snapshots.update(new_dst_snapshots) log_info('[{0}] - Resumed replication [{1}]:{2} to [{3}]:{4} complete'.format(local_dataset, src_host, src_dataset, dst_host, dst_dataset)) result = PROC_CHANGED return result log_info('[{0}] - Replicating [{1}]:{2} to [{3}]:{4}'.format(local_dataset, src_host, src_dataset, dst_host, dst_dataset)) last_common_snapshot = None index_last_common_snapshot = None # Search for the last src snapshot that is available in dst for snapshot in src_snapshots: if snapshot in dst_snapshots: last_common_snapshot = snapshot index_last_common_snapshot = list(src_snapshots).index(snapshot) if last_common_snapshot is not None: # There's a common snapshot snaps_to_send = list(src_snapshots)[index_last_common_snapshot:] # Remove first element as it is already at other end snaps_to_send.pop(0) previous_snapshot = last_common_snapshot if full_clone or all_snapshots: prevsnap_name = src_snapshots[previous_snapshot]['name'] snapshot = list(src_snapshots)[-1] snap_name = src_snapshots[snapshot]['name'] # There is a snapshot on this host that is not yet on the other side. 
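
# (Added note) In this branch -- full_clone or all_snapshots -- ZFS.replicate is
# handed the last common snapshot and the newest one, and builds a single
# 'zfs send -I prevsnap ... lastsnap' stream carrying every intermediate
# snapshot; the else branch below instead walks the list and sends each
# snapshot individually with 'zfs send -i'.
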
size = ZFS.get_size(src_dataset, prevsnap_name, snap_name, endpoint=src_endpoint, **extra_args) log_info('[{0}] - {1}@{2} > {1}@{3} ({4})'.format(local_dataset, src_dataset, prevsnap_name, snap_name, size)) ZFS.replicate(src_dataset, prevsnap_name, snap_name, dst_dataset, replicate_settings['endpoint'], direction=replicate_dirN, **extra_args) Manager.new_hold(src_dataset, snap_name, endpoint=src_endpoint, log_command=log_command) Manager.new_hold(dst_dataset, snap_name, endpoint=dst_endpoint, log_command=log_command) for snapshot in snaps_to_send: dst_snapshots.update({snapshot:src_snapshots[snapshot]}) result = PROC_CHANGED else: for snapshot in snaps_to_send: prevsnap_name = src_snapshots[previous_snapshot]['name'] snap_name = src_snapshots[snapshot]['name'] # There is a snapshot on this host that is not yet on the other side. size = ZFS.get_size(src_dataset, prevsnap_name, snap_name, endpoint=src_endpoint, **extra_args) log_info('[{0}] - {1}@{2} > {1}@{3} ({4})'.format(local_dataset, src_dataset, prevsnap_name, snap_name, size)) ZFS.replicate(src_dataset, prevsnap_name, snap_name, dst_dataset, replicate_settings['endpoint'], direction=replicate_dirN, **extra_args) Manager.new_hold(src_dataset, snap_name, endpoint=src_endpoint, log_command=log_command) Manager.new_hold(dst_dataset, snap_name, endpoint=dst_endpoint, log_command=log_command) previous_snapshot = snapshot dst_snapshots.update({snapshot:src_snapshots[snapshot]}) result = PROC_CHANGED elif len(src_snapshots) > 0: # No remote snapshot, full replication snapshot = list(src_snapshots)[-1] snap_name = src_snapshots[snapshot]['name'] size = ZFS.get_size(src_dataset, None, snap_name, endpoint=src_endpoint, **extra_args) log_info(' {0}@ > {0}@{1} ({2})'.format(src_dataset, snap_name, size)) ZFS.replicate(src_dataset, None, snap_name, dst_dataset, replicate_settings['endpoint'], direction=replicate_dirN, **extra_args) Manager.new_hold(src_dataset, snap_name, endpoint=src_endpoint, log_command=log_command) ZFS.hold(dst_dataset, snap_name, endpoint=dst_endpoint, log_command=log_command) if full_clone: for snapshot in src_snapshots: dst_snapshots.update({snapshot:src_snapshots[snapshot]}) else: dst_snapshots.update({snapshot:src_snapshots[snapshot]}) result = PROC_CHANGED log_info('[{0}] - Replicating [{1}]:{2} to [{3}]:{4} complete'.format(local_dataset, src_host, src_dataset, dst_host, dst_dataset)) return result @staticmethod def run(ds_settings, sleep_time): """ Executes a single run where certain datasets might or might not be snapshotted """ snapshots = ZFS.get_snapshots() datasets = ZFS.get_datasets() is_connected = IsConnected() for dataset in datasets: if dataset not in ds_settings: continue # Evaluate per dataset to make closer to actual snapshot time # Can wander due to large volume transfers and replications now = int(time.time()) this_time = time.strftime(SNAPSHOTNAME_FMTSPEC, time.localtime(now)) try: dataset_settings = ds_settings[dataset] take_snapshot = dataset_settings['snapshot'] is True replicate = dataset_settings['replicate'] is not None replicate2 = dataset_settings['replicate2'] is not None clean = bool(dataset_settings['schema']) # Decide whether we need to handle this dataset if not take_snapshot and not replicate and not replicate2 and not clean: continue replicate_settings = dataset_settings['replicate'] replicate2_settings = dataset_settings['replicate2'] full_clone = replicate_settings['full_clone'] if replicate else False full_clone2 = replicate2_settings['full_clone'] if replicate2 else False log_command = 
dataset_settings['log_commands'] local_snapshots = snapshots.get(dataset, OrderedDict()) # Manage what snapshots we operate on - everything or zsnapd only if (not dataset_settings['all_snapshots'] and not full_clone): for snapshot in local_snapshots: snapshotname = local_snapshots[snapshot]['name'] if (re.match(SNAPSHOTNAME_REGEX, snapshotname)): continue local_snapshots.pop(snapshot) meter_time = dataset_settings['time'] if not meter_time.do_run(now): continue push = replicate_settings['target'] is not None if replicate else True if push: # Pre exectution command if dataset_settings['preexec'] is not None: Helper.run_command(dataset_settings['preexec'], '/', log_command=log_command) result = PROC_FAILURE if (take_snapshot is True and this_time not in local_snapshots): result = Manager.snapshot(dataset, local_snapshots, now, log_command=log_command) # Clean snapshots if one has been taken - clean will not execute # if no snapshot taken Cleaner.clean(dataset, local_snapshots, dataset_settings['schema'], log_command=log_command, all_snapshots=dataset_settings['clean_all']) # Execute postexec command if result and dataset_settings['postexec'] is not None: Helper.run_command(dataset_settings['postexec'], '/', log_command=log_command) # Replicating, if required result = PROC_FAILURE result2 = PROC_FAILURE if (replicate is True): # If network replicating, check connectivity here test_unconnected = is_connected.test_unconnected(replicate_settings, local_dataset=dataset) if test_unconnected: log_info("[{0}] - Skipping as '{1}:{2}' unreachable" .format(dataset, replicate_settings['endpoint_host'], replicate_settings['endpoint_port'])) continue remote_dataset = replicate_settings['target'] remote_snapshots = ZFS.get_snapshots2(remote_dataset, replicate_settings['endpoint'], log_command=log_command, all_snapshots=dataset_settings['all_snapshots']) result = Manager.replicate(dataset, local_snapshots, remote_dataset, remote_snapshots, replicate_settings) # Clean snapshots remotely if one has been taken - only kept snapshots will allow aging if (dataset_settings['remote_schema']): Cleaner.clean(remote_dataset, remote_snapshots, dataset_settings['remote_schema'], log_command=log_command, all_snapshots=dataset_settings['remote_clean_all']) if (replicate2 is True): # If network replicating, check connectivity here test_unconnected = is_connected.test_unconnected(replicate2_settings, local_dataset=dataset) if test_unconnected: log_info("[{0}] - Skipping as '{1}:{2}' unreachable" .format(dataset, replicate2_settings['endpoint_host'], replicate2_settings['endpoint_port'])) continue remote_dataset = replicate2_settings['target'] remote_snapshots = ZFS.get_snapshots2(remote_dataset, replicate2_settings['endpoint'], log_command=log_command, all_snapshots=dataset_settings['all_snapshots']) result2 = Manager.replicate(dataset, local_snapshots, remote_dataset, remote_snapshots, replicate2_settings) # Clean snapshots remotely if one has been taken - only kept snapshots will allow aging if (dataset_settings['remote2_schema']): Cleaner.clean(remote_dataset, remote_snapshots, dataset_settings['remote2_schema'], log_command=log_command, all_snapshots=dataset_settings['remote2_clean_all']) # Post execution command if ((result or result2) and dataset_settings['replicate_postexec'] is not None): Helper.run_command(dataset_settings['replicate_postexec'], '/', log_command=log_command) else: # Pull logic for remote site # Replicating, if required # If network replicating, check connectivity here test_unconnected = 
is_connected.test_unconnected(replicate_settings, local_dataset=dataset) if test_unconnected: log_warn("[{0}] - Skipping as '{1}:{2}' unreachable" .format(dataset, replicate_settings['endpoint_host'], replicate_settings['endpoint_port'])) continue remote_dataset = replicate_settings['target'] if push else replicate_settings['source'] remote_datasets = ZFS.get_datasets(replicate_settings['endpoint'], log_command=log_command) if remote_dataset not in remote_datasets: log_error("[{0}] - remote dataset '{1}' does not exist".format(dataset, remote_dataset)) continue remote_snapshots = ZFS.get_snapshots2(remote_dataset, replicate_settings['endpoint'], log_command=log_command, all_snapshots=dataset_settings['all_snapshots']) endpoint = replicate_settings['endpoint'] if (take_snapshot is True and this_time not in remote_snapshots): # Only execute everything here if needed # Remote pre-execution command if dataset_settings['preexec'] is not None: Helper.run_command(dataset_settings['preexec'], '/', endpoint=endpoint, log_command=log_command) # Take remote snapshot result = PROC_FAILURE result = Manager.snapshot(remote_dataset, remote_snapshots, now, endpoint=endpoint, local_dataset=dataset, log_command=log_command) # Clean remote snapshots if one has been taken - only kept snapshots will allow aging to happen Cleaner.clean(remote_dataset, remote_snapshots, dataset_settings['schema'], log_command=log_command, endpoint=endpoint, local_dataset=dataset, all_snapshots=dataset_settings['clean_all']) # Execute remote postexec command if result and dataset_settings['postexec'] is not None: Helper.run_command(dataset_settings['postexec'], '/', endpoint=endpoint, log_command=log_command) if (replicate is True): result = PROC_FAILURE result = Manager.replicate(remote_dataset, remote_snapshots, dataset, local_snapshots, replicate_settings) # Clean snapshots locally if one has been taken - only kept snapshots will allow aging #if not replicate_settings['full_clone']: Cleaner.clean(dataset, local_snapshots, dataset_settings['local_schema'], log_command=log_command, all_snapshots=dataset_settings['local_clean_all']) # Post execution command if (result and dataset_settings['replicate_postexec'] is not None): Helper.run_command(dataset_settings['replicate_postexec'], '/', endpoint=endpoint, log_command=log_command) except Exception as ex: log_error('[{0}] - Exception: {1}'.format(dataset, str(ex))) # Clean up del is_connected zsnapd-0.8.11h/scripts/zfs.py000066400000000000000000000337121363062134400161230ustar00rootroot00000000000000# Copyright (c) 2014-2017 Kenneth Henderick # Copyright (c) 2019 Matthew Grant # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN # THE SOFTWARE. """ Provides basic ZFS functionality """ import time import re from collections import OrderedDict from magcode.core.globals_ import log_debug from magcode.core.globals_ import log_info from magcode.core.globals_ import log_error from magcode.core.globals_ import debug_verbose from scripts.globals_ import SNAPSHOTNAME_REGEX from scripts.globals_ import SNAPSHOTNAME_FMTSPEC from scripts.globals_ import DEFAULT_BUFFER_SIZE from scripts.helper import Helper class ZFS(object): """ Contains generic ZFS functionality """ @staticmethod def get_snapshots(dataset='', endpoint='', all_snapshots=True, log_command=False): """ Retrieves a list of snapshots """ if endpoint == '': command = 'zfs list -pH -s creation -o name,creation -t snapshot{0}{1} || true' else: command = '{0} \'zfs list -pH -s creation -o name,creation -t snapshot{1} || true\'' if dataset == '': dataset_filter = '' else: dataset_filter = ' | grep ^{0}@'.format(dataset) output = Helper.run_command(command.format(endpoint, dataset_filter), '/', log_command=log_command) snapshots = {} for line in filter(len, output.split('\n')): parts = list(filter(len, line.split('\t'))) datasetname = parts[0].split('@')[0] creation = int(parts[1]) snapshot = time.strftime(SNAPSHOTNAME_FMTSPEC, time.localtime(creation)) snapshotname = parts[0].split('@')[1] if (not all_snapshots and re.match(SNAPSHOTNAME_REGEX, snapshotname) is None): # If required, only read in zsnapd snapshots continue if datasetname not in snapshots: snapshots[datasetname] = OrderedDict() snapshots[datasetname].update({snapshot:{'name': snapshotname, 'creation': creation}}) return snapshots @staticmethod def get_snapshots2(dataset, endpoint='', all_snapshots=True, log_command=False): """ Retrieves a list of snapshots from a dataset """ command = 'zfs list -pH -s creation -o name,creation -t snapshot {1} || true' if endpoint: command = '{0} \'' + command + '\'' output = Helper.run_command(command.format(endpoint, dataset), '/', log_command=log_command) snapshots = OrderedDict() for line in filter(len, output.split('\n')): parts = list(filter(len, line.split('\t'))) creation = int(parts[1]) snapshot = time.strftime(SNAPSHOTNAME_FMTSPEC, time.localtime(creation)) snapshotname = parts[0].split('@')[1] if (not all_snapshots and re.match(SNAPSHOTNAME_REGEX, snapshotname) is None): # If required, only read in zsnapd snapshots continue snapshots[snapshot] = {'name': snapshotname, 'creation': creation} return snapshots @staticmethod def get_datasets(endpoint='', log_command=False): """ Retrieves all datasets """ if endpoint == '': command = 'zfs list -pH -o name,mountpoint' else: command = "{0} 'zfs list -pH -o name,mountpoint'" output = Helper.run_command(command.format(endpoint), '/', log_command=log_command) datasets = {} for line in filter(len, output.split('\n')): parts = list(filter(len, line.split('\t'))) datasets[parts[0]] = {'name': parts[0], 'mountpoint': parts[1]} return datasets @staticmethod def snapshot(dataset, name, endpoint='', log_command=False): """ Takes a snapshot """ if endpoint == '': command = 'zfs snapshot {0}@{1}'.format(dataset, name) else: command = "{0} 'zfs snapshot {1}@{2}'".format(endpoint, dataset, name) Helper.run_command(command, '/', log_command=log_command) @staticmethod def 
abort_interrupted_receive(dataset, endpoint='', log_command=False, no_save=False): """ Abort an interrupted receive """ filter_error = 'does not have any resumable receive state to abort' if no_save else '' if endpoint == '': command = 'zfs receive -A {0}'.format(dataset) else: command = "{0} 'zfs receive -A {1}'".format(endpoint, dataset) Helper.run_command(command, '/', log_command=log_command, filter_error=filter_error) @staticmethod def get_receive_resume_token(dataset, endpoint='', log_command=False): """ Retreives a resume token """ if endpoint == '': command = 'zfs get receive_resume_token -pHo value {0} || true'.format(dataset) else: command = "{0} 'zfs get receive_resume_token -pHo value {1} || true'".format(endpoint, dataset) output = Helper.run_command(command, '/', log_command=log_command) receive_resume_token = '' for line in filter(len, output.split('\n')): receive_resume_token = line return receive_resume_token if receive_resume_token != '-' else '' @staticmethod def replicate(dataset, base_snapshot, last_snapshot, target, endpoint='', receive_resume_token='', direction='push', buffer_size=DEFAULT_BUFFER_SIZE, compression=None, receive_mountpoint='', full_clone=False, all_snapshots=True, send_compression=False, send_properties=False, send_raw=False, receive_no_mountpoint=False, receive_umount=False, receive_save=False, log_command=False): """ Replicates a dataset towards a given endpoint/target (push) Replicates a dataset from a given endpoint to a local target (pull) """ delta = '' if base_snapshot is not None: if (not full_clone and not all_snapshots): delta = '-i {0}@{1} '.format(dataset, base_snapshot) else: delta = '-I {0}@{1} '.format(dataset, base_snapshot) send_args = '' if send_compression: send_args += 'Lec' if send_raw: send_args += 'w' if not receive_resume_token: if send_properties: send_args += 'p' if full_clone: send_args += 'R' if send_args: send_args = '-' + send_args send_args += ' ' receive_args = '' if receive_save: receive_args = 's' if receive_umount: receive_args += 'u' if receive_args: receive_args = '-' + receive_args receive_args += ' ' if receive_no_mountpoint: receive_args += '-x mountpoint ' if receive_mountpoint: receive_args += '-o "mountpoint={0}" '.format(receive_mountpoint) if compression is not None: compress = '| {0} -c'.format(compression) decompress = '| {0} -cd'.format(compression) else: compress = '' decompress = '' if debug_verbose(): # Log these commands if verbose debug log_command = True # Work out zfs send command if receive_resume_token: zfs_send_cmd = 'zfs send {0}-t ' + receive_resume_token else: zfs_send_cmd = 'zfs send {0}{1}{2}@{3}' if endpoint == '': # We're replicating to a local target command = zfs_send_cmd + ' | zfs receive {4}-F {5}' command = command.format(send_args, delta, dataset, last_snapshot, receive_args, target) Helper.run_command(command, '/', log_command=log_command) else: if direction == 'push': # We're replicating to a remote server command = zfs_send_cmd + ' {4} | mbuffer -q -v 0 -s 128k -m {5} | {6} \'mbuffer -q -v 0 -s 128k -m {5} {7} | zfs receive {8}-F {9}\'' command = command.format(send_args, delta, dataset, last_snapshot, compress, buffer_size, endpoint, decompress, receive_args, target) Helper.run_command(command, '/', log_command=log_command) elif direction == 'pull': # We're pulling from a remote server command = '{5} \'' + zfs_send_cmd + ' {4} | mbuffer -q -v 0 -s 128k -m {6}\' | mbuffer -q -v 0 -s 128k -m {6} {7} | zfs receive {8}-F {9}' command = command.format(send_args, delta, dataset, 
last_snapshot, compress, endpoint, buffer_size, decompress, receive_args, target) Helper.run_command(command, '/', log_command=log_command) @staticmethod def holds(target, endpoint='', log_command=False): command = 'zfs list -H -r -d 1 -t snapshot -o name {1} | xargs -d "\\n" zfs holds -H' if endpoint != '': command = '{0} \'' + command + '\'' command = command.format(endpoint, target) output = Helper.run_command(command, '/', log_command=log_command) holds = [] for line in filter(len, output.split('\n')): parts = list(filter(len, line.split('\t'))) if parts[1] != 'zsm': continue snapshotname = parts[0].split('@')[1] holds.append(snapshotname) holds.sort() return holds @staticmethod def is_held(target, snapshot, endpoint='', log_command=False): if endpoint == '': command = 'zfs holds {0}@{1}'.format(target, snapshot) return 'zsm' in Helper.run_command(command, '/', log_command=log_command) command = '{0} \'zfs holds {1}@{2}\''.format(endpoint, target, snapshot) return 'zsm' in Helper.run_command(command, '/', log_command=log_command) @staticmethod def hold(target, snapshot, endpoint='', log_command=False, may_exist=False): filter_error = 'tag already exists' if may_exist else '' if endpoint == '': command = 'zfs hold zsm {0}@{1}'.format(target, snapshot) Helper.run_command(command, '/', log_command=log_command, filter_error=filter_error) else: command = '{0} \'zfs hold zsm {1}@{2}\''.format(endpoint, target, snapshot) Helper.run_command(command, '/', log_command=log_command, filter_error=filter_error) @staticmethod def release(target, snapshot, endpoint='', log_command=False): if endpoint == '': command = 'zfs release zsm {0}@{1} || true'.format(target, snapshot) Helper.run_command(command, '/', log_command=log_command) else: command = '{0} \'zfs release zsm {1}@{2} || true\''.format(endpoint, target, snapshot) Helper.run_command(command, '/', log_command=log_command) @staticmethod def get_size(dataset, base_snapshot, last_snapshot, endpoint='', receive_resume_token='', buffer_size=DEFAULT_BUFFER_SIZE, compression=None, receive_mountpoint='', full_clone=False, all_snapshots=True, receive_no_mountpoint=False, receive_umount=False, receive_save=False, send_compression=False, send_properties=False, send_raw=False, log_command=False): """ Executes a dry-run zfs send to calculate the size of the delta. 
""" delta = '' if base_snapshot is not None: if (not full_clone and not all_snapshots): delta = '-i {0}@{1} '.format(dataset, base_snapshot) else: delta = '-I {0}@{1} '.format(dataset, base_snapshot) send_args = '' if send_compression: send_args += 'Lec' if send_raw: send_args += 'w' if not receive_resume_token: if send_properties: send_args += 'p' if full_clone: send_args += 'R' if send_args: send_args = '-' + send_args send_args += ' ' # Work out zfs send command if receive_resume_token: zfs_send_cmd = 'zfs send -nv {0}-t ' + receive_resume_token else: zfs_send_cmd = 'zfs send -nv {0}{1}{2}@{3}' if endpoint == '': command = zfs_send_cmd else: command = '{4} \'' + zfs_send_cmd + '\'' command = command.format(send_args, delta, dataset, last_snapshot, endpoint) command = '{0} 2>&1 | grep \'estimated size is\''.format(command) output = Helper.run_command(command, '/', log_command=log_command) size = output.strip().split(' ')[-1] if size[-1].isdigit(): return '{0}B'.format(size) return '{0}iB'.format(size) @staticmethod def destroy(dataset, snapshot, endpoint='', log_command=False): """ Destroyes a dataset """ if endpoint == '': command = 'zfs destroy {0}@{1}'.format(dataset, snapshot) else: command = "{0} 'zfs destroy {1}@{2}'".format(endpoint, dataset, snapshot) Helper.run_command(command, '/', log_command=log_command) zsnapd-0.8.11h/scripts/zsnapd.py000066400000000000000000000104421363062134400166130ustar00rootroot00000000000000#!/usr/bin/env python3 # Copyright (c) 2018 Matthew Grant # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN # THE SOFTWARE. import os import os.path import errno import sys import pwd import time import copy import signal import gc import json import psutil # A bit of nice stuff to set up ps output as much as we can... 
try: from setproctitle import getproctitle setproctitle_support = True except ImportError: setproctitle_support = False from magcode.core.process import ProcessDaemon from magcode.core.process import SignalHandler from magcode.core.globals_ import * from magcode.core.utility import get_numeric_setting from magcode.core.utility import get_boolean_setting # import this to set up config file settings etc import scripts.globals_ from scripts.manager import Manager from scripts.config import Config USAGE_MESSAGE = "Usage: %s [-dhv] [-c config_file]" COMMAND_DESCRIPTION = "ZFS Snap Management Daemon" class ZsnapdProcess(ProcessDaemon): """ Process Main Daemon class """ def __init__(self, *args, **kwargs): super().__init__(usage_message=USAGE_MESSAGE, command_description=COMMAND_DESCRIPTION, *args, **kwargs) def main_process(self): """ Main process for zsnapd """ if (settings['rpdb2_wait']): # a wait to attach with rpdb2... log_info('Waiting for rpdb2 to attach.') time.sleep(float(settings['rpdb2_wait'])) log_info('program starting.') log_debug("The daemon_canary is: '{0}'".format(settings['daemon_canary'])) # Do a nice output message to the log pwnam = pwd.getpwnam(settings['run_as_user']) if setproctitle_support: gpt_output = getproctitle() else: gpt_output = "no getproctitle()" log_debug("PID: {0} process name: '{1}' daemon: '{2}' User: '{3}' UID: {4} GID: {5}".format( os.getpid(), gpt_output, self.i_am_daemon(), pwnam.pw_name, os.getuid(), os.getgid())) if (settings['memory_debug']): # Turn on memory debugging log_info('Turning on GC memory debugging.') gc.set_debug(gc.DEBUG_LEAK) # Create a Process object so that we can check in on ourselves # resource-wise self.proc_monitor = psutil.Process(pid=os.getpid()) # Initialise a few nice things for the loop debug_mark = get_boolean_setting('debug_mark') sleep_time = int(get_numeric_setting('sleep_time', float)) debug_sleep_time = int(get_numeric_setting('debug_sleep_time', float)) sleep_time = debug_sleep_time if debug() else sleep_time # Initialise Manager stuff ds_settings = Config.read_ds_config() # Process Main Loop while (self.check_signals()): try: Manager.run(ds_settings, sleep_time) except Exception as ex: log_error('Exception: {0}'.format(str(ex))) if debug_mark: log_debug("----MARK---- sleep({0}) seconds ----".format(sleep_time)) self.main_sleep(sleep_time) log_info('Exited main loop - process terminating normally.') sys.exit(os.EX_OK) if (__name__ == "__main__"): exit_code = ZsnapdProcess(sys.argv, len(sys.argv)) sys.exit(exit_code) zsnapd-0.8.11h/scripts/zsnapd_cfgtest.py000066400000000000000000000041711363062134400203340ustar00rootroot00000000000000#!/usr/bin/env python3 # Copyright (c) 2018 Matthew Grant # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN # THE SOFTWARE. import os import sys # A bit of nice stuff to set up ps output as much as we can... try: from setproctitle import getproctitle setproctitle_support = True except ImportError: setproctitle_support = False from magcode.core.process import Process from magcode.core.globals_ import * # import this to set up config file settings etc import scripts.globals_ from scripts.manager import Manager from scripts.config import Config USAGE_MESSAGE = "Usage: %s [-hv] [-c config_file]" COMMAND_DESCRIPTION = "ZFS Snap Managment Daemon configuration tester" class ZsnapdCfgtestProcess(Process): def __init__(self, *args, **kwargs): """ Clean up command line argument list """ super().__init__(usage_message=USAGE_MESSAGE, command_description=COMMAND_DESCRIPTION, *args, **kwargs) def main_process(self): """ zsnapd-cfgtest main process """ # Test configuration ds_settings = Config.read_ds_config() sys.exit(os.EX_OK) zsnapd-0.8.11h/scripts/zsnapd_rcmd.py000066400000000000000000000166101363062134400176230ustar00rootroot00000000000000#!/usr/bin/env python3 # Copyright (c) 2018 Matthew Grant # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN # THE SOFTWARE. import os import sys import re # A bit of nice stuff to set up ps output as much as we can... 
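
# Deployment sketch (added, hedged): zsnapd-rcmd is meant to run as an sshd
# forced command on the replication peer; the install path below is
# illustrative, not mandated by the project:
#
#   command="/usr/sbin/zsnapd-rcmd",no-port-forwarding,no-pty ssh-ed25519 AAAA... zsnapd@source
#
# sshd then places the client's real command line in SSH_ORIGINAL_COMMAND,
# which main_process() below checks against the configured rcmd_* regexes
# before exec'ing it under the restricted shell (rshell, default /bin/rbash).
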
try: from setproctitle import getproctitle setproctitle_support = True except ImportError: setproctitle_support = False from magcode.core.logging import setup_logging from magcode.core.logging import reconfigure_logging from magcode.core.logging import setup_syslog_logging from magcode.core.logging import setup_file_logging from magcode.core.logging import remove_daemon_stderr_logging from magcode.core.process import BooleanCmdLineArg from magcode.core.process import Process from magcode.core.globals_ import * # import this to set up config file settings etc import scripts.globals_rcmd USAGE_MESSAGE = "Usage: %s [-htv] [-c config_file]" COMMAND_DESCRIPTION = "ZFS Snap Daemon remote command shell for sshd" class TestingCmdLineArg(BooleanCmdLineArg): """ Process testing command Line setting """ def __init__(self): BooleanCmdLineArg.__init__(self, short_arg='t', long_arg='testing', help_text="Test mode - exit without execing command", settings_key = 'testing_arg', settings_default_value = False, settings_set_value = True) class ZsnapdRCmdProcess(Process): def __init__(self, *args, **kwargs): """ Clean up command line argument list """ super().__init__(usage_message=USAGE_MESSAGE, command_description=COMMAND_DESCRIPTION, *args, **kwargs) self.cmdline_arg_list.append(TestingCmdLineArg()) def main_process(self): """ zsnapd-rcmd main process """ # Configure extra logging abilities reconfigure_logging() setup_syslog_logging() setup_file_logging() remove_daemon_stderr_logging() # Load configuration rshell = settings['rshell'] allowed_cmd_regex_dict = { 'rcmd_zfs_get_snapshots': settings['rcmd_zfs_get_snapshots'], 'rcmd_zfs_get_snapshots2': settings['rcmd_zfs_get_snapshots2'], 'rcmd_zfs_get_datasets': settings['rcmd_zfs_get_datasets'], 'rcmd_zfs_snapshot': settings['rcmd_zfs_snapshot'], 'rcmd_zfs_replicate_push': settings['rcmd_zfs_replicate_push'], 'rcmd_zfs_replicate_pull': settings['rcmd_zfs_replicate_pull'], 'rcmd_zfs_replicate_pull2': settings['rcmd_zfs_replicate_pull2'], 'rcmd_zfs_holds': settings['rcmd_zfs_holds'], 'rcmd_zfs_is_held': settings['rcmd_zfs_is_held'], 'rcmd_zfs_hold': settings['rcmd_zfs_hold'], 'rcmd_zfs_release': settings['rcmd_zfs_release'], 'rcmd_zfs_get_size': settings['rcmd_zfs_get_size'], 'rcmd_zfs_get_size2': settings['rcmd_zfs_get_size2'], 'rcmd_zfs_destroy': settings['rcmd_zfs_destroy'], 'rcmd_zfs_recieve_abort': settings['rcmd_zfs_receive_abort'], 'rcmd_zfs_get_receive_resume_token': settings['rcmd_zfs_get_receive_resume_token'], 'rcmd_preexec': settings['rcmd_preexec'], 'rcmd_postexec': settings['rcmd_postexec'], 'rcmd_replicate_postexec': settings['rcmd_replicate_postexec'], 'rcmd_aux0': settings['rcmd_aux0'], 'rcmd_aux1': settings['rcmd_aux1'], 'rcmd_aux2': settings['rcmd_aux2'], 'rcmd_aux3': settings['rcmd_aux3'], 'rcmd_aux4': settings['rcmd_aux4'], 'rcmd_aux5': settings['rcmd_aux5'], 'rcmd_aux6': settings['rcmd_aux6'], 'rcmd_aux7': settings['rcmd_aux7'], 'rcmd_aux8': settings['rcmd_aux8'], 'rcmd_aux9': settings['rcmd_aux9'], } regex_error_flag = False for key in allowed_cmd_regex_dict: regex = allowed_cmd_regex_dict[key] if not regex: # Skip blank settings continue if (settings['regex_error_on_^'] and regex[0] != '^'): regex_error_flag = True log_error("SECURITY - {0} regex '{1}' does not begin with '^'".format(key, regex) ) if (settings['regex_error_on_.*'] and regex.find('.*') >= 0): regex_error_flag = True log_error("SECURITY - {0} regex '{1}' contains '.*'".format(key, regex) ) if (settings['regex_error_on_$'] and regex[-1] != '$'): regex_error_flag = True 
log_error("SECURITY - {0} regex '{1}' does not end with '$'".format(key, regex) ) if regex_error_flag: log_error('Exiting and not processing because of bad regex(es)!') print('SECURITY - command rejected', file=sys.stderr) sys.exit(os.EX_NOPERM) # Process command try: orig_cmd = os.environ["SSH_ORIGINAL_COMMAND"] log_debug("SSH_ORIGINAL_COMMAND is: '{0}'".format(orig_cmd)) except KeyError: log_error('SSH_ORIGINAL_COMMAND - environment variable not found.') print('SECURITY - command rejected', file=sys.stderr) sys.exit(os.EX_NOPERM) allowed = False for regex in allowed_cmd_regex_dict.values(): if not regex: # Skip blank settings continue match = re.match(regex, orig_cmd) if match: log_debug(" MATCH: regex: '{0}'".format(regex)) allowed = True break if debug_verbose(): log_debug(" nomatch: regex: '{0}'".format(regex)) if not allowed: log_error("Command rejected: '{0}'".format(orig_cmd)) print('SECURITY - command rejected', file=sys.stderr) sys.exit(os.EX_NOPERM) log_info("Command accepted: '{0}'".format(orig_cmd)) # Execute command using rshell argv = [rshell, '-c'] argv.append(orig_cmd) env = { 'PATH': settings['rshell_path'], } log_debug("Execing os.execve(argv[0]={0}, argv={1}, env={2})".format(argv[0], argv, env)) if settings['testing_arg']: sys.exit(os.EX_OK) os.execve(argv[0], argv, env) zsnapd-0.8.11h/scripts/zsnapd_trigger.py000066400000000000000000000074671363062134400203530ustar00rootroot00000000000000#!/usr/bin/env python3 # Copyright (c) 2018 Matthew Grant # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN # THE SOFTWARE. import os import sys # A bit of nice stuff to set up ps output as much as we can... 
zsnapd-0.8.11h/scripts/zsnapd_trigger.py
#!/usr/bin/env python3
# Copyright (c) 2018 Matthew Grant
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.

import os
import sys

# A bit of nice stuff to set up ps output as much as we can...
try:
    from setproctitle import getproctitle
    setproctitle_support = True
except ImportError:
    setproctitle_support = False

from magcode.core.process import Process
from magcode.core.process import BooleanCmdLineArg
from magcode.core.globals_ import *
# import this to set up config file settings etc
import scripts.globals_
from scripts.manager import Manager
from scripts.config import Config

USAGE_MESSAGE = "Usage: %s [-hrtv] [-c config_file] [mnt-point-or-dataset ...]"
COMMAND_DESCRIPTION = "ZFS Snap Daemon trigger utility"

class ReachableCmdLineArg(BooleanCmdLineArg):
    """
    Process reachable endpoint flag
    """
    def __init__(self):
        BooleanCmdLineArg.__init__(self,
                short_arg='r',
                long_arg='reachable',
                help_text="Test if replication endpoint can be TCP connected to",
                settings_key='reachable_arg',
                settings_default_value=False,
                settings_set_value=True)

class DoTriggerCmdLineArg(BooleanCmdLineArg):
    """
    Process do_trigger trigger candidate flag
    """
    def __init__(self):
        BooleanCmdLineArg.__init__(self,
                short_arg='t',
                long_arg='do-trigger',
                help_text="Create do_trigger flagged triggers for datasets",
                settings_key='do_trigger_arg',
                settings_default_value=False,
                settings_set_value=True)

class ZsnapdTriggerProcess(Process):

    def __init__(self, *args, **kwargs):
        """
        Clean up command line argument list
        """
        super().__init__(usage_message=USAGE_MESSAGE,
                command_description=COMMAND_DESCRIPTION, *args, **kwargs)
        self.cmdline_arg_list.append(DoTriggerCmdLineArg())
        self.cmdline_arg_list.append(ReachableCmdLineArg())

    def parse_argv_left(self, argv_left):
        """
        Handle any arguments left after processing all switches
        """
        self.argv_left = []
        if (len(argv_left) != 0):
            self.argv_left = argv_left

    def main_process(self):
        """
        zsnapd-trigger main process
        """
        self.check_if_root()

        # Read configuration
        ds_settings = Config.read_ds_config()

        # Process triggers
        if not (Manager.touch_trigger(ds_settings, settings['reachable_arg'],
                settings['do_trigger_arg'], *self.argv_left)):
            sys.exit(os.EX_CONFIG)
        sys.exit(os.EX_OK)
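# Manager.touch_trigger() lives in scripts/manager.py and is not shown here.
# A minimal sketch, under the assumption that a dataset's settings expose
# replicate_endpoint_host/replicate_endpoint_port, of the kind of TCP
# reachability probe the -r switch implies before a '.trigger' file is
# written.  The helper names below are illustrative, not zsnapd's actual API.
import os
import socket

def _endpoint_reachable(host, port, timeout=5.0):
    """Return True if a TCP connection to host:port can be opened."""
    try:
        with socket.create_connection((host, int(port)), timeout=timeout):
            return True
    except OSError:
        return False

def _touch_trigger_file(mountpoint):
    """Create an empty .trigger file in the dataset's mountpoint."""
    with open(os.path.join(mountpoint, '.trigger'), 'w'):
        pass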
zsnapd-0.8.11h/setup.py
#!/usr/bin/env python3
# Copyright (c) 2018 Matthew Grant
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.

from distutils.core import setup

setup(name='zsnapd',
      version='0.8.9a',
      description='ZFS Snapshot Daemon',
      author='Matthew Grant',
      author_email='matt@mattgrant.net.nz',
      url='http://mattgrant.net.nz/software/zsnapd',
      packages=['zsnapd', ])

zsnapd-0.8.11h/system/zsnapd.service
[Unit]
Description=ZFS Snapshot Daemon

[Service]
Type=simple
User=root
Group=root
ExecStart=/usr/share/zsnapd/zsnapd --systemd
ExecReload=/bin/kill -HUP $MAINPID

[Install]
WantedBy=multi-user.target

zsnapd-0.8.11h/tools/distribute.py
#!/usr/bin/python2
# Copyright (c) 2015 Kenneth Henderick
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.

"""
Functionality to distribute the scripts to the live location and restart
the service

* For testing purposes only
* Do not use without understanding the consequences
* Running untested code might light your house on fire
* Use with caution
* Seriously, use with caution
"""

import os
from subprocess import check_output

if __name__ == '__main__':
    this_directory = os.path.dirname(os.path.abspath(__file__))
    print 'Copying files to local node...'
    commands = []
    for filename in ['clean.py', 'helper.py', 'manager.py', 'zfs.py']:
        commands.append('cp {0}/../scripts/{1} /usr/lib/zfs-snap-manager/{1}'.format(this_directory, filename))
        commands.append('rm -f /usr/lib/zfs-snap-manager/{0}c'.format(filename))
    commands.append('systemctl restart zfs-snap-manager')
    for command in commands:
        check_output(command, shell=True)
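# The unit file above reloads zsnapd via 'kill -HUP $MAINPID'.  A minimal
# sketch of the deferred-reload SIGHUP pattern -- an illustration only, not
# zsnapd's actual handler, which is provided by the py-magcode-core process
# support the daemon is built on.
import signal

reload_requested = False

def _on_sighup(signum, frame):
    # Keep the handler tiny: just flag the request and let the main loop
    # reread its configuration files at a safe point.
    global reload_requested
    reload_requested = True

signal.signal(signal.SIGHUP, _on_sighup)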
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.

"""Stub file for the zsnapd daemon.

File system location of this file determines the first entry on sys.path,
thus its placement, and symlinks from /usr/local/sbin.
"""

from scripts.zsnapd import ZsnapdProcess

# Do the business
process = ZsnapdProcess()
process.main()

zsnapd-0.8.11h/zsnapd-cfgtest
#!/usr/bin/env python3
# Copyright (c) 2018 Matthew Grant
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.

"""Stub file for zsnapd-cfgtest.

File system location of this file determines the first entry on sys.path,
thus its placement, and symlinks from /usr/local/sbin.
"""

from scripts.zsnapd_cfgtest import ZsnapdCfgtestProcess

# Do the business
process = ZsnapdCfgtestProcess()
process.main()
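# All four front-end stubs (zsnapd, zsnapd-cfgtest, zsnapd-rcmd,
# zsnapd-trigger) rely on the CPython behaviour their docstrings describe:
# sys.path[0] is the directory containing the invoked script, with symlinks
# resolved on POSIX, so a symlink in /usr/local/sbin still puts the stub's
# real directory -- and hence the adjacent scripts/ package -- on the import
# path.  A quick way to check:
if __name__ == '__main__':
    import sys
    print(sys.path[0])   # the stub's real directory, not /usr/local/sbin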
zsnapd-0.8.11h/zsnapd-rcmd
#!/usr/bin/env python3
# Copyright (c) 2018 Matthew Grant
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.

"""Stub file for zsnapd-rcmd.

File system location of this file determines the first entry on sys.path,
thus its placement, and symlinks from /usr/local/sbin.
"""

from scripts.zsnapd_rcmd import ZsnapdRCmdProcess

# Do the business
process = ZsnapdRCmdProcess()
process.main()

zsnapd-0.8.11h/zsnapd-trigger
#!/usr/bin/env python3
# Copyright (c) 2018 Matthew Grant
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.

"""Stub file for zsnapd-trigger.

File system location of this file determines the first entry on sys.path,
thus its placement, and symlinks from /usr/local/sbin.
"""

from scripts.zsnapd_trigger import ZsnapdTriggerProcess

# Do the business
process = ZsnapdTriggerProcess()
process.main()