rtslib-3.0.pre4.1~g1b33ceb/COPYING0000664000000000000000000002363712443074135013234 0ustar Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. 
For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. 
Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the 
Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. 
Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. 
rtslib-3.0.pre4.1~g1b33ceb/debian/changelog

rtslib (3.0.pre4.1~g1b33ceb) unstable; urgency=low

  * Generated from git commit 1b33ceb05ed2fbf68b3c3fa1c6daeba69d5e96fb.

 -- Marc Fleischmann  Sat, 13 Dec 2014 09:33:17 -0800

rtslib-3.0.pre4.1~g1b33ceb/debian/compat

7

rtslib-3.0.pre4.1~g1b33ceb/debian/control

Source: rtslib
Section: python
Priority: optional
Standards-Version: 3.9.2
Homepage: https://github.com/Datera/rtslib
Maintainer: Jerome Martin
Build-Depends: debhelper (>= 7.0.50~), python (>= 2.6.6-3~), python-ipaddr,
 python-netifaces, python-configobj, python-pyparsing, python-epydoc,
 texlive-latex-base, texlive-latex-extra, texlive-latex-recommended, lmodern,
 ghostscript, texlive-fonts-recommended

Package: python-rtslib
Architecture: all
Depends: ${python:Depends}, ${misc:Depends}, python-ipaddr, python-netifaces,
 python-configobj, python-pyparsing
Provides: ${python:Provides}
Conflicts: rtsadmin-frozen
Description: Python API to the Linux Kernel's SCSI Target subsystem (LIO)
 Provides Python object mappings to LIO and TCM SCSI Target subsystems and
 fabric modules, like storage objects, SCSI targets and LUNs.
 .
 Part of the Linux Kernel SCSI Target's userspace management tools

rtslib-3.0.pre4.1~g1b33ceb/debian/copyright

Copyright (c) 2011-2014 by Datera, Inc.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and limitations under the License. rtslib-3.0.pre4.1~g1b33ceb/debian/python-rtslib.doc-base0000644000000000000000000000062412443074135017625 0ustar Document: python-rtslib Title: python-rtslib online documentation Author: Jerome Martin Abstract: API reference documentation for python-rtslib Section: System/Administration Format: HTML Index: /usr/share/doc/python-rtslib/doc/html/index.html Files: /usr/share/doc/python-rtslib/doc/html/*.html Format: PDF Files: /usr/share/doc/python-rtslib/doc/pdf/rtslib_API_Documentation.pdf.gz rtslib-3.0.pre4.1~g1b33ceb/debian/python-rtslib.docs0000644000000000000000000000004312443074135017073 0ustar README.md COPYING specs/*.txt doc/ rtslib-3.0.pre4.1~g1b33ceb/debian/python-rtslib.install0000644000000000000000000000014712443074135017616 0ustar lib/rtslib usr/share/pyshared specs/*.spec var/target/fabric policy/*.lio var/target/policy rtslib-3.0.pre4.1~g1b33ceb/debian/rules0000755000000000000000000000126712443074135014474 0ustar #!/usr/bin/make -f build_dir = build install_dir = debian/tmp %: dh $@ --with python2 override_dh_auto_clean: # manually clean any *.pyc files rm -rf rtslib/*.pyc [ ! 
-d doc ] || rm -rf doc override_dh_auto_build: python setup.py build --build-base $(build_dir) test -d doc || mkdir doc mkdir -p doc/pdf epydoc --no-sourcecode --pdf -n rtslib --exclude configobj rtslib/*.py mv pdf/api.pdf doc/pdf/rtslib_API_Documentation.pdf mkdir -p doc/html epydoc --no-sourcecode --html -n rtslib --exclude configobj rtslib/*.py mv html doc/ override_dh_auto_install: python setup.py install --no-compile --install-purelib \ $(install_dir)/lib --install-scripts $(install_dir)/bin rtslib-3.0.pre4.1~g1b33ceb/policy/backstore_fileio.lio0000664000000000000000000000163312443074135017501 0ustar storage fileio disk %str { wwn %str path %str size %bytes buffered %bool(yes) attribute { block_size %int(512) emulate_3pc %bool(yes) emulate_caw %bool(yes) emulate_dpo %bool(no) emulate_fua_read %bool(no) emulate_fua_write %bool(yes) emulate_model_alias %bool(no) emulate_rest_reord %bool(no) emulate_tas %bool(yes) emulate_tpu %bool(no) emulate_tpws %bool(no) emulate_ua_intlck_ctrl %bool(no) emulate_write_cache %bool(yes) enforce_pr_isids %bool(yes) fabric_max_sectors %int(8192) is_nonrot %bool(no) max_unmap_block_desc_count %int(1) max_unmap_lba_count %int(8192) max_write_same_len %int(4096) optimal_sectors %int(8192) queue_depth %int(128) unmap_granularity %int(1) unmap_granularity_alignment %int(0) } } rtslib-3.0.pre4.1~g1b33ceb/policy/backstore_iblock.lio0000664000000000000000000000156112443074135017475 0ustar storage iblock disk %str { wwn %str path %str attribute { block_size %int(512) emulate_3pc %bool(yes) emulate_caw %bool(yes) emulate_dpo %bool(no) emulate_fua_read %bool(no) emulate_fua_write %bool(yes) emulate_model_alias %bool(no) emulate_rest_reord %bool(no) emulate_tas %bool(yes) emulate_tpu %bool(no) emulate_tpws %bool(no) emulate_ua_intlck_ctrl %bool(no) emulate_write_cache %bool(no) enforce_pr_isids %bool(yes) fabric_max_sectors %int(8192) is_nonrot %bool(yes) max_unmap_block_desc_count %int(0) max_unmap_lba_count %int(0) max_write_same_len 
%int(65535) optimal_sectors %int(8192) queue_depth %int(128) unmap_granularity %int(0) unmap_granularity_alignment %int(0) } } rtslib-3.0.pre4.1~g1b33ceb/policy/backstore_pscsi.lio0000664000000000000000000000137712443074135017360 0ustar storage pscsi disk %str { path %str attribute { emulate_3pc %bool(yes) emulate_caw %bool(yes) emulate_dpo %bool(no) emulate_fua_read %bool(no) emulate_model_alias %bool(no) emulate_rest_reord %bool(no) emulate_tas %bool(yes) emulate_tpu %bool(no) emulate_tpws %bool(no) emulate_ua_intlck_ctrl %bool(no) emulate_write_cache %bool(no) enforce_pr_isids %bool(yes) fabric_max_sectors %int(8192) is_nonrot %bool(yes) max_unmap_block_desc_count %int(0) max_unmap_lba_count %int(0) max_write_same_len %int(65535) queue_depth %int(128) unmap_granularity %int(0) unmap_granularity_alignment %int(0) } } rtslib-3.0.pre4.1~g1b33ceb/policy/backstore_ramdisk.lio0000664000000000000000000000160012443074135017656 0ustar storage rd_mcp disk %str { wwn %str size %bytes nullio %bool(no) attribute { block_size %int(512) emulate_3pc %bool(yes) emulate_caw %bool(yes) emulate_dpo %bool(no) emulate_fua_read %bool(no) emulate_fua_write %bool(yes) emulate_model_alias %bool(no) emulate_rest_reord %bool(no) emulate_tas %bool(yes) emulate_tpu %bool(no) emulate_tpws %bool(no) emulate_ua_intlck_ctrl %bool(no) emulate_write_cache %bool(no) enforce_pr_isids %bool(yes) fabric_max_sectors %int(8192) is_nonrot %bool(no) max_unmap_block_desc_count %int(0) max_unmap_lba_count %int(0) max_write_same_len %int(0) optimal_sectors %int(8192) queue_depth %int(128) unmap_granularity %int(0) unmap_granularity_alignment %int(0) } } rtslib-3.0.pre4.1~g1b33ceb/policy/fabric_ib_srpt.lio0000664000000000000000000000057212443074135017146 0ustar fabric ib_srpt { target %srpt_wwn { acl %str { mapped_lun %int { target_lun @(-3 lun) write_protect %bool(no) } } attribute { srp_max_rdma_size %int(65536) srp_max_rsp_size %int(256) srp_sq_size %int(4096) } lun %int backend %backend } } 
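These `.lio` policy files declare the allowed structure of an LIO configuration rather than a configuration itself: `%str`, `%int`, `%bool` and friends are typed placeholders, a value in parentheses such as `%int(512)` or `%bool(no)` is the attribute's default, and `@(-3 lun)` appears to be a reference back to a `lun` declared three levels up the tree. As a purely illustrative sketch (the WWN values and the `fileio:disk1` storage object are made up, and the exact SRP WWN format is an assumption), a concrete configuration conforming to the `ib_srpt` fabric policy above might look like:

```
fabric ib_srpt {
    target ib.fe800000000000000002c90300fc3210 {
        acl ib.fe800000000000000002c90300fc3214 {
            mapped_lun 0 {
                target_lun 0
                write_protect no
            }
        }
        attribute {
            srp_max_rdma_size 65536
        }
        lun 0 backend fileio:disk1
    }
}
```

Attributes left out, such as `srp_sq_size`, would simply keep the defaults given in the policy.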
rtslib-3.0.pre4.1~g1b33ceb/policy/fabric_iscsi.lio0000664000000000000000000000465012443074135016617 0ustar fabric iscsi { discovery_auth { enable %bool(yes) mutual_password %str("") mutual_userid %str("") password %str("") userid %str("") } target %iqn tpgt %int { enable %bool(yes) portal %ipport acl %str { attribute { dataout_timeout %int(3) dataout_timeout_retries %int(5) default_erl %erl(0) nopin_response_timeout %int(30) nopin_timeout %int(15) random_datain_pdu_offsets %bool(no) random_datain_seq_offsets %bool(no) random_r2t_offsets %bool(no) } auth { password %str("") password_mutual %str("") userid %str("") userid_mutual %str("") } mapped_lun %int { target_lun @(-3 lun) write_protect %bool(no) } } auth { password %str("") password_mutual %str("") userid %str("") userid_mutual %str("") } attribute { authentication %bool(no) default_erl %erl(0) demo_mode_discovery %bool(yes) cache_dynamic_acls %bool(no) default_cmdsn_depth %int(16) demo_mode_write_protect %bool(no) generate_node_acls %bool(no) login_timeout %int(15) netif_timeout %int(2) prod_mode_write_protect %bool(no) } lun %int backend %backend parameter { AuthMethod %str(CHAP) DataDigest %str("CRC32C,None") DataPDUInOrder %bool(yes) DataSequenceInOrder %bool(yes) DefaultTime2Retain %int(20) DefaultTime2Wait %int(2) ErrorRecoveryLevel %bool(no) FirstBurstLength %int(65536) HeaderDigest %str("CRC32C,None") IFMarkInt %str("2048~65535") IFMarker %bool(no) ImmediateData %bool(yes) InitialR2T %bool(yes) MaxBurstLength %int(262144) MaxConnections %int(1) MaxOutstandingR2T %int(1) MaxRecvDataSegmentLength %int(8192) MaxXmitDataSegmentLength %int(262144) OFMarkInt %str("2048~65535") OFMarker %bool(no) TargetAlias %str("LIO Target") } } } rtslib-3.0.pre4.1~g1b33ceb/policy/fabric_loopback.lio0000664000000000000000000000014512443074135017272 0ustar fabric loopback { target %naa { nexus_wwn %naa lun %int backend %backend } } 
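The `backend %backend` placeholder is how a fabric LUN refers back to a storage object declared in a `storage` section; the live-dump code later in this package emits it in `plugin:name` form. A minimal, purely illustrative pairing for the loopback policy above (the file path, size, and NAA WWNs are made-up example values):

```
storage fileio disk vm1 {
    path /tmp/vm1.img
    size 1.0G
}

fabric loopback {
    target naa.60014055f2b23e3c {
        nexus_wwn naa.60014055f2b23e3d
        lun 0 backend fileio:vm1
    }
}
```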
rtslib-3.0.pre4.1~g1b33ceb/policy/fabric_qla2xxx.lio0000664000000000000000000000431612443074135017113 0ustar fabric qla2xxx { target %qla2xxx_wwn { acl %str { attribute { dataout_timeout %int(3) dataout_timeout_retries %int(5) default_erl %erl(0) nopin_response_timeout %int(30) nopin_timeout %int(15) random_datain_pdu_offsets %bool(no) random_datain_seq_offsets %bool(no) random_r2t_offsets %bool(no) } auth { password %str("") password_mutual %str("") userid %str("") userid_mutual %str("") } mapped_lun %int { target_lun @(-3 lun) write_protect %bool(no) } } auth { password %str("") password_mutual %str("") userid %str("") userid_mutual %str("") } attribute { authentication %bool(no) default_erl %erl(0) demo_mode_discovery %bool(yes) cache_dynamic_acls %bool(no) default_cmdsn_depth %int(16) demo_mode_write_protect %bool(no) generate_node_acls %bool(no) login_timeout %int(15) netif_timeout %int(2) prod_mode_write_protect %bool(no) } lun %int backend %backend parameter { AuthMethod %str(CHAP) DataDigest %str("CRC32C,None") DataPDUInOrder %bool(yes) DataSequenceInOrder %bool(yes) DefaultTime2Retain %int(20) DefaultTime2Wait %int(2) ErrorRecoveryLevel %bool(no) FirstBurstLength %int(65536) HeaderDigest %str("CRC32C,None") IFMarkInt %str("2048~65535") IFMarker %bool(no) ImmediateData %bool(yes) InitialR2T %bool(yes) MaxBurstLength %int(262144) MaxConnections %int(1) MaxOutstandingR2T %int(1) MaxRecvDataSegmentLength %int(8192) MaxXmitDataSegmentLength %int(262144) OFMarkInt %str("2048~65535") OFMarker %bool(no) TargetAlias %str("LIO Target") } } } rtslib-3.0.pre4.1~g1b33ceb/policy/fabric_tcm_fc.lio0000664000000000000000000000431012443074135016731 0ustar fabric tcm_fc { target %fc_wwn { acl %str { attribute { dataout_timeout %int(3) dataout_timeout_retries %int(5) default_erl %erl(0) nopin_response_timeout %int(30) nopin_timeout %int(15) random_datain_pdu_offsets %bool(no) random_datain_seq_offsets %bool(no) random_r2t_offsets %bool(no) } auth { password %str("") 
password_mutual %str("") userid %str("") userid_mutual %str("") } mapped_lun %int { target_lun @(-3 lun) write_protect %bool(no) } } auth { password %str("") password_mutual %str("") userid %str("") userid_mutual %str("") } attribute { authentication %bool(no) default_erl %erl(0) demo_mode_discovery %bool(yes) cache_dynamic_acls %bool(no) default_cmdsn_depth %int(16) demo_mode_write_protect %bool(no) generate_node_acls %bool(no) login_timeout %int(15) netif_timeout %int(2) prod_mode_write_protect %bool(no) } lun %int backend %backend parameter { AuthMethod %str(CHAP) DataDigest %str("CRC32C,None") DataPDUInOrder %bool(yes) DataSequenceInOrder %bool(yes) DefaultTime2Retain %int(20) DefaultTime2Wait %int(2) ErrorRecoveryLevel %bool(no) FirstBurstLength %int(65536) HeaderDigest %str("CRC32C,None") IFMarkInt %str("2048~65535") IFMarker %bool(no) ImmediateData %bool(yes) InitialR2T %bool(yes) MaxBurstLength %int(262144) MaxConnections %int(1) MaxOutstandingR2T %int(1) MaxRecvDataSegmentLength %int(8192) MaxXmitDataSegmentLength %int(262144) OFMarkInt %str("2048~65535") OFMarker %bool(no) TargetAlias %str("LIO Target") } } } rtslib-3.0.pre4.1~g1b33ceb/policy/fabric_vhost.lio0000664000000000000000000000015312443074135016642 0ustar fabric vhost { target %naa tpgt %int { nexus_wwn %naa lun %int backend %backend } }rtslib-3.0.pre4.1~g1b33ceb/python-rtslib.spec0000664000000000000000000000313212443074135015657 0ustar %define oname rtslib Name: python-rtslib License: Apache License 2.0 Group: System Environment/Libraries Summary: A framework to implement simple but nice CLIs. Version: 3.0.pre4.1~g1b33ceb Release: 1%{?dist} URL: http://www.risingtidesystems.com/git/ Source: %{oname}-%{version}.tar.gz BuildRoot: %{_tmppath}/%{name}-%{version}-%{release}-rpmroot BuildArch: noarch BuildRequires: python-devel, epydoc, python-configobj, python-netifaces, python-ipaddr, pyparsing Requires: python-configobj, python-netifaces, python-ipaddr, pyparsing Vendor: Datera, Inc. 
%description
API for RisingTide Systems generic SCSI target.

%prep
%setup -q -n %{oname}-%{version}

%build
%{__python} setup.py build
mkdir -p doc
epydoc --no-sourcecode --html -n %{oname} --exclude configobj %{oname}/*.py
mv html doc/

%install
rm -rf %{buildroot}
%{__python} setup.py install --skip-build --root %{buildroot} --prefix usr
mkdir -p %{buildroot}/var/target/fabric
cp specs/*.spec %{buildroot}/var/target/fabric
mkdir -p %{buildroot}/var/target/policy
cp policy/*.lio %{buildroot}/var/target/policy
mkdir -p %{buildroot}/usr/share/doc/python-rtslib-doc-%{version}
cp -r doc/* specs/*.txt %{buildroot}/usr/share/doc/python-rtslib-doc-%{version}/

%clean
rm -rf %{buildroot}

%files
%defattr(-,root,root,-)
%{python_sitelib}
/var/target
/usr/share/doc/python-rtslib-doc-%{version}
%doc COPYING README.md

%changelog
* Sat Dec 13 2014 Marc Fleischmann 3.0.pre4.1~g1b33ceb-1
- Generated from git commit 1b33ceb05ed2fbf68b3c3fa1c6daeba69d5e96fb.

rtslib-3.0.pre4.1~g1b33ceb/README.md

# RTSLib

RTSLib is a Python library that provides the API to the Linux Kernel SCSI
Target subsystem, its backend storage objects subsystem as well as third-party
Target Fabric Modules. It is part of LIO.

RTSLib allows direct manipulation of all SCSI Target objects like storage
objects, SCSI targets, TPGs, LUNs and ACLs. It is part of the Linux Kernel's
SCSI Target's userspace management tools.

## Usage scenarios

RTSLib is used as the foundation for targetcli, the Linux Kernel's SCSI Target
configuration CLI and shell, in embedded storage systems and appliances, and
in commercial NAS and SAN systems, as well as a tool for sysadmins writing
their own scripts to configure the SCSI Target subsystem.

## Installation

RTSLib is currently part of several Linux distributions, either under the
`rtslib` name or `python-rtslib`. In most cases, simply installing the version
packaged by your favorite Linux distribution is the best way to get it
running.

## Building from source

The packages are very easy to build and install from source as long as you're
familiar with your Linux distribution's package manager:

1. Clone the github repository for RTSLib using
   `git clone https://github.com/Datera/rtslib.git`.

2. Make sure build dependencies are installed. To build RTSLib, you will need:

   * GNU Make.
   * python 2.6 or 2.7
   * A few python libraries: pyparsing, ipaddr, netifaces, configobj,
     python-epydoc
   * A working LaTeX installation and ghostscript for building the
     documentation, for example texlive-latex.
   * Your favorite distribution's package development tools, like rpm for
     Redhat-based systems or dpkg-dev and debhelper for Debian systems.

3. From the cloned git repository, run `make deb` to generate a Debian
   package, or `make rpm` for a Redhat package.

4. The newly built packages will be generated in the `dist/` directory.

5. To clean up the repository, use `make clean` or `make cleanall`, which
   also removes `dist/*` files.

## Documentation

The RTSLib packages ship with full API documentation in both HTML and PDF
formats, typically in `/usr/share/doc/python-rtslib/doc/`. Depending on your
Linux distribution, the documentation might be shipped in a separate package.

Another good source of information is the http://linux-iscsi.org wiki,
offering many resources such as (not necessarily up-to-date) copies of the
RTSLib API Reference Guide (HTML at http://linux-iscsi.org/Doc/rtslib/html or
PDF at http://linux-iscsi.org/Doc/rtslib/rtslib-API-reference.pdf), and the
Targetcli User's Guide at http://linux-iscsi.org/wiki/targetcli.

## Mailing-list

All contributions, suggestions and bugfixes are welcome!
To report a bug, submit a patch or simply stay up-to-date on the Linux SCSI
Target developments, you can subscribe to the Linux Kernel SCSI Target
development mailing-list by sending an email message containing only
`subscribe target-devel` to the list address.

The archives of this mailing-list can be found online at
http://dir.gmane.org/gmane.linux.scsi.target.devel

## Author

LIO was developed by Datera, Inc. http://www.datera.io

The original author and current maintainer is Jerome Martin

rtslib-3.0.pre4.1~g1b33ceb/rtslib/config_filters.py

'''
This file is part of LIO(tm).

Copyright (c) 2012-2014 by Datera, Inc.
More information on www.datera.io.

Original author: Jerome Martin

Datera and LIO are trademarks of Datera, Inc., which may be registered in
some jurisdictions.

Licensed under the Apache License, Version 2.0 (the "License"); you may not
use this file except in compliance with the License. You may obtain a copy of
the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations under
the License.
'''
from config_tree import NO_VALUE

def get_composed_filter(*filters):
    '''
    Returns a node filter that is the composition of all filter functions
    passed as arguments. Filters will be applied in the order they appear.
    '''
    def composed_filter(node_in):
        for node_filter in filters:
            node_out = node_filter(node_in)
            if node_out is None:
                break
            else:
                node_in = node_out
        return node_out
    return composed_filter

def get_filter_on_type(allowed_types):
    '''
    Returns a node filter that only lets nodes whose type is in the
    allowed_types list pass through.
    '''
    def filter_on_type(node_in):
        if node_in.data['type'] in allowed_types:
            return node_in
    return filter_on_type

def get_reverse_filter(node_filter):
    '''
    Returns a new filter that lets through all nodes normally filtered out by
    node_filter, and filters out the ones normally passed. This should be
    useful only with filters that pass nodes through without modifying them.
    '''
    def reverse_filter(node_in):
        if node_filter(node_in) is None:
            return node_in
    return reverse_filter

def filter_no_default(node_in):
    '''
    A filter that lets all nodes through, except attributes with a default
    value and attribute groups containing only such attributes.
    '''
    node_out = node_in
    if node_in.data['type'] == 'attr' \
            and node_in.data['key'][1] != NO_VALUE \
            and node_in.data['key'][1] == node_in.data['val_dfl']:
        node_out = None
    elif node_in.data['type'] == 'group':
        node_out = None
        for attr in node_in.nodes:
            if filter_no_default(attr) is not None:
                node_out = node_in
                break
    return node_out

filter_only_default = get_reverse_filter(filter_no_default)

def filter_no_missing(node_in):
    '''
    A filter that lets all nodes through, except required attributes missing
    a value.
    '''
    node_out = node_in
    if node_in.data['type'] == 'attr' \
            and node_in.data['key'][1] is NO_VALUE:
        node_out = None
    return node_out

def filter_only_missing(node_in):
    '''
    A filter that only lets through objects and groups containing attributes
    with missing values, as well as those attributes.
    '''
    # FIXME Breaks dump
    node_out = None
    if node_in.data['type'] == 'attr' \
            and node_in.data['key'][1] is NO_VALUE:
        node_out = node_in
    return node_out

def filter_only_required(node_in):
    '''
    A filter that only lets through required attribute nodes, aka those
    attributes without a default value in LIO configuration policy.
    '''
    if node_in.data['type'] == 'attr' \
            and node_in.data.get('val_dfl') is None:
        return node_in

rtslib-3.0.pre4.1~g1b33ceb/rtslib/config_live.py

'''
This file is part of LIO(tm).
Copyright (c) 2012-2014 by Datera, Inc.
More information on www.datera.io.

Original author: Jerome Martin

Datera and LIO are trademarks of Datera, Inc., which may be registered in
some jurisdictions.

Licensed under the Apache License, Version 2.0 (the "License"); you may not
use this file except in compliance with the License. You may obtain a copy of
the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations under
the License.
'''
import logging

from rtslib.config_tree import NO_VALUE
from rtslib.config import dump_value, ConfigError
from rtslib.utils import convert_bytes_to_human, convert_human_to_bytes
from rtslib import (RTSRoot, Target, FabricModule, LUN, MappedLUN,
                    NetworkPortal, TPG, NodeACL,
                    FileIOBackstore, FileIOStorageObject,
                    IBlockBackstore, IBlockStorageObject,
                    PSCSIBackstore, PSCSIStorageObject,
                    RDMCPBackstore, RDMCPStorageObject, RTSLibError)

# TODO There seems to be a bug in LIO, affecting both this API and rtslib:
# when a tpg does not contain any objects, it cannot be removed.

_rtsroot = None
_indent = ' '*4

DEBUG = False
if DEBUG:
    logging.basicConfig()
    log = logging.getLogger('Config')
    log.setLevel(logging.DEBUG)
else:
    log = logging.getLogger('Config')
    log.setLevel(logging.INFO)

def _b2h(b):
    return convert_bytes_to_human(b)

def get_root():
    global _rtsroot
    if _rtsroot is None:
        _rtsroot = RTSRoot()
    return _rtsroot

def _list_live_group_attrs(rts_obj):
    '''
    Returns a list of all group attributes for the rts_obj rtslib object
    currently running on the live system, in LIO configuration file format.
    '''
    attrs = []
    for attribute in rts_obj.list_attributes(writable=True):
        value = rts_obj.get_attribute(attribute)
        attrs.append("attribute %s %s" % (attribute, dump_value(value)))
    for parameter in rts_obj.list_parameters(writable=True):
        value = rts_obj.get_parameter(parameter)
        attrs.append("parameter %s %s" % (parameter, dump_value(value)))
    for auth_attr in rts_obj.list_auth_attrs(writable=True):
        value = rts_obj.get_auth_attr(auth_attr)
        attrs.append("auth %s %s" % (auth_attr, dump_value(value)))
    return attrs

def dump_live():
    '''
    Returns a text dump of the objects and attributes currently running on
    the live system, in LIO configuration file format.
    '''
    dump = []
    dump.append(dump_live_storage())
    dump.append(dump_live_fabric())
    return "\n".join(dump)

def dump_live_storage():
    '''
    Returns a text dump of the storage objects and attributes currently
    running on the live system, in LIO configuration file format.
    '''
    dump = []
    for so in sorted(get_root().storage_objects, key=lambda so: so.name):
        dump.append("storage %s disk %s {" % (so.backstore.plugin, so.name))
        attrs = []
        if so.backstore.plugin in ['fileio', 'rd_mcp', 'iblock']:
            attrs.append("%swwn %s" % (_indent, so.wwn))
        if so.backstore.plugin in ['fileio', 'pscsi', 'iblock']:
            attrs.append("%spath %s" % (_indent, so.udev_path))
        if so.backstore.plugin in ['fileio', 'rd_mcp']:
            attrs.append("%ssize %s" % (_indent, _b2h(so.size)))
        if so.backstore.plugin in ['rd_mcp']:
            if so.nullio:
                nullio = 'yes'
            else:
                nullio = 'no'
            attrs.append("%snullio %s" % (_indent, nullio))
        if so.backstore.plugin in ['fileio']:
            is_buffered = "buffered" in so.mode
            if is_buffered:
                is_buffered = 'yes'
            else:
                is_buffered = 'no'
            attrs.append("%sbuffered %s" % (_indent, is_buffered))
        group_attrs = _list_live_group_attrs(so)
        attrs.extend(["%s%s" % (_indent, attr) for attr in group_attrs])
        dump.append("\n".join(attrs))
        dump.append("}")
    return "\n".join(dump)

def dump_live_fabric():
    '''
    Returns a text dump of the fabric objects and attributes currently
    running on the live system, in LIO configuration file format.
    '''
    dump = []
    for fm in sorted(get_root().fabric_modules, key=lambda fm: fm.name):
        if fm.has_feature('discovery_auth'):
            dump.append("fabric %s {" % fm.name)
            dump.append("%sdiscovery_auth enable %s"
                        % (_indent, dump_value(fm.discovery_enable_auth)))
            dump.append("%sdiscovery_auth userid %s"
                        % (_indent, dump_value(fm.discovery_userid)))
            dump.append("%sdiscovery_auth password %s"
                        % (_indent, dump_value(fm.discovery_password)))
            dump.append("%sdiscovery_auth mutual_userid %s"
                        % (_indent, dump_value(fm.discovery_mutual_userid)))
            dump.append("%sdiscovery_auth mutual_password %s"
                        % (_indent, dump_value(fm.discovery_mutual_password)))
            dump.append("}")
        for tg in fm.targets:
            tpgs = []
            if not list(tg.tpgs):
                dump.append("fabric %s target %s" % (fm.name, tg.wwn))
            for tpg in tg.tpgs:
                if tpg.has_feature("tpgts"):
                    head = ("fabric %s target %s tpgt %s"
                            % (fm.name, tg.wwn, tpg.tag))
                else:
                    head = ("fabric %s target %s" % (fm.name, tg.wwn))
                if tpg.has_enable():
                    enable = int(tpg.enable)
                else:
                    enable = None
                section = []
                if tpg.has_feature("nexus"):
                    section.append("%snexus_wwn %s"
                                   % (_indent, tpg.nexus_wwn))
                attrs = ["%s%s" % (_indent, attr)
                         for attr in _list_live_group_attrs(tpg)]
                if attrs:
                    section.append("\n".join(attrs))
                for lun in sorted(tpg.luns, key=lambda l: l.lun):
                    attrs = ["%s%s" % (_indent, attr)
                             for attr in _list_live_group_attrs(lun)]
                    if attrs:
                        fmt = "%slun %s %s %s {"
                    else:
                        fmt = "%slun %s backend %s:%s"
                    section.append(fmt % (_indent, lun.lun,
                                          lun.storage_object.backstore.plugin,
                                          lun.storage_object.name))
                    if attrs:
                        section.append("\n".join(attrs))
                        section.append("%s}" % _indent)
                if tpg.has_feature("acls"):
                    for acl in tpg.node_acls:
                        section.append("%sacl %s {"
                                       % (_indent, acl.node_wwn))
                        attrs = ["%s%s" % (2*_indent, attr)
                                 for attr in _list_live_group_attrs(acl)]
                        if attrs:
                            section.append("\n".join(attrs))
                        for mlun in acl.mapped_luns:
                            section.append("%smapped_lun %s {"
                                           % (2*_indent, mlun.mapped_lun))
                            section.append("%s target_lun
%s" % (3*_indent, mlun.tpg_lun.lun)) section.append("%s write_protect %s" % (3*_indent, int(mlun.write_protect))) section.append("%s}" % (2*_indent)) section.append("%s}" % (_indent)) if tpg.has_feature("nps"): for np in tpg.network_portals: section.append("%sportal %s:%s" % (_indent, np.ip_address, np.port)) if section: if enable is not None: section.append("%senable %s" % (_indent, enable)) dump.append("%s {" % head) dump.append("\n".join(section)) dump.append("}") else: if enable is not None: dump.append("%s enable %s" % (head, enable)) else: dump.append(head) return "\n".join(dump) def obj_attr(obj, attr): ''' Returns the value of attribute attr of the ConfigTree obj. If we cannot find the attribute, a ConfigError exception will be raised. Else, the attribute's value will be converted from its internal string representation to whatever rtslib expects. ''' # TODO Factorize a bit the val_type switch. # TODO Maybe consolidate with validate_val in config.py log.debug("obj_attr(%s, %s)" % (obj, attr)) matches = obj.search([(attr, ".*")]) if len(matches) != 1: raise ConfigError("Could not determine value of %s attribute for %s" % (attr, obj.path_str)) if matches[0].data['type'] not in ['attr', 'group']: raise ConfigError("Configuration error, expected attribute for %s" % obj.path_str) string = matches[0].key[1] if string == NO_VALUE: raise ConfigError("Value of %s attribute is not set for %s" % (attr, obj.path_str)) val_type = matches[0].data.get('val_type') ref_path = matches[0].data.get('ref_path') valid_value = None if val_type == 'bool': # FIXME There are inconsistencies in bools at the configfs level # The parameters take Yes/No values, the attributes 1/0 # Maybe something can be done about it ? 
if string in ['yes', 'true', '1', 'enable']: valid_value = 1 elif string in ['no', 'false', '0', 'disable']: valid_value = 0 if obj.key[0] == 'parameter': if valid_value == 1: valid_value = 'Yes' else: valid_value = 'No' elif val_type == 'bytes': mults = {'K': 1024, 'M': 1024**2, 'G': 1024**3, 'T': 1024**4} val = float(string[:-2]) unit = string[-2:-1] valid_value = int(val * mults[unit]) elif val_type == 'int': valid_value = int(string) elif val_type == 'ipport': (addr, _, port) = string.rpartition(":") valid_value = (addr, int(port)) elif val_type == 'posint': valid_value = int(string) elif val_type == 'str': valid_value = string elif val_type == 'erl': valid_value = int(string) elif val_type == 'iqn': valid_value = string elif val_type == 'naa': valid_value = string elif val_type == 'backend': (plugin, _, name) = string.partition(':') valid_value = (plugin, name) elif val_type == 'raw': valid_value = string elif ref_path: valid_value = ref_path else: raise ConfigError("Unknown value type '%s' when validating %s" % (val_type, matches[0])) return valid_value def apply_group_attrs(obj, lio_obj): ''' Applies group attributes obj to the live lio_obj. ''' # TODO Split that one up, too much indentation there! 
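The dump_live_* functions above emit, and obj_attr() reads back, the LIO plain-text configuration format. A minimal illustrative dump might look like the following (all object names, WWNs and paths here are hypothetical, chosen only to show the syntax produced by dump_live_storage() and dump_live_fabric()):

```
storage fileio disk vol1 {
    path /tmp/vol1.img
    size 1.0GB
    buffered yes
    attribute emulate_tpu 0
}
fabric iscsi target iqn.2003-01.org.example:vol1 tpgt 1 {
    lun 0 backend fileio:vol1
    acl iqn.2003-01.org.example:initiator1 {
        mapped_lun 0 {
            target_lun 0
            write_protect 0
        }
    }
    portal 0.0.0.0:3260
    enable 1
}
```

Blocks nest with braces, simple attributes are `name value` pairs, and statements end at a newline or `;`, matching the grammar built by ConfigParser further below.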
    unsupported_fmt = "Unsupported %s %s: consider upgrading your kernel"
    for group in obj.nodes:
        if group.data['type'] == 'group':
            group_name = group.key[0]
            for attr in group.nodes:
                if attr.data['type'] == 'attr' \
                        and not attr.data['required']:
                    name = attr.key[0]
                    value = obj_attr(group, name)
                    if group_name == 'auth':
                        try:
                            lio_obj.get_auth_attr(name)
                        except RTSLibError:
                            log.info(unsupported_fmt % ("auth attribute", name))
                        else:
                            log.debug("Setting auth %s to %s" % (name, value))
                            lio_obj.set_auth_attr(name, value)
                    elif group_name == 'attribute':
                        try:
                            lio_obj.get_attribute(name)
                        except RTSLibError:
                            log.info(unsupported_fmt % ("attribute", name))
                        else:
                            log.debug("Setting attribute %s to %s"
                                      % (name, value))
                            lio_obj.set_attribute(name, value)
                    elif group_name == 'parameter':
                        try:
                            lio_obj.get_parameter(name)
                        except RTSLibError:
                            log.info(unsupported_fmt % ("parameter", name))
                        else:
                            log.debug("Setting parameter %s to %s"
                                      % (name, value))
                            lio_obj.set_parameter(name, value)
                    elif group_name == 'discovery_auth':
                        log.debug("Setting discovery_auth %s to %s"
                                  % (name, value))
                        if name == 'enable':
                            lio_obj.discovery_enable_auth = value
                        elif name == 'mutual_password':
                            lio_obj.discovery_mutual_password = value
                        elif name == 'mutual_userid':
                            lio_obj.discovery_mutual_userid = value
                        elif name == 'password':
                            lio_obj.discovery_password = value
                        elif name == 'userid':
                            lio_obj.discovery_userid = value
                        else:
                            raise ConfigError("Unexpected discovery_auth "
                                              "attribute: %s" % name)

def apply_create_obj(obj):
    '''
    Creates an object on the live system.
    '''
    # TODO Factorize this when stable, merging it with update and delete,
    # leveraging rtslib 'any' mode (create if not exist)
    # TODO storage
    root = get_root()
    log.debug("apply_create(%s)" % obj.data)
    if obj.key[0] == 'mapped_lun':
        acl = obj.parent
        if acl.parent.key[0] == 'tpgt':
            tpg = acl.parent
            target = tpg.parent
        else:
            tpg = None
            target = acl.parent
        fabric = target.parent
        lio_fabric = FabricModule(fabric.key[1])
        lio_target = Target(lio_fabric, wwn=target.key[1], mode='lookup')
        if tpg is None:
            tpgt = 1
        else:
            tpgt = int(tpg.key[1])
        lio_tpg = TPG(lio_target, tpgt, mode='lookup')
        node_wwn = acl.key[1]
        lio_acl = NodeACL(lio_tpg, node_wwn, mode='lookup')
        mlun = int(obj.key[1])
        write_protect = obj_attr(obj, "write_protect")
        tpg_lun = int(obj_attr(obj, "target_lun").rpartition(' ')[2])
        lio_mlun = MappedLUN(lio_acl, mlun, tpg_lun, write_protect)
        apply_group_attrs(obj, lio_mlun)
    elif obj.key[0] == 'acl':
        if obj.parent.key[0] == 'tpgt':
            tpg = obj.parent
            target = tpg.parent
        else:
            tpg = None
            target = obj.parent
        fabric = target.parent
        lio_fabric = FabricModule(fabric.key[1])
        lio_target = Target(lio_fabric, wwn=target.key[1], mode='lookup')
        if tpg is None:
            tpgt = 1
        else:
            tpgt = int(tpg.key[1])
        lio_tpg = TPG(lio_target, tpgt, mode='lookup')
        node_wwn = obj.key[1]
        lio_acl = NodeACL(lio_tpg, node_wwn)
        apply_group_attrs(obj, lio_acl)
    elif obj.key[0] == 'portal':
        if obj.parent.key[0] == 'tpgt':
            tpg = obj.parent
            target = tpg.parent
        else:
            tpg = None
            target = obj.parent
        fabric = target.parent
        lio_fabric = FabricModule(fabric.key[1])
        lio_target = Target(lio_fabric, wwn=target.key[1], mode='lookup')
        if tpg is None:
            tpgt = 1
        else:
            tpgt = int(tpg.key[1])
        lio_tpg = TPG(lio_target, tpgt, mode='lookup')
        (address, _, port) = obj.key[1].partition(':')
        port = int(port)
        lio_portal = NetworkPortal(lio_tpg, address, port)
        apply_group_attrs(obj, lio_portal)
    elif obj.key[0] == 'lun':
        if obj.parent.key[0] == 'tpgt':
            tpg = obj.parent
            target = tpg.parent
        else:
            tpg = None
            target = obj.parent
        fabric = target.parent
        lio_fabric = FabricModule(fabric.key[1])
        lio_target = Target(lio_fabric, wwn=target.key[1], mode='lookup')
        if tpg is None:
            tpgt = 1
        else:
            tpgt = int(tpg.key[1])
        lio_tpg = TPG(lio_target, tpgt, mode='lookup')
        lun = int(obj.key[1])
        (plugin, name) = obj_attr(obj, "backend")
        # TODO move that to a separate function, use for disk too
        matching_lio_so = [so for so in root.storage_objects
                           if so.backstore.plugin == plugin
                           and so.name == name]
        if len(matching_lio_so) > 1:
            raise ConfigError("Detected unsupported configfs storage objects "
                              "allocation schema for storage object '%s'"
                              % obj.path_str)
        elif len(matching_lio_so) == 0:
            raise ConfigError("Could not find storage object '%s %s' for '%s'"
                              % (plugin, name, obj.path_str))
        else:
            lio_so = matching_lio_so[0]
        lio_lun = LUN(lio_tpg, lun, lio_so)
        apply_group_attrs(obj, lio_lun)
    elif obj.key[0] == 'tpgt':
        target = obj.parent
        fabric = target.parent
        has_enable = len(obj.search([("enable", ".*")])) != 0
        if has_enable:
            enable = obj_attr(obj, "enable")
        lio_fabric = FabricModule(fabric.key[1])
        lio_target = Target(lio_fabric, wwn=target.key[1], mode='lookup')
        tpgt = int(obj.key[1])
        try:
            nexus_wwn = obj_attr(obj, "nexus_wwn")
            lio_tpg = TPG(lio_target, tpgt, nexus_wwn=nexus_wwn)
        except ConfigError:
            lio_tpg = TPG(lio_target, tpgt)
        if has_enable:
            lio_tpg.enable = enable
        apply_group_attrs(obj, lio_tpg)
    elif obj.key[0] == 'target':
        fabric = obj.parent
        wwn = obj.key[1]
        lio_fabric = FabricModule(fabric.key[1])
        lio_target = Target(lio_fabric, wwn=wwn)
        apply_group_attrs(obj, lio_target)
        if not lio_target.has_feature("tpgts"):
            try:
                nexus_wwn = obj_attr(obj, "nexus_wwn")
                lio_tpg = TPG(lio_target, 1, nexus_wwn=nexus_wwn)
            except ConfigError:
                lio_tpg = TPG(lio_target, 1)
            if len(obj.search([("enable", ".*")])) != 0:
                lio_tpg.enable = True
    elif obj.key[0] == 'fabric':
        lio_fabric = FabricModule(obj.key[1])
        apply_group_attrs(obj, lio_fabric)
    elif obj.key[0] == 'disk':
        plugin = obj.parent.key[1]
        name = obj.key[1]
        idx = max([0] + [b.index for b in root.backstores
                         if b.plugin == plugin]) + 1
        if plugin == 'fileio':
            dev = obj_attr(obj, "path")
            size = obj_attr(obj, "size")
            try:
                wwn = obj_attr(obj, "wwn")
            except ConfigError:
                wwn = None
            buffered = obj_attr(obj, "buffered")
            lio_bs = FileIOBackstore(idx)
            lio_so = lio_bs.storage_object(name, dev, size, wwn, buffered)
            apply_group_attrs(obj, lio_so)
        elif plugin == 'iblock':
            # TODO Add policy for iblock
            lio_bs = IBlockBackstore(idx)
            dev = obj_attr(obj, "path")
            wwn = obj_attr(obj, "wwn")
            lio_so = lio_bs.storage_object(name, dev, wwn)
            apply_group_attrs(obj, lio_so)
        elif plugin == 'pscsi':
            # TODO Add policy for pscsi
            lio_bs = PSCSIBackstore(idx)
            dev = obj_attr(obj, "path")
            lio_so = lio_bs.storage_object(name, dev)
            apply_group_attrs(obj, lio_so)
        elif plugin == 'rd_mcp':
            # TODO Add policy for rd_mcp
            lio_bs = RDMCPBackstore(idx)
            size = obj_attr(obj, "size")
            wwn = obj_attr(obj, "wwn")
            nullio = obj_attr(obj, "nullio")
            lio_so = lio_bs.storage_object(name, size, wwn, nullio)
            apply_group_attrs(obj, lio_so)
        else:
            raise ConfigError("Unknown backend '%s' for backstore '%s'"
                              % (plugin, obj))
        matching_lio_so = [so for so in root.storage_objects
                           if so.backstore.plugin == plugin
                           and so.name == name]
        if len(matching_lio_so) > 1:
            raise ConfigError("Detected unsupported configfs storage objects "
                              "allocation schema for '%s'" % obj.path_str)
        elif len(matching_lio_so) == 0:
            raise ConfigError("Could not find backstore '%s'" % obj.path_str)
        else:
            lio_so = matching_lio_so[0]

def apply_delete_obj(obj):
    '''
    Deletes an object from the live system.
    '''
    # TODO Factorize this when stable
    # TODO storage fabric cannot be deleted from the system, find a way to
    # handle this when i.e. path 'storage fileio' is in current config, but
    # no objects are hanging under it.
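The apply_create_obj() and apply_delete_obj() functions above repeat one pattern for every acl, portal, lun and mapped_lun branch: if the node's parent in the config tree is a `tpgt` node its tag is used, otherwise the fabric has no "tpgts" feature and the live TPG is always tag 1. A standalone sketch of that rule (the `resolve_tpgt` helper name is hypothetical, not part of rtslib):

```python
def resolve_tpgt(parent_class, parent_ident):
    '''Return the live TPG tag for a node whose parent is described by
    (parent_class, parent_ident), mirroring the tpg-is-None branches above.'''
    if parent_class == 'tpgt':
        # Config path was e.g. "fabric iscsi target <wwn> tpgt 2 ..."
        return int(parent_ident)
    # Fabric without the "tpgts" feature: rtslib still models one TPG,
    # conventionally tag 1.
    return 1

# "fabric iscsi target iqn.2003-01.org.example:t1 tpgt 2 acl ..." -> 2
print(resolve_tpgt('tpgt', '2'))
# "fabric loopback target naa.60014050000000 acl ..." -> 1
print(resolve_tpgt('target', 'naa.60014050000000'))
```

Factoring this out (together with the FabricModule/Target/TPG lookup preamble) is essentially what the "Factorize this when stable" TODOs above are asking for.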
    root = get_root()
    log.debug("apply_delete(%s)" % obj.data)
    if obj.key[0] == 'mapped_lun':
        acl = obj.parent
        if acl.parent.key[0] == 'tpgt':
            tpg = acl.parent
            target = tpg.parent
        else:
            tpg = None
            target = acl.parent
        fabric = target.parent
        lio_fabric = FabricModule(fabric.key[1])
        lio_target = Target(lio_fabric, wwn=target.key[1], mode='lookup')
        if tpg is None:
            tpgt = 1
        else:
            tpgt = int(tpg.key[1])
        lio_tpg = TPG(lio_target, tpgt, mode='lookup')
        node_wwn = acl.key[1]
        lio_acl = NodeACL(lio_tpg, node_wwn, mode='lookup')
        mlun = int(obj.key[1])
        lio_mlun = MappedLUN(lio_acl, mlun)
        lio_mlun.delete()
    elif obj.key[0] == 'acl':
        if obj.parent.key[0] == 'tpgt':
            tpg = obj.parent
            target = tpg.parent
        else:
            tpg = None
            target = obj.parent
        fabric = target.parent
        lio_fabric = FabricModule(fabric.key[1])
        lio_target = Target(lio_fabric, wwn=target.key[1], mode='lookup')
        if tpg is None:
            tpgt = 1
        else:
            tpgt = int(tpg.key[1])
        lio_tpg = TPG(lio_target, tpgt, mode='lookup')
        node_wwn = obj.key[1]
        lio_acl = NodeACL(lio_tpg, node_wwn, mode='lookup')
        lio_acl.delete()
    elif obj.key[0] == 'portal':
        if obj.parent.key[0] == 'tpgt':
            tpg = obj.parent
            target = tpg.parent
        else:
            tpg = None
            target = obj.parent
        fabric = target.parent
        lio_fabric = FabricModule(fabric.key[1])
        lio_target = Target(lio_fabric, wwn=target.key[1], mode='lookup')
        if tpg is None:
            tpgt = 1
        else:
            tpgt = int(tpg.key[1])
        lio_tpg = TPG(lio_target, tpgt, mode='lookup')
        (address, _, port) = obj.key[1].partition(':')
        port = int(port)
        lio_portal = NetworkPortal(lio_tpg, address, port, mode='lookup')
        lio_portal.delete()
    elif obj.key[0] == 'lun':
        if obj.parent.key[0] == 'tpgt':
            tpg = obj.parent
            target = tpg.parent
        else:
            tpg = None
            target = obj.parent
        fabric = target.parent
        lio_fabric = FabricModule(fabric.key[1])
        lio_target = Target(lio_fabric, wwn=target.key[1], mode='lookup')
        if tpg is None:
            tpgt = 1
        else:
            tpgt = int(tpg.key[1])
        lio_tpg = TPG(lio_target, tpgt, mode='lookup')
        lun = int(obj.key[1])
        lio_lun = LUN(lio_tpg, lun)
        lio_lun.delete()
    elif obj.key[0] == 'tpgt':
        target = obj.parent
        fabric = target.parent
        lio_fabric = FabricModule(fabric.key[1])
        lio_target = Target(lio_fabric, wwn=target.key[1], mode='lookup')
        tpgt = int(obj.key[1])
        lio_tpg = TPG(lio_target, tpgt, mode='lookup')
        # FIXME Is this really needed ?
        lio_tpg.enable = True
        lio_tpg.delete()
    elif obj.key[0] == 'target':
        fabric = obj.parent
        wwn = obj.key[1]
        lio_fabric = FabricModule(fabric.key[1])
        lio_target = Target(lio_fabric, wwn=wwn, mode='lookup')
        lio_target.delete()
    elif obj.key[0] == 'disk':
        plugin = obj.parent.key[1]
        name = obj.key[1]
        matching_lio_so = [so for so in root.storage_objects
                           if so.backstore.plugin == plugin
                           and so.name == name]
        log.debug("Looking for storage object %s in %s"
                  % (obj.path_str,
                     str(["%s/%s" % (so.backstore.plugin, so.name)
                          for so in root.storage_objects])))
        if len(matching_lio_so) > 1:
            raise ConfigError("Detected unsupported configfs storage objects "
                              "allocation schema for storage object '%s'"
                              % obj.path_str)
        elif len(matching_lio_so) == 0:
            raise ConfigError("Could not find storage object '%s'"
                              % obj.path_str)
        else:
            lio_so = matching_lio_so[0]
        lio_so.delete()

def clear_configfs():
    '''
    Clears the live configfs by deleting all nodes.
    '''
    root = get_root()
    for target in root.targets:
        target.delete()
    for backstore in root.backstores:
        backstore.delete()

rtslib-3.0.pre4.1~g1b33ceb/rtslib/config_parser.py

'''
This file is part of LIO(tm).

Copyright (c) 2012-2014 by Datera, Inc.
More information on www.datera.io.

Original author: Jerome Martin

Datera and LIO are trademarks of Datera, Inc., which may be registered in
some jurisdictions.

Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
'''
import logging

import pyparsing as pp

from config_tree import NO_VALUE

# TODO Add strategic debug (and logging too, it is absent)
# TODO Using group names as we do with obj_classes would be more robust

DEBUG = False
if DEBUG:
    logging.basicConfig()
    log = logging.getLogger('ConfigParser')
    log.setLevel(logging.DEBUG)
else:
    log = logging.getLogger('ConfigParser')
    log.setLevel(logging.INFO)

class ConfigParser(object):
    '''
    Our configuration format parser.
    '''
    # Order is important, used for sorting in Config
    obj_classes = "storage disk fabric target tpgt lun acl portal mapped_lun"

    def __init__(self):
        self._init_parser()

    def _init_parser(self):
        pp.ParserElement.setDefaultWhitespaceChars(' \t')
        tok_comment = pp.Regex(r'#.*')
        tok_ws = pp.Suppress(pp.OneOrMore(pp.White(' \t')))
        tok_delim = (pp.Optional(pp.Suppress(tok_comment))
                     + pp.Suppress(pp.lineEnd | pp.Literal(';')))
        tok_string = (pp.QuotedString('"')
                      | pp.QuotedString("'")
                      | pp.Word(pp.printables, excludeChars="{}#'\";"))
        tok_obj_class = pp.oneOf(self.obj_classes)
        tok_obj_ident = tok_string
        tok_obj = pp.Group(tok_obj_class + tok_ws + tok_obj_ident)
        tok_obj = tok_obj.setParseAction(self._parse_action_obj)
        tok_attr_name = pp.Word(pp.alphas, pp.alphas + pp.nums + "_")
        tok_attr_value = tok_string
        tok_attr = pp.Group(tok_attr_name + tok_ws + tok_attr_value
                            + pp.Optional(tok_comment))
        tok_attr = tok_attr.setParseAction(self._parse_action_attr)
        tok_group = pp.Word(pp.alphas, pp.alphas + "_")
        tok_group = tok_group.setParseAction(self._parse_action_group)
        # FIXME This does not work as intended when used
        # tok_empty_block = pp.Suppress('{' + pp.ZeroOrMore(tok_delim) + '}')
        tok_statement = pp.Forward()
        tok_block = (pp.Group(pp.Suppress('{')
                              + pp.OneOrMore(tok_statement)
                              + pp.Suppress('}')))
        tok_block = tok_block.setParseAction(self._parse_action_block)
        tok_statement_no_path = ((tok_group + tok_ws + tok_attr)
                                 #| (tok_group + tok_empty_block)
                                 | (tok_group + tok_block)
                                 | tok_attr)
        tok_optional_if_path = ((tok_ws + tok_group + tok_ws + tok_attr)
                                #| (tok_ws + tok_group + tok_empty_block)
                                | (tok_ws + tok_group + tok_block)
                                #| tok_empty_block
                                | tok_block
                                | (tok_ws + tok_attr))
        tok_statement_if_path = (pp.OneOrMore(tok_obj)
                                 + pp.Optional(tok_optional_if_path))
        tok_statement << pp.Group(pp.ZeroOrMore(tok_delim)
                                  + (tok_statement_if_path
                                     | tok_statement_no_path)
                                  + pp.OneOrMore(tok_delim))
        self._parser = pp.ZeroOrMore(tok_statement)

    def _parse_action_obj(self, source, idx, tokin):
        value = tokin[0]
        return [{'type': 'obj',
                 'line': pp.lineno(idx, source),
                 'col': pp.col(idx, source),
                 'key': (value[0], value[1])}]

    def _parse_action_attr(self, source, idx, tokin):
        value = tokin[0]
        tokout = {'type': 'attr',
                  'line': pp.lineno(idx, source),
                  'col': pp.col(idx, source),
                  'key': (value[0], value[1])}
        if len(value) > 2:
            tokout['comment'] = value[2][1:].strip()
        return [tokout]

    def _parse_action_group(self, source, idx, tokin):
        value = tokin
        return [{'type': 'group',
                 'line': pp.lineno(idx, source),
                 'col': pp.col(idx, source),
                 'key': (value[0],)}]

    def _parse_action_block(self, source, idx, tokin):
        value = tokin[0].asList()
        return [{'type': 'block',
                 'line': pp.lineno(idx, source),
                 'col': pp.col(idx, source),
                 'statements': value}]

    def parse_file(self, filepath):
        return self._parser.parseFile(filepath, parseAll=True).asList()

    def parse_string(self, string):
        if string.strip():
            return self._parser.parseString(string, parseAll=True).asList()
        else:
            return []

class PolicyParser(ConfigParser):
    '''
    Our policy format parser.
    '''
    def _init_parser(self):
        # TODO Once stable, factorize with ConfigParser
        pp.ParserElement.setDefaultWhitespaceChars(' \t')
        tok_comment = pp.Regex(r'#.*')
        tok_ws = pp.Suppress(pp.OneOrMore(pp.White(' \t')))
        tok_delim = (pp.Optional(pp.Suppress(tok_comment))
                     + pp.Suppress(pp.lineEnd | pp.Literal(';')))
        tok_string = (pp.QuotedString('"')
                      | pp.QuotedString("'")
                      | pp.Word(pp.printables, excludeChars="{}#'\";%@()"))
        tok_ref_path = (pp.Suppress('@') + pp.Suppress('(')
                        + pp.OneOrMore(tok_string) + pp.Suppress(')'))
        tok_id_rule = pp.Suppress('%') + tok_string("id_type")
        tok_val_rule = (pp.Suppress('%') + tok_string("val_type")
                        + pp.Optional(pp.Suppress('(')
                                      + tok_string("val_dfl")
                                      + pp.Suppress(')')))
        tok_obj_class = pp.oneOf(self.obj_classes)
        tok_obj_ident = tok_id_rule | tok_string("id_fixed")
        tok_obj = pp.Group(tok_obj_class("class") + tok_ws + tok_obj_ident)
        tok_obj = tok_obj.setParseAction(self._parse_action_obj)
        tok_attr_name = pp.Word(pp.alphas, pp.alphas + pp.nums + "_")
        tok_attr_value = tok_ref_path("ref_path") | tok_val_rule
        tok_attr = pp.Group(tok_attr_name("name") + tok_ws + tok_attr_value
                            + pp.Optional(tok_comment)("comment"))
        tok_attr = tok_attr.setParseAction(self._parse_action_attr)
        tok_group = pp.Word(pp.alphas, pp.alphas + "_")
        tok_group = tok_group.setParseAction(self._parse_action_group)
        tok_statement = pp.Forward()
        tok_block = (pp.Group(pp.Suppress('{')
                              + pp.OneOrMore(tok_statement)
                              + pp.Suppress('}')))
        tok_block = tok_block.setParseAction(self._parse_action_block)
        tok_statement_no_path = ((tok_group + tok_ws + tok_attr)
                                 | (tok_group + tok_block)
                                 | tok_attr)
        tok_optional_if_path = ((tok_ws + tok_group + tok_ws + tok_attr)
                                | (tok_ws + tok_group + tok_block)
                                | tok_block
                                | (tok_ws + tok_attr))
        tok_statement_if_path = (pp.OneOrMore(tok_obj)
                                 + pp.Optional(tok_optional_if_path))
        tok_statement << pp.Group(pp.ZeroOrMore(tok_delim)
                                  + (tok_statement_if_path
                                     | tok_statement_no_path)
                                  + pp.OneOrMore(tok_delim))
        self._parser = pp.ZeroOrMore(tok_statement)

    def _parse_action_attr(self, source, idx, tokin):
        value = tokin[0].asDict()
        ref_path = value.get('ref_path')
        if ref_path is not None:
            ref_path = " ".join(ref_path.asList())
        tokout = {'type': 'attr',
                  'line': pp.lineno(idx, source),
                  'col': pp.col(idx, source),
                  'ref_path': ref_path,
                  'val_type': value.get('val_type'),
                  'val_dfl': value.get('val_dfl', NO_VALUE),
                  'required': value.get('val_dfl', NO_VALUE) == NO_VALUE,
                  'comment': value.get('comment'),
                  'key': (value.get('name'), 'xxx')}
        return [tokout]

    def _parse_action_obj(self, source, idx, tokin):
        value = tokin[0].asDict()
        return [{'type': 'obj',
                 'line': pp.lineno(idx, source),
                 'col': pp.col(idx, source),
                 'id_type': value.get('id_type'),
                 'id_fixed': value.get('id_fixed'),
                 'key': (value.get('class'), value.get('id_fixed', 'xxx'))}]

class PatternParser(ConfigParser):
    '''
    Our pattern format parser.
    '''
    def _init_parser(self):
        # TODO Once stable, factorize with ConfigParser
        pp.ParserElement.setDefaultWhitespaceChars(' \t')
        tok_ws = pp.Suppress(pp.OneOrMore(pp.White(' \t')))
        tok_string = (pp.QuotedString('"')
                      | pp.QuotedString("'")
                      | pp.Word(pp.printables, excludeChars="{}#'\";"))
        tok_obj_class = pp.oneOf(self.obj_classes)
        tok_obj_ident = tok_string
        tok_obj = pp.Group(tok_obj_class + tok_ws + tok_obj_ident)
        tok_obj = tok_obj.setParseAction(self._parse_action_obj_attr)
        tok_attr_name = pp.Word(pp.alphas + pp.nums + "_.*[]-")
        tok_attr_value = tok_string
        tok_attr = pp.Group(tok_attr_name + tok_ws + tok_attr_value)
        tok_attr = tok_attr.setParseAction(self._parse_action_obj_attr)
        tok_group = pp.Word(pp.alphas + "_.*[]-")
        tok_group = tok_group.setParseAction(self._parse_action_group)
        tok_statement_no_path = ((tok_group + tok_ws + tok_attr)
                                 | tok_attr
                                 | tok_group)
        tok_optional_if_path = ((tok_ws + tok_group + tok_ws + tok_attr)
                                | (tok_ws + tok_attr)
                                | (tok_ws + tok_group))
        tok_statement_if_path = (pp.OneOrMore(tok_obj)
                                 + pp.Optional(tok_optional_if_path))
        self._parser = tok_statement_if_path | tok_statement_no_path

    def _parse_action_obj_attr(self, source, idx, tokin):
        return (tokin[0][0], tokin[0][1])

    def _parse_action_group(self, source, idx, tokin):
        return (tokin[0],)

rtslib-3.0.pre4.1~g1b33ceb/rtslib/config.py

'''
This file is part of LIO(tm).

Copyright (c) 2012-2014 by Datera, Inc.
More information on www.datera.io.

Original author: Jerome Martin

Datera and LIO are trademarks of Datera, Inc., which may be registered in
some jurisdictions.

Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
'''
import os, re, time, copy, logging

from rtslib.utils import is_valid_wwn, list_eth_ips, fread

from config_filters import *
from config_tree import ConfigTree, NO_VALUE
from config_parser import ConfigParser, PolicyParser, PatternParser

DEBUG = False
if DEBUG:
    logging.basicConfig()
    log = logging.getLogger('Config')
    log.setLevel(logging.DEBUG)
else:
    log = logging.getLogger('Config')
    log.setLevel(logging.INFO)

# FIXME validate_* and _load_parse_tree are a mess !!!
# TODO Implement resync() to reload both policy and configfs state
# TODO Add class_match_ids (objs) and name_match_value (attrs) to search etc.
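The dump_value() helper defined just below quotes any value containing whitespace or config syntax characters so that parse/dump round-trips cleanly. A standalone sketch of its behavior (the `NO_VALUE` sentinel here is a hypothetical stand-in; the real one is imported from config_tree):

```python
NO_VALUE = object()  # hypothetical stand-in for config_tree.NO_VALUE

def dump_value(string):
    # The sentinel passes through untouched.
    if string == NO_VALUE:
        return NO_VALUE
    # Any character with syntactic meaning forces double quotes.
    for char in " ~\t{}#',;":
        if char in string:
            return '"%s"' % string
    # A value containing double quotes is single-quoted instead.
    if '"' in string:
        return "'%s'" % string
    elif not string:
        return '""'
    else:
        return string

print(dump_value("hello world"))  # -> "hello world"
print(dump_value("plain"))        # -> plain
print(dump_value(""))             # -> ""
```

Note the asymmetry: the double-quote check only runs when no other special character matched, so a value containing both a space and a `"` is still double-quoted.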
# Use it to simplify all "%s .*" tricks in cli
# TODO Implement commit_live()
# TODO Custom defaults load
# TODO Add copy() operation

def dump_value(string):
    if string == NO_VALUE:
        return NO_VALUE
    for char in " ~\t{}#',;":
        if char in string:
            return '"%s"' % string
    if '"' in string:
        return "'%s'" % string
    elif not string:
        return '""'
    else:
        return string

def key_to_string(key):
    strings = []
    for item in key:
        strings.append(dump_value(item))
    return " ".join(strings)

def is_valid_backend(value, parent):
    cur = parent
    while cur.parent is not None:
        cur = cur.parent
    (backend, _, disk) = value.partition(':')
    if cur.search([("storage", backend), ("disk", disk)]):
        return True
    else:
        return False

def sort_key(node):
    '''
    A sort key for configuration nodes, that ensures nodes potentially
    referenced in the config come first: storage before fabric and lun
    objects before acl objects. Also, attributes will be sorted before
    objects, so that configuration dumps are easier to read, with simple
    attributes coming before attribute groups.
    '''
    node_type = node.data['type']
    obj_classes = ConfigParser.obj_classes
    ordered_obj = {}
    for k, v in enumerate(obj_classes.split()):
        ordered_obj[v] = "%s%s" % (k, v)
    if node_type == 'attr':
        key = ('0', node.key[0], node.key[1])
    elif node_type == 'group':
        key = ('1', node.key[0])
    elif node_type == 'obj':
        key = ('2', ordered_obj.get(node.key[0], node.key[0]), node.key[1])
    else:
        raise ConfigError("Unknown configuration node type %s for %s"
                          % (node_type, node))
    return key

class ConfigError(Exception):
    pass

class Config(object):
    '''
    The LIO configuration API.

    The Config object provides methods to edit, search, validate and update
    the current configuration, and commit that configuration to the live
    system on request. It features pattern-matching search for all
    configuration objects and attributes as well as multi-level undo
    capabilities. In addition, all configuration changes are staged before
    being applied, isolating the current configuration from load-time and
    validation errors.
    '''
    policy_dir = "/var/target/policy"

    def __init__(self):
        data = {'source': {'operation': 'init', 'timestamp': time.time()},
                'type': 'root',
                'policy_path': []}
        self.policy = ConfigTree(data, sort_key, key_to_string)
        self.reference = ConfigTree(data, sort_key, key_to_string)
        self._parser = ConfigParser()
        self._policy_parser = PolicyParser()
        self._pattern_parser = PatternParser()
        self._configs = [ConfigTree(data, sort_key, key_to_string)]
        self._load_policy()

    def _load_policy(self):
        '''
        Loads all LIO system policy files.
        '''
        filepaths = ["%s/%s" % (self.policy_dir, path)
                     for path in os.listdir(self.policy_dir)
                     if path.endswith(".lio")]
        for filepath in filepaths:
            log.debug('Loading policy file %s' % filepath)
            parse_tree = self._policy_parser.parse_file(filepath)
            source = {'operation': 'load',
                      'filepath': filepath,
                      'timestamp': time.time(),
                      'mtime': os.path.getmtime(filepath)}
            self._load_parse_tree(parse_tree, replace=False,
                                  source=source, target='policy')

    def _load_parse_tree(self, parse_tree, cur_stage=None,
                         replace=False, source=None,
                         target='config', allow_new_attrs=False):
        '''
        target can be 'config', 'policy' or 'reference'
        '''
        # TODO accept 'defaults' target too
        if source is None:
            source = {}
        if cur_stage is None:
            update_target = True
            if replace:
                data = {'source': source, 'policy_path': [], 'type': 'root'}
                stage = ConfigTree(data, sort_key, key_to_string)
            elif target == 'config':
                stage = self.current.get_clone()
                stage.data['source'] = source
            elif target == 'policy':
                stage = self.policy.get_clone()
                stage.data['source'] = source
            elif target == 'reference':
                stage = self.reference.get_clone()
                stage.data['source'] = source
        else:
            update_target = False
            stage = cur_stage
        loaded = []
        log.debug("Loading parse tree %s" % parse_tree)
        for statement in parse_tree:
            cur = stage
            log.debug("Visiting statement %s" % statement)
            for token in statement:
                token['source'] = source
                log.debug("Visiting token %s" % token)
                if token['type'] == 'obj':
                    log.debug("Loading obj token: %s" % token)
                    if target != 'policy':
                        token = self.validate_obj(token, cur)
                    old = cur.get(token['key'])
                    cur = cur.cine(token['key'], token)
                    if not old:
                        loaded.append(cur)
                    if target != 'policy':
                        self._add_missing_attributes(cur)
                    log.debug("Added object %s" % cur.path)
                elif token['type'] == 'attr':
                    log.debug("Loading attr token: %s" % token)
                    if target != 'policy':
                        token = self.validate_attr(token, cur, allow_new_attrs)
                    old_nodes = cur.search([(token['key'][0], ".*")])
                    for old_node in old_nodes:
                        log.debug("Deleting old value: %s\nnew is: %s"
                                  % (old_node.path, str(token['key'])))
                        deleted = cur.delete([old_node.key])
                        log.debug("Deleted: %s" % str(deleted))
                    cur = cur.cine(token['key'], token)
                    if old_nodes and old_nodes[0].key != cur.key:
                        loaded.append(cur)
                    log.debug("Added attribute %s" % cur.path)
                elif token['type'] == 'group':
                    log.debug("Loading group token: %s" % token)
                    if target != 'policy':
                        log.debug("cur '%s' token '%s'" % (cur, token))
                        token['policy_path'] = (cur.data['policy_path']
                                                + [(token['key'][0],)])
                    old = cur.get(token['key'])
                    cur = cur.cine(token['key'], token)
                    if not old:
                        loaded.append(cur)
                elif token['type'] == 'block':
                    log.debug("Loading block token: %s" % token)
                    for statement in token['statements']:
                        log.debug("_load_parse_tree recursion on block "
                                  "statement: %s" % [statement])
                        loaded.extend(self._load_parse_tree(
                            [statement], cur, source=source, target=target,
                            allow_new_attrs=allow_new_attrs))
        if update_target:
            if target == 'config':
                self.current = stage
            elif target == 'policy':
                self.policy = stage
            elif target == 'reference':
                self.reference = stage
        return loaded

    def _add_missing_attributes(self, obj):
        '''
        Given an obj node, add all missing attributes and attribute groups
        in the configuration.
        '''
        source = {'operation': 'auto', 'timestamp': time.time()}
        policy_root = self.policy.get_path(obj.data['policy_path'])
        for policy_node in [node for node in policy_root.nodes
                            if node.data['type'] == 'attr']:
            attr = obj.search([(policy_node.key[0], ".*")])
            if not attr:
                key = (policy_node.key[0], policy_node.data.get('val_dfl'))
                data = {'key': key,
                        'type': 'attr',
                        'source': source,
                        'val_dfl': policy_node.data.get('val_dfl'),
                        'val_type': policy_node.data['val_type'],
                        'required': key[1] is None,
                        'policy_path': policy_node.path}
                log.debug("obj.set(%s, %s)" % (str(key), data))
                obj.set(key, data)
        groups = []
        for policy_node in [node for node in policy_root.nodes
                            if node.data['type'] == 'group']:
            group = obj.get((policy_node.key[0],))
            if not group:
                key = (policy_node.key[0],)
                data = {'key': key,
                        'type': 'group',
                        'source': source,
                        'policy_path': policy_node.path}
                groups.append(obj.set(key, data))
            else:
                groups.append(group)
        for group in groups:
            policy_root = self.policy.get_path(group.data['policy_path'])
            for policy_node in [node for node in policy_root.nodes
                                if node.data['type'] == 'attr']:
                attr = group.search([(policy_node.key[0], ".*")])
                if not attr:
                    key = (policy_node.key[0],
                           policy_node.data.get('val_dfl'))
                    data = {'key': key,
                            'type': 'attr',
                            'source': source,
                            'val_dfl': policy_node.data.get('val_dfl'),
                            'val_type': policy_node.data['val_type'],
                            'required': key[1] is None,
                            'policy_path': policy_node.path}
                    group.set(key, data)

    def validate_val(self, value, val_type, parent=None):
        valid_value = None
        log.debug("validate_val(%s, %s)" % (value, val_type))
        if value == NO_VALUE:
            return None
        if val_type == 'bool':
            if value.lower() in ['yes', 'true', '1', 'enable']:
                valid_value = 'yes'
            elif value.lower() in ['no', 'false', '0', 'disable']:
                valid_value = 'no'
        elif val_type == 'bytes':
            match = re.match(r'(\d+(\.\d*)?)([kKMGT]?B?$)', value)
            if match:
                qty = str(float(match.group(1)))
                unit = match.group(3).upper()
                if not unit.endswith('B'):
                    unit += 'B'
                valid_value = "%s%s" % (qty, unit)
        elif val_type == 'int':
            try:
                valid_value = str(int(value))
            except:
                pass
        elif val_type == 'ipport':
            (addr, _, port) = value.rpartition(":")
            try:
                str(int(port))
            except:
                pass
            else:
                try:
                    listen_all = int(addr.replace(".", "")) == 0
                except:
                    listen_all = False
                if listen_all:
                    valid_value = "0.0.0.0:%s" % port
                elif addr in list_eth_ips():
                    valid_value = value
        elif val_type == 'posint':
            try:
                val = int(value)
            except:
                pass
            else:
                if val > 0:
                    valid_value = value
        elif val_type == 'str':
            valid_value = str(value)
            forbidden = "*?[]"
            for char in forbidden:
                if char in valid_value:
                    valid_value = None
                    break
        elif val_type == 'erl':
            if value in ["0", "1", "2"]:
                valid_value = value
        elif val_type == 'iqn':
            if is_valid_wwn('iqn', value):
                valid_value = value
        elif val_type == 'naa':
            if is_valid_wwn('naa', value):
                valid_value = value
        elif val_type == 'backend':
            if is_valid_backend(value, parent):
                valid_value = value
        else:
            raise ConfigError("Unknown value type '%s' when validating %s"
                              % (val_type, value))
        log.debug("validate_val(%s) is a valid %s: %s"
                  % (value, val_type, valid_value))
        return valid_value

    def validate_obj(self, token, parent):
        log.debug("validate_obj(%s, %s)" % (token, parent.data))
        policy_search = parent.data['policy_path'] + [(token['key'][0], ".*")]
        policy_nodes = self.policy.search(policy_search)
        valid_token = copy.deepcopy(token)
        expected_val_types = set()
        for policy_node in policy_nodes:
            id_fixed = policy_node.data['id_fixed']
            id_type = policy_node.data['id_type']
            if id_fixed is not None:
                expected_val_types.add("'%s'" % id_fixed)
                if id_fixed == token['key'][1]:
                    valid_token['policy_path'] = policy_node.path
                    return valid_token
            else:
                expected_val_types.add(id_type)
                valid_value = self.validate_val(valid_token['key'][1], id_type)
                if valid_value is not None:
                    valid_token['key'] = (valid_token['key'][0], valid_value)
                    valid_token['policy_path'] = policy_node.path
                    return valid_token
        if not policy_nodes:
            obj_type = ("%s %s" % (parent.path_str, token['key'][0])).strip()
            raise ConfigError("Unknown object type: %s" % obj_type)
        else:
            raise ConfigError("Invalid %s identifier '%s': expected type %s"
                              % (token['key'][0],
                                 token['key'][1],
                                 ", ".join(expected_val_types)))

    def validate_attr(self, token, parent, allow_new_attr=False):
        log.debug("validate_attr(%s, %s)" % (token, parent.data))
        if token['key'][1] is None:
            return token
        policy_search = parent.data['policy_path'] + [(token['key'][0], ".*")]
        policy_nodes = self.policy.search(policy_search)
        valid_token = copy.deepcopy(token)
        expected_val_types = set()
        for policy_node in policy_nodes:
            ref_path = policy_node.data['ref_path']
            valid_token['required'] = policy_node.data['required']
            valid_token['comment'] = policy_node.data['comment']
            valid_token['val_dfl'] = policy_node.data.get('val_dfl')
            valid_token['val_type'] = policy_node.data['val_type']
            if ref_path is not None:
                root = parent
                if ref_path.startswith('-'):
                    (upno, _, down) = ref_path[1:].partition(' ')
                    for i in range(int(upno) - 1):
                        root = root.parent
                else:
                    while not root.is_root:
                        root = root.parent
                search_path = [(down, token['key'][1])]
                nodes = root.search(search_path)
                if len(nodes) == 1:
                    valid_token['ref_path'] = nodes[0].path_str
                    return valid_token
                elif len(nodes) == 0:
                    raise ConfigError("Invalid reference for attribute %s: %s"
                                      % (token['key'][0], search_path))
                else:
                    raise ConfigError("Unexpected reference error, got: %s"
                                      % nodes)
                return valid_token
            else:
                expected_val_types.add(policy_node.data['val_type'])
                if valid_token['key'][1] == NO_VALUE:
                    valid_value = NO_VALUE
                else:
                    valid_value = \
                        self.validate_val(valid_token['key'][1],
                                          policy_node.data['val_type'],
                                          parent=parent)
                if valid_value is not None:
                    valid_token['key'] = (valid_token['key'][0], valid_value)
                    return valid_token
        if not policy_nodes:
            if allow_new_attr:
                valid_token['required'] = False
                valid_token['comment'] = "Unknown"
                valid_token['val_dfl'] = valid_token['key'][1]
                valid_token['val_type'] = "raw"
                valid_token['ref_path'] = None
                return valid_token
            else:
                attr_name = ("%s %s" %
                                      (parent.path_str, token['key'][0])).strip()
                raise ConfigError("Unknown attribute: %s" % attr_name)
        else:
            raise ConfigError("Invalid %s value '%s': expected type %s"
                              % (token['key'][0],
                                 token['key'][1],
                                 ", ".join(expected_val_types)))

    @property
    def current(self):
        return self._configs[-1]

    @current.setter
    def current(self, config_tree):
        self._configs.append(config_tree)

    def undo(self):
        '''
        Restores the previous state of the configuration, before the
        last set, load, delete, update or clear operation. If there is
        nothing to undo, a ConfigError exception will be raised.
        '''
        if len(self._configs) < 2:
            raise ConfigError("Nothing to undo")
        else:
            self._configs.pop()

    def set(self, configuration):
        '''
        Evaluates the configuration (a string in LIO configuration
        format) and sets the relevant objects, attributes and attribute
        groups. Existing attributes and objects will be updated if
        needed and new ones will be added.

        The list of created configuration nodes will be returned.

        If an error occurs, the operation will be aborted, leaving the
        current configuration intact.
        '''
        parse_tree = self._parser.parse_string(configuration)
        source = {'operation': 'set',
                  'data': configuration,
                  'timestamp': time.time()}
        return self._load_parse_tree(parse_tree, source=source)

    def delete(self, pattern, node_filter=lambda x:x):
        '''
        Deletes all configuration objects and attributes whose paths
        match the pattern, along with their children.

        The pattern is a single LIO configuration statement without any
        block, where object identifiers, attribute names, attribute
        values and attribute groups are regular expression patterns.
        Object types have to use their exact string representation to
        match.

        node_filter is a function applied to each node before returning it:
            node_filter(node_in) -> node_out | None (aka filtered out)

        Returns a list of all deleted nodes.

        If an error occurs, the operation will be aborted, leaving the
        current configuration intact.
''' path = [token for token in self._pattern_parser.parse_string(pattern)] log.debug("delete(%s)" % pattern) source = {'operation': 'delete', 'pattern': pattern, 'timestamp': time.time()} stage = self.current.get_clone() stage.data['source'] = source deleted = [] for node in stage.search(path, node_filter): log.debug("delete() found node %s" % node) deleted.append(stage.delete(node.path)) self.current = stage return deleted def load(self, filepath, allow_new_attrs=False): ''' Loads an LIO configuration file and replace the current configuration with it. All existing objects and attributes will be deleted, and new ones will be added. If an error occurs, the operation will be aborted, leaving the current configuration intact. ''' for c in fread(filepath): if c not in ["\n", "\t", " "]: parse_tree = self._parser.parse_file(filepath) source = {'operation': 'load', 'filepath': filepath, 'timestamp': time.time(), 'mtime': os.path.getmtime(filepath)} self._load_parse_tree(parse_tree, replace=True, source=source, allow_new_attrs=allow_new_attrs) break def load_live(self): ''' Loads the live-running configuration. ''' from config_live import dump_live live = dump_live() parse_tree = self._parser.parse_string(live) source = {'operation': 'resync', 'timestamp': time.time()} self._load_parse_tree(parse_tree, replace=True, source=source, allow_new_attrs=True) def update(self, filepath): ''' Updates the current configuration with the contents of an LIO configuration file. Existing attributes and objects will be updated if needed and new ones will be added. If an error occurs, the operation will be aborted, leaving the current configuration intact. ''' parse_tree = self._parser.parse_file(filepath) source = {'operation': 'update', 'filepath': filepath, 'timestamp': time.time(), 'mtime': os.path.getmtime(filepath)} self._load_parse_tree(parse_tree, source=source) def clear(self): ''' Clears the current configuration. 
This removes all current objects and attributes from the configuration. ''' source = {'operation': 'clear', 'timestamp': time.time()} self.current = ConfigTree({'source': source}, sort_key, key_to_string) def search(self, search_statement, node_filter=lambda x:x): ''' Returns a list of nodes matching the search_statement, relative to the current node, or an empty list if no match was found. The search_statement is a single LIO configuration statement without any block, where object identifiers, attributes names, attribute values and attribute groups are regular expressions patterns. Object types have to use their exact string representation to match. node_filter is a function applied to each node before returning it: node_filter(node_in) -> node_out | None (aka filtered out) ''' path = [token for token in self._pattern_parser.parse_string(search_statement)] return self.current.search(path, node_filter) def dump(self, search_statement=None, node_filter=lambda x:x): ''' Returns a LIO configuration file format dump of the nodes matching the search_statement, or of all nodes if search_statement is None. The search_statement is a single LIO configuration statement without any block, where object identifiers, attributes names, attribute values and attribute groups are regular expressions patterns. Object types have to use their exact string representation to match. 
node_filter is a function applied to each node before dumping it: node_filter(node_in) -> node_out | None (aka filtered out) ''' # FIXME: Breaks with filter_only_missing if not search_statement: root_nodes = [self.current] else: root_nodes = self.search(search_statement, node_filter) if root_nodes: parts = [] for root_node_in in root_nodes: root_node = node_filter(root_node_in) if root_node is None: break dump = '' if root_node.key_str: dump = "%s " % root_node.key_str nodes = root_node.nodes if root_node.is_root or len(nodes) == 1: for node in nodes: section = self.dump(node.path_str, node_filter) if section: dump += section elif len(nodes) > 1: dump += "{\n" for node in nodes: section = self.dump(node.path_str, node_filter) if section is not None: lines = section.splitlines() else: lines = [] dump += "\n".join(" %s" % line for line in lines if line) dump += "\n" dump += "}\n" parts.append(dump) dump = "\n".join(parts) if dump.strip(): return dump def save(self, filepath, pattern=None): ''' Saves the current configuration to filepath, using LIO configuration file format. If path is not None, only objects and attributes starting at path and hanging under it will be saved. For convenience, the saved configuration will also be returned as a string. The pattern is a whitespace-separated string of regular expressions, each of which will be matched against configuration objects and attributes. In case of dump, the pattern must be non-ambiguous and match only a single configuration node. If the pattern matches either zero or more than one configuration nodes, a ConfigError exception will be raised. 
''' dump = self.dump(pattern, filter_no_missing) if dump is None: dump = '' with open(filepath, 'w') as f: f.write(dump) return dump def verify(self): ''' Validates the configuration for the following points: - Portal IP Addresses exist - Devices and file paths exist - Files for fileio exist - No required attributes are missing - References are correct Returns a dictionary of validation_test: [errors] ''' return {} def apply(self, brute_force=True): ''' Applies the configuration to the live system: - Remove objects absent from the configuration and objects in the configuration with different required attributes - Create new storage objects - Create new fabric objects - Update relevant storage objects - Update relevant fabric objects ''' from config_live import apply_create_obj, apply_delete_obj if brute_force: from config_live import apply_create_obj, clear_configfs yield "[clear] delete all live objects" clear_configfs() for obj in self.current.walk(get_filter_on_type(['obj'])): yield("[create] %s" % obj.path_str) apply_create_obj(obj) else: # TODO for minor_obj, update instead of create/delete diff = self.diff_live() delete_list = diff['removed'] + diff['major_obj'] + diff['minor_obj'] delete_list.reverse() for obj in delete_list: yield "[delete] %s" % obj.path_str apply_delete_obj(obj) for obj in diff['created'] + diff['major_obj'] + diff['minor_obj']: yield "[create] %s" % obj.path_str apply_create_obj(obj) def diff_live(self): ''' Returns a diff between the current configuration and the live configuration as a reference. ''' from config_live import dump_live parse_tree = self._parser.parse_string(dump_live()) source = {'operation': 'load', 'timestamp': time.time()} self._load_parse_tree(parse_tree, replace=True, source=source, target='reference', allow_new_attrs=True) return self.diff() def diff(self): ''' Computes differences between a valid current configuration and a previously loaded valid reference configuration. 
        Returns a dict of:
          - 'removed': list of removed objects
          - 'major': list of changed required attributes
          - 'major_obj': list of objs with major changes
          - 'minor': list of changed non-required attributes
          - 'minor_obj': list of objs with minor changes
          - 'created': list of new objects in the current configuration
        '''
        # FIXME data['required'] check should be enough without NO_VALUE check
        # FIXME Can't we just pass the reference config instead of having
        # to preload it?
        diffs = {}
        keys = ('removed', 'major', 'major_obj',
                'minor', 'minor_obj', 'created')
        for key in keys:
            diffs[key] = []

        for obj in self.current.walk(get_filter_on_type(['obj'])):
            if not self.reference.get_path(obj.path):
                diffs['created'].append(obj)

        for obj in self.reference.walk(get_filter_on_type(['obj'])):
            if not self.current.get_path(obj.path):
                diffs['removed'].append(obj)

        for obj in self.current.walk(get_filter_on_type(['obj'])):
            if self.reference.get_path(obj.path):
                for node in obj.nodes:
                    if node.data['type'] == 'attr' \
                       and (node.data['required']
                            or node.key[1] == NO_VALUE):
                        if not self.reference.get_path(node.path):
                            diffs['major'].append(node)
                            diffs['major_obj'].append(node.parent)

        for obj in self.current.walk(get_filter_on_type(['obj'])):
            if self.reference.get_path(obj.path):
                for node in obj.nodes:
                    if node.data['type'] == 'attr' \
                       and not node.data['required'] \
                       and node.key[1] != NO_VALUE:
                        if not self.reference.get_path(node.path):
                            diffs['minor'].append(node)
                            if node.parent not in diffs['minor_obj'] \
                               and node.parent not in diffs['major_obj']:
                                diffs['minor_obj'].append(node.parent)
                    elif node.data['type'] == 'group':
                        for attr in node.nodes:
                            if attr.data['type'] == 'attr' \
                               and not attr.data['required'] \
                               and attr.key[1] != NO_VALUE:
                                if not self.reference.get_path(attr.path):
                                    diffs['minor'].append(attr)
                                    if node.parent not in diffs['minor_obj'] \
                                       and node.parent not in diffs['major_obj']:
                                        diffs['minor_obj'].append(node.parent)

        return diffs
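The pattern arguments accepted by search() and delete() above are resolved by matching each node key tuple item-for-item against regular expression patterns, with every pattern implicitly anchored at its end. The following is a simplified standalone sketch of that matching rule, not the rtslib implementation itself; the tuple values used are made-up examples:

```python
import re

def match_key(search_key, key):
    # Match a tuple of regex patterns against a tuple of strings,
    # item for item. Each pattern is anchored at the end so that
    # 'vol' matches 'vol' but not 'vol1'.
    if len(search_key) != len(key):
        return False
    for pattern, item in zip(search_key, key):
        if not pattern.endswith('$'):
            pattern += '$'
        if re.match(pattern, item) is None:
            return False
    return True

# Illustrative keys only (not real rtslib node keys):
assert match_key(('disk', 'vol.*'), ('disk', 'vol1'))
assert not match_key(('disk', 'vol'), ('disk', 'vol1'))
```

A search path is then just a list of such pattern tuples, applied level by level down the configuration tree.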
rtslib-3.0.pre4.1~g1b33ceb/rtslib/config_tree.py0000664000000000000000000002431212443074135016325 0ustar '''
This file is part of LIO(tm).

Copyright (c) 2012-2014 by Datera, Inc.
More information on www.datera.io.

Original author: Jerome Martin

Datera and LIO are trademarks of Datera, Inc., which may be registered in
some jurisdictions.

Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
'''
import re, copy, logging

DEBUG = False

if DEBUG:
    logging.basicConfig()
    log = logging.getLogger('ConfigTree')
    log.setLevel(logging.DEBUG)
else:
    log = logging.getLogger('ConfigTree')
    log.setLevel(logging.INFO)

NO_VALUE = '~~~'

def match_key(search_key, key):
    '''
    Matches search_key and key tuples item-for-item, with search_key
    containing regular expression patterns or None values, and key
    containing string or None values.
    '''
    log.debug("match_key(%s, %s)" % (search_key, key))
    if len(search_key) == len(key):
        for idx, pattern in enumerate(search_key):
            item = key[idx]
            if not pattern.endswith('$'):
                pattern = "%s$" % pattern
            if item is None and pattern is None:
                continue
            elif item is None:
                break
            else:
                match = re.match(pattern, item)
                if match is None:
                    break
        else:
            return True

class ConfigTreeError(Exception):
    pass

class ConfigTree(object):
    '''
    An ordered tree structure to hold configuration data. A node can be
    referred to by its path, relative to the current node. A path is a
    list of keys, each key a tuple of either string or None items.
    '''
    def __init__(self, data=None,
                 sort_key=lambda x:x,
                 key_to_string=lambda x:str(x)):
        '''
        Initializes a new ConfigTree.

        The optional sort_key is a function used when ordering children
        of a configuration node.

        The optional key_to_string is a function used when converting a
        node key to string.

        Direct instantiation should only happen for the root node of the
        tree. Adding a new node to the tree is achieved by using the
        set() method of the desired parent for that new node.
        '''
        self.data = data
        self._key = ()
        self._nodes = {}
        self._parent = None
        self._sort_key = sort_key
        self._key_to_string = key_to_string

    def __repr__(self):
        return "(%s)" % self.path_str

    def __str__(self):
        return self.path_str

    def get_clone(self, parent=None):
        '''
        Returns a clone of the ConfigTree, not sharing any mutable data.
        '''
        clone = ConfigTree(copy.deepcopy(self.data),
                           self._sort_key,
                           self._key_to_string)
        clone._parent = parent
        clone._key = self._key
        clone.data = copy.deepcopy(self.data)
        for node in self.nodes:
            clone._nodes[node.key] = node.get_clone(parent=clone)
        return clone

    @property
    def root(self):
        '''
        Returns the root node of the tree.
        '''
        cur = self
        while cur.parent:
            cur = cur.parent
        return cur

    @property
    def key(self):
        '''
        Returns the current node's key tuple.
        '''
        return self._key

    @property
    def key_str(self):
        '''
        Returns the current node's key as a string.
        '''
        return self._key_to_string(self.key)

    @property
    def path(self):
        '''
        Returns the node's full path from the tree root as a list of keys.
        '''
        if self.is_root:
            path = []
        else:
            path = self.parent.path + [self._key]
        return path

    @property
    def path_str(self):
        '''
        Returns the node's full path from the tree root as a string.
        '''
        strings = []
        for key in self.path:
            strings.append(self._key_to_string(key))
        return " ".join(strings)

    @property
    def nodes(self):
        '''
        Returns the list of all children nodes, sorted with potential
        dependencies first.
        '''
        nodes = sorted(self._nodes.values(), key=self._sort_key)
        return nodes

    @property
    def keys(self):
        '''
        Generates all children nodes keys, sorted with potential
        dependencies first.
        '''
        keys = (node.key for node in self.nodes)
        return keys

    @property
    def parent(self):
        '''
        Returns the parent node of the current node, or None.
        '''
        return self._parent

    @property
    def is_root(self):
        '''
        Returns True if this is a root node, else False.
        '''
        return self._parent is None

    def get(self, node_key):
        '''
        Returns the current node's child having node_key, or None.
        '''
        return self._nodes.get(node_key)

    def set(self, node_key, node_data=None):
        '''
        Creates and adds a child node to the current node, and returns
        that new node. If the node already exists, then a ConfigTreeError
        exception will be raised. Else, the new node will be returned.

        node_key is any tuple of strings
        node_data is an optional arbitrary value
        '''
        if node_key not in self.keys:
            new_node = ConfigTree(self.data,
                                  self._sort_key,
                                  self._key_to_string)
            new_node._parent = self
            new_node.data = node_data
            new_node._key = node_key
            self._nodes[node_key] = new_node
            return new_node
        else:
            raise ConfigTreeError("Node already exists, cannot set: %s"
                                  % self.get(node_key))

    def cine(self, node_key, node_data=None):
        '''
        cine stands for create if not exist: it makes sure a node exists.
        If it does not, it will create it using node_data. Else node_data
        will not be updated. Returns the matching node in all cases.

        node_key is any tuple of strings
        node_data is an optional arbitrary value
        '''
        if node_key in self.keys:
            log.debug("cine(%s %s) -> Already exists"
                      % (self.path_str, node_key))
            return self.get(node_key)
        else:
            log.debug("cine(%s %s) -> Creating"
                      % (self.path_str, node_key))
            return self.set(node_key, node_data)

    def update(self, node_key, node_data=None):
        '''
        If a node already has node_key as key, its data will be replaced
        with node_data. Else, it will be created using node_data. The
        matching node will be returned in both cases.
        node_key is any tuple of strings.
        node_data is an optional arbitrary value.
        '''
        try:
            node = self.set(node_key, node_data)
        except ConfigTreeError:
            node = self.get(node_key)
            node.data = node_data
        return node

    def delete(self, path):
        '''
        Given a path, deletes an entire subtree from the configuration,
        relative to the current node. The deleted subtree will be
        returned, or None if the path does not exist or is empty.

        The path must be a list of node keys.
        '''
        log.debug("delete(%s) getting subtree" % str(path))
        subtree = self.get_path(path)
        log.debug("delete(%s) got subtree: %s" % (str(path), subtree))
        if subtree is not None:
            del subtree.parent._nodes[subtree.key]
        return subtree

    def get_path(self, path):
        '''
        Returns either the node matching path, relative to the current
        node, or None if the path does not exist.
        '''
        log.debug("get_path(%s)" % str(path))
        cur = self
        log.debug("get_path(%s) - cur: %s" % (str(path), cur))
        if path:
            for key in path:
                cur = cur.get(key)
                if cur is None:
                    break
            else:
                return cur

    def search(self, search_path, node_filter=lambda x:x):
        '''
        Returns a list of nodes matching the search_path, relative to
        the current node, or an empty list if no match was found.

        The search_path is a list of node search_key. Each will be
        matched against node key tuples item-for-item, with search_key
        containing regular expression patterns or None values, and key
        containing string or None values.

        node_filter is a function applied to each node before returning it:
            node_filter(node_in) -> node_out | None (aka filtered out)
        '''
        results = []
        if search_path:
            search_key = search_path[0]
            for node in self.nodes:
                if match_key(search_key, node.key):
                    if search_path[1:]:
                        results.extend(node.search(search_path[1:]))
                    else:
                        node_out = node_filter(node)
                        if node_out is not None:
                            results.append(node_out)
        return results

    def walk(self, node_filter=lambda x:x):
        '''
        Returns a generator yielding our children's tree in
        depth-first order.
        node_filter is a function applied to each node before dumping it:
            node_filter(node_in) -> node_out | None (aka filtered out)

        When a node is filtered out, its children will still be walked
        and filtered/yielded as usual.
        '''
        for node_in in self.nodes:
            node_out = node_filter(node_in)
            if node_out is not None:
                yield node_out
            for next in node_in.walk(node_filter):
                yield next

rtslib-3.0.pre4.1~g1b33ceb/rtslib/__init__.py0000664000000000000000000000271012443074135015576 0ustar '''
This file is part of LIO(tm).

Copyright (c) 2011-2014 by Datera, Inc

Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
'''
import utils

from root import RTSRoot
from utils import RTSLibError, RTSLibBrokenLink, RTSLibNotInCFS
from target import LUN, MappedLUN
from target import list_specfiles, parse_specfile
from target import NodeACL, NetworkPortal, TPG, Target, FabricModule
from tcm import FileIOBackstore, IBlockBackstore
from tcm import FileIOStorageObject, IBlockStorageObject
from tcm import PSCSIBackstore, RDMCPBackstore
from tcm import PSCSIStorageObject, RDMCPStorageObject
from config_filters import *
from config import Config, ConfigError
from config_tree import ConfigTree, NO_VALUE
from config_parser import ConfigParser, PolicyParser, PatternParser

__version__ = '3.0.pre4.1~g1b33ceb'
__author__ = "Jerome Martin "
__url__ = "http://www.risingtidesystems.com"
__description__ = "API for RisingTide Systems generic SCSI target."
__license__ = __doc__ rtslib-3.0.pre4.1~g1b33ceb/rtslib/node.py0000664000000000000000000002577212443074135015001 0ustar ''' Implements the base CFSNode class and a few inherited variants. This file is part of LIO(tm). Copyright (c) 2011-2014 by Datera, Inc Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ''' import os import stat from utils import fread, fwrite, RTSLibError, RTSLibNotInCFS class CFSNode(object): # Where is the configfs base LIO directory ? configfs_dir = '/sys/kernel/config/target' # TODO: Make the ALUA path generic, not iscsi-centric # What is the ALUA directory ? alua_metadata_dir = "/var/target/alua/iSCSI" # CFSNode private stuff def __init__(self): self._path = self.configfs_dir def __nonzero__(self): if os.path.isdir(self.path): return True else: return False def __str__(self): return self.path def _get_path(self): return self._path def _create_in_cfs_ine(self, mode): ''' Creates the configFS node if it does not already exist depending on the mode. any -> makes sure it exists, also works if the node already does exist lookup -> make sure it does NOT exist create -> create the node which must not exist beforehand Upon success (no exception raised), self._fresh is True if a node was created, else self._fresh is False. ''' if mode not in ['any', 'lookup', 'create']: raise RTSLibError("Invalid mode: %s" % mode) if self and mode == 'create': raise RTSLibError("This %s already exists in configFS." 
% self.__class__.__name__) elif not self and mode == 'lookup': raise RTSLibNotInCFS("No such %s in configfs: %s." % (self.__class__.__name__, self.path)) if self: self._fresh = False return try: os.mkdir(self.path) self._fresh = True except Exception as e: raise RTSLibError("Could not create %s: %s" % (self.path, e)) def _exists(self): return bool(self) def _check_self(self): if not self: raise RTSLibNotInCFS("This %s does not exist in configFS." % self.__class__.__name__) def _is_fresh(self): return self._fresh def _list_files(self, path, writable=None): ''' List files under a path depending on their owner's write permissions. @param path: The path under which the files are expected to be. If the path itself is not a directory, an empty list will be returned. @type path: str @param writable: If None (default), returns all parameters, if True, returns read-write parameters, if False, returns just the read-only parameters. @type writable: bool or None @return: List of file names filtered according to their write perms. ''' if not os.path.isdir(path): return [] if writable is None: names = os.listdir(path) elif writable: names = [name for name in os.listdir(path) if (os.stat("%s/%s" % (path, name))[stat.ST_MODE] \ & stat.S_IWUSR)] else: names = [os.path.basename(name) for name in os.listdir(path) if not (os.stat("%s/%s" % (path, name))[stat.ST_MODE] \ & stat.S_IWUSR)] names.sort() return names # CFSNode public stuff def list_parameters(self, writable=None): ''' @param writable: If None (default), returns all parameters, if True, returns read-write parameters, if False, returns just the read-only parameters. @type writable: bool or None @return: The list of existing RFC-3720 parameter names. 
''' self._check_self() path = "%s/param" % self.path return self._list_files(path, writable) def list_attributes(self, writable=None): ''' @param writable: If None (default), returns all attributes, if True, returns read-write attributes, if False, returns just the read-only attributes. @type writable: bool or None @return: A list of existing attribute names as strings. ''' self._check_self() path = "%s/attrib" % self.path attributes = self._list_files(path, writable) # FIXME Bug in the pSCSI kernel implementation, these should be ro backstore = getattr(self, "backstore", None) plugin = getattr(backstore, "plugin", None) edited_attributes = [] force_ro_attrs = ["block_size", "emulate_fua_write", "optimal_sectors"] if writable is True and plugin == "pscsi": edited_attributes = [attr for attr in attributes if attr not in force_ro_attrs] elif writable is False and plugin == "pscsi": edited_attributes = list(set(attributes + force_ro_attrs)) else: edited_attributes = attributes return edited_attributes def list_auth_attrs(self, writable=None): ''' @param writable: If None (default), returns all auth attrs, if True, returns read-write auth attrs, if False, returns just the read-only auth attrs. @type writable: bool or None @return: A list of existing attribute names as strings. ''' self._check_self() path = "%s/auth" % self.path return self._list_files(path, writable) def set_attribute(self, attribute, value): ''' Sets the value of a named attribute. The attribute must exist in configFS. @param attribute: The attribute's name. It is case-sensitive. @type attribute: string @param value: The attribute's value. @type value: string ''' self._check_self() path = "%s/attrib/%s" % (self.path, str(attribute)) if not os.path.isfile(path): raise RTSLibError("Cannot find attribute: %s." 
% str(attribute)) else: try: fwrite(path, "%s\n" % str(value)) except Exception, msg: msg = msg[1] raise RTSLibError("Cannot set attribute %s to '%s': %s" % (str(attribute), str(value), str(msg))) def get_attribute(self, attribute): ''' @param attribute: The attribute's name. It is case-sensitive. @return: The named attribute's value, as a string. ''' self._check_self() path = "%s/attrib/%s" % (self.path, str(attribute)) if not os.path.isfile(path): raise RTSLibError("Cannot find attribute: %s." % str(attribute)) else: return fread(path).strip() def set_parameter(self, parameter, value): ''' Sets the value of a named RFC-3720 parameter. The parameter must exist in configFS. @param parameter: The RFC-3720 parameter's name. It is case-sensitive. @type parameter: string @param value: The parameter's value. @type value: string ''' self._check_self() path = "%s/param/%s" % (self.path, str(parameter)) if not os.path.isfile(path): raise RTSLibError("Cannot find parameter: %s." % str(parameter)) else: try: fwrite(path, "%s\n" % str(value)) except Exception, msg: msg = msg[1] raise RTSLibError("Cannot set parameter %s: %s" % (str(parameter), str(msg))) def get_parameter(self, parameter): ''' @param parameter: The RFC-3720 parameter's name. It is case-sensitive. @type parameter: string @return: The named parameter value as a string. ''' self._check_self() path = "%s/param/%s" % (self.path, str(parameter)) if not os.path.isfile(path): raise RTSLibError("Cannot find RFC-3720 parameter: %s." % str(parameter)) else: return fread(path).rstrip() def set_auth_attr(self, auth_attr, value): ''' Sets the value of a named auth_attr. The auth_attr must exist in configFS. @param auth_attr: The auth_attr's name. It is case-sensitive. @type auth_attr: string @param value: The auth_attr's value. @type value: string ''' self._check_self() path = "%s/auth/%s" % (self.path, str(auth_attr)) if not os.path.isfile(path): raise RTSLibError("Cannot find auth attribute: %s." 
% str(auth_attr)) else: try: fwrite(path, "%s" % str(value)) except IOError, msg: msg = msg[1] raise RTSLibError("Cannot set auth attribute %s: %s" % (str(auth_attr), str(msg))) def get_auth_attr(self, auth_attr): ''' @param auth_attr: The auth_attr's name. It is case-sensitive. @return: The named auth_attr's value, as a string. ''' self._check_self() path = "%s/auth/%s" % (self.path, str(auth_attr)) if not os.path.isfile(path): raise RTSLibError("Cannot find auth attribute: %s." % str(auth_attr)) else: return fread(path).strip() def delete(self): ''' If the underlying configFS object does not exists, this method does nothing. If the underlying configFS object exists, this method attempts to delete it. ''' if self: os.rmdir(self.path) path = property(_get_path, doc="Get the configFS object path.") exists = property(_exists, doc="Is True as long as the underlying configFS object exists. " \ + "If the underlying configFS objects gets deleted " \ + "either by calling the delete() method, or by any " \ + "other means, it will be False.") is_fresh = property(_is_fresh, doc="Is True if the underlying configFS object has been created " \ + "when instantiating this particular object. Is " \ + "False if this object instantiation just looked " \ + "up the underlying configFS object.") def _test(): import doctest doctest.testmod() if __name__ == "__main__": _test() rtslib-3.0.pre4.1~g1b33ceb/rtslib/root.py0000664000000000000000000001160612443074135015026 0ustar ''' Implements the RTSRoot class. This file is part of LIO(tm). Copyright (c) 2011-2014 by Datera, Inc Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the License for the specific language governing permissions and limitations under the License. ''' import re import os import glob from node import CFSNode from target import Target, FabricModule from tcm import FileIOBackstore, IBlockBackstore from tcm import PSCSIBackstore, RDMCPBackstore from utils import RTSLibError, RTSLibBrokenLink class RTSRoot(CFSNode): ''' This is an interface to the root of the configFS object tree. It allows one to start browsing Target and Backstore objects, and provides helper methods to return arbitrary objects from the configFS tree. >>> import rtslib.root as root >>> rtsroot = root.RTSRoot() >>> rtsroot.path '/sys/kernel/config/target' >>> rtsroot.exists True >>> rtsroot.targets # doctest: +ELLIPSIS [...] >>> rtsroot.backstores # doctest: +ELLIPSIS [...] >>> rtsroot.tpgs # doctest: +ELLIPSIS [...] >>> rtsroot.storage_objects # doctest: +ELLIPSIS [...] >>> rtsroot.network_portals # doctest: +ELLIPSIS [...] ''' # The core target/tcm kernel module target_core_mod = 'target_core_mod' # RTSRoot private stuff def __init__(self): ''' Instantiate an RTSRoot object.
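The set_attribute()/get_attribute() accessors shown earlier (in node.py) reduce to plain text file I/O on configFS attribute files: the setter writes "value\n" and the getter strips the trailing newline back off. A standalone sketch of that round-trip, using a scratch directory in place of /sys/kernel/config and local stand-ins for the module's fwrite()/fread() helpers (the attribute name "block_size" is made up for illustration):

```python
import os
import tempfile

# Stand-ins for the fwrite()/fread() helpers used by CFSNode
# (assumed behavior: plain text writes and reads on attribute files).
def fwrite(path, string):
    with open(path, 'w') as f:
        f.write(string)

def fread(path):
    with open(path) as f:
        return f.read()

# A scratch directory stands in for a configFS "attrib" group.
attrib_dir = tempfile.mkdtemp()
path = os.path.join(attrib_dir, "block_size")   # hypothetical attribute name

fwrite(path, "%s\n" % "4096")   # set_attribute() writes "value\n"
value = fread(path).strip()     # get_attribute() strips the newline back off
print(value)                    # -> 4096
```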
Basically checks for configfs setup and base kernel modules (tcm ) ''' super(RTSRoot, self).__init__() def _list_targets(self): self._check_self() targets = set([]) for fabric_module in self.fabric_modules: for target in fabric_module.targets: yield target def _list_backstores(self): self._check_self() if os.path.isdir("%s/core" % self.path): backstore_dirs = glob.glob("%s/core/*_*" % self.path) for backstore_dir in [os.path.basename(path) for path in backstore_dirs]: regex = re.search("([a-z]+[_]*[a-z]+)(_)([0-9]+)", backstore_dir) if regex: if regex.group(1) == "fileio": yield FileIOBackstore(int(regex.group(3)), 'lookup') elif regex.group(1) == "pscsi": yield PSCSIBackstore(int(regex.group(3)), 'lookup') elif regex.group(1) == "iblock": yield IBlockBackstore(int(regex.group(3)), 'lookup') elif regex.group(1) == "rd_mcp": yield RDMCPBackstore(int(regex.group(3)), 'lookup') def _list_storage_objects(self): self._check_self() for bs in self.backstores: for so in bs.storage_objects: yield so def _list_tpgs(self): self._check_self() for t in self.targets: for tpg in t.tpgs: yield tpg def _list_node_acls(self): self._check_self() for t in self.tpgs: for node_acl in t.node_acls: yield node_acl def _list_network_portals(self): self._check_self() for t in self.tpgs: for p in t.network_portals: yield p def _list_luns(self): self._check_self() for t in self.tpgs: for lun in t.luns: yield lun def _list_fabric_modules(self): self._check_self() for mod in FabricModule.all(): yield mod def __str__(self): return "rtslib" # RTSRoot public stuff backstores = property(_list_backstores, doc="Get the list of Backstore objects.") targets = property(_list_targets, doc="Get the list of Target objects.") tpgs = property(_list_tpgs, doc="Get the list of all the existing TPG objects.") node_acls = property(_list_node_acls, doc="Get the list of all the existing NodeACL objects.") network_portals = property(_list_network_portals, doc="Get the list of all the existing Network Portal 
objects.") storage_objects = property(_list_storage_objects, doc="Get the list of all the existing Storage objects.") luns = property(_list_luns, doc="Get the list of all existing LUN objects.") fabric_modules = property(_list_fabric_modules, doc="Get the list of all FabricModule objects.") def _test(): '''Run the doctests.''' import doctest doctest.testmod() if __name__ == "__main__": _test() rtslib-3.0.pre4.1~g1b33ceb/rtslib/target.py0000664000000000000000000012730312443074135015333 0ustar ''' Implements the RTS generic Target fabric classes. This file is part of LIO(tm). Copyright (c) 2011-2014 by Datera, Inc Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ''' import re import os import glob import uuid import shutil from node import CFSNode from os.path import isdir from configobj import ConfigObj from utils import RTSLibError, RTSLibBrokenLink from utils import is_ipv6_address, is_ipv4_address from utils import fread, fwrite, generate_wwn, is_valid_wwn, exec_argv # Where do we store the fabric modules spec files ? spec_dir = "/var/target/fabric" def list_specfiles(): ''' Returns the list of all specfile paths found on the system. ''' return ["%s/%s" % (spec_dir, name) for name in os.listdir(spec_dir) if name.endswith('.spec')] def parse_specfile(spec_file): ''' Parses the fabric module spec file. @param spec_file: the path to the specfile to parse. 
@type spec_file: str @return: a dict of spec options ''' # Recognized options and their default values name = os.path.basename(spec_file).partition(".")[0] defaults = dict(features=['discovery_auth', 'acls', 'acls_auth', 'nps', 'tpgts'], kernel_module="%s_target_mod" % name, configfs_group=name, wwn_from_files=[], wwn_from_files_filter='', wwn_from_cmds=[], wwn_from_cmds_filter='', wwn_type='free') spec = ConfigObj(spec_file).dict() # Do not allow unknown options unknown_options = set(spec.keys()) - set(defaults.keys()) if unknown_options: raise RTSLibError("Unknown option(s) in %s: %s" % (spec_file, list(unknown_options))) # Use defaults for missing options missing_options = set(defaults.keys()) - set(spec.keys()) for option in missing_options: spec[option] = defaults[option] # Type conversion and checking for option in spec: spec_type = type(spec[option]).__name__ defaults_type = type(defaults[option]).__name__ if spec_type != defaults_type: # Type mismatch, go through acceptable conversions if spec_type == 'str' and defaults_type == 'list': spec[option] = [spec[option]] else: raise RTSLibError("Wrong type for option '%s' in %s. " % (option, spec_file) + "Expected type '%s' and got '%s'." 
% (defaults_type, spec_type)) # Generate the list of fixed WWNs if not empty wwn_list = None wwn_type = spec['wwn_type'] if spec['wwn_from_files']: for wwn_pattern in spec['wwn_from_files']: for wwn_file in glob.iglob(wwn_pattern): wwns_in_file = [wwn for wwn in re.split('\t|\0|\n| ', fread(wwn_file)) if wwn.strip()] if spec['wwn_from_files_filter']: wwns_filtered = [] for wwn in wwns_in_file: filter = "echo %s|%s" \ % (wwn, spec['wwn_from_files_filter']) wwns_filtered.append(exec_argv(filter, shell=True)) else: wwns_filtered = wwns_in_file if wwn_list is None: wwn_list = set([]) wwn_list.update(set([wwn for wwn in wwns_filtered if is_valid_wwn(wwn_type, wwn) if wwn] )) if spec['wwn_from_cmds']: for wwn_cmd in spec['wwn_from_cmds']: cmd_result = exec_argv(wwn_cmd, shell=True) wwns_from_cmd = [wwn for wwn in re.split('\t|\0|\n| ', cmd_result) if wwn.strip()] if spec['wwn_from_cmds_filter']: wwns_filtered = [] for wwn in wwns_from_cmd: filter = "echo %s|%s" \ % (wwn, spec['wwn_from_cmds_filter']) wwns_filtered.append(exec_argv(filter, shell=True)) else: wwns_filtered = wwns_from_cmd if wwn_list is None: wwn_list = set([]) wwn_list.update(set([wwn for wwn in wwns_filtered if is_valid_wwn(wwn_type, wwn) if wwn] )) spec['wwn_list'] = wwn_list return spec class FabricModule(CFSNode): ''' This is an interface to RTS Target Fabric Modules. It can load/unload modules, provide information about them and handle the configfs housekeeping. It uses module configuration files in /var/target/fabric/*.spec. 
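The option handling in parse_specfile() above follows three rules: unknown keys are rejected, missing keys take their defaults, and a bare string is promoted to a one-element list when the default value is a list. A minimal standalone sketch of that merge (the defaults here are a trimmed subset, and the fc_host glob is only an illustrative value):

```python
# Trimmed subset of the parse_specfile() defaults, for illustration.
defaults = dict(features=['discovery_auth', 'acls', 'nps', 'tpgts'],
                wwn_from_files=[],
                wwn_type='free')

def merge_spec(spec, defaults):
    # Rule 1: do not allow unknown options.
    unknown = set(spec) - set(defaults)
    if unknown:
        raise ValueError("Unknown option(s): %s" % sorted(unknown))
    # Rule 2: use defaults for missing options.
    merged = dict(defaults)
    merged.update(spec)
    # Rule 3: promote a bare string to a list where a list is expected.
    for option in merged:
        if isinstance(defaults[option], list) \
                and isinstance(merged[option], str):
            merged[option] = [merged[option]]
    return merged

spec = merge_spec({'wwn_from_files': '/sys/class/fc_host/host*/port_name'},
                  defaults)
print(spec['wwn_from_files'])   # ['/sys/class/fc_host/host*/port_name']
print(spec['wwn_type'])         # free
```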
After instantiation, whether or not the fabric module is loaded and ''' version_attributes = set(["lio_version", "version"]) discovery_auth_attributes = set(["discovery_auth"]) target_names_excludes = version_attributes | discovery_auth_attributes @classmethod def all(cls): for name in [os.path.basename(path).partition(".")[0] for path in list_specfiles()]: try: fabric = FabricModule(name) except: pass else: yield fabric # FabricModule private stuff def __init__(self, name): ''' Instantiate a FabricModule object, according to the provided name. @param name: the name of the FabricModule object. It must match an existing target fabric module specfile (name.spec). @type name: str ''' super(FabricModule, self).__init__() self.name = str(name) self.spec_file = "%s/%s.spec" % (spec_dir, name) self.spec = parse_specfile(self.spec_file) self._path = "%s/%s" % (self.configfs_dir, self.spec['configfs_group']) self._create_in_cfs_ine('any') # FabricModule public stuff def has_feature(self, feature): ''' Whether or not this FabricModule has a certain feature. ''' if feature in self.spec['features']: return True else: return False def _list_targets(self): if self.exists: for wwn in os.listdir(self.path): if os.path.isdir("%s/%s" % (self.path, wwn)) and \ wwn not in self.target_names_excludes: yield Target(self, wwn, 'lookup') def _get_version(self): if self.exists: for attr in self.version_attributes: path = "%s/%s" % (self.path, attr) if os.path.isfile(path): return fread(path) else: raise RTSLibError("Can't find version for fabric module %s." % self.name) else: return None # FabricModule public stuff def is_valid_wwn(self, wwn): ''' Checks whether or not the provided WWN is valid for this fabric module according to the spec file. ''' return is_valid_wwn(self.spec['wwn_type'], wwn, self.spec['wwn_list']) def _assert_feature(self, feature): if not self.has_feature(feature): raise RTSLibError("This fabric module does not implement " + "the %s feature." 
% feature) def _get_discovery_mutual_password(self): self._check_self() self._assert_feature('discovery_auth') path = "%s/discovery_auth/password_mutual" % self.path value = fread(path).strip() if value == "NULL": return '' else: return value def _set_discovery_mutual_password(self, password): self._check_self() self._assert_feature('discovery_auth') path = "%s/discovery_auth/password_mutual" % self.path if password.strip() == '': password = "NULL" fwrite(path, "%s" % password) def _get_discovery_mutual_userid(self): self._check_self() self._assert_feature('discovery_auth') path = "%s/discovery_auth/userid_mutual" % self.path value = fread(path).strip() if value == "NULL": return '' else: return value def _set_discovery_mutual_userid(self, userid): self._check_self() self._assert_feature('discovery_auth') path = "%s/discovery_auth/userid_mutual" % self.path if userid.strip() == '': userid = "NULL" fwrite(path, "%s" % userid) def _get_discovery_password(self): self._check_self() self._assert_feature('discovery_auth') path = "%s/discovery_auth/password" % self.path value = fread(path).strip() if value == "NULL": return '' else: return value def _set_discovery_password(self, password): self._check_self() self._assert_feature('discovery_auth') path = "%s/discovery_auth/password" % self.path if password.strip() == '': password = "NULL" fwrite(path, "%s" % password) def _get_discovery_userid(self): self._check_self() self._assert_feature('discovery_auth') path = "%s/discovery_auth/userid" % self.path value = fread(path).strip() if value == "NULL": return '' else: return value def _set_discovery_userid(self, userid): self._check_self() self._assert_feature('discovery_auth') path = "%s/discovery_auth/userid" % self.path if userid.strip() == '': userid = "NULL" fwrite(path, "%s" % userid) def _get_discovery_enable_auth(self): self._check_self() self._assert_feature('discovery_auth') path = "%s/discovery_auth/enforce_discovery_auth" % self.path value = fread(path).strip() 
return value def _set_discovery_enable_auth(self, enable): self._check_self() self._assert_feature('discovery_auth') path = "%s/discovery_auth/enforce_discovery_auth" % self.path if int(enable): enable = 1 else: enable = 0 fwrite(path, "%s" % enable) discovery_userid = \ property(_get_discovery_userid, _set_discovery_userid, doc="Set or get the initiator discovery userid.") discovery_password = \ property(_get_discovery_password, _set_discovery_password, doc="Set or get the initiator discovery password.") discovery_mutual_userid = \ property(_get_discovery_mutual_userid, _set_discovery_mutual_userid, doc="Set or get the mutual discovery userid.") discovery_mutual_password = \ property(_get_discovery_mutual_password, _set_discovery_mutual_password, doc="Set or get the mutual discovery password.") discovery_enable_auth = \ property(_get_discovery_enable_auth, _set_discovery_enable_auth, doc="Set or get the discovery enable_auth flag.") targets = property(_list_targets, doc="Get the list of target objects.") version = property(_get_version, doc="Get the fabric module version string.") class LUN(CFSNode): ''' This is an interface to RTS Target LUNs in configFS. A LUN is identified by its parent TPG and LUN index. ''' # LUN private stuff def __init__(self, parent_tpg, lun, storage_object=None, alias=None): ''' A LUN object can be instantiated in two ways: - B{Creation mode}: If I{storage_object} is specified, the underlying configFS object will be created with that parameter. No LUN with the same I{lun} index can pre-exist in the parent TPG in that mode, or instantiation will fail. - B{Lookup mode}: If I{storage_object} is not set, then the LUN will be bound to the existing configFS LUN object of the parent TPG having the specified I{lun} index. The underlying configFS object must already exist in that mode. @param parent_tpg: The parent TPG object. @type parent_tpg: TPG @param lun: The LUN index. 
@type lun: 0-255 @param storage_object: The storage object to be exported as a LUN. @type storage_object: StorageObject subclass @param alias: An optional parameter to manually specify the LUN alias. You probably do not need this. @type alias: string @return: A LUN object. ''' super(LUN, self).__init__() if isinstance(parent_tpg, TPG): self._parent_tpg = parent_tpg else: raise RTSLibError("Invalid parent TPG.") try: lun = int(lun) except ValueError: raise RTSLibError("Invalid LUN index: %s" % str(lun)) else: if lun > 255 or lun < 0: raise RTSLibError("Invalid LUN index, it must be " \ + "between 0 and 255: %d" % lun) self._lun = lun self._path = "%s/lun/lun_%d" % (self.parent_tpg.path, self.lun) if storage_object is None and alias is not None: raise RTSLibError("The alias parameter has no meaning " \ + "without the storage_object parameter.") if storage_object is not None: self._create_in_cfs_ine('create') try: self._configure(storage_object, alias) except: self.delete() raise else: self._create_in_cfs_ine('lookup') def _create_in_cfs_ine(self, mode): super(LUN, self)._create_in_cfs_ine(mode) def _configure(self, storage_object, alias): self._check_self() if alias is None: alias = str(uuid.uuid4())[-10:] else: alias = str(alias).strip() if '/' in alias: raise RTSLibError("Invalid alias: %s", alias) destination = "%s/%s" % (self.path, alias) from tcm import StorageObject if isinstance(storage_object, StorageObject): if storage_object.exists: source = storage_object.path else: raise RTSLibError("The storage_object does not exist " \ + "in configFS.") else: raise RTSLibError("Invalid storage object.") os.symlink(source, destination) def _get_alias(self): self._check_self() alias = None for path in os.listdir(self.path): if os.path.islink("%s/%s" % (self.path, path)): alias = os.path.basename(path) break if alias is None: raise RTSLibBrokenLink("Broken LUN in configFS, no " \ + "storage object attached.") else: return alias def _get_storage_object(self): 
self._check_self() alias_path = None for path in os.listdir(self.path): if os.path.islink("%s/%s" % (self.path, path)): alias_path = os.path.realpath("%s/%s" % (self.path, path)) break if alias_path is None: raise RTSLibBrokenLink("Broken LUN in configFS, no " + "storage object attached.") from root import RTSRoot rtsroot = RTSRoot() for storage_object in rtsroot.storage_objects: if storage_object.path == alias_path: return storage_object raise RTSLibBrokenLink("Broken storage object link in LUN.") def _get_parent_tpg(self): return self._parent_tpg def _get_lun(self): return self._lun def _get_alua_metadata_path(self): return "%s/lun_%d" % (self.parent_tpg.alua_metadata_path, self.lun) def _list_mapped_luns(self): self._check_self() listdir = os.listdir realpath = os.path.realpath path = self.path tpg = self.parent_tpg if not tpg.has_feature('acls'): return [] else: base = "%s/acls/" % tpg.path xmlun = ["param", "info", "cmdsn_depth", "auth", "attrib", "node_name", "port_name"] return [MappedLUN(NodeACL(tpg, nodeacl), mapped_lun.split('_')[1]) for nodeacl in listdir(base) for mapped_lun in listdir("%s/%s" % (base, nodeacl)) if mapped_lun not in xmlun if isdir("%s/%s/%s" % (base, nodeacl, mapped_lun)) for link in listdir("%s/%s/%s" \ % (base, nodeacl, mapped_lun)) if realpath("%s/%s/%s/%s" \ % (base, nodeacl, mapped_lun, link)) == path] # LUN public stuff def delete(self): ''' If the underlying configFS object does not exist, this method does nothing. If the underlying configFS object exists, this method attempts to delete it along with all MappedLUN objects referencing that LUN. 
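LUN.__init__() above bounds the LUN index to 0..255, and _configure() falls back to a random alias when none is supplied. A standalone sketch of both rules (plain ValueError stands in for RTSLibError here):

```python
import uuid

# Sketch of LUN.__init__()'s index check: the value must convert to an
# int and fall in 0..255, mirroring the RTSLibError cases above.
def validate_lun_index(lun):
    lun = int(lun)   # non-numeric input raises ValueError
    if lun < 0 or lun > 255:
        raise ValueError("Invalid LUN index, it must be "
                         "between 0 and 255: %d" % lun)
    return lun

# Sketch of _configure()'s default alias: the last 10 characters of a
# random UUID string.
def default_alias():
    return str(uuid.uuid4())[-10:]

print(validate_lun_index("17"))   # 17
print(len(default_alias()))       # 10
```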
''' self._check_self() for mlun in self.mapped_luns: mlun.delete() try: link = self.alias except RTSLibBrokenLink: pass else: if os.path.islink("%s/%s" % (self.path, link)): os.unlink("%s/%s" % (self.path, link)) super(LUN, self).delete() if os.path.isdir(self.alua_metadata_path): shutil.rmtree(self.alua_metadata_path) alua_metadata_path = property(_get_alua_metadata_path, doc="Get the ALUA metadata directory path for the LUN.") parent_tpg = property(_get_parent_tpg, doc="Get the parent TPG object.") lun = property(_get_lun, doc="Get the LUN index as an int.") storage_object = property(_get_storage_object, doc="Get the storage object attached to the LUN.") alias = property(_get_alias, doc="Get the LUN alias.") mapped_luns = property(_list_mapped_luns, doc="List all MappedLUN objects referencing this LUN.") class MappedLUN(CFSNode): ''' This is an interface to RTS Target Mapped LUNs. A MappedLUN is a mapping of a TPG LUN to a specific initiator node, and is part of a NodeACL. It allows the initiator to actually access the TPG LUN if ACLs are enabled for the TPG. The initial TPG LUN will then be seen by the initiator node as the MappedLUN. ''' # MappedLUN private stuff def __init__(self, parent_nodeacl, mapped_lun, tpg_lun=None, write_protect=None): ''' A MappedLUN object can be instantiated in two ways: - B{Creation mode}: If I{tpg_lun} is specified, the underlying configFS object will be created with that parameter. No MappedLUN with the same I{mapped_lun} index can pre-exist in the parent NodeACL in that mode, or instantiation will fail. - B{Lookup mode}: If I{tpg_lun} is not set, then the MappedLUN will be bound to the existing configFS MappedLUN object of the parent NodeACL having the specified I{mapped_lun} index. The underlying configFS object must already exist in that mode. @param mapped_lun: The mapped LUN index. @type mapped_lun: int @param tpg_lun: The TPG LUN index to map, or directly a LUN object that belong to the same TPG as the parent NodeACL. 
@type tpg_lun: int or LUN @param write_protect: The write-protect flag value, defaults to False (write-protection disabled). @type write_protect: bool ''' super(MappedLUN, self).__init__() if not isinstance(parent_nodeacl, NodeACL): raise RTSLibError("The parent_nodeacl parameter must be " \ + "a NodeACL object.") else: self._parent_nodeacl = parent_nodeacl if not parent_nodeacl.exists: raise RTSLibError("The parent_nodeacl does not exist.") try: self._mapped_lun = int(mapped_lun) except ValueError: raise RTSLibError("The mapped_lun parameter must be an " \ + "integer value.") self._path = "%s/lun_%d" % (self.parent_nodeacl.path, self.mapped_lun) if tpg_lun is None and write_protect is not None: raise RTSLibError("The write_protect parameter has no " \ + "meaning without the tpg_lun parameter.") if tpg_lun is not None: self._create_in_cfs_ine('create') try: self._configure(tpg_lun, write_protect) except: self.delete() raise else: self._create_in_cfs_ine('lookup') def _configure(self, tpg_lun, write_protect): self._check_self() if isinstance(tpg_lun, LUN): tpg_lun = tpg_lun.lun else: try: tpg_lun = int(tpg_lun) except ValueError: raise RTSLibError("The tpg_lun must be either an " + "integer or a LUN object.") # Check that the tpg_lun exists in the TPG for lun in self.parent_nodeacl.parent_tpg.luns: if lun.lun == tpg_lun: tpg_lun = lun break if not (isinstance(tpg_lun, LUN) and tpg_lun): raise RTSLibError("LUN %s does not exist in this TPG." 
% str(tpg_lun)) os.symlink(tpg_lun.path, "%s/%s" % (self.path, str(uuid.uuid4())[-10:])) try: self.write_protect = int(write_protect) > 0 except: self.write_protect = False def _get_alias(self): self._check_self() alias = None for path in os.listdir(self.path): if os.path.islink("%s/%s" % (self.path, path)): alias = os.path.basename(path) break if alias is None: raise RTSLibBrokenLink("Broken LUN in configFS, no " \ + "storage object attached.") else: return alias def _get_mapped_lun(self): return self._mapped_lun def _get_parent_nodeacl(self): return self._parent_nodeacl def _set_write_protect(self, write_protect): self._check_self() path = "%s/write_protect" % self.path if write_protect: fwrite(path, "1") else: fwrite(path, "0") def _get_write_protect(self): self._check_self() path = "%s/write_protect" % self.path return bool(int(fread(path))) def _get_tpg_lun(self): self._check_self() path = os.path.realpath("%s/%s" % (self.path, self._get_alias())) for lun in self.parent_nodeacl.parent_tpg.luns: if lun.path == path: return lun raise RTSLibBrokenLink("Broken MappedLUN, no TPG LUN found !") def _get_node_wwn(self): self._check_self() return self.parent_nodeacl.node_wwn # MappedLUN public stuff def delete(self): ''' Delete the MappedLUN. 
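The write_protect handling in MappedLUN._configure() above is deliberately forgiving: any value that converts to a positive int enables protection, and any conversion failure silently falls back to disabled. A standalone sketch of that coercion:

```python
# Sketch of MappedLUN._configure()'s write_protect coercion: truthy
# integers enable protection, everything else falls back to False.
def coerce_write_protect(value):
    try:
        return int(value) > 0
    except (TypeError, ValueError):
        return False

print(coerce_write_protect(1))       # True
print(coerce_write_protect("0"))     # False
print(coerce_write_protect(None))    # False
print(coerce_write_protect("junk"))  # False
```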
''' self._check_self() try: lun_link = "%s/%s" % (self.path, self._get_alias()) except RTSLibBrokenLink: pass else: if os.path.islink(lun_link): os.unlink(lun_link) super(MappedLUN, self).delete() mapped_lun = property(_get_mapped_lun, doc="Get the integer MappedLUN mapped_lun index.") parent_nodeacl = property(_get_parent_nodeacl, doc="Get the parent NodeACL object.") write_protect = property(_get_write_protect, _set_write_protect, doc="Get or set the boolean write protection.") tpg_lun = property(_get_tpg_lun, doc="Get the TPG LUN object the MappedLUN is pointing at.") node_wwn = property(_get_node_wwn, doc="Get the wwn of the node for which the TPG LUN is mapped.") class NodeACL(CFSNode): ''' This is an interface to node ACLs in configFS. A NodeACL is identified by the initiator node wwn and parent TPG. ''' # NodeACL private stuff def __init__(self, parent_tpg, node_wwn, mode='any'): ''' @param parent_tpg: The parent TPG object. @type parent_tpg: TPG @param node_wwn: The wwn of the initiator node for which the ACL is created. @type node_wwn: string @param mode: An optionnal string containing the object creation mode: - I{'any'} means the configFS object will be either looked up or created. - I{'lookup'} means the object MUST already exist configFS. - I{'create'} means the object must NOT already exist in configFS. @type mode: string @return: A NodeACL object. 
''' super(NodeACL, self).__init__() if isinstance(parent_tpg, TPG): self._parent_tpg = parent_tpg else: raise RTSLibError("Invalid parent TPG.") self._node_wwn = str(node_wwn).lower() self._path = "%s/acls/%s" % (self.parent_tpg.path, self.node_wwn) self._create_in_cfs_ine(mode) def _get_node_wwn(self): return self._node_wwn def _get_parent_tpg(self): return self._parent_tpg def _get_tcq_depth(self): self._check_self() path = "%s/cmdsn_depth" % self.path return fread(path).strip() def _set_tcq_depth(self, depth): self._check_self() path = "%s/cmdsn_depth" % self.path try: fwrite(path, "%s" % depth) except IOError, msg: msg = msg[1] raise RTSLibError("Cannot set tcq_depth: %s" % str(msg)) def _list_mapped_luns(self): self._check_self() for mapped_lun_dir in glob.glob("%s/lun_*" % self.path): mapped_lun = int(os.path.basename(mapped_lun_dir).split("_")[1]) yield MappedLUN(self, mapped_lun) # NodeACL public stuff def has_feature(self, feature): ''' Whether or not this NodeACL has a certain feature. ''' return self.parent_tpg.has_feature(feature) def delete(self): ''' Delete the NodeACL, including all MappedLUN objects. If the underlying configFS object does not exist, this method does nothing. ''' self._check_self() for mapped_lun in self.mapped_luns: mapped_lun.delete() super(NodeACL, self).delete() def mapped_lun(self, mapped_lun, tpg_lun=None, write_protect=None): ''' Same as MappedLUN() but without the parent_nodeacl parameter. 
''' self._check_self() return MappedLUN(self, mapped_lun=mapped_lun, tpg_lun=tpg_lun, write_protect=write_protect) tcq_depth = property(_get_tcq_depth, _set_tcq_depth, doc="Set or get the TCQ depth for the initiator " \ + "sessions matching this NodeACL.") parent_tpg = property(_get_parent_tpg, doc="Get the parent TPG object.") node_wwn = property(_get_node_wwn, doc="Get the node wwn.") mapped_luns = property(_list_mapped_luns, doc="Get the list of all MappedLUN objects in this NodeACL.") class NetworkPortal(CFSNode): ''' This is an interface to NetworkPortals in configFS. A NetworkPortal is identified by its IP and port, but here we also require the parent TPG, so instance objects represent both the NetworkPortal and its association to a TPG. This is necessary to get path information in order to create the portal in the proper configFS hierarchy. ''' # NetworkPortal private stuff def __init__(self, parent_tpg, ip_address, port=3260, mode='any'): ''' @param parent_tpg: The parent TPG object. @type parent_tpg: TPG @param ip_address: The ipv4 IP address of the NetworkPortal. @type ip_address: string @param port: The optional (defaults to 3260) NetworkPortal TCP/IP port. @type port: int @param mode: An optionnal string containing the object creation mode: - I{'any'} means the configFS object will be either looked up or created. - I{'lookup'} means the object MUST already exist configFS. - I{'create'} means the object must NOT already exist in configFS. @type mode:string @return: A NetworkPortal object. 
''' super(NetworkPortal, self).__init__() if not (is_ipv4_address(ip_address) or is_ipv6_address(ip_address)): raise RTSLibError("Invalid IP address: %s" % ip_address) else: self._ip_address = str(ip_address) try: self._port = int(port) except ValueError: raise RTSLibError("Invalid port.") if isinstance(parent_tpg, TPG): self._parent_tpg = parent_tpg else: raise RTSLibError("Invalid parent TPG.") if is_ipv4_address(ip_address): self._path = "%s/np/%s:%d" \ % (self.parent_tpg.path, self.ip_address, self.port) else: self._path = "%s/np/[%s]:%d" \ % (self.parent_tpg.path, self.ip_address, self.port) try: self._create_in_cfs_ine(mode) except OSError, msg: raise RTSLibError(msg[1]) def _get_ip_address(self): return self._ip_address def _get_port(self): return self._port def _get_parent_tpg(self): return self._parent_tpg def _set_iser_attr(self, iser_attr): path = "%s/iser" % self.path if os.path.isfile(path): if iser_attr: fwrite(path, "1") else: fwrite(path, "0") else: raise RTSLibError("iser network portal attribute does not exist.") def _get_iser_attr(self): path = "%s/iser" % self.path if os.path.isfile(path): iser_attr = fread(path).strip() if iser_attr == "1": return True else: return False else: return False # NetworkPortal public stuff def delete(self): ''' Delete the NetworkPortal. ''' path = "%s/iser" % self.path if os.path.isfile(path): iser_attr = fread(path).strip() if iser_attr == "1": fwrite(path, "0") super(NetworkPortal, self).delete() parent_tpg = property(_get_parent_tpg, doc="Get the parent TPG object.") port = property(_get_port, doc="Get the NetworkPortal's TCP port as an int.") ip_address = property(_get_ip_address, doc="Get the NetworkPortal's IP address as a string.") class TPG(CFSNode): ''' This is a an interface to Target Portal Groups in configFS. A TPG is identified by its parent Target object and its TPG Tag. To a TPG object is attached a list of NetworkPortals. 
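NetworkPortal.__init__() above builds the configFS directory name as ADDR:PORT for IPv4 and [ADDR]:PORT for IPv6, and TPG._list_network_portals() later reverses that split. A standalone sketch of the round-trip (this sketch converts the port to int in both branches, whereas the IPv6 branch in the listing code leaves it to NetworkPortal's own int conversion):

```python
# Build the portal directory name the way NetworkPortal.__init__() does.
def portal_dir_name(ip_address, port, ipv6=False):
    if ipv6:
        return "[%s]:%d" % (ip_address, port)
    return "%s:%d" % (ip_address, port)

# Reverse it the way TPG._list_network_portals() does.
def parse_portal_dir(name):
    if name.startswith('['):
        ip_address, port = name[1:].split(']')
        port = port[1:]                 # drop the leading ':'
    else:
        ip_address, port = name.split(':')
    return ip_address, int(port)

name = portal_dir_name("2001:db8::1", 3260, ipv6=True)
print(name)                                   # [2001:db8::1]:3260
print(parse_portal_dir(name))                 # ('2001:db8::1', 3260)
print(parse_portal_dir("192.168.0.1:3260"))   # ('192.168.0.1', 3260)
```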
Targets without the 'tpgts' feature cannot have more than a single TPG, so attempts to create more will raise an exception. ''' # TPG private stuff def __init__(self, parent_target, tag, mode='any', nexus_wwn=None): ''' @param parent_target: The parent Target object of the TPG. @type parent_target: Target @param tag: The TPG Tag (TPGT). @type tag: int >= 0 @param mode: An optional string containing the object creation mode: - I{'any'} means the configFS object will be either looked up or created. - I{'lookup'} means the object MUST already exist in configFS. - I{'create'} means the object must NOT already exist in configFS. @type mode: string @param nexus_wwn: An optional naa WWN that makes sense only for fabrics supporting that feature, like the loopback fabric. @type nexus_wwn: string @return: A TPG object. ''' super(TPG, self).__init__() try: self._tag = int(tag) except ValueError: raise RTSLibError("Invalid Tag.") if self._tag < 0: raise RTSLibError("Invalid Tag, it must be 0 or more.") if isinstance(parent_target, Target): self._parent_target = parent_target else: raise RTSLibError("Invalid parent Target.") self._path = "%s/tpgt_%d" % (self.parent_target.path, self.tag) target_path = self.parent_target.path if not self.has_feature('tpgts') and not os.path.isdir(self._path): for filename in os.listdir(target_path): if filename.startswith("tpgt_") \ and os.path.isdir("%s/%s" % (target_path, filename)) \ and filename != "tpgt_%d" % self.tag: raise RTSLibError("Target cannot have multiple TPGs.") self._create_in_cfs_ine(mode) if self.has_feature('nexus') and not self._get_nexus(): self._set_nexus(nexus_wwn) def _get_tag(self): return self._tag def _get_parent_target(self): return self._parent_target def _list_network_portals(self): self._check_self() if not self.has_feature('nps'): return for network_portal_dir in os.listdir("%s/np" % self.path): if network_portal_dir.startswith('['): # IPv6 portals are [IPv6]:PORT (ip_address, port) = \
os.path.basename(network_portal_dir)[1:].split("]") port = port[1:] else: # IPv4 portals are IPv4:PORT (ip_address, port) = \ os.path.basename(network_portal_dir).split(":") port = int(port) yield NetworkPortal(self, ip_address, port, 'lookup') def _get_enable(self): self._check_self() path = "%s/enable" % self.path # If the TPG does not have the enable attribute, then it is always # enabled. if os.path.isfile(path): return bool(int(fread(path))) else: return True def _set_enable(self, boolean): ''' Enables or disables the TPG. Raises an error if trying to disable a TPG without an enable attribute (but enabling works in that case). ''' self._check_self() path = "%s/enable" % self.path if os.path.isfile(path) and (boolean != self._get_enable()): try: fwrite(path, str(int(boolean))) except IOError, e: raise RTSLibError("Cannot change enable state: %s" % e) elif not boolean: raise RTSLibError("TPG cannot be disabled.") def _get_nexus(self): ''' Gets the nexus initiator WWN, or None if the TPG does not have one. ''' self._check_self() if self.has_feature('nexus'): try: nexus_wwn = fread("%s/nexus" % self.path).strip() except IOError: nexus_wwn = '' return nexus_wwn else: return None def _set_nexus(self, nexus_wwn=None): ''' Sets the nexus initiator WWN. Raises an exception if the nexus is already set or if the TPG does not use a nexus. ''' self._check_self() if not self.has_feature('nexus'): raise RTSLibError("The TPG does not use a nexus.") elif self._get_nexus(): raise RTSLibError("The TPG's nexus initiator WWN is already set.") else: if nexus_wwn is None: nexus_wwn = generate_wwn(self.parent_target.wwn_type) elif not is_valid_wwn(self.parent_target.wwn_type, nexus_wwn): raise RTSLibError("WWN '%s' is not of type '%s'." 
% (nexus_wwn, self.parent_target.wwn_type)) fwrite("%s/nexus" % self.path, nexus_wwn) def _create_in_cfs_ine(self, mode): super(TPG, self)._create_in_cfs_ine(mode) if not os.path.isdir(self.alua_metadata_path): os.makedirs(self.alua_metadata_path) def _list_node_acls(self): self._check_self() if not self.has_feature('acls'): return node_acl_dirs = [os.path.basename(path) for path in os.listdir("%s/acls" % self.path)] for node_acl_dir in node_acl_dirs: yield NodeACL(self, node_acl_dir, 'lookup') def _list_luns(self): self._check_self() lun_dirs = [os.path.basename(path) for path in os.listdir("%s/lun" % self.path)] for lun_dir in lun_dirs: lun = lun_dir.split('_')[1] lun = int(lun) yield LUN(self, lun) def _control(self, command): self._check_self() path = "%s/control" % self.path fwrite(path, "%s\n" % str(command)) def _get_alua_metadata_path(self): return "%s/%s+%d" \ % (self.alua_metadata_dir, self.parent_target.wwn, self.tag) # TPG public stuff def has_feature(self, feature): ''' Whether or not this TPG has a certain feature. ''' return self.parent_target.has_feature(feature) def delete(self): ''' Recursively deletes a TPG object. This will delete all attached LUN, NetworkPortal and Node ACL objects and then the TPG itself. Before starting the actual deletion process, all sessions will be disconnected. ''' self._check_self() path = "%s/enable" % self.path if os.path.isfile(path): self.enable = False for acl in self.node_acls: acl.delete() for lun in self.luns: lun.delete() for portal in self.network_portals: portal.delete() super(TPG, self).delete() # TODO: check that ALUA MD removal works while removing TPG if os.path.isdir(self.alua_metadata_path): shutil.rmtree(self.alua_metadata_path) def node_acl(self, node_wwn, mode='any'): ''' Same as NodeACL() but without specifying the parent_tpg. 
''' self._check_self() return NodeACL(self, node_wwn=node_wwn, mode=mode) def network_portal(self, ip_address, port, mode='any'): ''' Same as NetworkPortal() but without specifying the parent_tpg. ''' self._check_self() return NetworkPortal(self, ip_address=ip_address, port=port, mode=mode) def lun(self, lun, storage_object=None, alias=None): ''' Same as LUN() but without specifying the parent_tpg. ''' self._check_self() return LUN(self, lun=lun, storage_object=storage_object, alias=alias) def has_enable(self): ''' Returns True if the TPG has the enable attribute, else False. ''' self._check_self() path = "%s/enable" % self.path return os.path.isfile(path) alua_metadata_path = property(_get_alua_metadata_path, doc="Get the ALUA metadata directory path " \ + "for the TPG.") tag = property(_get_tag, doc="Get the TPG Tag as an int.") parent_target = property(_get_parent_target, doc="Get the parent Target object to which the " \ + "TPG is attached.") enable = property(_get_enable, _set_enable, doc="Get or set a boolean value representing the " \ + "enable status of the TPG. " \ + "True means the TPG is enabled, False means it is " \ + "disabled.") network_portals = property(_list_network_portals, doc="Get the list of NetworkPortal objects currently attached " \ + "to the TPG.") node_acls = property(_list_node_acls, doc="Get the list of NodeACL objects currently " \ + "attached to the TPG.") luns = property(_list_luns, doc="Get the list of LUN objects currently attached " \ + "to the TPG.") nexus_wwn = property(_get_nexus, _set_nexus, doc="Get or set (once) the TPG's Nexus initiator WWN.") class Target(CFSNode): ''' This is an interface to Targets in configFS. A Target is identified by its wwn. To a Target is attached a list of TPG objects. ''' # Target private stuff def __init__(self, fabric_module, wwn=None, mode='any'): ''' @param fabric_module: The target's fabric module. @type fabric_module: FabricModule @param wwn: The optional Target wwn.
If no wwn or an empty wwn is specified, one will be generated for you. @type wwn: string @param mode: An optional string containing the object creation mode: - I{'any'} means the configFS object will be either looked up or created. - I{'lookup'} means the object MUST already exist in configFS. - I{'create'} means the object must NOT already exist in configFS. @type mode: string @return: A Target object. ''' super(Target, self).__init__() self.fabric_module = fabric_module self.wwn_type = fabric_module.spec['wwn_type'] if wwn is not None: wwn = str(wwn).strip() elif fabric_module.spec['wwn_list']: existing_wwns = set([child.wwn for child in fabric_module.targets]) free_wwns = fabric_module.spec['wwn_list'] - existing_wwns if free_wwns: wwn = free_wwns.pop() else: raise RTSLibError("All WWNs are in use, cannot create target.") else: wwn = generate_wwn(self.wwn_type) self.wwn = wwn self._path = "%s/%s" % (self.fabric_module.path, self.wwn) if not self: if not self.fabric_module.is_valid_wwn(self.wwn): raise RTSLibError("Invalid %s wwn: %s" % (self.wwn_type, self.wwn)) self._create_in_cfs_ine(mode) def _list_tpgs(self): self._check_self() for tpg_dir in glob.glob("%s/tpgt*" % self.path): tag = os.path.basename(tpg_dir).split('_')[1] tag = int(tag) yield TPG(self, tag, 'lookup') # Target public stuff def has_feature(self, feature): ''' Whether or not this Target has a certain feature. ''' return self.fabric_module.has_feature(feature) def delete(self): ''' Recursively deletes a Target object. This will delete all attached TPG objects and then the Target itself. ''' self._check_self() for tpg in self.tpgs: tpg.delete() super(Target, self).delete() tpgs = property(_list_tpgs, doc="Get the list of TPGs for the Target.") def _test(): from doctest import testmod testmod() if __name__ == "__main__": _test() rtslib-3.0.pre4.1~g1b33ceb/rtslib/tcm.py ''' Implements the RTS Target backstore and storage object classes.
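The free-WWN selection in Target.__init__ above is a plain set difference: take the fabric's wwn_list, subtract the WWNs already used by existing targets, and fail when nothing is left. A minimal standalone sketch of that logic (the function name is illustrative, not part of the rtslib API):

```python
def pick_free_wwn(wwn_list, existing_wwns):
    # Mirror of the set difference used by Target.__init__: any WWN
    # from the fabric's list that no existing target already uses.
    free_wwns = set(wwn_list) - set(existing_wwns)
    if not free_wwns:
        raise ValueError("All WWNs are in use, cannot create target.")
    return free_wwns.pop()
```

Note that pop() returns an arbitrary free WWN, matching the original code's behavior of not guaranteeing any particular ordering.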
This file is part of LIO(tm). Copyright (c) 2011-2014 by Datera, Inc Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ''' import os import re from target import LUN, TPG, Target, FabricModule from node import CFSNode from utils import (fread, fwrite, RTSLibError, list_scsi_hbas, generate_wwn) from utils import convert_scsi_path_to_hctl, convert_scsi_hctl_to_path from utils import convert_human_to_bytes, is_dev_in_use, get_block_type from utils import is_disk_partition, get_disk_size class Backstore(CFSNode): # Backstore private stuff def __init__(self, plugin, storage_class, index, mode): super(Backstore, self).__init__() if issubclass(storage_class, StorageObject): self._storage_object_class = storage_class self._plugin = plugin else: raise RTSLibError("StorageClass must derive from StorageObject.") try: self._index = int(index) except ValueError: raise RTSLibError("Invalid backstore index: %s" % index) self._path = "%s/core/%s_%d" % (self.configfs_dir, self._plugin, self._index) self._create_in_cfs_ine(mode) def _get_index(self): return self._index def _list_storage_objects(self): self._check_self() storage_object_names = [os.path.basename(s) for s in os.listdir(self.path) if s not in set(["hba_info", "hba_mode"])] for storage_object_name in storage_object_names: yield self._storage_object_class(self, storage_object_name) def _create_in_cfs_ine(self, mode): try: super(Backstore, self)._create_in_cfs_ine(mode) except OSError, msg: raise RTSLibError("Cannot create backstore: %s" % msg) def 
_parse_info(self, key): self._check_self() info = fread("%s/hba_info" % self.path) return re.search(".*%s: ([^: ]+).*" \ % key, ' '.join(info.split())).group(1).lower() def _get_version(self): self._check_self() return self._parse_info("version") def _get_plugin(self): self._check_self() return self._parse_info("plugin") def _get_name(self): self._check_self() return "%s%d" % (self.plugin, self.index) # Backstore public stuff def delete(self): ''' Recursively deletes a Backstore object. This will delete all attached StorageObject objects, and then the Backstore itself. The underlying file and block storages will not be touched, but all ramdisk data will be lost. ''' self._check_self() for storage in self.storage_objects: storage.delete() super(Backstore, self).delete() plugin = property(_get_plugin, doc="Get the backstore plugin name.") index = property(_get_index, doc="Get the backstore index as an int.") storage_objects = property(_list_storage_objects, doc="Get the list of StorageObjects attached to the backstore.") version = property(_get_version, doc="Get the Backstore plugin version string.") name = property(_get_name, doc="Get the backstore name.") class PSCSIBackstore(Backstore): ''' This is an interface to pscsi backstore plugin objects in configFS. A PSCSIBackstore object is identified by its backstore index. ''' # PSCSIBackstore private stuff def __init__(self, index, mode='any', legacy=False): ''' @param index: The backstore index matching a physical SCSI HBA. @type index: int @param mode: An optional string containing the object creation mode: - I{'any'} the configFS object will be either looked up or created. - I{'lookup'} the object MUST already exist in configFS. - I{'create'} the object must NOT already exist in configFS. @type mode: string @param legacy: Enable legacy physical HBA mode. If True, you must specify it also in lookup mode for StorageObjects to be notified. You have been warned! @return: A PSCSIBackstore object.
''' self._legacy = legacy super(PSCSIBackstore, self).__init__("pscsi", PSCSIStorageObject, index, mode) def _create_in_cfs_ine(self, mode): if self.legacy_mode and self._index not in list_scsi_hbas(): raise RTSLibError("Cannot create backstore, hba " + "scsi%d does not exist." % self._index) else: Backstore._create_in_cfs_ine(self, mode) def _get_legacy(self): return self._legacy # PSCSIBackstore public stuff def storage_object(self, name, dev=None): ''' Same as PSCSIStorageObject() without specifying the backstore ''' self._check_self() return PSCSIStorageObject(self, name=name, dev=dev) legacy_mode = property(_get_legacy, doc="Get the legacy mode flag. If True, the virtual backstore " + "index must match the StorageObject's real HBAs.") class RDMCPBackstore(Backstore): ''' This is an interface to rd_mcp backstore plugin objects in configFS. A RDMCPBackstore object is identified by its backstore index. ''' # RDMCPBackstore private stuff def __init__(self, index, mode='any'): ''' @param index: The backstore index. @type index: int @param mode: An optional string containing the object creation mode: - I{'any'} the configFS object will be either looked up or created. - I{'lookup'} the object MUST already exist in configFS. - I{'create'} the object must NOT already exist in configFS. @type mode: string @return: A RDMCPBackstore object. ''' super(RDMCPBackstore, self).__init__("rd_mcp", RDMCPStorageObject, index, mode) # RDMCPBackstore public stuff def storage_object(self, name, size=None, wwn=None, nullio=False): ''' Same as RDMCPStorageObject() without specifying the backstore ''' self._check_self() return RDMCPStorageObject(self, name=name, size=size, wwn=wwn, nullio=nullio) class FileIOBackstore(Backstore): ''' This is an interface to fileio backstore plugin objects in configFS. A FileIOBackstore object is identified by its backstore index. ''' # FileIOBackstore private stuff def __init__(self, index, mode='any'): ''' @param index: The backstore index.
@type index: int @param mode: An optional string containing the object creation mode: - I{'any'} the configFS object will be either looked up or created. - I{'lookup'} the object MUST already exist in configFS. - I{'create'} the object must NOT already exist in configFS. @type mode: string @return: A FileIOBackstore object. ''' super(FileIOBackstore, self).__init__("fileio", FileIOStorageObject, index, mode) # FileIOBackstore public stuff def storage_object(self, name, dev=None, size=None, wwn=None, buffered_mode=False): ''' Same as FileIOStorageObject() without specifying the backstore ''' self._check_self() return FileIOStorageObject(self, name=name, dev=dev, size=size, wwn=wwn, buffered_mode=buffered_mode) class IBlockBackstore(Backstore): ''' This is an interface to iblock backstore plugin objects in configFS. An IBlockBackstore object is identified by its backstore index. ''' # IBlockBackstore private stuff def __init__(self, index, mode='any'): ''' @param index: The backstore index. @type index: int @param mode: An optional string containing the object creation mode: - I{'any'} the configFS object will be either looked up or created. - I{'lookup'} the object MUST already exist in configFS. - I{'create'} the object must NOT already exist in configFS. @type mode: string @return: An IBlockBackstore object. ''' super(IBlockBackstore, self).__init__("iblock", IBlockStorageObject, index, mode) # IBlockBackstore public stuff def storage_object(self, name, dev=None, wwn=None): ''' Same as IBlockStorageObject() without specifying the backstore ''' self._check_self() return IBlockStorageObject(self, name=name, dev=dev, wwn=wwn) class StorageObject(CFSNode): ''' This is an interface to storage objects in configFS. A StorageObject is identified by its backstore and its name.
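All of the backstore constructors above share the same 'any'/'lookup'/'create' semantics against configFS. A hypothetical standalone sketch of those three modes, checking for an existing directory the way configFS objects are checked (function and error text are illustrative, not the rtslib implementation):

```python
import os

def resolve_mode(path, mode):
    # 'lookup' requires the configFS directory to exist, 'create'
    # requires it not to, and 'any' accepts either case.
    exists = os.path.isdir(path)
    if mode == 'lookup' and not exists:
        raise RuntimeError("Object must already exist in configFS.")
    if mode == 'create' and exists:
        raise RuntimeError("Object must not already exist in configFS.")
    # 'any': look the object up if present, create it otherwise.
    return 'lookup' if exists else 'create'
```

This is why 'any' is a convenient default for scripts: the same call works whether or not the object was set up on a previous run.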
''' pr_aptpl_metadata_dir = "/var/target/pr" # StorageObject private stuff def __init__(self, backstore, backstore_class, name, mode): if not isinstance(backstore, backstore_class): raise RTSLibError("The parent backstore must be of " + "type %s" % backstore_class.__name__) super(StorageObject, self).__init__() self._backstore = backstore if "/" in name or " " in name or "\t" in name or "\n" in name: raise RTSLibError("A storage object's name cannot contain " " /, newline or spaces/tabs.") else: self._name = name self._path = "%s/%s" % (self.backstore.path, self.name) self._create_in_cfs_ine(mode) def _get_wwn(self): self._check_self() if self.is_configured(): path = "%s/wwn/vpd_unit_serial" % self.path return fread(path).partition(":")[2].strip() else: return "" def _set_wwn(self, wwn): self._check_self() if wwn is None: wwn = generate_wwn('unit_serial') if self.is_configured(): path = "%s/wwn/vpd_unit_serial" % self.path fwrite(path, "%s\n" % wwn) else: raise RTSLibError("Cannot write a T10 WWN Unit Serial to " + "an unconfigured StorageObject.") def _set_udev_path(self, udev_path): self._check_self() path = "%s/udev_path" % self.path fwrite(path, "%s" % udev_path) def _get_udev_path(self): self._check_self() path = "%s/udev_path" % self.path udev_path = fread(path).strip() if not udev_path and self.backstore.plugin == "fileio": udev_path = self._parse_info('File').strip() return udev_path def _get_name(self): return self._name def _get_backstore(self): return self._backstore def _enable(self): self._check_self() path = "%s/enable" % self.path fwrite(path, "1\n") def _control(self, command): self._check_self() path = "%s/control" % self.path fwrite(path, "%s" % str(command).strip()) def _write_fd(self, contents): self._check_self() path = "%s/fd" % self.path fwrite(path, "%s" % str(contents).strip()) def _parse_info(self, key): self._check_self() info = fread("%s/info" % self.path) return re.search(".*%s: ([^: ]+).*" \ % key, ' 
'.join(info.split())).group(1).lower() def _get_status(self): self._check_self() return self._parse_info('Status') def _gen_attached_luns(self): ''' Fast scan of luns attached to a storage object. This is an order of magnitude faster than using root.luns and matching path on them. ''' isdir = os.path.isdir islink = os.path.islink listdir = os.listdir realpath = os.path.realpath path = self.path from root import RTSRoot rtsroot = RTSRoot() target_names_excludes = FabricModule.target_names_excludes for fabric_module in rtsroot.fabric_modules: base = fabric_module.path for tgt_dir in listdir(base): if tgt_dir not in target_names_excludes: tpgts_base = "%s/%s" % (base, tgt_dir) for tpgt_dir in listdir(tpgts_base): luns_base = "%s/%s/lun" % (tpgts_base, tpgt_dir) if isdir(luns_base): for lun_dir in listdir(luns_base): links_base = "%s/%s" % (luns_base, lun_dir) for lun_file in listdir(links_base): link = "%s/%s" % (links_base, lun_file) if islink(link) and realpath(link) == path: val = (tpgt_dir + "_" + lun_dir) val = val.split('_') target = Target(fabric_module, tgt_dir) yield LUN(TPG(target, val[1]), val[3]) def _list_attached_luns(self): ''' Generates all luns attached to a storage object. ''' self._check_self() for lun in self._gen_attached_luns(): yield lun # StorageObject public stuff def delete(self): ''' Recursively deletes a StorageObject object. This will delete all attached LUNs currently using the StorageObject object, and then the StorageObject itself. The underlying file and block storages will not be touched, but all ramdisk data will be lost. 
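The fast scan in _gen_attached_luns above works because each LUN directory in configFS contains a symlink back to the storage object; comparing the realpath of each link against the storage object's path finds every mapped LUN without building full rtslib objects. The core idea can be sketched in isolation (names and paths here are illustrative):

```python
import os

def links_pointing_to(links_dir, target_path):
    # Yield the names of symlinks in links_dir that resolve to
    # target_path, the same realpath comparison _gen_attached_luns
    # performs on configFS LUN directories.
    target = os.path.realpath(target_path)
    for name in os.listdir(links_dir):
        link = os.path.join(links_dir, name)
        if os.path.islink(link) and os.path.realpath(link) == target:
            yield name
```

Scanning directory entries directly like this avoids instantiating a Target/TPG/LUN object per candidate, which is where the order-of-magnitude speedup comes from.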
''' self._check_self() # If we are called after a configure error, we can skip this if self.is_configured(): for lun in self._gen_attached_luns(): if self.status != 'activated': break else: lun.delete() super(StorageObject, self).delete() def is_configured(self): ''' @return: True if the StorageObject is configured, else returns False ''' self._check_self() path = "%s/info" % self.path try: fread(path) except IOError: return False else: return True def restore_pr_aptpl(self, src_path=None): ''' Restores StorageObject persistent reservations read from src_path. If src_path is omitted, uses the default LIO PR APTPL system path if it exists. This only works if the StorageObject is not in use currently, else an IO error will occur. @param src_path: The PR metadata file path. @type src_path: string or None ''' dst_path = "%s/pr/res_aptpl_metadata" % self.path if src_path is None: src_path = "%s/aptpl_%s" % (self.pr_aptpl_metadata_dir, self.wwn) if not os.path.isfile(src_path): return lines = fread(src_path).split() if not lines[0].startswith("PR_REG_START:"): return for line in lines: if line.startswith("PR_REG_START:"): pr_lines = [] elif line.startswith("PR_REG_END:"): fwrite(dst_path, ",".join(pr_lines)) else: pr_lines.append(line.strip()) backstore = property(_get_backstore, doc="Get the backstore object.") name = property(_get_name, doc="Get the StorageObject name as a string.") udev_path = property(_get_udev_path, doc="Get the StorageObject udev_path as a string.") wwn = property(_get_wwn, _set_wwn, doc="Get or set the StorageObject T10 WWN Serial as a string, or None for random.") status = property(_get_status, doc="Get the storage object status, depending on whether or not it "\ + "is used by any LUN.") attached_luns = property(_list_attached_luns, doc="Get the list of all LUN objects attached.") class PSCSIStorageObject(StorageObject): ''' An interface to configFS storage objects for pscsi backstore.
''' # PSCSIStorageObject private stuff def __init__(self, backstore, name, dev=None): ''' A PSCSIStorageObject can be instantiated in two ways: - B{Creation mode}: If I{dev} is specified, the underlying configFS object will be created with that parameter. No PSCSIStorageObject with the same I{name} can pre-exist in the parent PSCSIBackstore in that mode, or instantiation will fail. - B{Lookup mode}: If I{dev} is not set, then the PSCSIStorageObject will be bound to the existing configFS object in the parent PSCSIBackstore having the specified I{name}. The underlying configFS object must already exist in that mode, or instantiation will fail. @param backstore: The parent backstore of the PSCSIStorageObject. @type backstore: PSCSIBackstore @param name: The name of the PSCSIStorageObject. @type name: string @param dev: You have two choices: - Use the SCSI id of the device: I{dev="H:C:T:L"}. If the parent backstore is in legacy mode, you must use I{dev="C:T:L"} instead, as the backstore index of the SCSI dev device would then be constrained by the parent backstore index. - Use the path to the SCSI device: I{dev="/path/to/dev"}. Note that if the parent Backstore is in legacy mode, the device must have the same backstore index as the parent backstore. @type dev: string @return: A PSCSIStorageObject object. 
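The two dev formats described above (H:C:T:L normally, C:T:L in legacy mode where the host id is implied by the backstore index) can be sketched as a small standalone parser; the function is illustrative only, rtslib itself uses convert_scsi_path_to_hctl plus the split logic in _configure:

```python
def parse_scsi_dev(dev, legacy=False):
    # "H:C:T:L" in normal mode, "C:T:L" in legacy mode (the host id
    # then comes from the parent backstore's index instead).
    fields = [int(x) for x in dev.split(':')]
    expected = 3 if legacy else 4
    if len(fields) != expected:
        raise ValueError("dev %r not in %s format"
                         % (dev, "C:T:L" if legacy else "H:C:T:L"))
    return tuple(fields)
```

As in _configure, a string that fails this parse would then be treated as a device path and resolved to H:C:T:L via sysfs.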
''' if dev is not None: super(PSCSIStorageObject, self).__init__(backstore, PSCSIBackstore, name, 'create') try: self._configure(dev) except: self.delete() raise else: super(PSCSIStorageObject, self).__init__(backstore, PSCSIBackstore, name, 'lookup') def _configure(self, dev): self._check_self() parent_hostid = self.backstore.index legacy = self.backstore.legacy_mode if legacy: try: (hostid, channelid, targetid, lunid) = \ convert_scsi_path_to_hctl(dev) except TypeError: try: (channelid, targetid, lunid) = dev.split(':') channelid = int(channelid) targetid = int(targetid) lunid = int(lunid) except ValueError: raise RTSLibError("Cannot find SCSI device by " + "path, and dev parameter not " + "in C:T:L format: %s." % dev) else: udev_path = convert_scsi_hctl_to_path(parent_hostid, channelid, targetid, lunid) if not udev_path: raise RTSLibError("SCSI device does not exist.") else: if hostid != parent_hostid: raise RTSLibError("The specified SCSI device does " + "not belong to the backstore.") else: udev_path = dev.strip() else: # The Backstore is not in legacy mode. # Use H:C:T:L format or preserve the path given by the user. try: (hostid, channelid, targetid, lunid) = \ convert_scsi_path_to_hctl(dev) except TypeError: try: (hostid, channelid, targetid, lunid) = dev.split(':') hostid = int(hostid) channelid = int(channelid) targetid = int(targetid) lunid = int(lunid) except ValueError: raise RTSLibError("Cannot find SCSI device by " + "path, and dev " + "parameter not in H:C:T:L " + "format: %s." 
% dev) else: udev_path = convert_scsi_hctl_to_path(hostid, channelid, targetid, lunid) if not udev_path: raise RTSLibError("SCSI device does not exist.") else: udev_path = dev.strip() if is_dev_in_use(udev_path): raise RTSLibError("Cannot configure StorageObject because " + "device %s (SCSI %d:%d:%d:%d) " % (udev_path, hostid, channelid, targetid, lunid) + "is already in use.") if legacy: self._control("scsi_channel_id=%d," % channelid \ + "scsi_target_id=%d," % targetid \ + "scsi_lun_id=%d" % lunid) else: self._control("scsi_host_id=%d," % hostid \ + "scsi_channel_id=%d," % channelid \ + "scsi_target_id=%d," % targetid \ + "scsi_lun_id=%d" % lunid) self._set_udev_path(udev_path) self._enable() def _get_model(self): self._check_self() info = fread("%s/info" % self.path) return str(re.search(".*Model:(.*)Rev:", ' '.join(info.split())).group(1)).strip() def _get_vendor(self): self._check_self() info = fread("%s/info" % self.path) return str(re.search(".*Vendor:(.*)Model:", ' '.join(info.split())).group(1)).strip() def _get_revision(self): self._check_self() return self._parse_info('Rev') def _get_channel_id(self): self._check_self() return int(self._parse_info('Channel ID')) def _get_target_id(self): self._check_self() return int(self._parse_info('Target ID')) def _get_lun(self): self._check_self() return int(self._parse_info('LUN')) def _get_host_id(self): self._check_self() return int(self._parse_info('Host ID')) # PSCSIStorageObject public stuff wwn = property(StorageObject._get_wwn, doc="Get the StorageObject T10 WWN Unit Serial as a string." 
+ " You cannot set it for pscsi-backed StorageObjects.") model = property(_get_model, doc="Get the SCSI device model string") vendor = property(_get_vendor, doc="Get the SCSI device vendor string") revision = property(_get_revision, doc="Get the SCSI device revision string") host_id = property(_get_host_id, doc="Get the SCSI device host id") channel_id = property(_get_channel_id, doc="Get the SCSI device channel id") target_id = property(_get_target_id, doc="Get the SCSI device target id") lun = property(_get_lun, doc="Get the SCSI device LUN") class RDMCPStorageObject(StorageObject): ''' An interface to configFS storage objects for rd_mcp backstore. ''' # RDMCPStorageObject private stuff def __init__(self, backstore, name, size=None, wwn=None, nullio=False): ''' A RDMCPStorageObject can be instantiated in two ways: - B{Creation mode}: If I{size} is specified, the underlying configFS object will be created with that parameter. No RDMCPStorageObject with the same I{name} can pre-exist in the parent RDMCPBackstore in that mode, or instantiation will fail. - B{Lookup mode}: If I{size} is not set, then the RDMCPStorageObject will be bound to the existing configFS object in the parent RDMCPBackstore having the specified I{name}. The underlying configFS object must already exist in that mode, or instantiation will fail. @param backstore: The parent backstore of the RDMCPStorageObject. @type backstore: RDMCPBackstore @param name: The name of the RDMCPStorageObject. @type name: string @param size: The size of the ramdrive to create: - If size is an int, it represents a number of bytes - If size is a string, the following units can be used: - B{B} or no unit present for bytes - B{k}, B{K}, B{kB}, B{KB} for kB (kilobytes) - B{m}, B{M}, B{mB}, B{MB} for MB (megabytes) - B{g}, B{G}, B{gB}, B{GB} for GB (gigabytes) - B{t}, B{T}, B{tB}, B{TB} for TB (terabytes) Example: size="1MB" for a one megabyte storage object.
- Note that the size will be rounded to the closest count of 4096-byte RAM pages. For instance, a size of 100000 bytes will be rounded to 24 pages, i.e. 98304 bytes. - The base value for kilo is 1024, aka 1kB = 1024B. Strictly speaking, we use kiB, MiB, etc. @type size: string or int @param wwn: Either None (use random unit serial) or the WWN to use as T10 Unit Serial. @type wwn: None or string @param nullio: Whether the ramdisk should be created without a backing page store. @type nullio: bool @return: A RDMCPStorageObject object. ''' if size is not None: super(RDMCPStorageObject, self).__init__(backstore, RDMCPBackstore, name, 'create') try: self._configure(size, wwn, nullio) except: self.delete() raise else: super(RDMCPStorageObject, self).__init__(backstore, RDMCPBackstore, name, 'lookup') def _configure(self, size, wwn, nullio): self._check_self() size = convert_human_to_bytes(size) # convert to 4k pages size = round(float(size)/4096) if size == 0: size = 1 self._control("rd_pages=%d" % size) if nullio: self._control("rd_nullio=1") self._enable() self.wwn = wwn self.restore_pr_aptpl() def _get_page_size(self): self._check_self() return int(self._parse_info("PAGES/PAGE_SIZE").split('*')[1]) def _get_pages(self): self._check_self() return int(self._parse_info("PAGES/PAGE_SIZE").split('*')[0]) def _get_size(self): self._check_self() size = self._get_page_size() * self._get_pages() return size def _get_nullio(self): self._check_self() # nullio not present before 3.10 try: return bool(int(self._parse_info('nullio'))) except AttributeError: return False # RDMCPStorageObject public stuff page_size = property(_get_page_size, doc="Get the ramdisk page size.") pages = property(_get_pages, doc="Get the ramdisk number of pages.") size = property(_get_size, doc="Get the ramdisk size in bytes.") nullio = property(_get_nullio, doc="Get the nullio status.") class FileIOStorageObject(StorageObject): ''' An interface to configFS storage objects for fileio backstore.
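The page rounding performed by RDMCPStorageObject._configure above (convert the requested size to a count of 4096-byte pages, round to the closest page, never go below one page) can be sketched on its own; the function name is illustrative:

```python
def rd_pages(size_bytes):
    # Same arithmetic as _configure: sizes become a count of
    # 4096-byte RAM pages, rounded to the closest page, minimum one.
    pages = int(round(float(size_bytes) / 4096))
    return max(pages, 1)
```

For the docstring's example, 100000 bytes gives 100000 / 4096 ≈ 24.41, which rounds to 24 pages, i.e. 98304 bytes of actual ramdisk.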
''' # FileIOStorageObject private stuff def __init__(self, backstore, name, dev=None, size=None, wwn=None, buffered_mode=False): ''' A FileIOStorageObject can be instantiated in two ways: - B{Creation mode}: If I{dev} and I{size} are specified, the underlying configFS object will be created with those parameters. No FileIOStorageObject with the same I{name} can pre-exist in the parent FileIOBackstore in that mode, or instantiation will fail. - B{Lookup mode}: If I{dev} and I{size} are not set, then the FileIOStorageObject will be bound to the existing configFS object in the parent FileIOBackstore having the specified I{name}. The underlying configFS object must already exist in that mode, or instantiation will fail. @param backstore: The parent backstore of the FileIOStorageObject. @type backstore: FileIOBackstore @param name: The name of the FileIOStorageObject. @type name: string @param dev: The path to the backend file or block device to be used. - Examples: I{dev="/dev/sda"}, I{dev="/tmp/myfile"} - The only block device type that is accepted is I{TYPE_DISK}, or partitions of a I{TYPE_DISK} device. For other device types, use pscsi. @type dev: string @param size: The maximum size to allocate for the file. Not used for block devices. - If size is an int, it represents a number of bytes - If size is a string, the following units can be used: - B{B} or no unit present for bytes - B{k}, B{K}, B{kB}, B{KB} for kB (kilobytes) - B{m}, B{M}, B{mB}, B{MB} for MB (megabytes) - B{g}, B{G}, B{gB}, B{GB} for GB (gigabytes) - B{t}, B{T}, B{tB}, B{TB} for TB (terabytes) Example: size="1MB" for a one megabyte storage object. - The base value for kilo is 1024, aka 1kB = 1024B. Strictly speaking, we use kiB, MiB, etc. @type size: string or int @param wwn: Either None (use random WWN) or the WWN to use as T10 Unit Serial. @type wwn: None or string @param buffered_mode: Should we create the StorageObject in buffered mode or not?
By default, we create it in synchronous mode (non-buffered). This cannot be changed later. @type buffered_mode: bool @return: A FileIOStorageObject object. ''' if dev is not None: super(FileIOStorageObject, self).__init__(backstore, FileIOBackstore, name, 'create') try: self._configure(dev, size, wwn, buffered_mode) except: self.delete() raise else: super(FileIOStorageObject, self).__init__(backstore, FileIOBackstore, name, 'lookup') def _configure(self, dev, size, wwn, buffered_mode): self._check_self() rdev = os.path.realpath(dev) if not os.path.isdir(os.path.dirname(rdev)): raise RTSLibError("The dev parameter must be a path to a " + "file inside an existing directory, " + "not %s." % str(os.path.dirname(dev))) if os.path.isdir(rdev): raise RTSLibError("The dev parameter must be a path to a " + "file or block device, not a directory: " + "%s." % dev) block_type = get_block_type(rdev) if block_type is None and not is_disk_partition(rdev): if os.path.exists(rdev) and not os.path.isfile(dev): raise RTSLibError("Device %s is neither a file, " % dev + "a disk partition, nor a block device.") # It is a file if size is None: raise RTSLibError("The size parameter is mandatory " + "when using a file.") size = convert_human_to_bytes(size) self._control("fd_dev_name=%s,fd_dev_size=%d" % (dev, size)) else: # it is a block device or a disk partition if size is not None: raise RTSLibError("You cannot specify a size for a " + "block device.") if block_type != 0 and block_type is not None: raise RTSLibError("Device %s is a block device, " % dev + "but not of TYPE_DISK.") if is_dev_in_use(rdev): raise RTSLibError("Cannot configure StorageObject " + "because device " + "%s is already in use."
% dev) if is_disk_partition(rdev): size = get_disk_size(rdev) self._control("fd_dev_name=%s,fd_dev_size=%d" % (dev, size)) else: self._control("fd_dev_name=%s" % dev) self._set_udev_path(dev) if buffered_mode: self._set_buffered_mode() self._enable() self.wwn = wwn self.restore_pr_aptpl() def _get_mode(self): self._check_self() return self._parse_info('Mode') def _get_size(self): self._check_self() return int(self._parse_info('Size')) def _set_buffered_mode(self): ''' FileIOStorageObjects have synchronous mode enabled by default. This method moves them to buffered mode. Warning: setting the object back to synchronous mode is not implemented yet, so there is no turning back unless you delete and recreate the FileIOStorageObject. ''' self._check_self() self._control("fd_buffered_io=1") # FileIOStorageObject public stuff mode = property(_get_mode, doc="Get the current FileIOStorage mode, buffered or synchronous") size = property(_get_size, doc="Get the current FileIOStorage size in bytes") class IBlockStorageObject(StorageObject): ''' An interface to configFS storage objects for iblock backstore. ''' # IBlockStorageObject private stuff def __init__(self, backstore, name, dev=None, wwn=None): ''' A BlockIOStorageObject can be instantiated in two ways: - B{Creation mode}: If I{dev} is specified, the underlying configFS object will be created with that parameter. No BlockIOStorageObject with the same I{name} can pre-exist in the parent BlockIOBackstore in that mode. - B{Lookup mode}: If I{dev} is not set, then the BlockIOStorageObject will be bound to the existing configFS object in the parent BlockIOBackstore having the specified I{name}. The underlying configFS object must already exist in that mode, or instantiation will fail. @param backstore: The parent backstore of the BlockIOStorageObject. @type backstore: BlockIOBackstore @param name: The name of the BlockIOStorageObject. @type name: string @param dev: The path to the backend block device to be used.
- Example: I{dev="/dev/sda"}. - The only device type that is accepted is I{TYPE_DISK}. For other device types, use pscsi. @type dev: string @param wwn: Either None (use random WWN) or the WWN to use as T10 Unit Serial. @type wwn: None or string @return: A BlockIOStorageObject object. ''' if dev is not None: super(IBlockStorageObject, self).__init__(backstore, IBlockBackstore, name, 'create') try: self._configure(dev, wwn) except: self.delete() raise else: super(IBlockStorageObject, self).__init__(backstore, IBlockBackstore, name, 'lookup') def _configure(self, dev, wwn): self._check_self() if get_block_type(dev) != 0: raise RTSLibError("Device is not a TYPE_DISK block device.") if is_dev_in_use(dev): raise RTSLibError("Cannot configure StorageObject because " + "device %s is already in use." % dev) self._set_udev_path(dev) if self._backstore.version.startswith("v3."): # For 3.x, use the fd method file_fd = os.open(dev, os.O_RDWR) try: self._write_fd(file_fd) finally: os.close(file_fd) else: # For 4.x and above, use the generic udev_path method self._control("udev_path=%s" % dev) self._enable() self.wwn = wwn self.restore_pr_aptpl() def _get_major(self): self._check_self() return int(self._parse_info('Major')) def _get_minor(self): self._check_self() return int(self._parse_info('Minor')) # IBlockStorageObject public stuff major = property(_get_major, doc="Get the block device major number") minor = property(_get_minor, doc="Get the block device minor number") def _test(): import doctest doctest.testmod() if __name__ == "__main__": _test() rtslib-3.0.pre4.1~g1b33ceb/rtslib/utils.py ''' Provides various utility functions. This file is part of LIO(tm). Copyright (c) 2011-2014 by Datera, Inc Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License.
You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ''' import re import os import stat import uuid import glob import socket import ipaddr import netifaces import subprocess from array import array from fcntl import ioctl from threading import Thread from Queue import Queue, Empty from struct import pack, unpack class RTSLibError(Exception): ''' Generic rtslib error. ''' pass class RTSLibBrokenLink(RTSLibError): ''' Broken link in configfs, i.e. missing LUN storage object. ''' pass class RTSLibNotInCFS(RTSLibError): ''' The underlying configfs object does not exist. Happens when calling methods of an object that is instantiated but have been deleted from configs, or when trying to lookup an object that does not exist. ''' pass def fwrite(path, string): ''' This function writes a string to a file, and takes care of opening it and closing it. If the file does not exist, it will be created. >>> from rtslib.utils import * >>> fwrite("/tmp/test", "hello") >>> fread("/tmp/test") 'hello' @param path: The file to write to. @type path: string @param string: The string to write to the file. @type string: string ''' path = os.path.realpath(str(path)) file_fd = open(path, 'w') try: file_fd.write("%s" % string) finally: file_fd.close() def fread(path): ''' This function reads the contents of a file. It takes care of opening and closing it. >>> from rtslib.utils import * >>> fwrite("/tmp/test", "hello") >>> fread("/tmp/test") 'hello' >>> fread("/tmp/notexistingfile") # doctest: +ELLIPSIS Traceback (most recent call last): ... IOError: [Errno 2] No such file or directory: '/tmp/notexistingfile' @param path: The path to the file to read from. 
@type path: string @return: A string containing the file's contents. ''' path = os.path.realpath(str(path)) file_fd = open(path, 'r') try: string = file_fd.read() finally: file_fd.close() return string def is_dev_in_use(path): ''' This function will check if the device or file referenced by path is already mounted or used as a storage object backend. It works by trying to open the path with the O_EXCL flag, which will fail if someone else already did. Note that the file is closed before the function returns, so this does not guarantee the device will still be available after the check. @param path: path to the file or device to check @type path: string @return: A boolean, True if we cannot get an exclusive descriptor on the path, False if we can. ''' path = os.path.realpath(str(path)) try: file_fd = os.open(path, os.O_EXCL|os.O_NDELAY) except OSError: return True else: os.close(file_fd) return False def is_disk_partition(path): ''' Try to find out if path is a partition of a TYPE_DISK device. Handles both /dev/sdaX and /dev/disk/by-*/*-part? schemes. ''' regex = re.match(r'([a-z/]+)([1-9]+)$', path) if not regex: regex = re.match(r'(/dev/disk/.+)(-part[1-9]+)$', path) if not regex: return False else: if get_block_type(regex.group(1)) == 0: return True def get_disk_size(path): ''' This function returns the size in bytes of a disk-type block device, or None if path does not point to a disk-type device. ''' (major, minor) = get_block_numbers(path) if major is None: return None # list of [major, minor, #blocks (1K), name] partitions = [ x.split()[0:4] for x in fread("/proc/partitions").split("\n")[2:] if x] size = None for partition in partitions: if partition[0:2] == [str(major), str(minor)]: size = int(partition[2]) * 1024 break return size def get_block_numbers(path): ''' This function returns a (major,minor) tuple for the block device found at path, or (None, None) if path is not a block device.
''' dev = os.path.realpath(path) try: mode = os.stat(dev) except OSError: return (None, None) if not stat.S_ISBLK(mode[stat.ST_MODE]): return (None, None) major = os.major(mode.st_rdev) minor = os.minor(mode.st_rdev) return (major, minor) def get_block_type(path): ''' This function returns a block device's type. Example: 0 is TYPE_DISK If no match is found, None is returned. >>> from rtslib.utils import * >>> get_block_type("/dev/sda") 0 >>> get_block_type("/dev/sr0") 5 >>> get_block_type("/dev/scd0") 5 >>> get_block_type("/dev/nodevicehere") is None True @param path: path to the block device @type path: string @return: An int for the block device type, or None if not a block device. ''' dev = os.path.realpath(path) # TODO: Make adding new majors on-the-fly possible, using some config file # for instance, maybe an additionnal list argument, or even a match all # mode for overrides ? # Make sure we are dealing with a block device (major, minor) = get_block_numbers(dev) if major is None: return None # Treat disk partitions as TYPE_DISK if is_disk_partition(path): return 0 # These devices are disk type block devices, but might not report this # correctly in /sys/block/xxx/device/type, so use their major number. type_disk_known_majors = [1, # RAM disk 8, # SCSI disk devices 9, # Metadisk RAID devices 13, # 8-bit MFM/RLL/IDE controller 19, # "Double" compressed disk 21, # Acorn MFM hard drive interface 30, # FIXME: Normally 'Philips LMS CM-205 # CD-ROM' in the Linux devices list but # used by Cirtas devices. 
35, # Slow memory ramdisk 36, # MCA ESDI hard disk 37, # Zorro II ramdisk 43, # Network block devices 44, # Flash Translation Layer (FTL) filesystems 45, # Parallel port IDE disk devices 47, # Parallel port ATAPI disk devices 48, # Mylex DAC960 PCI RAID controller 48, # Mylex DAC960 PCI RAID controller 49, # Mylex DAC960 PCI RAID controller 50, # Mylex DAC960 PCI RAID controller 51, # Mylex DAC960 PCI RAID controller 52, # Mylex DAC960 PCI RAID controller 53, # Mylex DAC960 PCI RAID controller 54, # Mylex DAC960 PCI RAID controller 55, # Mylex DAC960 PCI RAID controller 58, # Reserved for logical volume manager 59, # Generic PDA filesystem device 60, # LOCAL/EXPERIMENTAL USE 61, # LOCAL/EXPERIMENTAL USE 62, # LOCAL/EXPERIMENTAL USE 63, # LOCAL/EXPERIMENTAL USE 64, # Scramdisk/DriveCrypt encrypted devices 65, # SCSI disk devices (16-31) 66, # SCSI disk devices (32-47) 67, # SCSI disk devices (48-63) 68, # SCSI disk devices (64-79) 69, # SCSI disk devices (80-95) 70, # SCSI disk devices (96-111) 71, # SCSI disk devices (112-127) 72, # Compaq Intelligent Drive Array 73, # Compaq Intelligent Drive Array 74, # Compaq Intelligent Drive Array 75, # Compaq Intelligent Drive Array 76, # Compaq Intelligent Drive Array 77, # Compaq Intelligent Drive Array 78, # Compaq Intelligent Drive Array 79, # Compaq Intelligent Drive Array 80, # I2O hard disk 80, # I2O hard disk 81, # I2O hard disk 82, # I2O hard disk 83, # I2O hard disk 84, # I2O hard disk 85, # I2O hard disk 86, # I2O hard disk 87, # I2O hard disk 93, # NAND Flash Translation Layer filesystem 94, # IBM S/390 DASD block storage 96, # Inverse NAND Flash Translation Layer 98, # User-mode virtual block device 99, # JavaStation flash disk 101, # AMI HyperDisk RAID controller 102, # Compressed block device 104, # Compaq Next Generation Drive Array 105, # Compaq Next Generation Drive Array 106, # Compaq Next Generation Drive Array 107, # Compaq Next Generation Drive Array 108, # Compaq Next Generation Drive Array 109, # 
Compaq Next Generation Drive Array 110, # Compaq Next Generation Drive Array 111, # Compaq Next Generation Drive Array 112, # IBM iSeries virtual disk 114, # IDE BIOS powered software RAID interfaces 115, # NetWare (NWFS) Devices (0-255) 117, # Enterprise Volume Management System 120, # LOCAL/EXPERIMENTAL USE 121, # LOCAL/EXPERIMENTAL USE 122, # LOCAL/EXPERIMENTAL USE 123, # LOCAL/EXPERIMENTAL USE 124, # LOCAL/EXPERIMENTAL USE 125, # LOCAL/EXPERIMENTAL USE 126, # LOCAL/EXPERIMENTAL USE 127, # LOCAL/EXPERIMENTAL USE 128, # SCSI disk devices (128-143) 129, # SCSI disk devices (144-159) 130, # SCSI disk devices (160-175) 131, # SCSI disk devices (176-191) 132, # SCSI disk devices (192-207) 133, # SCSI disk devices (208-223) 134, # SCSI disk devices (224-239) 135, # SCSI disk devices (240-255) 136, # Mylex DAC960 PCI RAID controller 137, # Mylex DAC960 PCI RAID controller 138, # Mylex DAC960 PCI RAID controller 139, # Mylex DAC960 PCI RAID controller 140, # Mylex DAC960 PCI RAID controller 141, # Mylex DAC960 PCI RAID controller 142, # Mylex DAC960 PCI RAID controller 143, # Mylex DAC960 PCI RAID controller 144, # Non-device (e.g. NFS) mounts 145, # Non-device (e.g. NFS) mounts 146, # Non-device (e.g. 
NFS) mounts 147, # DRBD device 152, # EtherDrive Block Devices 153, # Enhanced Metadisk RAID storage units 160, # Carmel 8-port SATA Disks 161, # Carmel 8-port SATA Disks 199, # Veritas volume manager (VxVM) volumes 201, # Veritas VxVM dynamic multipathing driver 202, # Xen block device 230, # ZFS ZVols 240, # LOCAL/EXPERIMENTAL USE 241, # LOCAL/EXPERIMENTAL USE 242, # LOCAL/EXPERIMENTAL USE 243, # LOCAL/EXPERIMENTAL USE 244, # LOCAL/EXPERIMENTAL USE 245, # LOCAL/EXPERIMENTAL USE 246, # LOCAL/EXPERIMENTAL USE 247, # LOCAL/EXPERIMENTAL USE 248, # LOCAL/EXPERIMENTAL USE 249, # LOCAL/EXPERIMENTAL USE 250, # LOCAL/EXPERIMENTAL USE 251, # LOCAL/EXPERIMENTAL USE 252, # LOCAL/EXPERIMENTAL USE 253, # LOCAL/EXPERIMENTAL USE 254, # LOCAL/EXPERIMENTAL USE 259 # NVME namespaces ] if major in type_disk_known_majors: return 0 # Same for LVM LVs, but as we cannot use major here # (it varies accross distros), use the realpath to check if os.path.dirname(dev) == "/dev/mapper": return 0 # list of (major, minor, type) tuples blocks = [(fread("%s/dev" % fdev).strip().split(':')[0], fread("%s/dev" % fdev).strip().split(':')[1], fread("%s/device/type" % fdev).strip()) for fdev in glob.glob("/sys/block/*") if os.path.isfile("%s/device/type" % fdev)] for block in blocks: if int(block[0]) == major and int(block[1]) == minor: return int(block[2]) return None def list_scsi_hbas(): ''' This function returns the list of HBA indexes for existing SCSI HBAs. ''' return list(set([int(device.partition(":")[0]) for device in os.listdir("/sys/bus/scsi/devices") if re.match("[0-9:]+", device)])) def convert_scsi_path_to_hctl(path): ''' This function returns the SCSI ID in H:C:T:L form for the block device being mapped to the udev path specified. If no match is found, None is returned. 
>>> import rtslib.utils as utils >>> utils.convert_scsi_path_to_hctl('/dev/scd0') (2, 0, 0, 0) >>> utils.convert_scsi_path_to_hctl('/dev/sr0') (2, 0, 0, 0) >>> utils.convert_scsi_path_to_hctl('/dev/sda') (3, 0, 0, 0) >>> utils.convert_scsi_path_to_hctl('/dev/sda1') >>> utils.convert_scsi_path_to_hctl('/dev/sdb') (3, 0, 1, 0) >>> utils.convert_scsi_path_to_hctl('/dev/sdc') (3, 0, 2, 0) @param path: The udev path to the SCSI block device. @type path: string @return: An (host, controller, target, lun) tuple of integer values representing the SCSI ID of the device, or None if no match is found. ''' devname = os.path.basename(os.path.realpath(path)) try: hctl = os.listdir("/sys/block/%s/device/scsi_device" % devname)[0].split(':') except: return None return [int(data) for data in hctl] def convert_scsi_hctl_to_path(host, controller, target, lun): ''' This function returns a udev path pointing to the block device being mapped to the SCSI device that has the provided H:C:T:L. >>> import rtslib.utils as utils >>> utils.convert_scsi_hctl_to_path(0,0,0,0) '' >>> utils.convert_scsi_hctl_to_path(2,0,0,0) # doctest: +ELLIPSIS '/dev/s...0' >>> utils.convert_scsi_hctl_to_path(3,0,2,0) '/dev/sdc' @param host: The SCSI host id. @type host: int @param controller: The SCSI controller id. @type controller: int @param target: The SCSI target id. @type target: int @param lun: The SCSI Logical Unit Number. @type lun: int @return: A string for the canonical path to the device, or empty string. 
''' try: host = int(host) controller = int(controller) target = int(target) lun = int(lun) except ValueError: raise RTSLibError( "The host, controller, target and lun parameters must be integers.") for devname in os.listdir("/sys/block"): path = "/dev/%s" % devname hctl = [host, controller, target, lun] if convert_scsi_path_to_hctl(path) == hctl: return os.path.realpath(path) return '' def convert_bytes_to_human(size): if not size: return "" for x in ['bytes','K','M','G','T']: if size < 1024.0: return "%3.1f%s" % (size, x) size /= 1024.0 def convert_human_to_bytes(hsize, kilo=1024): ''' This function converts human-readable amounts of bytes to bytes. It understands the following units: - I{B} or no unit present for Bytes - I{k}, I{K}, I{kB}, I{KB} for kB (kilobytes) - I{m}, I{M}, I{mB}, I{MB} for MB (megabytes) - I{g}, I{G}, I{gB}, I{GB} for GB (gigabytes) - I{t}, I{T}, I{tB}, I{TB} for TB (terabytes) Note: The definition of I{kilo} defaults to 1kB = 1024 Bytes. Strictly speaking, those should not be called I{kB} but I{kiB}. You can override that with the optional kilo parameter.
Example: >>> import rtslib.utils as utils >>> utils.convert_human_to_bytes("1k") 1024 >>> utils.convert_human_to_bytes("1k", 1000) 1000 >>> utils.convert_human_to_bytes("1MB") 1048576 >>> utils.convert_human_to_bytes("12kB") 12288 @param hsize: The human-readable version of the Bytes amount to convert @type hsize: string or int @param kilo: Optionnal base for the kilo prefix @type kilo: int @return: An int representing the human-readable string converted to bytes ''' size = str(hsize).replace("g","G").replace("K","k") size = size.replace("m","M").replace("t","T") if not re.match("^[0-9]+[T|G|M|k]?[B]?$", size): raise RTSLibError("Cannot interpret size, wrong format: %s" % hsize) size = size.rstrip('B') units = ['k', 'M', 'G', 'T'] try: power = units.index(size[-1]) + 1 except ValueError: power = 0 size = int(size) else: size = int(size[:-1]) size = size * int(kilo) ** power return size def generate_wwn(wwn_type): ''' Generates a random WWN of the specified type: - unit_serial: T10 WWN Unit Serial. - iqn: iSCSI IQN - naa: SAS NAA address @param wwn_type: The WWN address type. @type wwn_type: str @returns: A string containing the WWN. ''' wwn_type = wwn_type.lower() if wwn_type == 'free': return str(uuid.uuid4()) if wwn_type == 'unit_serial': return str(uuid.uuid4()) elif wwn_type == 'iqn': localname = socket.gethostname().split(".")[0] localarch = os.uname()[4].replace("_","") prefix = "iqn.2003-01.org.linux-iscsi.%s.%s" % (localname, localarch) prefix = prefix.strip().lower() serial = "sn.%s" % str(uuid.uuid4())[24:] return "%s:%s" % (prefix, serial) elif wwn_type == 'naa': sas_address = "naa.6001405%s" % str(uuid.uuid4())[:10] return sas_address.replace('-', '') else: raise ValueError("Unknown WWN type: %s." % wwn_type) def is_valid_wwn(wwn_type, wwn, wwn_list=None): ''' Returns True if the wwn is a valid wwn of type wwn_type. @param wwn_type: The WWN address type. @type wwn_type: str @param wwn: The WWN address to check. 
@type wwn: str @param wwn_list: An optional list of wwns to check the wwn parameter from. @type wwn_list: list of str @returns: bool. ''' wwn_type = wwn_type.lower() if wwn_list is not None and wwn not in wwn_list: return False elif wwn_type == 'free': return True elif wwn_type == 'iqn' \ and re.match("iqn\.[0-9]{4}-[0-1][0-9]\..*\..*", wwn) \ and not re.search(' ', wwn) \ and not re.search('_', wwn): return True elif wwn_type == 'naa' \ and re.match("naa\.[0-9A-Fa-f]{16}$", wwn): return True elif wwn_type == 'unit_serial' \ and re.match( "[0-9A-Fa-f]{8}(-[0-9A-Fa-f]{4}){3}-[0-9A-Fa-f]{12}$", wwn): return True else: return False def list_available_kernel_modules(): ''' List all loadable kernel modules as registered by depmod ''' kver = os.uname()[2] depfile = "/lib/modules/%s/modules.dep" % kver handle = open(depfile) try: lines = handle.readlines() finally: handle.close() return [os.path.basename(line.partition(":")[0]).partition(".")[0] for line in lines] def list_loaded_kernel_modules(): ''' List all currently loaded kernel modules ''' return [line.split(" ")[0] for line in fread("/proc/modules").split('\n') if line] def exec_argv(argv, strip=True, shell=False): ''' Executes a command line given as an argv table and either: - raise an exception if return != 0 - return the output If strip is True, then output lines will be stripped. If shell is True, the argv must be a string that will be evaluated by the shell, instead of the argv list. ''' process = subprocess.Popen(argv, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=shell) (stdoutdata, stderrdata) = process.communicate() # Remove indents, trailing space and empty lines in output. 
if strip: stdoutdata = "\n".join([line.strip() for line in stdoutdata.split("\n") if line.strip()]) stderrdata = "\n".join([line.strip() for line in stderrdata.split("\n") if line.strip()]) if process.returncode != 0: raise RTSLibError(stderrdata) else: return stdoutdata def list_eth_names(max_eth=1024): ''' List the first max_eth local ethernet interface names from the SIOCGIFCONF struct. ''' SIOCGIFCONF = 0x8912 if os.uname()[4].endswith("_64"): offset = 40 else: offset = 32 bytes = 32 * max_eth sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) ifaces = array('B', '\0' * bytes) packed = pack('iL', bytes, ifaces.buffer_info()[0]) outbytes = unpack('iL', ioctl(sock.fileno(), SIOCGIFCONF, packed))[0] names = ifaces.tostring() return [names[i:i+offset].split('\0', 1)[0] for i in range(0, outbytes, offset)] def list_eth_ips(ifnames=None): ''' List the IPv4 and IPv6 non-loopback, non link-local addresses (in the RFC3330 sense, not addresses attached to lo) of a list of ethernet interfaces from the SIOCGIFADDR struct. If ifnames is omitted, list all IPs of all ifaces except for lo. ''' if ifnames is None: ifnames = [iface for iface in list_eth_names() if iface != 'lo'] addrs = [] for iface in ifnames: ifaddresses = netifaces.ifaddresses(iface) if netifaces.AF_INET in ifaddresses: addrs.extend(addr['addr'] for addr in ifaddresses[netifaces.AF_INET] if not addr['addr'].startswith('127.')) if netifaces.AF_INET6 in ifaddresses: addrs.extend(addr['addr'] for addr in ifaddresses[netifaces.AF_INET6] if not '%' in addr['addr'] if not addr['addr'].startswith('::')) return sorted(set(addrs)) def is_ipv4_address(addr): try: ipaddr.IPv4Address(addr) except: return False else: return True def is_ipv6_address(addr): try: ipaddr.IPv6Address(addr) except: return False else: return True def get_main_ip(): ''' Try to guess the local machine non-loopback IP. If available, local hostname resolution is used (if non-loopback), else try to find another non-loopback IP on configured NICs.
If no usable IP address is found, returns None. ''' # socket.gethostbyname does no have a timeout parameter # Let's use a thread to implement that in the background def start_thread(func): thread = Thread(target = func) thread.setDaemon(True) thread.start() def gethostbyname_timeout(hostname, timeout = 1): queue = Queue(1) def try_gethostbyname(hostname): try: hostname = socket.gethostbyname(hostname) except socket.gaierror: hostname = None return hostname def queue_try_gethostbyname(): queue.put(try_gethostbyname(hostname)) start_thread(queue_try_gethostbyname) try: result = queue.get(block = True, timeout = timeout) except Empty: result = None return result local_ips = list_eth_ips() # try to get a resolution in less than 1 second host_ip = gethostbyname_timeout(socket.gethostname()) # Put the host IP in first position of the IP list if it exists if host_ip in local_ips: local_ips.remove(host_ip) local_ips.insert(0, host_ip) for ip_addr in local_ips: if not ip_addr.startswith("127.") and ip_addr.strip(): return ip_addr return None def _test(): '''Run the doctests''' import doctest doctest.testmod() if __name__ == "__main__": _test() rtslib-3.0.pre4.1~g1b33ceb/setup.py0000775000000000000000000000216412443074135013706 0ustar #! /usr/bin/env python ''' This file is part of LIO(tm). Copyright (c) 2011-2014 by Datera, Inc Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. 
''' import re from distutils.core import setup import rtslib PKG = rtslib VERSION = str(PKG.__version__) (AUTHOR, EMAIL) = re.match('^(.*?)\s*<(.*)>$', PKG.__author__).groups() URL = PKG.__url__ LICENSE = PKG.__license__ SCRIPTS = [] DESCRIPTION = PKG.__description__ setup(name=PKG.__name__, description=DESCRIPTION, version=VERSION, author=AUTHOR, author_email=EMAIL, license=LICENSE, url=URL, scripts=SCRIPTS, packages=[PKG.__name__], package_data = {'':[]}) rtslib-3.0.pre4.1~g1b33ceb/specs/example_spec_file_for_fabric_modules.txt0000664000000000000000000000062612443074135023420 0ustar # Example LIO target fabric module. # # The example fabric module uses the default feature set. # features = discovery_auth, acls, acls_auth, nps # This module uses anything as WWNs. wwn_type = free # Convoluted kernel module name. Default would be example_target_mod kernel_module = my_complex_kernel_module_name # The configfs group name. Default would be "example" configfs_group = "example_group" rtslib-3.0.pre4.1~g1b33ceb/specs/ib_srpt.spec0000664000000000000000000000072612443074135015626 0ustar # The ib_srpt fabric module specfile. # # The fabric module feature set features = acls # Non-standard module naming scheme kernel_module = ib_srpt # The module uses hardware addresses from there wwn_from_files = /sys/class/infiniband/*/ports/*/gids/0 # Transform 'fe80:0000:0000:0000:0002:1903:000e:8acd' WWN notation to # '0xfe800000000000000002c903000e8acd' wwn_from_files_filter = "sed -e s/fe80/0xfe80/ -e 's/\://g'" # The configfs group configfs_group = srpt rtslib-3.0.pre4.1~g1b33ceb/specs/iscsi.spec0000664000000000000000000000052512443074135015273 0ustar # The iscsi fabric module specfile. # # The iscsi fabric module features set. features = discovery_auth, acls, acls_auth, acls_tcq_depth, nps, tpgts # Obviously, this module uses IQN strings as WWNs. 
wwn_type = iqn # This is default too # kernel_module = iscsi_target_mod # The configfs group name, default too # configfs_group = iscsi rtslib-3.0.pre4.1~g1b33ceb/specs/loopback.spec0000664000000000000000000000035112443074135015750 0ustar # The tcm_loop fabric module specfile. # # The fabric module feature set features = nexus # Use naa WWNs. wwn_type = naa # Non-standard module naming scheme kernel_module = tcm_loop # The configfs group configfs_group = loopback rtslib-3.0.pre4.1~g1b33ceb/specs/qla2xxx.spec0000664000000000000000000000062412443074135015570 0ustar # The qla2xxx fabric module specfile. # # The qla2xxx fabric module feature set features = acls # Non-standard module naming scheme kernel_module = tcm_qla2xxx # The module uses hardware addresses from there wwn_from_files = /sys/class/fc_host/host*/port_name # Transform '0x1234567812345678' WWN notation to '12:34:56:78:12:34:56:78' wwn_from_files_filter = "sed -e s/0x// -e 's/../&:/g' -e s/:$//" rtslib-3.0.pre4.1~g1b33ceb/specs/tcm_fc.spec0000664000000000000000000000065712443074135015422 0ustar # The tcm_fc fabric module specfile. # # The fabric module feature set features = acls # Non-standard module naming scheme kernel_module = tcm_fc # The module uses hardware addresses from there wwn_from_files = /sys/class/fc_host/host*/port_name # Transform '0x1234567812345678' WWN notation to '12:34:56:78:12:34:56:78' wwn_from_files_filter = "sed -e s/0x// -e 's/../&:/g' -e s/:$//" # The configfs group configfs_group = fc rtslib-3.0.pre4.1~g1b33ceb/specs/usb_gadget.spec0000664000000000000000000000013512443074135016262 0ustar features = nexus wwn_type = naa kernel_module = tcm_usb_gadget configfs_group = "usb_gadget" rtslib-3.0.pre4.1~g1b33ceb/specs/vhost.spec0000664000000000000000000000030512443074135015320 0ustar # The fabric module feature set features = nexus, tpgts # Use naa WWNs. 
wwn_type = naa # Non-standard module naming scheme kernel_module = vhost_scsi # The configfs group configfs_group = vhost rtslib-3.0.pre4.1~g1b33ceb/specs/writing_spec_files_for_fabric_modules.txt0000664000000000000000000001047412443074135023635 0ustar The /var/lib/target directory contains the spec files for RisingTide Systems's LIO SCSI target subsystem fabric modules. To support a new fabric module, a spec file should be installed containing information for RTSLib to use it: SCSI Target features supported, WWN scheme, kernel module information, etc. Each spec file should be named MODULE.spec, where MODULE is the name the fabric module is to be referred to as. It contains a series of KEY = VALUE pairs, one per line. KEY is an alphanumeric (no spaces) string. VALUE can be anything. Quotes can be used for strings, but are not mandatory. Lists of VALUEs are comma-separated. Syntax ------ * Strings String values can either be enclosed in double quotes or not. These examples are equivalent: kernel_module = "my_module" kernel_module = my_module * Lists Lists are comma-separated lists of values. If you want to use a comma in a string, use double quotes. Example: my_string = value1, value2, "value3, with comma", value4 * Comments All lines beginning with a pound sign (#) will be ignored. Empty lines will be ignored too. Available keys -------------- * features Lists the target fabric available features. Default value: discovery_auth, acls, acls_auth, nps Example: features = discovery_auth, acls, acls_auth Detail of features: * tpgts The target fabric module is using iSCSI-style target portal group tags. * discovery_auth The target fabric module supports a fabric-wide authentication for discovery. * acls The target's TPGTs do support explicit initiator ACLs. * acls_auth The target's TPGT's ACLs do support per-ACL initiator authentication. * nps The TPGTs do support iSCSI-like IPv4/IPv6 network portals, using IP:PORT group names.
* nexus The TPGTs do have a 'nexus' attribute that contains the local initiator serial unit. This attribute must be set before being able to create any LUNs. * wwn_type Sets the type of WWN expected by the target fabric. Defaults to 'free'. Example: wwn_type = iqn Current valid types are: * free Freeform WWN. * iqn The fabric module targets are using iSCSI-type IQNs. * naa NAA SAS address type WWN. * unit_serial Disk-type unit serial. * wwn_from_files In some cases, and independently from the wwn type, the target WWNs must be picked from a list of existing ones, the most obvious case being hardware-set WWNs. Only the WWNs both matching the wwn_type (after filtering if set, see below) and fetched from the specified files will be allowed for targets. The value of this key is a list (one or more, comma-separated) of UNIX style pathname patterns: * and ? wildcards can be used, and character ranges expressed with [] will be correctly expanded. Each file is assumed to contain one or more WWNs, and line ends, spaces, tabs and null (\0) will be considered as separator characters. Example: wwn_from_files = /sys/class/fc_host/host[0-9]/port_name * wwn_from_files_filter Empty by default, this one allows specifying a shell command to which each WWN from files will be fed, and the output of the filter will be used as the final WWN to use. Examples: wwn_from_files_filter = "sed -e s/0x// -e 's/../&:/g' -e s/:$//" wwn_from_files_filter = "sed -e s/0x// -e 's/../&:/g' -e s/:$// | tr [a-z] [A-Z]" The first example transforms strings like '0x21000024ff314c48' into '21:00:00:24:ff:31:4c:48', the second one also shifts lower case into upper case, demonstrating that you can pipe as many commands as you want into one another. * wwn_from_cmds Same as wwn_from_files, but instead of taking a list of file patterns, takes a list of shell commands. Each command's output will be considered as a list of WWNs to be used, separated by line ends, spaces, tabs and null (\0) characters.
* wwn_from_cmds_filter Same as wwn_from_files_filter, but filters/transforms the WWNs obtained from the results of the wwn_from_cmds shell commands. * kernel_module Sets the name of the kernel module implementing the fabric modules. If not specified, it will be assumed to be MODULE_target_mod, where MODULE is the name of the fabric module, as used to name the spec file. Note that you must not specify any .ko or such extension here. Example: kernel_module = my_module * configfs_group Sets the name of the configfs group used by the fabric module. Defaults to the name of the module as used to name the spec file. Example: configfs_group = iscsi rtslib-3.0.pre4.1~g1b33ceb/tests/data/config_attribute_group.ast0000664000000000000000000000102712443074135021516 0ustar (lp0 (lp1 (dp2 S'line' p3 I1 sS'type' p4 S'obj' p5 sS'col' p6 I1 sS'key' p7 (S'storage' p8 S'fileio' p9 tp10 sa(dp11 g3 I1 sg4 g5 sg6 I16 sg7 (S'disk' p12 S'vm1' p13 tp14 sa(dp15 S'statements' p16 (lp17 (lp18 (dp19 g3 I2 sg4 S'attr' p20 sg6 I5 sg7 (S'path' p21 S'/tmp/vm1.img' p22 tp23 saa(lp24 (dp25 g3 I3 sg4 g20 sg6 I5 sg7 (S'size' p26 S'1MB' p27 tp28 saa(lp29 (dp30 g3 I4 sg4 S'group' p31 sg6 I5 sg7 (S'attribute' p32 tp33 sa(dp34 g3 I4 sg4 g20 sg6 I15 sg7 (S'block_size' p35 S'512' p36 tp37 saasg3 I1 sg4 S'block' p38 sg6 I25 saa.rtslib-3.0.pre4.1~g1b33ceb/tests/data/config_attribute_group.lio0000664000000000000000000000013412443074135021510 0ustar storage fileio disk vm1 { path /tmp/vm1.img size 1MB attribute block_size 512 }
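Each `.lio` fixture above is stored next to a pickled AST (the `.ast` file): unpickled, it is a list of statement paths, where every node is a dict carrying its source position ('line', 'col'), a node 'type' ('obj', 'attr', 'group' or 'block') and a 'key' tuple. A minimal standalone sketch of that node shape, assuming the layout visible in config_attribute_group.ast (the find_attr helper is illustrative, not an rtslib function):

```python
# Simplified statement path for: storage fileio disk vm1 { path /tmp/vm1.img size 1MB }
# Node layout mirrors the pickled .ast fixtures: dicts with 'line', 'col',
# 'type' and a 'key' tuple (block nesting via 'statements' is omitted here).
statement = [
    {'line': 1, 'col': 1, 'type': 'obj', 'key': ('storage', 'fileio')},
    {'line': 1, 'col': 16, 'type': 'obj', 'key': ('disk', 'vm1')},
    {'line': 2, 'col': 5, 'type': 'attr', 'key': ('path', '/tmp/vm1.img')},
    {'line': 3, 'col': 5, 'type': 'attr', 'key': ('size', '1MB')},
]

def find_attr(path, name):
    # Return the value of the first 'attr' node whose key matches 'name'.
    for node in path:
        if node['type'] == 'attr' and node['key'][0] == name:
            return node['key'][1]
    return None

print(find_attr(statement, 'size'))  # prints: 1MB
```

The real fixtures are written with Python's pickle module; the test suite presumably unpickles each `.ast` file and compares it against the parse of the matching `.lio` config.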
sg6 I25 saa.rtslib-3.0.pre4.1~g1b33ceb/tests/data/config_basic.lio0000664000000000000000000000007712443074135017360 0ustar storage fileio disk vm1 { path /tmp/vm1.img size 1MB } rtslib-3.0.pre4.1~g1b33ceb/tests/data/config_comments.ast0000664000000000000000000000237112443074135020127 0ustar (lp0 (lp1 (dp2 S'line' p3 I2 sS'type' p4 S'obj' p5 sS'col' p6 I1 sS'key' p7 (S'storage' p8 S'fileio' p9 tp10 sa(dp11 S'statements' p12 (lp13 (lp14 (dp15 g3 I3 sg4 g5 sg6 I5 sg7 (S'disk' p16 S'vm1' p17 tp18 sa(dp19 g12 (lp20 (lp21 (dp22 g3 I4 sg4 S'attr' p23 sg6 I9 sg7 (S'path' p24 S'/tmp/disk1.img' p25 tp26 saa(lp27 (dp28 g3 I5 sg4 g23 sg6 I9 sg7 (S'size' p29 S'1MB' p30 tp31 saa(lp32 (dp33 g3 I7 sg4 S'group' p34 sg6 I9 sg7 (S'attribute' p35 tp36 sa(dp37 g12 (lp38 (lp39 (dp40 g3 I9 sg4 g23 sg6 I13 sg7 (S'block_size' p41 S'512' p42 tp43 saa(lp44 (dp45 g3 I10 sg4 g23 sg6 I13 sg7 (S'optimal_sectors' p46 S'1024' p47 tp48 saa(lp49 (dp50 g3 I11 sg4 g23 sg6 I13 sg7 (S'queue_depth' p51 S'32' p52 tp53 saa(lp54 (dp55 g3 I13 sg4 g23 sg6 I13 sg7 (S'emulate_tas' p56 S'yes' p57 tp58 saa(lp59 (dp60 S'comment' p61 S'EOL comment' p62 sg3 I14 sg4 g23 sg6 I13 sg7 (S'enforce_pr_isids' p63 S'yes' p64 tp65 saa(lp66 (dp67 g3 I16 sg4 g23 sg6 I13 sg7 (S'emulate_dpo' p68 S'no' p69 tp70 saa(lp71 (dp72 g61 S'Hello there!' p73 sg3 I17 sg4 g23 sg6 I13 sg7 (S'emulate_tpu' p74 S'no' p75 tp76 saa(lp77 (dp78 g61 S'Does what it says?' 
p79 sg3 I18 sg4 g23 sg6 I13 sg7 (S'is_nonrot' p80 S'no' p81 tp82 saasg3 I7 sg4 S'block' p83 sg6 I19 saasg3 I3 sg4 g83 sg6 I14 saasg3 I2 sg4 g83 sg6 I16 saa.rtslib-3.0.pre4.1~g1b33ceb/tests/data/config_comments.lio0000664000000000000000000000111512443074135020116 0ustar # This is a comment before the first statement storage fileio { disk vm1 { path /tmp/disk1.img size 1MB # This is an indented comment after size and before a group attribute { # This is an indented comment after a group block_size 512 optimal_sectors 1024 queue_depth 32 emulate_tas yes enforce_pr_isids yes # EOL comment emulate_dpo no emulate_tpu no # Hello there! is_nonrot no # Does what it says? } } } # Last words comment rtslib-3.0.pre4.1~g1b33ceb/tests/data/config_complete.ast0000664000000000000000000001405612443074135020115 0ustar (lp1 (lp2 (dp3 S'line' p4 I1 sS'type' p5 S'obj' p6 sS'col' p7 I1 sS'key' p8 (S'storage' S'fileio' tp9 sa(dp10 g4 I1 sg5 g6 sg7 I16 sg8 (S'disk' S'disk1' tp11 sa(dp12 S'statements' p13 (lp14 (lp15 (dp16 g4 I2 sg5 S'attr' p17 sg7 I5 sg8 (S'path' S'/tmp/disk1.img' tp18 saa(lp19 (dp20 g4 I3 sg5 g17 sg7 I5 sg8 (S'size' S'1MB' tp21 saa(lp22 (dp23 g4 I5 sg5 S'group' p24 sg7 I5 sg8 (S'attribute' tp25 sa(dp26 g13 (lp27 (lp28 (dp29 g4 I6 sg5 g17 sg7 I9 sg8 (S'block_size' S'512' tp30 saa(lp31 (dp32 g4 I7 sg5 g17 sg7 I9 sg8 (S'optimal_sectors' S'1024' tp33 saa(lp34 (dp35 g4 I8 sg5 g17 sg7 I9 sg8 (S'queue_depth' S'32' tp36 saa(lp37 (dp38 g4 I10 sg5 g17 sg7 I9 sg8 (S'emulate_tas' S'yes' tp39 saa(lp40 (dp41 g4 I11 sg5 g17 sg7 I9 sg8 (S'enforce_pr_isids' S'yes' tp42 saa(lp43 (dp44 g4 I13 sg5 g17 sg7 I9 sg8 (S'emulate_dpo' S'no' tp45 saa(lp46 (dp47 g4 I14 sg5 g17 sg7 I9 sg8 (S'emulate_tpu' S'no' tp48 saa(lp49 (dp50 g4 I15 sg5 g17 sg7 I9 sg8 (S'is_nonrot' S'no' tp51 saasg4 I5 sg5 S'block' p52 sg7 I15 saasg4 I1 sg5 g52 sg7 I27 saa(lp53 (dp54 g4 I19 sg5 g6 sg7 I1 sg8 (S'storage' S'fileio' tp55 sa(dp56 g4 I19 sg5 g6 sg7 I16 sg8 (S'disk' S'disk2' tp57 sa(dp58 g13 (lp59 (lp60 (dp61 g4 I20 
sg5 g17 sg7 I5 sg8 (S'path' S'/tmp/disk2.img' tp62 saa(lp63 (dp64 g4 I21 sg5 g17 sg7 I5 sg8 (S'size' S'1M' tp65 saa(lp66 (dp67 g4 I22 sg5 g24 sg7 I5 sg8 (S'attribute' tp68 sa(dp69 g4 I22 sg5 g17 sg7 I15 sg8 (S'block_size' S'512' tp70 saa(lp71 (dp72 g4 I23 sg5 g24 sg7 I5 sg8 (S'attribute' tp73 sa(dp74 g4 I23 sg5 g17 sg7 I15 sg8 (S'optimal_sectors' S'1024' tp75 saa(lp76 (dp77 g4 I24 sg5 g24 sg7 I5 sg8 (S'attribute' tp78 sa(dp79 g4 I24 sg5 g17 sg7 I15 sg8 (S'queue_depth' S'32' tp80 saasg4 I19 sg5 g52 sg7 I27 saa(lp81 (dp82 g4 I28 sg5 g6 sg7 I1 sg8 (S'fabric' S'iscsi' tp83 sa(dp84 g4 I28 sg5 g24 sg7 I14 sg8 (S'discovery_auth' tp85 sa(dp86 g13 (lp87 (lp88 (dp89 g4 I29 sg5 g17 sg7 I5 sg8 (S'enable' S'yes' tp90 saa(lp91 (dp92 g4 I30 sg5 g17 sg7 I5 sg8 (S'userid' S'target1' tp93 saa(lp94 (dp95 g4 I31 sg5 g17 sg7 I5 sg8 (S'password' S'kjh45fDf_' tp96 saa(lp97 (dp98 g4 I32 sg5 g17 sg7 I5 sg8 (S'mutual_userid' S'no' tp99 saa(lp100 (dp101 g4 I33 sg5 g17 sg7 I5 sg8 (S'mutual_password' S'no' tp102 saasg4 I28 sg5 g52 sg7 I29 saa(lp103 (dp104 g4 I36 sg5 g6 sg7 I1 sg8 (S'fabric' S'iscsi' tp105 sa(dp106 g13 (lp107 (lp108 (dp109 g4 I37 sg5 g6 sg7 I5 sg8 (S'target' S'iqn.2003-01.org.linux-iscsi.targetcli.x8664:sn.4699f8812c88' tp110 sa(dp111 g13 (lp112 (lp113 (dp114 g4 I38 sg5 g6 sg7 I9 sg8 (S'tpgt' S'1' tp115 sa(dp116 g13 (lp117 (lp118 (dp119 g4 I39 sg5 g6 sg7 I13 sg8 (S'lun' S'1' tp120 sa(dp121 g4 I39 sg5 g17 sg7 I19 sg8 (S'backend' S'fileio:disk1' tp122 saa(lp123 (dp124 g4 I40 sg5 g6 sg7 I13 sg8 (S'lun' S'2' tp125 sa(dp126 g4 I40 sg5 g17 sg7 I19 sg8 (S'backend' S'fileio:disk2' tp127 saa(lp128 (dp129 g4 I41 sg5 g6 sg7 I13 sg8 (S'portal' S'0.0.0.0:3260' tp130 saa(lp131 (dp132 g4 I43 sg5 g24 sg7 I13 sg8 (S'attribute' tp133 sa(dp134 g13 (lp135 (lp136 (dp137 g4 I44 sg5 g17 sg7 I17 sg8 (S'authentication' S'no' tp138 saa(lp139 (dp140 g4 I45 sg5 g17 sg7 I17 sg8 (S'cache_dynamic_acls' S'no' tp141 saa(lp142 (dp143 g4 I46 sg5 g17 sg7 I17 sg8 (S'default_cmdsn_depth' S'16' tp144 saa(lp145 
(dp146 g4 I47 sg5 g17 sg7 I17 sg8 (S'demo_mode_write_protect' S'no' tp147 saa(lp148 (dp149 g4 I48 sg5 g17 sg7 I17 sg8 (S'generate_node_acls' S'no' tp150 saa(lp151 (dp152 g4 I49 sg5 g17 sg7 I17 sg8 (S'login_timeout' S'15' tp153 saa(lp154 (dp155 g4 I50 sg5 g17 sg7 I17 sg8 (S'netif_timeout' S'2' tp156 saa(lp157 (dp158 g4 I51 sg5 g17 sg7 I17 sg8 (S'prod_mode_write_protect' S'no' tp159 saasg4 I43 sg5 g52 sg7 I23 saa(lp160 (dp161 g4 I54 sg5 g24 sg7 I13 sg8 (S'parameter' tp162 sa(dp163 g13 (lp164 (lp165 (dp166 g4 I55 sg5 g17 sg7 I17 sg8 (S'MaxConnections' S'12' tp167 saa(lp168 (dp169 g4 I56 sg5 g17 sg7 I17 sg8 (S'MaxOutstandingR2T' S'34' tp170 saa(lp171 (dp172 g4 I57 sg5 g17 sg7 I17 sg8 (S'TargetAlias' S'LIO Target' tp173 saa(lp174 (dp175 g4 I58 sg5 g17 sg7 I17 sg8 (S'AuthMethod' S'CHAP' tp176 saa(lp177 (dp178 g4 I59 sg5 g17 sg7 I17 sg8 (S'ImmediateData' S'yes' tp179 saa(lp180 (dp181 g4 I60 sg5 g17 sg7 I17 sg8 (S'MaxBurstLength' S'262144' tp182 saa(lp183 (dp184 g4 I61 sg5 g17 sg7 I17 sg8 (S'MaxRecvDataSegmentLength' S'8192' tp185 saa(lp186 (dp187 g4 I62 sg5 g17 sg7 I17 sg8 (S'HeaderDigest' S'CRC32C,None' tp188 saa(lp189 (dp190 g4 I63 sg5 g17 sg7 I17 sg8 (S'OFMarker' S'no' tp191 saasg4 I54 sg5 g52 sg7 I23 saa(lp192 (dp193 g4 I66 sg5 g6 sg7 I13 sg8 (S'acl' S'iqn.2003-01.org.linux-iscsi.targetcli.x8664:client1' tp194 sa(dp195 g13 (lp196 (lp197 (dp198 g4 I67 sg5 g24 sg7 I17 sg8 (S'attribute' tp199 sa(dp200 g13 (lp201 (lp202 (dp203 g4 I68 sg5 g17 sg7 I21 sg8 (S'dataout_timeout' S'3' tp204 saa(lp205 (dp206 g4 I69 sg5 g17 sg7 I21 sg8 (S'dataout_timeout_retries' S'5' tp207 saa(lp208 (dp209 g4 I70 sg5 g17 sg7 I21 sg8 (S'default_erl' S'0' tp210 saa(lp211 (dp212 g4 I71 sg5 g17 sg7 I21 sg8 (S'nopin_response_timeout' S'30' tp213 saa(lp214 (dp215 g4 I72 sg5 g17 sg7 I21 sg8 (S'nopin_timeout' S'15' tp216 saa(lp217 (dp218 g4 I73 sg5 g17 sg7 I21 sg8 (S'random_datain_pdu_offsets' S'no' tp219 saa(lp220 (dp221 g4 I74 sg5 g17 sg7 I21 sg8 (S'random_datain_seq_offsets' S'no' tp222 saa(lp223 
(dp224 g4 I75 sg5 g17 sg7 I21 sg8 (S'random_r2t_offsets' S'no' tp225 saasg4 I67 sg5 g52 sg7 I27 saa(lp226 (dp227 g4 I77 sg5 g24 sg7 I17 sg8 (S'auth' tp228 sa(dp229 g13 (lp230 (lp231 (dp232 g4 I78 sg5 g17 sg7 I21 sg8 (S'userid' S'jerome' tp233 saa(lp234 (dp235 g4 I79 sg5 g17 sg7 I21 sg8 (S'password' S'foobar' tp236 saa(lp237 (dp238 g4 I80 sg5 g17 sg7 I21 sg8 (S'userid_mutual' S'just_the2ofus' tp239 saa(lp240 (dp241 g4 I81 sg5 g17 sg7 I21 sg8 (S'password_mutual' S'mutupass' tp242 saasg4 I77 sg5 g52 sg7 I22 saa(lp243 (dp244 g4 I83 sg5 g6 sg7 I17 sg8 (S'mapped_lun' S'1' tp245 sa(dp246 g4 I83 sg5 g17 sg7 I30 sg8 (S'target_lun' S'1' tp247 saa(lp248 (dp249 g4 I84 sg5 g6 sg7 I17 sg8 (S'mapped_lun' S'2' tp250 sa(dp251 g4 I84 sg5 g17 sg7 I30 sg8 (S'target_lun' S'1' tp252 saa(lp253 (dp254 g4 I85 sg5 g6 sg7 I17 sg8 (S'mapped_lun' S'3' tp255 sa(dp256 g4 I85 sg5 g17 sg7 I30 sg8 (S'target_lun' S'1' tp257 saasg4 I66 sg5 g52 sg7 I71 saasg4 I38 sg5 g52 sg7 I16 saasg4 I37 sg5 g52 sg7 I74 saasg4 I36 sg5 g52 sg7 I14 saa. 
rtslib-3.0.pre4.1~g1b33ceb/tests/data/config_complete.lio0000664000000000000000000000444112443074135020106 0ustar storage fileio disk disk1 { path /tmp/disk1.img size 1MB attribute { block_size 512 optimal_sectors 1024 queue_depth 32 emulate_tas yes enforce_pr_isids yes emulate_dpo no emulate_tpu no is_nonrot no } } storage fileio disk disk2 { path /tmp/disk2.img size 1M attribute block_size 512 attribute optimal_sectors 1024 attribute queue_depth 32 } fabric iscsi discovery_auth { enable yes userid "target1" password "kjh45fDf_" mutual_userid no mutual_password no } fabric iscsi { target "iqn.2003-01.org.linux-iscsi.targetcli.x8664:sn.4699f8812c88" { tpgt 1 { lun 1 backend fileio:disk1 lun 2 backend fileio:disk2 portal 0.0.0.0:3260 attribute { authentication no cache_dynamic_acls no default_cmdsn_depth 16 demo_mode_write_protect no generate_node_acls no login_timeout 15 netif_timeout 2 prod_mode_write_protect no } parameter { MaxConnections 12 MaxOutstandingR2T 34 TargetAlias "LIO Target" AuthMethod "CHAP" ImmediateData yes MaxBurstLength 262144 MaxRecvDataSegmentLength 8192 HeaderDigest "CRC32C,None" OFMarker no } acl "iqn.2003-01.org.linux-iscsi.targetcli.x8664:client1" { attribute { dataout_timeout 3 dataout_timeout_retries 5 default_erl 0 nopin_response_timeout 30 nopin_timeout 15 random_datain_pdu_offsets no random_datain_seq_offsets no random_r2t_offsets no } auth { userid jerome password foobar userid_mutual just_the2ofus password_mutual mutupass } mapped_lun 1 target_lun 1 mapped_lun 2 target_lun 1 mapped_lun 3 target_lun 1 } } } } rtslib-3.0.pre4.1~g1b33ceb/tests/data/config_invalid_reference.lio0000664000000000000000000000443612443074135021746 0ustar storage fileio disk disk1 { path /tmp/disk1.img size 1MB attribute { block_size 512 optimal_sectors 1024 queue_depth 32 emulate_tas yes enforce_pr_isids yes emulate_dpo no emulate_tpu no is_nonrot no } } storage fileio disk disk2 { path /tmp/disk2.img size 1M attribute block_size 512 attribute 
optimal_sectors 1024 attribute queue_depth 32 } fabric iscsi discovery_auth { enable yes userid "target1" password "kjh45fDf_" mutual_userid no mutual_password no } fabric iscsi { target "iqn.2003-01.org.linux-iscsi.targetcli.x8664:sn.4699f8812c88" { tpgt 1 { lun 1 backend fileio:disk3 lun 2 backend fileio:disk2 portal 0.0.0.0:3260 attribute { authentication no cache_dynamic_acls no default_cmdsn_depth 16 demo_mode_write_protect no generate_node_acls no login_timeout 15 netif_timeout 2 prod_mode_write_protect no } parameter { MaxConnections 12 MaxOutstandingR2T 34 TargetAlias "LIO Target" AuthMethod "CHAP" ImmediateData yes MaxBurstLength 262144 MaxRecvDataSegmentLength 8192 HeaderDigest "CRC32C,None" OFMarker no } acl "iqn.2003-01.org.linux-iscsi.targetcli.x8664:client1" { attribute { dataout_timeout 3 dataout_timeout_retries 5 default_erl no nopin_response_timeout 30 nopin_timeout 15 random_datain_pdu_offsets no random_datain_seq_offsets no random_r2t_offsets no } auth { userid jerome password foobar userid_mutual just_the2ofus password_mutual mutupass } lun 1 target_lun 1 lun 2 target_lun 1 lun 3 target_lun 1 } } } } rtslib-3.0.pre4.1~g1b33ceb/tests/data/config_nested_blocks.ast0000664000000000000000000000223412443074135021117 0ustar (lp0 (lp1 (dp2 S'line' p3 I1 sS'type' p4 S'obj' p5 sS'col' p6 I1 sS'key' p7 (S'storage' p8 S'fileio' p9 tp10 sa(dp11 S'statements' p12 (lp13 (lp14 (dp15 g3 I2 sg4 g5 sg6 I5 sg7 (S'disk' p16 S'vm1' p17 tp18 sa(dp19 g12 (lp20 (lp21 (dp22 g3 I3 sg4 S'attr' p23 sg6 I9 sg7 (S'path' p24 S'/tmp/disk1.img' p25 tp26 saa(lp27 (dp28 g3 I4 sg4 g23 sg6 I9 sg7 (S'size' p29 S'1MB' p30 tp31 saa(lp32 (dp33 g3 I6 sg4 S'group' p34 sg6 I9 sg7 (S'attribute' p35 tp36 sa(dp37 g12 (lp38 (lp39 (dp40 g3 I7 sg4 g23 sg6 I13 sg7 (S'block_size' p41 S'512' p42 tp43 saa(lp44 (dp45 g3 I8 sg4 g23 sg6 I13 sg7 (S'optimal_sectors' p46 S'1024' p47 tp48 saa(lp49 (dp50 g3 I9 sg4 g23 sg6 I13 sg7 (S'queue_depth' p51 S'32' p52 tp53 saa(lp54 (dp55 g3 I11 sg4 g23 sg6 I13 sg7 
(S'emulate_tas' p56 S'yes' p57 tp58 saa(lp59 (dp60 g3 I12 sg4 g23 sg6 I13 sg7 (S'enforce_pr_isids' p61 S'yes' p62 tp63 saa(lp64 (dp65 g3 I14 sg4 g23 sg6 I13 sg7 (S'emulate_dpo' p66 S'no' p67 tp68 saa(lp69 (dp70 g3 I15 sg4 g23 sg6 I13 sg7 (S'emulate_tpu' p71 S'no' p72 tp73 saa(lp74 (dp75 g3 I16 sg4 g23 sg6 I13 sg7 (S'is_nonrot' p76 S'no' p77 tp78 saasg3 I6 sg4 S'block' p79 sg6 I19 saasg3 I2 sg4 g79 sg6 I14 saasg3 I1 sg4 g79 sg6 I16 saa.rtslib-3.0.pre4.1~g1b33ceb/tests/data/config_nested_blocks.lio0000664000000000000000000000053112443074135021111 0ustar storage fileio { disk vm1 { path /tmp/disk1.img size 1MB attribute { block_size 512 optimal_sectors 1024 queue_depth 32 emulate_tas yes enforce_pr_isids yes emulate_dpo no emulate_tpu no is_nonrot no } } } rtslib-3.0.pre4.1~g1b33ceb/tests/data/config_one_line.ast0000664000000000000000000000037712443074135020076 0ustar (lp0 (lp1 (dp2 S'line' p3 I1 sS'type' p4 S'obj' p5 sS'col' p6 I1 sS'key' p7 (S'fabric' p8 S'iscsi' p9 tp10 sa(dp11 g3 I1 sg4 S'group' p12 sg6 I14 sg7 (S'discovery_auth' p13 tp14 sa(dp15 g3 I1 sg4 S'attr' p16 sg6 I29 sg7 (S'enable' p17 S'yes' p18 tp19 saa.rtslib-3.0.pre4.1~g1b33ceb/tests/data/config_one_line.lio0000664000000000000000000000004712443074135020064 0ustar fabric iscsi discovery_auth enable yes rtslib-3.0.pre4.1~g1b33ceb/tests/data/config_ramdisk_fileio_iscsi.lio0000664000000000000000000001346512443074135022457 0ustar storage fileio disk disk1 { path /tmp/disk1.img size 1.0M buffered yes attribute block_size 512 attribute emulate_dpo 0 attribute emulate_fua_read 0 attribute emulate_fua_write 1 attribute emulate_rest_reord 0 attribute emulate_tas 1 attribute emulate_tpu 0 attribute emulate_tpws 0 attribute emulate_ua_intlck_ctrl 0 attribute emulate_write_cache 0 attribute enforce_pr_isids 1 attribute is_nonrot 0 attribute max_unmap_block_desc_count 0 attribute max_unmap_lba_count 0 attribute optimal_sectors 1024 attribute queue_depth 32 attribute unmap_granularity 0 attribute 
unmap_granularity_alignment 0 } storage fileio disk disk2 { path /tmp/disk2.img size 1.0M buffered yes attribute block_size 512 attribute emulate_dpo 0 attribute emulate_fua_read 0 attribute emulate_fua_write 1 attribute emulate_rest_reord 0 attribute emulate_tas 1 attribute emulate_tpu 0 attribute emulate_tpws 0 attribute emulate_ua_intlck_ctrl 0 attribute emulate_write_cache 0 attribute enforce_pr_isids 1 attribute is_nonrot 0 attribute max_unmap_block_desc_count 0 attribute max_unmap_lba_count 0 attribute optimal_sectors 1024 attribute queue_depth 32 attribute unmap_granularity 0 attribute unmap_granularity_alignment 0 } storage rd_mcp disk test { size 1.0M attribute block_size 512 attribute emulate_dpo 0 attribute emulate_fua_read 0 attribute emulate_fua_write 1 attribute emulate_rest_reord 0 attribute emulate_tas 1 attribute emulate_tpu 0 attribute emulate_tpws 0 attribute emulate_ua_intlck_ctrl 0 attribute emulate_write_cache 0 attribute enforce_pr_isids 1 attribute is_nonrot 0 attribute max_unmap_block_desc_count 0 attribute max_unmap_lba_count 0 attribute optimal_sectors 1024 attribute queue_depth 32 attribute unmap_granularity 0 attribute unmap_granularity_alignment 0 } storage rd_mcp disk test2 { size 1.0M attribute block_size 512 attribute emulate_dpo 0 attribute emulate_fua_read 0 attribute emulate_fua_write 1 attribute emulate_rest_reord 0 attribute emulate_tas 1 attribute emulate_tpu 0 attribute emulate_tpws 0 attribute emulate_ua_intlck_ctrl 0 attribute emulate_write_cache 0 attribute enforce_pr_isids 1 attribute is_nonrot 0 attribute max_unmap_block_desc_count 0 attribute max_unmap_lba_count 0 attribute optimal_sectors 1024 attribute queue_depth 32 attribute unmap_granularity 0 attribute unmap_granularity_alignment 0 } fabric iscsi { discovery_auth enable 1 discovery_auth userid target1 discovery_auth password kjh45fDf_ discovery_auth mutual_userid no discovery_auth mutual_password no } fabric iscsi target 
iqn.2003-01.org.linux-iscsi.targetcli.x8664:sn.4699f8812c88 tpgt 1 { attribute authentication 0 attribute cache_dynamic_acls 0 attribute default_cmdsn_depth 16 attribute demo_mode_write_protect 0 attribute generate_node_acls 0 attribute login_timeout 15 attribute netif_timeout 2 attribute prod_mode_write_protect 0 parameter AuthMethod CHAP parameter DataDigest "CRC32C,None" parameter DataPDUInOrder Yes parameter DataSequenceInOrder Yes parameter DefaultTime2Retain 20 parameter DefaultTime2Wait 2 parameter ErrorRecoveryLevel No parameter FirstBurstLength 65536 parameter HeaderDigest "CRC32C,None" parameter IFMarkInt "2048~65535" parameter IFMarker No parameter ImmediateData Yes parameter InitialR2T Yes parameter MaxBurstLength 262144 parameter MaxConnections 12 parameter MaxOutstandingR2T 34 parameter MaxRecvDataSegmentLength 8192 parameter OFMarkInt "2048~65535" parameter OFMarker No parameter TargetAlias "LIO Target" lun 1 backend fileio:disk1 lun 2 backend fileio:disk2 acl iqn.2003-01.org.linux-iscsi.targetcli.x8664:client1 { attribute dataout_timeout 3 attribute dataout_timeout_retries 5 attribute default_erl 0 attribute nopin_response_timeout 30 attribute nopin_timeout 15 attribute random_datain_pdu_offsets 0 attribute random_datain_seq_offsets 0 attribute random_r2t_offsets 0 auth password foobar auth password_mutual mutupass auth userid jerome auth userid_mutual just_the2ofus mapped_lun 1 { target_lun 1 write_protect 0 } mapped_lun 2 { target_lun 1 write_protect 0 } mapped_lun 3 { target_lun 1 write_protect 0 } } portal 0.0.0.0:3260 enable 1 } fabric iscsi target iqn.2003-01.org.linux-iscsi.targetcli.x8664:sn.caa307436d89 tpgt 1 { attribute authentication 1 attribute cache_dynamic_acls 0 attribute default_cmdsn_depth 16 attribute demo_mode_write_protect 1 attribute generate_node_acls 0 attribute login_timeout 15 attribute netif_timeout 2 attribute prod_mode_write_protect 0 parameter AuthMethod CHAP parameter DataDigest "CRC32C,None" parameter DataPDUInOrder 
Yes parameter DataSequenceInOrder Yes parameter DefaultTime2Retain 20 parameter DefaultTime2Wait 2 parameter ErrorRecoveryLevel 0 parameter FirstBurstLength 65536 parameter HeaderDigest "CRC32C,None" parameter IFMarkInt "2048~65535" parameter IFMarker No parameter ImmediateData Yes parameter InitialR2T Yes parameter MaxBurstLength 262144 parameter MaxConnections 1 parameter MaxOutstandingR2T 1 parameter MaxRecvDataSegmentLength 8192 parameter OFMarkInt "2048~65535" parameter OFMarker No parameter TargetAlias "LIO Target" lun 0 backend rd_mcp:test lun 1 backend rd_mcp:test2 enable 1 } rtslib-3.0.pre4.1~g1b33ceb/tests/data/config_sample_1.lio0000664000000000000000000001451012443074135017775 0ustar storage fileio disk test { buffered no path /tmp/test.img size 1.0MB attribute { block_size 512 emulate_3pc yes emulate_caw yes emulate_dpo no emulate_fua_read no emulate_fua_write yes emulate_model_alias no emulate_rest_reord no emulate_tas yes emulate_tpu no emulate_tpws no emulate_ua_intlck_ctrl no emulate_write_cache no enforce_pr_isids yes fabric_max_sectors 8192 is_nonrot no max_unmap_block_desc_count 1 max_unmap_lba_count 8192 max_write_same_len 4096 optimal_sectors 8192 queue_depth 128 unmap_granularity 1 unmap_granularity_alignment 0 } } storage rd_mcp { disk test { nullio no size 10.0MB attribute { block_size 512 emulate_3pc yes emulate_caw yes emulate_dpo no emulate_fua_read no emulate_fua_write no emulate_model_alias no emulate_rest_reord no emulate_tas yes emulate_tpu no emulate_tpws no emulate_ua_intlck_ctrl no emulate_write_cache no enforce_pr_isids yes fabric_max_sectors 8192 is_nonrot no max_unmap_block_desc_count 1 max_unmap_lba_count 8192 max_write_same_len 0 optimal_sectors 8192 queue_depth 128 unmap_granularity 0 unmap_granularity_alignment 0 } } disk test2 { nullio no size 10.0MB attribute { block_size 512 emulate_3pc yes emulate_caw yes emulate_dpo no emulate_fua_read no emulate_fua_write yes emulate_model_alias no emulate_rest_reord no emulate_tas 
yes emulate_tpu no emulate_tpws no emulate_ua_intlck_ctrl no emulate_write_cache no enforce_pr_isids yes fabric_max_sectors 8192 is_nonrot no max_unmap_block_desc_count 1 max_unmap_lba_count 8192 max_write_same_len 0 optimal_sectors 8192 queue_depth 128 unmap_granularity 0 unmap_granularity_alignment 0 } } disk test3 { nullio no size 10.0MB attribute { block_size 512 emulate_3pc yes emulate_caw yes emulate_dpo no emulate_fua_read no emulate_fua_write yes emulate_model_alias no emulate_rest_reord no emulate_tas yes emulate_tpu no emulate_tpws no emulate_ua_intlck_ctrl no emulate_write_cache no enforce_pr_isids yes fabric_max_sectors 8192 is_nonrot no max_unmap_block_desc_count 1 max_unmap_lba_count 8192 max_write_same_len 0 optimal_sectors 8192 queue_depth 128 unmap_granularity 0 unmap_granularity_alignment 0 } } disk test_nullio { nullio yes size 10.0MB attribute { block_size 512 emulate_3pc yes emulate_caw yes emulate_dpo no emulate_fua_read no emulate_fua_write yes emulate_model_alias no emulate_rest_reord no emulate_tas yes emulate_tpu no emulate_tpws no emulate_ua_intlck_ctrl no emulate_write_cache no enforce_pr_isids yes fabric_max_sectors 8192 is_nonrot no max_unmap_block_desc_count 1 max_unmap_lba_count 8192 max_write_same_len 0 optimal_sectors 8192 queue_depth 128 unmap_granularity 0 unmap_granularity_alignment 0 } } } fabric iscsi { discovery_auth { enable no mutual_password "" mutual_userid "" password "" userid "" } target iqn.2003-01.org.linux-iscsi.ws0.x8664:sn.690f8dd50f79 tpgt 1 { enable yes attribute { authentication yes cache_dynamic_acls no default_cmdsn_depth 64 default_erl 0 demo_mode_discovery yes demo_mode_write_protect yes generate_node_acls no login_timeout 15 netif_timeout 2 prod_mode_write_protect no } auth { password "" password_mutual "" userid "" userid_mutual "" } parameter { AuthMethod CHAP DataDigest "CRC32C,None" DataPDUInOrder yes DataSequenceInOrder yes DefaultTime2Retain 20 DefaultTime2Wait 2 ErrorRecoveryLevel no 
FirstBurstLength 65536 HeaderDigest "CRC32C,None" IFMarkInt "2048~65535" IFMarker no ImmediateData yes InitialR2T yes MaxBurstLength 262144 MaxConnections 1 MaxOutstandingR2T 1 MaxRecvDataSegmentLength 8192 MaxXmitDataSegmentLength 262144 OFMarkInt "2048~65535" OFMarker no TargetAlias "LIO Target" } lun 0 backend fileio:test acl iqn.2003-01.org.linux-iscsi.ws0.x8664:client1 { attribute { dataout_timeout 3 dataout_timeout_retries 5 default_erl 0 nopin_response_timeout 30 nopin_timeout 15 random_datain_pdu_offsets no random_datain_seq_offsets no random_r2t_offsets no } auth { password "" password_mutual "" userid "" userid_mutual "" } mapped_lun 0 { target_lun 0 write_protect no } } } } rtslib-3.0.pre4.1~g1b33ceb/tests/data/config_sample_2.lio0000664000000000000000000001407712443074135020006 0ustar storage fileio { disk file_buffered_1MB { buffered yes path /tmp/file_buffered_1MB size 1.0MB attribute { block_size 512 emulate_3pc yes emulate_caw yes emulate_dpo no emulate_fua_read no emulate_fua_write yes emulate_model_alias no emulate_rest_reord no emulate_tas yes emulate_tpu no emulate_tpws no emulate_ua_intlck_ctrl no emulate_write_cache yes enforce_pr_isids yes fabric_max_sectors 8192 is_nonrot no max_unmap_block_desc_count 1 max_unmap_lba_count 8192 max_write_same_len 4096 optimal_sectors 8192 queue_depth 128 unmap_granularity 1 unmap_granularity_alignment 0 } } disk file_no_option_2MB { buffered yes path /tmp/file_no_option_2MB size 2.0MB attribute { block_size 512 emulate_3pc yes emulate_caw yes emulate_dpo no emulate_fua_read no emulate_fua_write yes emulate_model_alias no emulate_rest_reord no emulate_tas yes emulate_tpu no emulate_tpws no emulate_ua_intlck_ctrl no emulate_write_cache yes enforce_pr_isids yes fabric_max_sectors 8192 is_nonrot no max_unmap_block_desc_count 1 max_unmap_lba_count 8192 max_write_same_len 4096 optimal_sectors 8192 queue_depth 128 unmap_granularity 1 unmap_granularity_alignment 0 } } disk file_sparse_1MB { buffered yes path 
/tmp/file_sparse_1MB size 1.0MB attribute { block_size 512 emulate_3pc yes emulate_caw yes emulate_dpo no emulate_fua_read no emulate_fua_write yes emulate_model_alias no emulate_rest_reord no emulate_tas yes emulate_tpu no emulate_tpws no emulate_ua_intlck_ctrl no emulate_write_cache yes enforce_pr_isids yes fabric_max_sectors 8192 is_nonrot no max_unmap_block_desc_count 1 max_unmap_lba_count 8192 max_write_same_len 4096 optimal_sectors 8192 queue_depth 128 unmap_granularity 1 unmap_granularity_alignment 0 } } } fabric iscsi { discovery_auth { enable no mutual_password "" mutual_userid "" password "" userid "" } target iqn.2003-01.org.linux-iscsi.ws0.x8664:sn.31631c361eba tpgt 1 { enable yes attribute { authentication yes cache_dynamic_acls no default_cmdsn_depth 64 default_erl 0 demo_mode_discovery yes demo_mode_write_protect yes generate_node_acls no login_timeout 15 netif_timeout 2 prod_mode_write_protect no } auth { password "" password_mutual "" userid "" userid_mutual "" } parameter { AuthMethod CHAP DataDigest "CRC32C,None" DataPDUInOrder yes DataSequenceInOrder yes DefaultTime2Retain 20 DefaultTime2Wait 2 ErrorRecoveryLevel no FirstBurstLength 65536 HeaderDigest "CRC32C,None" IFMarkInt "2048~65535" IFMarker no ImmediateData yes InitialR2T yes MaxBurstLength 262144 MaxConnections 1 MaxOutstandingR2T 1 MaxRecvDataSegmentLength 8192 MaxXmitDataSegmentLength 262144 OFMarkInt "2048~65535" OFMarker no TargetAlias "LIO Target" } lun 0 backend fileio:file_buffered_1MB lun 1 backend fileio:file_no_option_2MB lun 2 backend fileio:file_sparse_1MB acl iqn.2003-01.org.linux-iscsi.ws0.x8664:client1 { attribute { dataout_timeout 3 dataout_timeout_retries 5 default_erl 0 nopin_response_timeout 30 nopin_timeout 15 random_datain_pdu_offsets no random_datain_seq_offsets no random_r2t_offsets no } auth { password "" password_mutual "" userid "" userid_mutual "" } mapped_lun 0 { target_lun 0 write_protect no } mapped_lun 1 { target_lun 1 write_protect no } mapped_lun 2 { 
target_lun 2 write_protect no } } acl iqn.2003-01.org.linux-iscsi.ws0.x8664:client2 { attribute { dataout_timeout 3 dataout_timeout_retries 5 default_erl 0 nopin_response_timeout 30 nopin_timeout 15 random_datain_pdu_offsets no random_datain_seq_offsets no random_r2t_offsets no } auth { password "" password_mutual "" userid "" userid_mutual "" } mapped_lun 0 { target_lun 0 write_protect no } mapped_lun 1 { target_lun 1 write_protect no } mapped_lun 2 { target_lun 2 write_protect no } } } } rtslib-3.0.pre4.1~g1b33ceb/tests/data/config_sample_3.lio0000664000000000000000000001376612443074135020013 0ustar storage fileio { disk file_no_option_2MB { buffered yes path /tmp/file_no_option_2MB size 2.0MB attribute { block_size 512 emulate_3pc yes emulate_caw yes emulate_dpo no emulate_fua_read no emulate_fua_write yes emulate_model_alias no emulate_rest_reord no emulate_tas yes emulate_tpu no emulate_tpws no emulate_ua_intlck_ctrl no emulate_write_cache yes enforce_pr_isids yes fabric_max_sectors 8192 is_nonrot no max_unmap_block_desc_count 1 max_unmap_lba_count 8192 max_write_same_len 4096 optimal_sectors 8192 queue_depth 128 unmap_granularity 1 unmap_granularity_alignment 0 } } disk file_sparse_1MB { buffered yes path /tmp/file_sparse_1MB size 1.0MB attribute { block_size 512 emulate_3pc yes emulate_caw yes emulate_dpo no emulate_fua_read no emulate_fua_write yes emulate_model_alias no emulate_rest_reord no emulate_tas yes emulate_tpu no emulate_tpws no emulate_ua_intlck_ctrl no emulate_write_cache yes enforce_pr_isids yes fabric_max_sectors 8192 is_nonrot no max_unmap_block_desc_count 1 max_unmap_lba_count 8192 max_write_same_len 4096 optimal_sectors 8192 queue_depth 128 unmap_granularity 1 unmap_granularity_alignment 0 } } disk file_unbuffered_1MB { buffered yes path /tmp/file_unbuffered_1MB size 1.0MB attribute { block_size 512 emulate_3pc yes emulate_caw yes emulate_dpo no emulate_fua_read no emulate_fua_write yes emulate_model_alias no emulate_rest_reord no 
emulate_tas yes emulate_tpu no emulate_tpws no emulate_ua_intlck_ctrl no emulate_write_cache yes enforce_pr_isids yes fabric_max_sectors 8192 is_nonrot no max_unmap_block_desc_count 1 max_unmap_lba_count 8192 max_write_same_len 4096 optimal_sectors 8192 queue_depth 128 unmap_granularity 1 unmap_granularity_alignment 0 } } } fabric iscsi { discovery_auth { enable no mutual_password "" mutual_userid "" password "" userid "" } target iqn.2003-01.org.linux-iscsi.ws0.x8664:sn.31631c361eba tpgt 1 { enable yes attribute { authentication yes cache_dynamic_acls no default_cmdsn_depth 64 default_erl 0 demo_mode_discovery yes demo_mode_write_protect yes generate_node_acls no login_timeout 15 netif_timeout 2 prod_mode_write_protect no } auth { password "" password_mutual "" userid "" userid_mutual "" } parameter { AuthMethod CHAP DataDigest "CRC32C,None" DataPDUInOrder yes DataSequenceInOrder yes DefaultTime2Retain 20 DefaultTime2Wait 2 ErrorRecoveryLevel no FirstBurstLength 65536 HeaderDigest "CRC32C,None" IFMarkInt "2048~65535" IFMarker no ImmediateData yes InitialR2T yes MaxBurstLength 262144 MaxConnections 1 MaxOutstandingR2T 1 MaxRecvDataSegmentLength 8192 MaxXmitDataSegmentLength 262144 OFMarkInt "2048~65535" OFMarker no TargetAlias "LIO Target" } lun 1 backend fileio:file_no_option_2MB lun 2 backend fileio:file_sparse_1MB acl iqn.2003-01.org.linux-iscsi.ws0.x8664:client1 { attribute { dataout_timeout 3 dataout_timeout_retries 5 default_erl 0 nopin_response_timeout 30 nopin_timeout 15 random_datain_pdu_offsets no random_datain_seq_offsets no random_r2t_offsets no } auth { password "" password_mutual "" userid "" userid_mutual "" } mapped_lun 1 { target_lun 1 write_protect no } mapped_lun 2 { target_lun 2 write_protect no } } acl iqn.2003-01.org.linux-iscsi.ws0.x8664:client2 { attribute { dataout_timeout 3 dataout_timeout_retries 5 default_erl 0 nopin_response_timeout 30 nopin_timeout 15 random_datain_pdu_offsets no random_datain_seq_offsets no random_r2t_offsets no } 
auth { password "" password_mutual "" userid "" userid_mutual "" } mapped_lun 1 { target_lun 1 write_protect no } mapped_lun 2 { target_lun 2 write_protect no } } } } fabric loopback target naa.60014054793b60dd { lun 0 backend fileio:file_no_option_2MB lun 1 backend fileio:file_sparse_1MB lun 2 backend fileio:file_unbuffered_1MB } rtslib-3.0.pre4.1~g1b33ceb/tests/data/config_sample_4.lio0000664000000000000000000001411612443074135020002 0ustar storage fileio { disk file_no_option_2MB { buffered yes path /tmp/file_no_option_2mb size 2.0MB attribute { block_size 512 emulate_3pc yes emulate_caw yes emulate_dpo no emulate_fua_read no emulate_fua_write yes emulate_model_alias no emulate_rest_reord no emulate_tas yes emulate_tpu no emulate_tpws no emulate_ua_intlck_ctrl no emulate_write_cache yes enforce_pr_isids yes fabric_max_sectors 8192 is_nonrot no max_unmap_block_desc_count 1 max_unmap_lba_count 8192 max_write_same_len 4096 optimal_sectors 8192 queue_depth 128 unmap_granularity 1 unmap_granularity_alignment 0 } } disk file_sparse_1MB { buffered yes path /tmp/file_sparse_1mb size 1.0MB attribute { block_size 512 emulate_3pc yes emulate_caw yes emulate_dpo no emulate_fua_read no emulate_fua_write yes emulate_model_alias no emulate_rest_reord no emulate_tas yes emulate_tpu no emulate_tpws no emulate_ua_intlck_ctrl no emulate_write_cache yes enforce_pr_isids yes fabric_max_sectors 8192 is_nonrot no max_unmap_block_desc_count 1 max_unmap_lba_count 8192 max_write_same_len 4096 optimal_sectors 8192 queue_depth 128 unmap_granularity 1 unmap_granularity_alignment 0 } } disk file_unbuffered_1MB { buffered yes path /tmp/file_unbuffered_1mb size 1.0MB attribute { block_size 512 emulate_3pc yes emulate_caw yes emulate_dpo no emulate_fua_read no emulate_fua_write yes emulate_model_alias no emulate_rest_reord no emulate_tas yes emulate_tpu no emulate_tpws no emulate_ua_intlck_ctrl no emulate_write_cache yes enforce_pr_isids yes fabric_max_sectors 8192 is_nonrot no 
max_unmap_block_desc_count 1 max_unmap_lba_count 8192 max_write_same_len 4096 optimal_sectors 8192 queue_depth 128 unmap_granularity 1 unmap_granularity_alignment 0 } } } fabric iscsi { discovery_auth { enable no mutual_password "" mutual_userid "" password "" userid "" } target iqn.2003-01.org.linux-iscsi.ws0.x8664:sn.31631c361eba tpgt 1 { enable yes attribute { authentication yes cache_dynamic_acls no default_cmdsn_depth 64 default_erl 0 demo_mode_discovery yes demo_mode_write_protect yes generate_node_acls no login_timeout 15 netif_timeout 2 prod_mode_write_protect no } auth { password "" password_mutual "" userid "" userid_mutual "" } parameter { AuthMethod CHAP DataDigest "CRC32C,None" DataPDUInOrder yes DataSequenceInOrder yes DefaultTime2Retain 20 DefaultTime2Wait 2 ErrorRecoveryLevel no FirstBurstLength 65536 HeaderDigest "CRC32C,None" IFMarkInt "2048~65535" IFMarker no ImmediateData yes InitialR2T yes MaxBurstLength 262144 MaxConnections 1 MaxOutstandingR2T 1 MaxRecvDataSegmentLength 8192 MaxXmitDataSegmentLength 262144 OFMarkInt "2048~65535" OFMarker no TargetAlias "LIO Target" } lun 1 backend fileio:file_no_option_2MB lun 2 backend fileio:file_sparse_1MB acl iqn.2003-01.org.linux-iscsi.ws0.x8664:client1 { attribute { dataout_timeout 3 dataout_timeout_retries 5 default_erl 0 nopin_response_timeout 30 nopin_timeout 15 random_datain_pdu_offsets no random_datain_seq_offsets no random_r2t_offsets no } auth { password "" password_mutual "" userid "" userid_mutual "" } mapped_lun 1 { target_lun 1 write_protect no } mapped_lun 2 { target_lun 2 write_protect no } } acl iqn.2003-01.org.linux-iscsi.ws0.x8664:client2 { attribute { dataout_timeout 3 dataout_timeout_retries 5 default_erl 0 nopin_response_timeout 30 nopin_timeout 15 random_datain_pdu_offsets no random_datain_seq_offsets no random_r2t_offsets no } auth { password "" password_mutual "" userid "" userid_mutual "" } mapped_lun 1 { target_lun 1 write_protect no } mapped_lun 2 { target_lun 2 write_protect no 
            }
        }
    }
}
fabric loopback target naa.60014054793b60dd {
    lun 0 backend fileio:file_no_option_2MB
    lun 1 backend fileio:file_sparse_1MB
    lun 2 backend fileio:file_unbuffered_1MB
}
fabric vhost target naa.6001405d7e35b513 tpgt 1 lun 0 backend fileio:file_no_option_2MB
rtslib-3.0.pre4.1~g1b33ceb/tests/data/config_sample_5.lio0000664000000000000000000001653512443074135020012 0ustar 
storage fileio {
    disk disk1 {
        buffered yes
        path /tmp/disk1.img
        size 1.0MB
        attribute {
            block_size 512
            emulate_3pc yes
            emulate_caw yes
            emulate_dpo no
            emulate_fua_read no
            emulate_fua_write yes
            emulate_model_alias no
            emulate_rest_reord no
            emulate_tas yes
            emulate_tpu no
            emulate_tpws no
            emulate_ua_intlck_ctrl no
            emulate_write_cache no
            enforce_pr_isids yes
            fabric_max_sectors 8192
            is_nonrot no
            max_unmap_block_desc_count 0
            max_unmap_lba_count 0
            max_write_same_len 4096
            optimal_sectors 1024
            queue_depth 32
            unmap_granularity 0
            unmap_granularity_alignment 0
        }
    }
    disk disk2 {
        buffered yes
        path /tmp/disk2.img
        size 1.0MB
        attribute {
            block_size 512
            emulate_3pc yes
            emulate_caw yes
            emulate_dpo no
            emulate_fua_read no
            emulate_fua_write yes
            emulate_model_alias no
            emulate_rest_reord no
            emulate_tas yes
            emulate_tpu no
            emulate_tpws no
            emulate_ua_intlck_ctrl no
            emulate_write_cache no
            enforce_pr_isids yes
            fabric_max_sectors 8192
            is_nonrot no
            max_unmap_block_desc_count 0
            max_unmap_lba_count 0
            max_write_same_len 4096
            optimal_sectors 1024
            queue_depth 32
            unmap_granularity 0
            unmap_granularity_alignment 0
        }
    }
}
storage rd_mcp {
    disk test {
        nullio no
        size 1.0MB
        attribute {
            block_size 512
            emulate_3pc yes
            emulate_caw yes
            emulate_dpo no
            emulate_fua_read no
            emulate_fua_write yes
            emulate_model_alias no
            emulate_rest_reord no
            emulate_tas yes
            emulate_tpu no
            emulate_tpws no
            emulate_ua_intlck_ctrl no
            emulate_write_cache no
            enforce_pr_isids yes
            fabric_max_sectors 8192
            is_nonrot no
            max_unmap_block_desc_count 1
            max_unmap_lba_count 8192
            max_write_same_len 0
            optimal_sectors 1024
            queue_depth 32
            unmap_granularity 0
            unmap_granularity_alignment 0
        }
    }
    disk test2 {
        nullio no
        size 1.0MB
        attribute {
            block_size 512
            emulate_3pc yes
            emulate_caw yes
            emulate_dpo no
            emulate_fua_read no
            emulate_fua_write yes
            emulate_model_alias no
            emulate_rest_reord no
            emulate_tas yes
            emulate_tpu no
            emulate_tpws no
            emulate_ua_intlck_ctrl no
            emulate_write_cache no
            enforce_pr_isids yes
            fabric_max_sectors 8192
            is_nonrot no
            max_unmap_block_desc_count 1
            max_unmap_lba_count 8192
            max_write_same_len 0
            optimal_sectors 1024
            queue_depth 32
            unmap_granularity 0
            unmap_granularity_alignment 0
        }
    }
}
fabric iscsi {
    discovery_auth {
        enable yes
        mutual_password no
        mutual_userid no
        password kjh45fDf_
        userid target1
    }
    target iqn.2003-01.org.linux-iscsi.targetcli.x8664:sn.4699f8812c88 tpgt 1 {
        enable yes
        attribute {
            authentication no
            cache_dynamic_acls no
            default_cmdsn_depth 16
            default_erl 0
            demo_mode_discovery yes
            demo_mode_write_protect no
            generate_node_acls no
            login_timeout 15
            netif_timeout 2
            prod_mode_write_protect no
        }
        auth {
            password ""
            password_mutual ""
            userid ""
            userid_mutual ""
        }
        parameter {
            AuthMethod CHAP
            DataDigest "CRC32C,None"
            DataPDUInOrder yes
            DataSequenceInOrder yes
            DefaultTime2Retain 20
            DefaultTime2Wait 2
            ErrorRecoveryLevel no
            FirstBurstLength 65536
            HeaderDigest "CRC32C,None"
            IFMarkInt "2048~65535"
            IFMarker no
            ImmediateData yes
            InitialR2T yes
            MaxBurstLength 262144
            MaxConnections 12
            MaxOutstandingR2T 34
            MaxRecvDataSegmentLength 8192
            MaxXmitDataSegmentLength 262144
            OFMarkInt "2048~65535"
            OFMarker no
            TargetAlias "LIO Target"
        }
        lun 1 backend fileio:disk1
        lun 2 backend fileio:disk2
        acl iqn.2003-01.org.linux-iscsi.targetcli.x8664:client1 {
            attribute {
                dataout_timeout 3
                dataout_timeout_retries 5
                default_erl 0
                nopin_response_timeout 30
                nopin_timeout 15
                random_datain_pdu_offsets no
                random_datain_seq_offsets no
                random_r2t_offsets no
            }
            auth {
                password foobar
                password_mutual mutupass
                userid jerome
                userid_mutual just_the2ofus
            }
            mapped_lun 1 {
                target_lun 1
                write_protect no
            }
            mapped_lun 2 {
                target_lun 1
                write_protect no
            }
            mapped_lun 3 {
                target_lun 1
                write_protect no
            }
        }
        portal 0.0.0.0:3260
    }
    target iqn.2003-01.org.linux-iscsi.targetcli.x8664:sn.caa307436d89 tpgt 1 {
        enable yes
        attribute {
            authentication yes
            cache_dynamic_acls no
            default_cmdsn_depth 16
            default_erl 0
            demo_mode_discovery yes
            demo_mode_write_protect yes
            generate_node_acls no
            login_timeout 15
            netif_timeout 2
            prod_mode_write_protect no
        }
        auth {
            password ""
            password_mutual ""
            userid ""
            userid_mutual ""
        }
        parameter {
            AuthMethod CHAP
            DataDigest "CRC32C,None"
            DataPDUInOrder yes
            DataSequenceInOrder yes
            DefaultTime2Retain 20
            DefaultTime2Wait 2
            ErrorRecoveryLevel no
            FirstBurstLength 65536
            HeaderDigest "CRC32C,None"
            IFMarkInt "2048~65535"
            IFMarker no
            ImmediateData yes
            InitialR2T yes
            MaxBurstLength 262144
            MaxConnections 1
            MaxOutstandingR2T 1
            MaxRecvDataSegmentLength 8192
            MaxXmitDataSegmentLength 262144
            OFMarkInt "2048~65535"
            OFMarker no
            TargetAlias "LIO Target"
        }
        lun 0 backend rd_mcp:test
        lun 1 backend rd_mcp:test2
    }
}
rtslib-3.0.pre4.1~g1b33ceb/tests/data/config_sample_6.lio0000664000000000000000000002016612443074135020006 0ustar 
storage iblock {
    disk test0 {
        path /tmp/test_blockdev_0
        attribute {
            block_size 512
            emulate_3pc yes
            emulate_caw yes
            emulate_dpo no
            emulate_fua_read no
            emulate_fua_write yes
            emulate_model_alias no
            emulate_rest_reord no
            emulate_tas yes
            emulate_tpu no
            emulate_tpws no
            emulate_ua_intlck_ctrl no
            emulate_write_cache no
            enforce_pr_isids yes
            fabric_max_sectors 8192
            is_nonrot no
            max_unmap_block_desc_count 1
            max_unmap_lba_count 8192
            max_write_same_len 4096
            optimal_sectors 8192
            queue_depth 128
            unmap_granularity 1
            unmap_granularity_alignment 0
        }
    }
    disk test1 {
        path /tmp/test_blockdev_1
        attribute {
            block_size 512
            emulate_3pc yes
            emulate_caw yes
            emulate_dpo no
            emulate_fua_read no
            emulate_fua_write yes
            emulate_model_alias no
            emulate_rest_reord no
            emulate_tas yes
            emulate_tpu no
            emulate_tpws no
            emulate_ua_intlck_ctrl no
            emulate_write_cache no
            enforce_pr_isids yes
            fabric_max_sectors 8192
            is_nonrot no
            max_unmap_block_desc_count 1
            max_unmap_lba_count 8192
            max_write_same_len 4096
            optimal_sectors 8192
            queue_depth 128
            unmap_granularity 1
            unmap_granularity_alignment 0
        }
    }
    disk test2 {
        path /tmp/test_blockdev_2
        attribute {
            block_size 512
            emulate_3pc yes
            emulate_caw yes
            emulate_dpo no
            emulate_fua_read no
            emulate_fua_write yes
            emulate_model_alias no
            emulate_rest_reord no
            emulate_tas yes
            emulate_tpu no
            emulate_tpws no
            emulate_ua_intlck_ctrl no
            emulate_write_cache no
            enforce_pr_isids yes
            fabric_max_sectors 8192
            is_nonrot no
            max_unmap_block_desc_count 1
            max_unmap_lba_count 8192
            max_write_same_len 4096
            optimal_sectors 8192
            queue_depth 128
            unmap_granularity 1
            unmap_granularity_alignment 0
        }
    }
}
storage rd_mcp {
    disk test {
        nullio no
        size 10.0MB
        attribute {
            block_size 512
            emulate_3pc yes
            emulate_caw yes
            emulate_dpo no
            emulate_fua_read no
            emulate_fua_write no
            emulate_model_alias no
            emulate_rest_reord no
            emulate_tas yes
            emulate_tpu no
            emulate_tpws no
            emulate_ua_intlck_ctrl no
            emulate_write_cache no
            enforce_pr_isids yes
            fabric_max_sectors 8192
            is_nonrot no
            max_unmap_block_desc_count 1
            max_unmap_lba_count 8192
            max_write_same_len 0
            optimal_sectors 8192
            queue_depth 128
            unmap_granularity 0
            unmap_granularity_alignment 0
        }
    }
    disk test2 {
        nullio no
        size 10.0MB
        attribute {
            block_size 512
            emulate_3pc yes
            emulate_caw yes
            emulate_dpo no
            emulate_fua_read no
            emulate_fua_write yes
            emulate_model_alias no
            emulate_rest_reord no
            emulate_tas yes
            emulate_tpu no
            emulate_tpws no
            emulate_ua_intlck_ctrl no
            emulate_write_cache no
            enforce_pr_isids yes
            fabric_max_sectors 8192
            is_nonrot no
            max_unmap_block_desc_count 1
            max_unmap_lba_count 8192
            max_write_same_len 0
            optimal_sectors 8192
            queue_depth 128
            unmap_granularity 0
            unmap_granularity_alignment 0
        }
    }
    disk test3 {
        nullio no
        size 10.0MB
        attribute {
            block_size 512
            emulate_3pc yes
            emulate_caw yes
            emulate_dpo no
            emulate_fua_read no
            emulate_fua_write yes
            emulate_model_alias no
            emulate_rest_reord no
            emulate_tas yes
            emulate_tpu no
            emulate_tpws no
            emulate_ua_intlck_ctrl no
            emulate_write_cache no
            enforce_pr_isids yes
            fabric_max_sectors 8192
            is_nonrot no
            max_unmap_block_desc_count 1
            max_unmap_lba_count 8192
            max_write_same_len 0
            optimal_sectors 8192
            queue_depth 128
            unmap_granularity 0
            unmap_granularity_alignment 0
        }
    }
    disk test_nullio {
        nullio yes
        size 10.0MB
        attribute {
            block_size 512
            emulate_3pc yes
            emulate_caw yes
            emulate_dpo no
            emulate_fua_read no
            emulate_fua_write yes
            emulate_model_alias no
            emulate_rest_reord no
            emulate_tas yes
            emulate_tpu no
            emulate_tpws no
            emulate_ua_intlck_ctrl no
            emulate_write_cache no
            enforce_pr_isids yes
            fabric_max_sectors 8192
            is_nonrot no
            max_unmap_block_desc_count 1
            max_unmap_lba_count 4096
            max_write_same_len 0
            optimal_sectors 8192
            queue_depth 128
            unmap_granularity 0
            unmap_granularity_alignment 0
        }
    }
}
fabric iscsi {
    discovery_auth {
        enable no
        mutual_password ""
        mutual_userid ""
        password ""
        userid ""
    }
    target iqn.2003-01.org.linux-iscsi.ws0.x8664:sn.690f8dd50f79 tpgt 1 {
        enable yes
        attribute {
            authentication yes
            cache_dynamic_acls no
            default_cmdsn_depth 64
            default_erl 0
            demo_mode_discovery yes
            demo_mode_write_protect yes
            generate_node_acls no
            login_timeout 15
            netif_timeout 2
            prod_mode_write_protect no
        }
        auth {
            password ""
            password_mutual ""
            userid ""
            userid_mutual ""
        }
        parameter {
            AuthMethod CHAP
            DataDigest "CRC32C,None"
            DataPDUInOrder yes
            DataSequenceInOrder yes
            DefaultTime2Retain 20
            DefaultTime2Wait 2
            ErrorRecoveryLevel no
            FirstBurstLength 65536
            HeaderDigest "CRC32C,None"
            IFMarkInt "2048~65535"
            IFMarker no
            ImmediateData yes
            InitialR2T yes
            MaxBurstLength 262144
            MaxConnections 1
            MaxOutstandingR2T 1
            MaxRecvDataSegmentLength 8192
            MaxXmitDataSegmentLength 262144
            OFMarkInt "2048~65535"
            OFMarker no
            TargetAlias "LIO Target"
        }
        lun 0 backend iblock:test0
        lun 1 backend iblock:test1
        acl iqn.2003-01.org.linux-iscsi.ws0.x8664:client1 {
            attribute {
                dataout_timeout 3
                dataout_timeout_retries 5
                default_erl 0
                nopin_response_timeout 30
                nopin_timeout 15
                random_datain_pdu_offsets no
                random_datain_seq_offsets no
                random_r2t_offsets no
            }
            auth {
                password ""
                password_mutual ""
                userid ""
                userid_mutual ""
            }
            mapped_lun 0 {
                target_lun 0
                write_protect no
            }
        }
    }
}
rtslib-3.0.pre4.1~g1b33ceb/tests/data/config_strings.ast0000664000000000000000000000270012443074135017767 0ustar (lp0 (lp1 (dp2 S'line' p3 I2 sS'type' p4 S'obj' p5 sS'col' p6 I1 sS'key' p7 (S'storage' p8 S'fileio' p9 tp10 sa(dp11 S'statements' p12 (lp13 (lp14 (dp15 g3 I3 sg4 g5 sg6 I5 sg7 (S'disk' p16 S'This:is:a_long_name_for_disk' p17 tp18 sa(dp19 g12 (lp20 (lp21 (dp22 g3 I4 sg4 S'attr' p23 sg6 I9 sg7 (S'path' p24 S'/tmp/disk1.img' p25 tp26 saa(lp27 (dp28 g3 I5 sg4 g23 sg6 I9 sg7 (S'size' p29 S'1MB' p30 tp31 saa(lp32 (dp33 g3 I7 sg4 S'group' p34 sg6 I9 sg7 (S'attribute' p35 tp36 sa(dp37 g12 (lp38 (lp39 (dp40 g3 I9 sg4 g23 sg6 I13 sg7 (S'block_size' p41 S'512' p42 tp43 saa(lp44 (dp45 g3 I10 sg4 g23 sg6 I13 sg7 (S'optimal_sectors' p46 S'1024' p47 tp48 saa(lp49 (dp50 g3 I11 sg4 g23 sg6 I13 sg7 (S'queue_depth' p51 S'32' p52 tp53 saa(lp54 (dp55 g3 I12 sg4 g23 sg6 I13 sg7 (S'fancy_attribute' p56 S'This is a fancy attribute that takes a long value' p57 tp58 saa(lp59 (dp60 g3 I14 sg4 g23 sg6 I13 sg7 (S'emulate_tas' p61 S'yes_I_do_want_to_enable_this_%!?@_functionnality!' p62 tp63 saa(lp64 (dp65 S'comment' p66 S'EOL comment' p67 sg3 I15 sg4 g23 sg6 I13 sg7 (S'enforce_pr_isids' p68 S'yes' p69 tp70 saa(lp71 (dp72 g3 I17 sg4 g23 sg6 I13 sg7 (S'emulate_dpo' p73 S'no' p74 tp75 saa(lp76 (dp77 g66 S'Hello there!' p78 sg3 I18 sg4 g23 sg6 I13 sg7 (S'emulate_tpu' p79 S'no' p80 tp81 saa(lp82 (dp83 g66 S'Does what it says?' 
p84 sg3 I19 sg4 g23 sg6 I13 sg7 (S'is_nonrot' p85 S'no' p86 tp87 saasg3 I7 sg4 S'block' p88 sg6 I19 saasg3 I3 sg4 g88 sg6 I39 saasg3 I2 sg4 g88 sg6 I16 saa.
rtslib-3.0.pre4.1~g1b33ceb/tests/data/config_strings.lio0000664000000000000000000000134412443074135017766 0ustar 
# This is a comment before the first statement
storage fileio {
    disk This:is:a_long_name_for_disk {
        path /tmp/disk1.img
        size 1MB

        # This is an indented comment after size and before a group
        attribute {
            # This is an indented comment after a group
            block_size 512
            optimal_sectors 1024
            queue_depth 32
            fancy_attribute "This is a fancy attribute that takes a long value"

            emulate_tas yes_I_do_want_to_enable_this_%!?@_functionnality!
            enforce_pr_isids yes # EOL comment

            emulate_dpo no
            emulate_tpu no # Hello there!
            is_nonrot no # Does what it says?
        }
    }
}
# Last words comment
rtslib-3.0.pre4.1~g1b33ceb/tests/safe/test_config_parser.py0000664000000000000000000000777412443074135020513 0ustar 
import sys, pprint, logging, unittest, cPickle

from rtslib import config_parser

# TODO Add PolicyParser tests

logging.basicConfig()
log = logging.getLogger('TestConfigParser')
log.setLevel(logging.INFO)

class TestConfigParser(unittest.TestCase):

    parser = config_parser.ConfigParser()
    samples_dir = '../data'

    def test_one_line(self):
        print
        log.info(self._testMethodName)
        config = "%s/config_one_line.lio" % self.samples_dir
        parse_tree = self.parser.parse_file(config)
        for statement in parse_tree:
            log.debug(pprint.pformat(statement))
        # with open("%s.ast" % config[:-4], 'w') as f:
        #     cPickle.dump(parse_tree, f)
        with open("%s.ast" % config[:-4], 'r') as f:
            expected_tree = cPickle.load(f)
        self.failUnless(parse_tree == expected_tree)

    def test_basic(self):
        print
        log.info(self._testMethodName)
        config = "%s/config_basic.lio" % self.samples_dir
        parse_tree = self.parser.parse_file(config)
        for statement in parse_tree:
            log.debug(pprint.pformat(statement))
        # with open("%s.ast" % config[:-4], 'w') as f:
        #     cPickle.dump(parse_tree, f)
        with open("%s.ast" % config[:-4], 'r') as f:
            expected_tree = cPickle.load(f)
        self.failUnless(parse_tree == expected_tree)

    def test_attribute_group(self):
        print
        log.info(self._testMethodName)
        config = "%s/config_attribute_group.lio" % self.samples_dir
        parse_tree = self.parser.parse_file(config)
        for statement in parse_tree:
            log.debug(pprint.pformat(statement))
        # with open("%s.ast" % config[:-4], 'w') as f:
        #     cPickle.dump(parse_tree, f)
        with open("%s.ast" % config[:-4], 'r') as f:
            expected_tree = cPickle.load(f)
        self.failUnless(parse_tree == expected_tree)

    def test_nested_blocks(self):
        print
        log.info(self._testMethodName)
        config = "%s/config_nested_blocks.lio" % self.samples_dir
        parse_tree = self.parser.parse_file(config)
        for statement in parse_tree:
            log.debug(pprint.pformat(statement))
        # with open("%s.ast" % config[:-4], 'w') as f:
        #     cPickle.dump(parse_tree, f)
        with open("%s.ast" % config[:-4], 'r') as f:
            expected_tree = cPickle.load(f)
        self.failUnless(parse_tree == expected_tree)

    def test_comments(self):
        print
        log.info(self._testMethodName)
        config = "%s/config_comments.lio" % self.samples_dir
        parse_tree = self.parser.parse_file(config)
        for statement in parse_tree:
            log.debug(pprint.pformat(statement))
        # with open("%s.ast" % config[:-4], 'w') as f:
        #     cPickle.dump(parse_tree, f)
        with open("%s.ast" % config[:-4], 'r') as f:
            expected_tree = cPickle.load(f)
        self.failUnless(parse_tree == expected_tree)

    def test_strings(self):
        print
        log.info(self._testMethodName)
        config = "%s/config_strings.lio" % self.samples_dir
        parse_tree = self.parser.parse_file(config)
        for statement in parse_tree:
            log.debug(pprint.pformat(statement))
        # with open("%s.ast" % config[:-4], 'w') as f:
        #     cPickle.dump(parse_tree, f)
        with open("%s.ast" % config[:-4], 'r') as f:
            expected_tree = cPickle.load(f)
        self.failUnless(parse_tree == expected_tree)

    def test_complete(self):
        print
        log.info(self._testMethodName)
        config = "%s/config_complete.lio" % self.samples_dir
        parse_tree = self.parser.parse_file(config)
        for statement in parse_tree:
            log.debug(pprint.pformat(statement))
        # with open("%s.ast" % config[:-4], 'w') as f:
        #     cPickle.dump(parse_tree, f)
        with open("%s.ast" % config[:-4], 'r') as f:
            expected_tree = cPickle.load(f)
        self.failUnless(parse_tree == expected_tree)

if __name__ == '__main__':
    unittest.main()
rtslib-3.0.pre4.1~g1b33ceb/tests/safe/test_config.py0000664000000000000000000001221012443074135017120 0ustar 
import sys, pprint, logging, unittest, tempfile

from pyparsing import ParseException
from rtslib import config

logging.basicConfig()
log = logging.getLogger('TestConfig')
log.setLevel(logging.INFO)

class TestConfig(unittest.TestCase):

    samples_dir = '../data'

    def test_load_basic(self):
        print
        log.info(self._testMethodName)
        filepath = "%s/config_basic.lio" % self.samples_dir
        lio = config.Config()
        lio.load(filepath)
        tests = [("storage fileio", 'obj', 1),
                 ("storage fileio disk vm1 path /tmp/vm1.img", 'attr', 1),
                 ("storage fileio disk vm1 size 1.0MB", 'attr', 1),
                 ("storage .* disk .* .* .*", 'attr', 3)]
        for pattern, node_type, arity in tests:
            results = lio.search(pattern)
            log.debug("config.current.search(%s) -> (%d) %s"
                      % (pattern, len(results), results))
            self.failUnless(len(results) == arity)
            for result in results:
                self.failUnless(result.data['type'] == node_type)
        self.failUnless(lio.search("storage fileio disk vm1 path") == [])

    def test_load_complete(self):
        print
        log.info(self._testMethodName)
        filepath = "%s/config_complete.lio" % self.samples_dir
        lio = config.Config()
        lio.load(filepath)
        tests = [("storage fileio", 'obj', 1),
                 ("storage fileio disk disk1 path", None, 0),
                 ("storage fileio disk disk1 path /tmp/disk1.img", 'attr', 1),
                 ("storage fileio disk disk1 path /tmp/disk2.img", 'attr', 0),
                 ("storage fileio disk disk1 size 1.0MB", 'attr', 1),
                 ("storage fileio disk disk2 path /tmp/disk2.img", 'attr', 1),
                 ("storage .* disk .* .* .* .*", 'attr', 46),
                 ("storage .* disk .* attribute .* .*", 'attr', 46),
                 ("storage .* disk .* .* .*", 'attr', 6)]
        for pattern, node_type, arity in tests:
            results = lio.search(pattern)
            log.debug("config.current.search(%s) -> (%d) %s"
                      % (pattern, len(results), results))
            self.failUnless(len(results) == arity)
            for result in results:
                self.failUnless(result.data['type'] == node_type)

    def test_clear_undo(self):
        print
        log.info(self._testMethodName)
        filepath = "%s/config_complete.lio" % self.samples_dir
        lio = config.Config()
        log.info("Load config")
        lio.load(filepath)
        self.failUnless(len(lio.search("storage fileio disk disk2")) == 1)
        lio.clear()
        self.failUnless(len(lio.search("storage fileio disk disk2")) == 0)
        lio.undo()
        self.failUnless(len(lio.search("storage fileio disk disk2")) == 1)

    def test_load_save(self):
        print
        log.info(self._testMethodName)
        filepath = "%s/config_complete.lio" % self.samples_dir
        lio = config.Config()
        lio.load(filepath)
        with tempfile.NamedTemporaryFile(delete=False) as temp:
            log.debug("Saving initial config to %s" % temp.name)
            dump1 = lio.save(temp.name)
        lio.load(temp.name)
        with tempfile.NamedTemporaryFile(delete=False) as temp:
            log.debug("Saving reloaded config to %s" % temp.name)
            dump2 = lio.save(temp.name)
        self.failUnless(dump1 == dump2)

    def test_set_delete(self):
        print
        log.info(self._testMethodName)
        filepath = "%s/config_complete.lio" % self.samples_dir
        lio = config.Config()
        set1 = lio.search("storage fileio disk mydisk")
        set2 = lio.search("fabric iscsi discovery_auth enable yes")
        self.failUnless(len(set1) == len(set2) == 0)
        iqn = '"iqn.2003-01.org.linux-iscsi.targetcli.x8664:sn.foo"'
        lio.set("fabric iscsi target " + iqn)
        self.assertRaises(ParseException, lio.set,
                          "fabric iscsi discovery_auth")
        lio.set("fabric iscsi discovery_auth enable yes")
        lio.set("storage fileio disk vm1 {path /foo.img; size 1MB;}")
        self.assertRaises(ParseException, lio.set,
                          "storage fileio disk vm1 {path /foo.img; size 1MB}")
        lio.set("storage fileio disk mydisk")
        set1 = lio.search("storage fileio disk mydisk")
        set2 = lio.search("fabric iscsi discovery_auth enable yes")
        self.failUnless(len(set1) == len(set2) == 1)
        lio.delete("storage fileio disk mydisk")
        lio.delete("fabric iscsi discovery_auth enable yes")
        set1 = lio.search("storage fileio disk mydisk")
        set2 = lio.search("fabric iscsi discovery_auth enable yes")
        self.failUnless(len(set1) == 0)
        self.failUnless(len(set2) == 0)

    def test_invalid_reference(self):
        print
        log.info(self._testMethodName)
        filepath = "%s/config_invalid_reference.lio" % self.samples_dir
        lio = config.Config()
        self.assertRaisesRegexp(config.ConfigError, ".*Invalid.*disk3.*",
                                lio.load, filepath)
        lio = config.Config()

if __name__ == '__main__':
    unittest.main()
rtslib-3.0.pre4.1~g1b33ceb/tests/safe/test_config_tree.py0000664000000000000000000000764312443074135020145 0ustar 
import re, sys, pprint, logging, unittest

from rtslib import config_tree

logging.basicConfig()
log = logging.getLogger('TestConfigTree')
log.setLevel(logging.INFO)

class TestConfigTree(unittest.TestCase):

    def test_create(self):
        print
        log.info(self._testMethodName)
        tree = config_tree.ConfigTree()
        self.failUnless(tree.get(None) is None)
        self.failUnless(tree.get_path(None) is None)
        self.failUnless(tree.get_path([]) is None)
        self.failUnless(tree.get(()) is None)
        self.failUnless(tree.delete(None) is None)
        self.failUnless(tree.get_path(('a',)) is None)
        self.failUnless(tree.get_path([('a',), ('b',), ('c',)]) is None)

    def test_add_get_delete(self):
        print
        log.info(self._testMethodName)
        tree = config_tree.ConfigTree()
        n1 = tree.set(('1', '2'), {'info': 'n1'})
        nA = tree.set(('a', 'b'), {'info': 'nA'})
        n2 = n1.set(('3', '4'), {'info': 'n2'})
        nB = nA.set(('c', 'd'), {'info': 'nB'})
        node = tree.get([('1', '2'), ('3', '4')])
        self.failUnless(node.data['info'] == 'n2')
        node = tree.get([('1', '2')])
        self.failUnless(node.data['info'] == 'n1')
        node = tree.get([('a', 'b'), ('c', 'd')])
        self.failUnless(node.data['info'] == 'nB')
        self.failUnless(node.is_root == False)
        self.failUnless(tree.is_root == True)

    def test_node_paths(self):
        print
        log.info(self._testMethodName)
        tree = config_tree.ConfigTree()
        n1 = tree.set(('1', '2'), {'info': 'n1'})
        nA = tree.set(('a', 'b'), {'info': 'nA'})
        n2 = n1.set(('3', '4'), {'info': 'n2'})
        nB = nA.set(('c', 'd'), {'info': 'nB'})
        log.debug("root path: %s" % tree.path)
        log.debug("Node [1 2] path: %s" % n1.path)
        log.debug("Node [1 2 3 4] path: %s" % n2.path)
        log.debug("Node [a b] path: %s" % nA.path)
        log.debug("Node [a b c d] path: %s" % nB.path)

    def test_search(self):
        print
        log.info(self._testMethodName)
        tree = config_tree.ConfigTree()
        fileio = tree.set(('storage', 'fileio'))
        fileio.set(('disk', 'vm1'))
        fileio.set(('disk', 'vm2'))
        fileio.set(('disk', 'test1'))
        fileio.set(('disk', 'test2'))
        iblock = tree.set(('storage', 'iblock'))
        iblock.set(('disk', 'vm3'))
        iblock.set(('disk', 'vm4'))
        iblock.set(('disk', 'test1'))
        iblock.set(('disk', 'test2'))
        tests = [([("storage", ".*"), ("disk", "vm1")], 1),
                 ([("storage", ".*"), ("disk", "vm2")], 1),
                 ([("storage", ".*"), ("disk", "vm1")], 1),
                 ([("storage", "fileio"), ("disk", "vm[0-9]")], 2),
                 ([("storage", "file.*"), ("disk", "vm[0-9]")], 2),
                 ([("storage", ".*"), ("disk", "vm[0-9]")], 4),
                 ([("storage", ".*"), ("disk", ".*[12]")], 6),
                 ([("storage", ".*"), ("disk", ".*")], 8)]
        for search_path, arity in tests:
            nodes = tree.search(search_path)
            self.failUnless(len(nodes) == arity)
        log.debug("Deleting iblock subtree")
        for node in tree.search([(".*", "iblock")]):
            tree.delete(node.path)
        tests = [([(".*", ".*"), ("disk", "vm1")], 1),
                 ([(".*", ".*"), ("disk", "vm2")], 1),
                 ([("storage", ".*"), ("disk", "vm1")], 1),
                 ([(".*", "fileio"), ("disk", "vm[0-9]")], 2),
                 ([(".*", "file.*"), ("disk", "vm[0-9]")], 2),
                 ([(".*", ".*"), ("disk", "vm[0-9]")], 2),
                 ([(".*", ".*"), (".*", ".*[12]")], 4),
                 ([(".*", ".*"), (".*", ".*")], 4)]
        for search_path, arity in tests:
            nodes = tree.search(search_path)
            log.debug("search(%s) -> %s" % (search_path, nodes))
            self.failUnless(len(nodes) == arity)

if __name__ == '__main__':
    unittest.main()
rtslib-3.0.pre4.1~g1b33ceb/tests/system/test_dump_restore.py0000664000000000000000000000632112443074135020777 0ustar 
import os, sys, glob, logging, unittest, tempfile, difflib, rtslib

from pyparsing import ParseException

logging.basicConfig()
log = logging.getLogger('TestDumpRestore')
log.setLevel(logging.INFO)

def diffs(a, b):
    differ = difflib.Differ()
    context = []
    result = []
    for line in differ.compare(a.splitlines(), b.splitlines()):
        if line[0] in "+-":
            result.extend(context[-5:])
            result.append(line)
        elif line[0] == "?":
            result.append(line[:-1])
            context = []
        else:
            context.append(line)
    return '\n'.join(result)

class TestDumpRestore(unittest.TestCase):

    samples_dir = '../data'

    def cleanup(self):
        # Clear configfs
        list(rtslib.Config().apply())
        # Remove test scsi_debug symlinks
        for test_blockdev in glob.glob("/tmp/test_blockdev_*"):
            os.unlink(test_blockdev)
        os.system("rmmod scsi_debug 2> /dev/null")

    def setUp(self):
        # Backup system config
        self.config_backup = rtslib.Config()
        self.config_backup.load_live()
        self.cleanup()
        # Create scsi_debug devices
        os.system("modprobe scsi_debug dev_size_mb=1 add_host=4")
        scsi_debug_blockdevs = "/sys/devices/pseudo_*/adapter*" \
                               "/host*/target*/*/block"
        test_blockdevs = ["/dev/%s" % name
                          for path in glob.glob(scsi_debug_blockdevs)
                          for name in os.listdir(path)]
        for i, test_blockdev in enumerate(test_blockdevs):
            os.symlink(test_blockdev, "/tmp/test_blockdev_%d" % i)
        print
        log.info(self._testMethodName)

    def tearDown(self):
        print("Restoring initial config...")
        self.cleanup()
        for step in self.config_backup.apply():
            print(step)

    def test_load_apply_config(self):
        filepath = "%s/config_ramdisk_fileio_iscsi.lio" % self.samples_dir
        config = rtslib.Config()
        config.load(filepath)
        for step in config.apply():
            print(step)

    def test_clear_apply_config(self):
        config = rtslib.Config()
        config.verify()
        for step in config.apply():
            print(step)

    def test_config_samples(self):
        samples = ["%s/%s" % (self.samples_dir, name)
                   for name in sorted(os.listdir(self.samples_dir))
                   if name.startswith("config_sample_")
                   if name.endswith(".lio")]
        for sample in samples:
            with open(sample) as fd:
                orig = fd.read()
            config = rtslib.Config()
            print("Loading %s" % sample)
            config.load(sample)
            diff = diffs(orig, config.dump())
            print(diff)
            self.failIf(diff)
            print("Verifying %s" % sample)
            config.verify()
            print("Applying %s" % sample)
            for step in config.apply():
                print(step)
            config = rtslib.Config()
            print("Reloading %s from live" % sample)
            config.load_live()
            diff = diffs(orig, config.dump())
            print(diff)
            self.failIf(diff)

if __name__ == '__main__':
    unittest.main()
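The ConfigTree tests above exercise a tree whose nodes are keyed by tuples such as `('disk', 'vm1')` and searched with per-component regular expressions. As a minimal, self-contained sketch of that search idea, the `Node` class below is hypothetical (it is not rtslib's `config_tree.ConfigTree` implementation); it only illustrates the matching semantics the tests rely on: each element of a search path is a tuple of regexes that must fully match the corresponding element of a child's key.

```python
import re

class Node(object):
    """A toy tree node keyed by a tuple such as ('disk', 'vm1')."""
    def __init__(self, key=(), parent=None):
        self.key = key
        self.parent = parent
        self.children = {}

    def set(self, key):
        # Create the child for this key tuple, or return the existing one.
        return self.children.setdefault(key, Node(key, self))

    def search(self, path):
        # path is a list of regex tuples, one per tree level, e.g.
        # [('storage', '.*'), ('disk', 'vm[0-9]')]
        if not path:
            return [self]
        pattern = path[0]
        matches = []
        for key, child in self.children.items():
            if len(key) == len(pattern) and all(
                    re.match(p + r'\Z', k) for p, k in zip(pattern, key)):
                # Recurse with the rest of the search path.
                matches.extend(child.search(path[1:]))
        return matches

tree = Node()
fileio = tree.set(('storage', 'fileio'))
fileio.set(('disk', 'vm1'))
fileio.set(('disk', 'vm2'))
iblock = tree.set(('storage', 'iblock'))
iblock.set(('disk', 'vm3'))

# Matches vm1 and vm2 under fileio, plus vm3 under iblock.
vm_disks = tree.search([('storage', '.*'), ('disk', 'vm[0-9]')])
```

Anchoring each regex with `\Z` mirrors the whole-component matching the test fixtures assume: the pattern `vm1` should match only the key `vm1`, not `vm10`.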