==> README <==

This archive contains the distribution KiokuDB-Backend-DBI, version 1.22:

  L<DBI> backend for L<KiokuDB>

This software is copyright (c) 2012 by Yuval Kogman, Infinity Interactive.

This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself.

==> Changes <==

1.22
    - Fix lookup of invalid entries that differ from a valid entry only in case

1.21
    - Fix undocumented gin_index option to define_kiokudb_schema (frew)

1.20
    - Fix some stuff with recent DBIx::Class (arcanez)

1.19
    - Merge releases

1.16 (NUFFIN)
    - Updated to support change in DBIC object hierarchy

1.18
    - Fix set subquery building. Bad use of parentheses.
    - Includes an actual test for the set query.

1.17
    - Small fix in release process

1.16 (DRUOSO)
    - Adds support for Set GIN queries

1.15
    - Constructing a KiokuDB directory handle is no longer mandatory, you
      can get it from the DBIC schema (domm, nothingmuch)

1.14
    - documentation fixes

1.13
    - fix ordered loading of DBIC Core component in example code
    - FOR UPDATE added to SELECT queries on MySQL, PG and Oracle
    - compound operations now create a txn guard by default

1.12
    - Fix FLORA's botched upload ;-) (typo in dist.ini)

1.11
    - Support for DBIx::Class integration (KiokuDB objects and DBIC rows
      can point to each other, DBIC resultsets can be serialized in
      KiokuDB)
    - All SQL operations that use placeholders now support limiting the
      number of placeholders, splitting the operation into batches (this
      is enabled by default for SQLite, which limits SQL statements to
      999 placeholders) (Jason May)
    - DBIx::Class::Optional::Dependencies now used in Makefile.PL to
      avoid problems with 'create => 1'

1.10
    - NOTE: this was supposed to be 0.10... oops
    - Added mysql_strict attribute, true by default, causes SQL's strict
      mode to be set
    - Now specifies 'longblob' instead of 'blob' as the type for the
      'data' column when deploying to MySQL, because the "large" in blob
      apparently means 64k and MySQL was truncating it by default
    - Added more bitter comments about MySQL to the documentation
    - Make the table_info fix from 0.09 only apply to SQLite

0.09
    - Fix double deployment with recent versions of DBD::SQLite
    - Pod fixes

0.08
    - Fix GIN key update (FIXME: no coverage in KiokuDB test suite yet)
    - uses done_testing in test suite

0.07
    - Various documentation fixes
    - ID stream optimization
    - use the new sqlt_deploy_callback
    - Don't use execute_array for DELETE
    - Explicitly call $schema->disconnect on destruction
    - Allow coderef in DBI DSN
    - Don't reuse DBIC's txn_do

0.06
    - Skip tests if SQL::Translator is missing

0.05
    - Use an update statement when an entry has the 'prev' attribute, and
      insert otherwise.
    - Add a 'schema_hook' attribute, to allow modification of the cloned
      schema before connecting

0.04
    - Remove DBD::Pg usage (DBIC handles those parts now)

0.03
    - Switch to Serialize::Delegate
    - add 'create' attribute

0.02
    - Skip tests if DBD::SQLite is not available

0.01
    - Initial release

==> LICENSE <==

This software is copyright (c) 2012 by Yuval Kogman, Infinity Interactive.

This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself.
Terms of the Perl programming language system itself a) the GNU General Public License as published by the Free Software Foundation; either version 1, or (at your option) any later version, or b) the "Artistic License" --- The GNU General Public License, Version 1, February 1989 --- This software is Copyright (c) 2012 by Yuval Kogman, Infinity Interactive. This is free software, licensed under: The GNU General Public License, Version 1, February 1989 GNU GENERAL PUBLIC LICENSE Version 1, February 1989 Copyright (C) 1989 Free Software Foundation, Inc. 51 Franklin St, Suite 500, Boston, MA 02110-1335 USA Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed. Preamble The license agreements of most software companies try to keep users at the mercy of those companies. By contrast, our General Public License is intended to guarantee your freedom to share and change free software--to make sure the software is free for all its users. The General Public License applies to the Free Software Foundation's software and to any other program whose authors commit to using it. You can use it for your programs, too. When we speak of free software, we are referring to freedom, not price. Specifically, the General Public License is designed to make sure that you have the freedom to give away or sell copies of free software, that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs; and that you know you can do these things. To protect your rights, we need to make restrictions that forbid anyone to deny you these rights or to ask you to surrender the rights. These restrictions translate to certain responsibilities for you if you distribute copies of the software, or if you modify it. For example, if you distribute copies of a such a program, whether gratis or for a fee, you must give the recipients all the rights that you have. You must make sure that they, too, receive or can get the source code. And you must tell them their rights. We protect your rights with two steps: (1) copyright the software, and (2) offer you this license which gives you legal permission to copy, distribute and/or modify the software. Also, for each author's protection and ours, we want to make certain that everyone understands that there is no warranty for this free software. If the software is modified by someone else and passed on, we want its recipients to know that what they have is not the original, so that any problems introduced by others will not reflect on the original authors' reputations. The precise terms and conditions for copying, distribution and modification follow. GNU GENERAL PUBLIC LICENSE TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION 0. This License Agreement applies to any program or other work which contains a notice placed by the copyright holder saying it may be distributed under the terms of this General Public License. The "Program", below, refers to any such program or work, and a "work based on the Program" means either the Program or any work containing the Program or a portion of it, either verbatim or with modifications. Each licensee is addressed as "you". 1. 
You may copy and distribute verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice and disclaimer of warranty; keep intact all the notices that refer to this General Public License and to the absence of any warranty; and give any other recipients of the Program a copy of this General Public License along with the Program. You may charge a fee for the physical act of transferring a copy. 2. You may modify your copy or copies of the Program or any portion of it, and copy and distribute such modifications under the terms of Paragraph 1 above, provided that you also do the following: a) cause the modified files to carry prominent notices stating that you changed the files and the date of any change; and b) cause the whole of any work that you distribute or publish, that in whole or in part contains the Program or any part thereof, either with or without modifications, to be licensed at no charge to all third parties under the terms of this General Public License (except that you may choose to grant warranty protection to some or all third parties, at your option). c) If the modified program normally reads commands interactively when run, you must cause it, when started running for such interactive use in the simplest and most usual way, to print or display an announcement including an appropriate copyright notice and a notice that there is no warranty (or else, saying that you provide a warranty) and that users may redistribute the program under these conditions, and telling the user how to view a copy of this General Public License. d) You may charge a fee for the physical act of transferring a copy, and you may at your option offer warranty protection in exchange for a fee. Mere aggregation of another independent work with the Program (or its derivative) on a volume of a storage or distribution medium does not bring the other work under the scope of these terms. 3. You may copy and distribute the Program (or a portion or derivative of it, under Paragraph 2) in object code or executable form under the terms of Paragraphs 1 and 2 above provided that you also do one of the following: a) accompany it with the complete corresponding machine-readable source code, which must be distributed under the terms of Paragraphs 1 and 2 above; or, b) accompany it with a written offer, valid for at least three years, to give any third party free (except for a nominal charge for the cost of distribution) a complete machine-readable copy of the corresponding source code, to be distributed under the terms of Paragraphs 1 and 2 above; or, c) accompany it with the information you received as to where the corresponding source code may be obtained. (This alternative is allowed only for noncommercial distribution and only if you received the program in object code or executable form alone.) Source code for a work means the preferred form of the work for making modifications to it. For an executable file, complete source code means all the source code for all modules it contains; but, as a special exception, it need not include source code for modules which are standard libraries that accompany the operating system on which the executable file runs, or for standard header files or definitions files that accompany that operating system. 4. You may not copy, modify, sublicense, distribute or transfer the Program except as expressly provided under this General Public License. 
Any attempt otherwise to copy, modify, sublicense, distribute or transfer the Program is void, and will automatically terminate your rights to use the Program under this License. However, parties who have received copies, or rights to use copies, from you under this General Public License will not have their licenses terminated so long as such parties remain in full compliance. 5. By copying, distributing or modifying the Program (or any work based on the Program) you indicate your acceptance of this license to do so, and all its terms and conditions. 6. Each time you redistribute the Program (or any work based on the Program), the recipient automatically receives a license from the original licensor to copy, distribute or modify the Program subject to these terms and conditions. You may not impose any further restrictions on the recipients' exercise of the rights granted herein. 7. The Free Software Foundation may publish revised and/or new versions of the General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. Each version is given a distinguishing version number. If the Program specifies a version number of the license which applies to it and "any later version", you have the option of following the terms and conditions either of that version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of the license, you may choose any version ever published by the Free Software Foundation. 8. If you wish to incorporate parts of the Program into other free programs whose distribution conditions are different, write to the author to ask for permission. For software which is copyrighted by the Free Software Foundation, write to the Free Software Foundation; we sometimes make exceptions for this. Our decision will be guided by the two goals of preserving the free status of all derivatives of our free software and of promoting the sharing and reuse of software generally. NO WARRANTY 9. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. 10. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. 
END OF TERMS AND CONDITIONS Appendix: How to Apply These Terms to Your New Programs If you develop a new program, and you want it to be of the greatest possible use to humanity, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms. To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively convey the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found. Copyright (C) 19yy This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 1, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston MA 02110-1301 USA Also add information on how to contact you by electronic and paper mail. If the program is interactive, make it output a short notice like this when it starts in an interactive mode: Gnomovision version 69, Copyright (C) 19xx name of author Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'. This is free software, and you are welcome to redistribute it under certain conditions; type `show c' for details. The hypothetical commands `show w' and `show c' should show the appropriate parts of the General Public License. Of course, the commands you use may be called something other than `show w' and `show c'; they could even be mouse-clicks or menu items--whatever suits your program. You should also get your employer (if you work as a programmer) or your school, if any, to sign a "copyright disclaimer" for the program, if necessary. Here a sample; alter the names: Yoyodyne, Inc., hereby disclaims all copyright interest in the program `Gnomovision' (a program to direct compilers to make passes at assemblers) written by James Hacker. , 1 April 1989 Ty Coon, President of Vice That's all there is to it! --- The Artistic License 1.0 --- This software is Copyright (c) 2012 by Yuval Kogman, Infinity Interactive. This is free software, licensed under: The Artistic License 1.0 The Artistic License Preamble The intent of this document is to state the conditions under which a Package may be copied, such that the Copyright Holder maintains some semblance of artistic control over the development of the package, while giving the users of the package the right to use and distribute the Package in a more-or-less customary fashion, plus the right to make reasonable modifications. Definitions: - "Package" refers to the collection of files distributed by the Copyright Holder, and derivatives of that collection of files created through textual modification. - "Standard Version" refers to such a Package if it has not been modified, or has been modified in accordance with the wishes of the Copyright Holder. - "Copyright Holder" is whoever is named in the copyright or copyrights for the package. - "You" is you, if you're thinking about copying or distributing this Package. 
- "Reasonable copying fee" is whatever you can justify on the basis of media cost, duplication charges, time of people involved, and so on. (You will not be required to justify it to the Copyright Holder, but only to the computing community at large as a market that must bear the fee.) - "Freely Available" means that no fee is charged for the item itself, though there may be fees involved in handling the item. It also means that recipients of the item may redistribute it under the same conditions they received it. 1. You may make and give away verbatim copies of the source form of the Standard Version of this Package without restriction, provided that you duplicate all of the original copyright notices and associated disclaimers. 2. You may apply bug fixes, portability fixes and other modifications derived from the Public Domain or from the Copyright Holder. A Package modified in such a way shall still be considered the Standard Version. 3. You may otherwise modify your copy of this Package in any way, provided that you insert a prominent notice in each changed file stating how and when you changed that file, and provided that you do at least ONE of the following: a) place your modifications in the Public Domain or otherwise make them Freely Available, such as by posting said modifications to Usenet or an equivalent medium, or placing the modifications on a major archive site such as ftp.uu.net, or by allowing the Copyright Holder to include your modifications in the Standard Version of the Package. b) use the modified Package only within your corporation or organization. c) rename any non-standard executables so the names do not conflict with standard executables, which must also be provided, and provide a separate manual page for each non-standard executable that clearly documents how it differs from the Standard Version. d) make other distribution arrangements with the Copyright Holder. 4. You may distribute the programs of this Package in object code or executable form, provided that you do at least ONE of the following: a) distribute a Standard Version of the executables and library files, together with instructions (in the manual page or equivalent) on where to get the Standard Version. b) accompany the distribution with the machine-readable source of the Package with your modifications. c) accompany any non-standard executables with their corresponding Standard Version executables, giving the non-standard executables non-standard names, and clearly documenting the differences in manual pages (or equivalent), together with instructions on where to get the Standard Version. d) make other distribution arrangements with the Copyright Holder. 5. You may charge a reasonable copying fee for any distribution of this Package. You may charge any fee you choose for support of this Package. You may not charge a fee for this Package itself. However, you may distribute this Package in aggregate with other (possibly commercial) programs as part of a larger (possibly commercial) software distribution provided that you do not advertise this Package as a product of your own. 6. The scripts and library files supplied as input to or produced as output from the programs of this Package do not automatically fall under the copyright of this Package, but belong to whomever generated them, and may be sold commercially, and may be aggregated with this Package. 7. C or perl subroutines supplied by you and linked into this Package shall not be considered part of this Package. 8. 
The name of the Copyright Holder may not be used to endorse or promote products derived from this software without specific prior written permission.

9. THIS PACKAGE IS PROVIDED "AS IS" AND WITHOUT ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.

The End

==> dist.ini <==

name = KiokuDB-Backend-DBI
version = 1.22
author = Yuval Kogman
license = Perl_5
copyright_holder = Yuval Kogman, Infinity Interactive

; authordep Dist::Zilla::PluginBundle::NUFFIN

[@Filter]
-bundle = @NUFFIN
-remove = MakeMaker
-remove = Signature
-remove = PodWeaver ; don't break shit yet, next release
dist = KiokuDB-Backend-DBI
repository_at = github

; authordep Dist::Zilla::Plugin::MakeMaker::Awesome
[=inc::DBICOptionalDeps]

[Prereqs / ConfigureRequires]
DBIx::Class::Optional::Dependencies = 0

[=inc::DistMeta]
dynamic_config = 1

[Prereqs]
Moose = 0
MooseX::Types = 0.08
MooseX::Types::Moose = 0
KiokuDB = 0.46
DBIx::Class = 0.08127
DBI = 1.607
Data::Stream::Bulk = 0.07
Test::use::ok = 0
Test::More = 0.88
Test::TempDir = 0
SQL::Abstract = 0
Search::GIN = 0.07

==> t/dbic.t <==

#!/usr/bin/perl

use strict;
use warnings;

use Scalar::Util qw(refaddr);

use Test::More;
use Test::Exception;

use KiokuDB;

BEGIN {
    plan skip_all => "DBD::SQLite is required"
        unless eval { require DBI; require DBD::SQLite };
}

{
    package MyApp::DB::Result::Foo;
    use base qw(DBIx::Class::Core);

    __PACKAGE__->load_components(qw(KiokuDB));
    __PACKAGE__->table('foo');
    __PACKAGE__->add_columns(qw(id name object));
    __PACKAGE__->set_primary_key('id');
    __PACKAGE__->kiokudb_column('object');

    package MyApp::DB;
    use base qw(DBIx::Class::Schema);

    __PACKAGE__->load_components(qw(Schema::KiokuDB));
    __PACKAGE__->register_class( Foo => qw(MyApp::DB::Result::Foo) );

    package Foo;
    use Moose;

    has name => ( isa => "Str", is => "ro" );
    has obj  => ( isa => "Object", is => "ro", weak_ref => 1 );

    __PACKAGE__->meta->make_immutable;
}

my $dir = KiokuDB->connect(
    'dbi:SQLite:dbname=:memory:',
    schema => "MyApp::DB",
    create => 1,
    live_objects => {
        clear_leaks  => 1,
        leak_tracker => sub {
            my $i = $Test::Builder::Level || 1;
            $i++ until (caller($i))[1] eq __FILE__;
            local $Test::Builder::Level = $i + 2;
            fail("no leaks");
            diag("leaked @_");
        },
    },
);

$dir->txn_do( scope => 1, body => sub {
    $dir->insert( foo => my $obj = Foo->new );

    $dir->backend->schema->resultset("Foo")->create({ id => 1, name => "foo", object => $obj });

    my $row = $dir->backend->schema->resultset("Foo")->create({ id => 2, name => "foo", object => "foo" });

    isa_ok( $row->object, 'Foo', 'inflated from constructor' );
});

is_deeply( [ $dir->live_objects->live_objects ], [], "no live objects" );

foreach my $id ( 1, 2 ) {
    $dir->txn_do( scope => 1, body => sub {
        my $row = $dir->backend->schema->resultset("Foo")->find($id);

        isa_ok( $row, "MyApp::DB::Result::Foo" );
        isa_ok( $row->object, "Foo" );

        is( $dir->object_to_id( $row->object ), "foo", "kiokudb ID" );
    });
}

is_deeply( [ $dir->live_objects->live_objects ], [], "no live objects" );

$dir->txn_do( scope => 1, body => sub {
    $dir->backend->schema->resultset("Foo")->create({ id => 3, name => "foo", object => Foo->new });
});

is_deeply( [ $dir->live_objects->live_objects ], [], "no live objects" );
$dir->txn_do( scope => 1, body => sub {
    my $row = $dir->backend->schema->resultset("Foo")->find(3);

    isa_ok( $row, "MyApp::DB::Result::Foo" );
    isa_ok( $row->object, "Foo" );

    isnt( $dir->object_to_id( $row->object ), "foo", "kiokudb ID" );

    $row->object( Foo->new );

    isa_ok( $row->object, "Foo", "weakened object with no other refs" );

    throws_ok {
        $row->update;
    } qr/not in storage/, "can't update object without related KiokuDB objects being in storage";

    lives_ok { $row->store } "store method works";
});

is_deeply( [ $dir->live_objects->live_objects ], [], "no live objects" );

$dir->txn_do( scope => 1, body => sub {
    my $row = $dir->backend->schema->resultset("Foo")->find(1);

    my $foo = Foo->new( obj => $row );

    $dir->insert( with_dbic => $foo );
});

is_deeply( [ $dir->live_objects->live_objects ], [], "no live objects" );

$dir->txn_do( scope => 1, body => sub {
    my $foo = $dir->lookup("with_dbic");

    isa_ok( $foo->obj, "DBIx::Class::Row" );
    is( $foo->obj->id, 1, "ID" );
});

is_deeply( [ $dir->live_objects->live_objects ], [], "no live objects" );

$dir->txn_do( scope => 1, body => sub {
    ok( $dir->exists('dbic:row:["Foo",3]'), "dbic row exists" );

    my $foo = $dir->lookup('dbic:row:["Foo",3]');

    isa_ok( $foo, "DBIx::Class::Row" );
    is( $foo->id, 3, "ID" );
});

is_deeply( [ $dir->live_objects->live_objects ], [], "no live objects" );

$dir->txn_do( scope => 1, body => sub {
    my $row = $dir->backend->schema->resultset("Foo")->find(2);

    my $foo = Foo->new( obj => $row );

    $dir->insert( another => $foo );
});

is_deeply( [ $dir->live_objects->live_objects ], [], "no live objects" );

$dir->txn_do( scope => 1, body => sub {
    # to cover the ->search branch (as opposed to ->find)
    my @foo = $dir->lookup("with_dbic", "another");

    isa_ok( $foo[0]->obj, "DBIx::Class::Row" );
    is( $foo[0]->obj->id, 1, "ID" );

    isa_ok( $foo[1]->obj, "DBIx::Class::Row" );
    is( $foo[1]->obj->id, 2, "ID" );
});

is_deeply( [ $dir->live_objects->live_objects ], [], "no live objects" );

$dir->txn_do( scope => 1, body => sub {
    my $obj = $dir->backend->schema->resultset("entries")->find('with_dbic');

    is( $dir->object_to_id($obj), 'with_dbic', "object to ID of row fetched using 'find'" );

    isa_ok( $obj, "Foo" );
    isa_ok( $obj->obj, "DBIx::Class::Row" );
});

is_deeply( [ $dir->live_objects->live_objects ], [], "no live objects" );

$dir->txn_do( scope => 1, body => sub {
    my $rs = $dir->backend->schema->resultset("Foo")->search({ id => [ 1, 3 ] });

    my $foo = Foo->new( obj => $rs );

    $dir->insert( with_rs => $foo );
});

is_deeply( [ $dir->live_objects->live_objects ], [], "no live objects" );

$dir->txn_do( scope => 1, body => sub {
    my $foo = $dir->lookup("with_rs");

    isa_ok( $foo, "Foo" );

    my $rs = $foo->obj;

    isa_ok( $rs, "DBIx::Class::ResultSet" );

    is( refaddr($rs->result_source->schema), refaddr($dir->backend->schema), "schema restored in resultset handle" );

    is_deeply(
        [ sort { $a->id <=> $b->id } $rs->all ],
        [ sort { $a->id <=> $b->id } $dir->backend->schema->resultset("Foo")->search({ id => [ 1, 3 ] })->all ],
        "result set works"
    );
});

is_deeply( [ $dir->live_objects->live_objects ], [], "no live objects" );

$dir->txn_do( scope => 1, body => sub {
    my $rs = $dir->backend->schema->resultset("Foo")->search({ id => [ 1, 3 ] });

    my $foo = Foo->new( obj => $dir->backend->schema );

    $dir->insert( with_schema => $foo );
});

# FIXME register it as immutable
is_deeply( [ $dir->live_objects->live_objects ], [], "no live objects" );

$dir->txn_do( scope => 1, body => sub {
    my $foo = $dir->lookup("with_schema");

    isa_ok( $foo, "Foo" );

    my $rs = $foo->obj;
    my $schema = $foo->obj;
"DBIx::Class::Schema" ); is( refaddr($schema), refaddr($dir->backend->schema), "schema restored" ); }); is_deeply( [ $dir->live_objects->live_objects ], [], "no live objects" ); done_testing; META.yml100644001750000144 1332511775631363 15354 0ustar00doyusers000000000000KiokuDB-Backend-DBI-1.22--- abstract: 'L backend for L' author: - 'Yuval Kogman' build_requires: DBD::SQLite: 0 DBI: 0 KiokuDB::Test: 0 Search::GIN::Extract::Callback: 0 Search::GIN::Extract::Class: 0 Search::GIN::Query::Manual: 0 Search::GIN::Query::Set: 0 Test::Exception: 0 Test::More: 0 Test::TempDir: 0 YAML::XS: 0 constant: 0 ok: 0 configure_requires: DBIx::Class::Optional::Dependencies: 0 ExtUtils::MakeMaker: 6.30 dynamic_config: 1 generated_by: 'Dist::Zilla version 4.300020, CPAN::Meta::Converter version 2.120921' license: perl meta-spec: url: http://module-build.sourceforge.net/META-spec-v1.4.html version: 1.4 name: KiokuDB-Backend-DBI requires: Carp: 0 Class::Accessor::Grouped: 0 DBI: 1.607 DBIx::Class: 0.08127 DBIx::Class::Core: 0 DBIx::Class::ResultSource::Table: 0 DBIx::Class::Schema: 0 Data::Stream::Bulk: 0.07 Data::Stream::Bulk::DBI: 0 JSON: 0 KiokuDB: 0.46 KiokuDB::Backend: 0 KiokuDB::Backend::Role::Clear: 0 KiokuDB::Backend::Role::Concurrency::POSIX: 0 KiokuDB::Backend::Role::GC: 0 KiokuDB::Backend::Role::Query::GIN: 0 KiokuDB::Backend::Role::Query::Simple: 0 KiokuDB::Backend::Role::Scan: 0 KiokuDB::Backend::Role::TXN: 0 KiokuDB::Backend::Serialize::Delegate: 0 KiokuDB::TypeMap: 0 KiokuDB::TypeMap::Entry: 0 KiokuDB::TypeMap::Entry::Naive: 0 List::MoreUtils: 0 Moose: 0 Moose::Util::TypeConstraints: 0 MooseX::Types: 0.08 MooseX::Types::Moose: 0 SQL::Abstract: 0 Scalar::Util: 0 Search::GIN: 0.07 Search::GIN::Extract::Delegate: 0 Test::More: 0.88 Test::TempDir: 0 Test::use::ok: 0 Try::Tiny: 0 base: 0 namespace::autoclean: 0 namespace::clean: 0 strict: 0 warnings: 0 resources: bugtracker: http://rt.cpan.org/Public/Dist/Display.html?Name=KiokuDB-Backend-DBI homepage: http://metacpan.org/release/KiokuDB-Backend-DBI repository: git://github.com/nothingmuch/kiokudb-backend-dbi.git version: 1.22 x_Dist_Zilla: plugins: - class: Dist::Zilla::Plugin::GatherDir name: '@Filter/@Basic/GatherDir' version: 4.300020 - class: Dist::Zilla::Plugin::PruneCruft name: '@Filter/@Basic/PruneCruft' version: 4.300020 - class: Dist::Zilla::Plugin::ManifestSkip name: '@Filter/@Basic/ManifestSkip' version: 4.300020 - class: Dist::Zilla::Plugin::MetaYAML name: '@Filter/@Basic/MetaYAML' version: 4.300020 - class: Dist::Zilla::Plugin::License name: '@Filter/@Basic/License' version: 4.300020 - class: Dist::Zilla::Plugin::Readme name: '@Filter/@Basic/Readme' version: 4.300020 - class: Dist::Zilla::Plugin::ExtraTests name: '@Filter/@Basic/ExtraTests' version: 4.300020 - class: Dist::Zilla::Plugin::ExecDir name: '@Filter/@Basic/ExecDir' version: 4.300020 - class: Dist::Zilla::Plugin::ShareDir name: '@Filter/@Basic/ShareDir' version: 4.300020 - class: Dist::Zilla::Plugin::Manifest name: '@Filter/@Basic/Manifest' version: 4.300020 - class: Dist::Zilla::Plugin::TestRelease name: '@Filter/@Basic/TestRelease' version: 4.300020 - class: Dist::Zilla::Plugin::ConfirmRelease name: '@Filter/@Basic/ConfirmRelease' version: 4.300020 - class: Dist::Zilla::Plugin::UploadToCPAN name: '@Filter/@Basic/UploadToCPAN' version: 4.300020 - class: Dist::Zilla::Plugin::MetaConfig name: '@Filter/MetaConfig' version: 4.300020 - class: Dist::Zilla::Plugin::MetaJSON name: '@Filter/MetaJSON' version: 4.300020 - class: Dist::Zilla::Plugin::PkgVersion name: '@Filter/PkgVersion' 
      version: 4.300020
    - class: Dist::Zilla::Plugin::PodSyntaxTests
      name: '@Filter/PodSyntaxTests'
      version: 4.300020
    - class: Dist::Zilla::Plugin::NoTabsTests
      name: '@Filter/NoTabsTests'
      version: 0.01
    - class: Dist::Zilla::Plugin::PodCoverageTests
      name: '@Filter/PodCoverageTests'
      version: 4.300020
    - class: Dist::Zilla::Plugin::MetaResources
      name: '@Filter/MetaResources'
      version: 4.300020
    - class: Dist::Zilla::Plugin::Authority
      name: '@Filter/Authority'
      version: 1.006
    - class: Dist::Zilla::Plugin::EOLTests
      name: '@Filter/EOLTests'
      version: 0.02
    - class: Dist::Zilla::Plugin::AutoPrereqs
      name: '@Filter/AutoPrereqs'
      version: 4.300020
    - class: inc::DBICOptionalDeps
      name: '=inc::DBICOptionalDeps'
      version: ~
    - class: Dist::Zilla::Plugin::Prereqs
      config:
        Dist::Zilla::Plugin::Prereqs:
          phase: configure
          type: requires
      name: ConfigureRequires
      version: 4.300020
    - class: inc::DistMeta
      name: '=inc::DistMeta'
      version: ~
    - class: Dist::Zilla::Plugin::Prereqs
      config:
        Dist::Zilla::Plugin::Prereqs:
          phase: runtime
          type: requires
      name: Prereqs
      version: 4.300020
    - class: Dist::Zilla::Plugin::FinderCode
      name: ':InstallModules'
      version: 4.300020
    - class: Dist::Zilla::Plugin::FinderCode
      name: ':IncModules'
      version: 4.300020
    - class: Dist::Zilla::Plugin::FinderCode
      name: ':TestFiles'
      version: 4.300020
    - class: Dist::Zilla::Plugin::FinderCode
      name: ':ExecFiles'
      version: 4.300020
    - class: Dist::Zilla::Plugin::FinderCode
      name: ':ShareFiles'
      version: 4.300020
    - class: Dist::Zilla::Plugin::FinderCode
      name: ':MainModule'
      version: 4.300020
  zilla:
    class: Dist::Zilla::Dist::Builder
    config:
      is_trial: 0
    version: 4.300020
x_authority: cpan:NUFFIN

==> MANIFEST <==

Changes
LICENSE
MANIFEST
MANIFEST.SKIP
META.json
META.yml
Makefile.PL
README
dist.ini
examples/job_queue.pl
inc/DBICOptionalDeps.pm
inc/DistMeta.pm
lib/DBIx/Class/KiokuDB.pm
lib/DBIx/Class/KiokuDB/EntryProxy.pm
lib/DBIx/Class/Schema/KiokuDB.pm
lib/KiokuDB/Backend/DBI.pm
lib/KiokuDB/Backend/DBI.pm.orig
lib/KiokuDB/Backend/DBI/Schema.pm
lib/KiokuDB/TypeMap/Entry/DBIC/ResultSet.pm
lib/KiokuDB/TypeMap/Entry/DBIC/ResultSource.pm
lib/KiokuDB/TypeMap/Entry/DBIC/Row.pm
lib/KiokuDB/TypeMap/Entry/DBIC/Schema.pm
t/01load.t
t/autovivify_handle.t
t/basic.t
t/dbic.t
t/fixtures.t
t/release-eol.t
t/release-no-tabs.t
t/release-pod-coverage.t
t/release-pod-syntax.t
t/set_query.t

==> t/basic.t <==

#!/usr/bin/perl

use strict;
use warnings;

use Test::More;

BEGIN {
    plan skip_all => "DBD::SQLite is required"
        unless eval { require DBI; require DBD::SQLite };
}

use ok 'KiokuDB::Backend::DBI';
use ok 'KiokuDB::Entry';

my $b = KiokuDB::Backend::DBI->new(
    dsn     => 'dbi:SQLite:dbname=:memory:',
    columns => [qw(oi)],
);

my $entry = KiokuDB::Entry->new(
    id    => "foo",
    root  => 1,
    class => "Foo",
    data  => { oi => "vey" },
);

my %c = map { $_ => [] } qw(id class data tied root oi);

$b->entry_to_row($entry, \%c);

is( $c{id}[0], $entry->id, "ID" );
is( $c{class}[0], $entry->class, "class" );
ok( $c{root}[0], "root entry" );
like( $c{data}[0], qr/vey/, "data" );

ok( $c{oi}[0], "extracted column" );
is( $c{oi}[0], "vey", "column data" );

SKIP: {
    skip "SQL::Translator >= 0.11005 is required", 2
        unless eval "use SQL::Translator 0.11005; 1";

    $b->deploy;

    $b->txn_do(sub {
        $b->insert( $entry );
    });

    my ( $loaded_entry ) = $b->get("foo");

    isnt( $loaded_entry, $entry, "entries are different" );
    is_deeply( $loaded_entry, $entry, "but eq deeply" );
}

done_testing;
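The test above exercises the backend's main query hook: fields listed in the `columns` option are copied out of the serialized entry data into real SQL columns by `entry_to_row`, so they can be matched with plain WHERE clauses. What follows is a minimal sketch of how that looks from the KiokuDB side, assuming the documented hash-reference form of KiokuDB's `search` method (which delegates to the backend's Query::Simple role); the `My::Person` class and the `name` field are illustrative, not part of this distribution:

    #!/usr/bin/perl
    # Sketch: querying through extracted columns (simple search).
    use strict;
    use warnings;
    use KiokuDB;

    { package My::Person; use Moose; has name => ( is => "ro", isa => "Str" ); }

    my $dir = KiokuDB->connect(
        'dbi:SQLite:dbname=:memory:',
        create  => 1,
        columns => [
            # any field extracted from the entry data can be given a column spec
            name => { data_type => "varchar", is_nullable => 1 },
        ],
    );

    {
        my $scope = $dir->new_scope;
        $dir->insert( homer => My::Person->new( name => "Homer Simpson" ) );

        # a hashref argument performs a simple search, i.e. a plain SQL
        # WHERE on the extracted 'name' column; the result is a
        # Data::Stream::Bulk stream of blocks
        my $stream = $dir->search({ name => "Homer Simpson" });
        while ( my $block = $stream->next ) {
            print $_->name, "\n" for @$block;
        }
    }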
==> META.json <==

{
   "abstract" : "L<DBI> backend for L<KiokuDB>",
   "author" : [
      "Yuval Kogman"
   ],
   "dynamic_config" : "1",
   "generated_by" : "Dist::Zilla version 4.300020, CPAN::Meta::Converter version 2.120921",
   "license" : [
      "perl_5"
   ],
   "meta-spec" : {
      "url" : "http://search.cpan.org/perldoc?CPAN::Meta::Spec",
      "version" : "2"
   },
   "name" : "KiokuDB-Backend-DBI",
   "prereqs" : {
      "configure" : {
         "requires" : {
            "DBIx::Class::Optional::Dependencies" : "0",
            "ExtUtils::MakeMaker" : "6.30"
         }
      },
      "runtime" : {
         "requires" : {
            "Carp" : "0",
            "Class::Accessor::Grouped" : "0",
            "DBI" : "1.607",
            "DBIx::Class" : "0.08127",
            "DBIx::Class::Core" : "0",
            "DBIx::Class::ResultSource::Table" : "0",
            "DBIx::Class::Schema" : "0",
            "Data::Stream::Bulk" : "0.07",
            "Data::Stream::Bulk::DBI" : "0",
            "JSON" : "0",
            "KiokuDB" : "0.46",
            "KiokuDB::Backend" : "0",
            "KiokuDB::Backend::Role::Clear" : "0",
            "KiokuDB::Backend::Role::Concurrency::POSIX" : "0",
            "KiokuDB::Backend::Role::GC" : "0",
            "KiokuDB::Backend::Role::Query::GIN" : "0",
            "KiokuDB::Backend::Role::Query::Simple" : "0",
            "KiokuDB::Backend::Role::Scan" : "0",
            "KiokuDB::Backend::Role::TXN" : "0",
            "KiokuDB::Backend::Serialize::Delegate" : "0",
            "KiokuDB::TypeMap" : "0",
            "KiokuDB::TypeMap::Entry" : "0",
            "KiokuDB::TypeMap::Entry::Naive" : "0",
            "List::MoreUtils" : "0",
            "Moose" : "0",
            "Moose::Util::TypeConstraints" : "0",
            "MooseX::Types" : "0.08",
            "MooseX::Types::Moose" : "0",
            "SQL::Abstract" : "0",
            "Scalar::Util" : "0",
            "Search::GIN" : "0.07",
            "Search::GIN::Extract::Delegate" : "0",
            "Test::More" : "0.88",
            "Test::TempDir" : "0",
            "Test::use::ok" : "0",
            "Try::Tiny" : "0",
            "base" : "0",
            "namespace::autoclean" : "0",
            "namespace::clean" : "0",
            "strict" : "0",
            "warnings" : "0"
         }
      },
      "test" : {
         "requires" : {
            "DBD::SQLite" : "0",
            "DBI" : "0",
            "KiokuDB::Test" : "0",
            "Search::GIN::Extract::Callback" : "0",
            "Search::GIN::Extract::Class" : "0",
            "Search::GIN::Query::Manual" : "0",
            "Search::GIN::Query::Set" : "0",
            "Test::Exception" : "0",
            "Test::More" : "0",
            "Test::TempDir" : "0",
            "YAML::XS" : "0",
            "constant" : "0",
            "ok" : "0"
         }
      }
   },
   "release_status" : "stable",
   "resources" : {
      "bugtracker" : {
         "mailto" : "bug-KiokuDB-Backend-DBI@rt.cpan.org",
         "web" : "http://rt.cpan.org/Public/Dist/Display.html?Name=KiokuDB-Backend-DBI"
      },
      "homepage" : "http://metacpan.org/release/KiokuDB-Backend-DBI",
      "repository" : {
         "type" : "git",
         "url" : "git://github.com/nothingmuch/kiokudb-backend-dbi.git",
         "web" : "http://github.com/nothingmuch/kiokudb-backend-dbi"
      }
   },
   "version" : "1.22",
   "x_Dist_Zilla" : {
      "plugins" : [
         {
            "class" : "Dist::Zilla::Plugin::GatherDir",
            "name" : "@Filter/@Basic/GatherDir",
            "version" : "4.300020"
         },
         {
            "class" : "Dist::Zilla::Plugin::PruneCruft",
            "name" : "@Filter/@Basic/PruneCruft",
            "version" : "4.300020"
         },
         {
            "class" : "Dist::Zilla::Plugin::ManifestSkip",
            "name" : "@Filter/@Basic/ManifestSkip",
            "version" : "4.300020"
         },
         {
            "class" : "Dist::Zilla::Plugin::MetaYAML",
            "name" : "@Filter/@Basic/MetaYAML",
            "version" : "4.300020"
         },
         {
            "class" : "Dist::Zilla::Plugin::License",
            "name" : "@Filter/@Basic/License",
            "version" : "4.300020"
         },
         {
            "class" : "Dist::Zilla::Plugin::Readme",
            "name" : "@Filter/@Basic/Readme",
            "version" : "4.300020"
         },
         {
            "class" : "Dist::Zilla::Plugin::ExtraTests",
            "name" : "@Filter/@Basic/ExtraTests",
            "version" : "4.300020"
         },
         {
            "class" : "Dist::Zilla::Plugin::ExecDir",
            "name" : "@Filter/@Basic/ExecDir",
            "version" : "4.300020"
         },
         {
            "class" : "Dist::Zilla::Plugin::ShareDir",
            "name" : "@Filter/@Basic/ShareDir",
"version" : "4.300020" }, { "class" : "Dist::Zilla::Plugin::Manifest", "name" : "@Filter/@Basic/Manifest", "version" : "4.300020" }, { "class" : "Dist::Zilla::Plugin::TestRelease", "name" : "@Filter/@Basic/TestRelease", "version" : "4.300020" }, { "class" : "Dist::Zilla::Plugin::ConfirmRelease", "name" : "@Filter/@Basic/ConfirmRelease", "version" : "4.300020" }, { "class" : "Dist::Zilla::Plugin::UploadToCPAN", "name" : "@Filter/@Basic/UploadToCPAN", "version" : "4.300020" }, { "class" : "Dist::Zilla::Plugin::MetaConfig", "name" : "@Filter/MetaConfig", "version" : "4.300020" }, { "class" : "Dist::Zilla::Plugin::MetaJSON", "name" : "@Filter/MetaJSON", "version" : "4.300020" }, { "class" : "Dist::Zilla::Plugin::PkgVersion", "name" : "@Filter/PkgVersion", "version" : "4.300020" }, { "class" : "Dist::Zilla::Plugin::PodSyntaxTests", "name" : "@Filter/PodSyntaxTests", "version" : "4.300020" }, { "class" : "Dist::Zilla::Plugin::NoTabsTests", "name" : "@Filter/NoTabsTests", "version" : "0.01" }, { "class" : "Dist::Zilla::Plugin::PodCoverageTests", "name" : "@Filter/PodCoverageTests", "version" : "4.300020" }, { "class" : "Dist::Zilla::Plugin::MetaResources", "name" : "@Filter/MetaResources", "version" : "4.300020" }, { "class" : "Dist::Zilla::Plugin::Authority", "name" : "@Filter/Authority", "version" : "1.006" }, { "class" : "Dist::Zilla::Plugin::EOLTests", "name" : "@Filter/EOLTests", "version" : "0.02" }, { "class" : "Dist::Zilla::Plugin::AutoPrereqs", "name" : "@Filter/AutoPrereqs", "version" : "4.300020" }, { "class" : "inc::DBICOptionalDeps", "name" : "=inc::DBICOptionalDeps", "version" : null }, { "class" : "Dist::Zilla::Plugin::Prereqs", "config" : { "Dist::Zilla::Plugin::Prereqs" : { "phase" : "configure", "type" : "requires" } }, "name" : "ConfigureRequires", "version" : "4.300020" }, { "class" : "inc::DistMeta", "name" : "=inc::DistMeta", "version" : null }, { "class" : "Dist::Zilla::Plugin::Prereqs", "config" : { "Dist::Zilla::Plugin::Prereqs" : { "phase" : "runtime", "type" : "requires" } }, "name" : "Prereqs", "version" : "4.300020" }, { "class" : "Dist::Zilla::Plugin::FinderCode", "name" : ":InstallModules", "version" : "4.300020" }, { "class" : "Dist::Zilla::Plugin::FinderCode", "name" : ":IncModules", "version" : "4.300020" }, { "class" : "Dist::Zilla::Plugin::FinderCode", "name" : ":TestFiles", "version" : "4.300020" }, { "class" : "Dist::Zilla::Plugin::FinderCode", "name" : ":ExecFiles", "version" : "4.300020" }, { "class" : "Dist::Zilla::Plugin::FinderCode", "name" : ":ShareFiles", "version" : "4.300020" }, { "class" : "Dist::Zilla::Plugin::FinderCode", "name" : ":MainModule", "version" : "4.300020" } ], "zilla" : { "class" : "Dist::Zilla::Dist::Builder", "config" : { "is_trial" : "0" }, "version" : "4.300020" } }, "x_authority" : "cpan:NUFFIN" } 01load.t100644001750000144 15011775631363 15543 0ustar00doyusers000000000000KiokuDB-Backend-DBI-1.22/t#!/usr/bin/perl use strict; use warnings; use Test::More tests => 1; use ok 'KiokuDB::Backend::DBI'; Makefile.PL100644001750000144 601611775631363 16034 0ustar00doyusers000000000000KiokuDB-Backend-DBI-1.22# This Makefile.PL for was generated by Dist::Zilla. # Don't edit it but the dist.ini used to construct it. 
use strict;
use warnings;

use ExtUtils::MakeMaker 6.30;

my %WriteMakefileArgs = (
  'ABSTRACT' => 'L<DBI> backend for L<KiokuDB>',
  'AUTHOR' => 'Yuval Kogman',
  'BUILD_REQUIRES' => {
    'DBD::SQLite' => '0',
    'DBI' => '0',
    'KiokuDB::Test' => '0',
    'Search::GIN::Extract::Callback' => '0',
    'Search::GIN::Extract::Class' => '0',
    'Search::GIN::Query::Manual' => '0',
    'Search::GIN::Query::Set' => '0',
    'Test::Exception' => '0',
    'Test::More' => '0',
    'Test::TempDir' => '0',
    'YAML::XS' => '0',
    'constant' => '0',
    'ok' => '0'
  },
  'CONFIGURE_REQUIRES' => {
    'DBIx::Class::Optional::Dependencies' => '0',
    'ExtUtils::MakeMaker' => '6.30'
  },
  'DISTNAME' => 'KiokuDB-Backend-DBI',
  'EXE_FILES' => [],
  'LICENSE' => 'perl',
  'NAME' => 'KiokuDB::Backend::DBI',
  'PREREQ_PM' => {
    'Carp' => '0',
    'Class::Accessor::Grouped' => '0',
    'DBI' => '1.607',
    'DBIx::Class' => '0.08127',
    'DBIx::Class::Core' => '0',
    'DBIx::Class::ResultSource::Table' => '0',
    'DBIx::Class::Schema' => '0',
    'Data::Stream::Bulk' => '0.07',
    'Data::Stream::Bulk::DBI' => '0',
    'JSON' => '0',
    'KiokuDB' => '0.46',
    'KiokuDB::Backend' => '0',
    'KiokuDB::Backend::Role::Clear' => '0',
    'KiokuDB::Backend::Role::Concurrency::POSIX' => '0',
    'KiokuDB::Backend::Role::GC' => '0',
    'KiokuDB::Backend::Role::Query::GIN' => '0',
    'KiokuDB::Backend::Role::Query::Simple' => '0',
    'KiokuDB::Backend::Role::Scan' => '0',
    'KiokuDB::Backend::Role::TXN' => '0',
    'KiokuDB::Backend::Serialize::Delegate' => '0',
    'KiokuDB::TypeMap' => '0',
    'KiokuDB::TypeMap::Entry' => '0',
    'KiokuDB::TypeMap::Entry::Naive' => '0',
    'List::MoreUtils' => '0',
    'Moose' => '0',
    'Moose::Util::TypeConstraints' => '0',
    'MooseX::Types' => '0.08',
    'MooseX::Types::Moose' => '0',
    'SQL::Abstract' => '0',
    'Scalar::Util' => '0',
    'Search::GIN' => '0.07',
    'Search::GIN::Extract::Delegate' => '0',
    'Test::More' => '0.88',
    'Test::TempDir' => '0',
    'Test::use::ok' => '0',
    'Try::Tiny' => '0',
    'base' => '0',
    'namespace::autoclean' => '0',
    'namespace::clean' => '0',
    'strict' => '0',
    'warnings' => '0'
  },
  'VERSION' => '1.22',
  'test' => {
    'TESTS' => 't/*.t'
  }
);

unless ( eval { ExtUtils::MakeMaker->VERSION(6.56) } ) {
  my $br = delete $WriteMakefileArgs{BUILD_REQUIRES};
  my $pp = $WriteMakefileArgs{PREREQ_PM};
  for my $mod ( keys %$br ) {
    if ( exists $pp->{$mod} ) {
      $pp->{$mod} = $br->{$mod} if $br->{$mod} > $pp->{$mod};
    }
    else {
      $pp->{$mod} = $br->{$mod};
    }
  }
}

delete $WriteMakefileArgs{CONFIGURE_REQUIRES}
  unless eval { ExtUtils::MakeMaker->VERSION(6.52) };

require DBIx::Class::Optional::Dependencies;
$WriteMakefileArgs{PREREQ_PM} = {
    %{ $WriteMakefileArgs{PREREQ_PM} || {} },
    %{ DBIx::Class::Optional::Dependencies->req_list_for('deploy') },
};

WriteMakefile(%WriteMakefileArgs);

==> t/fixtures.t <==

#!/usr/bin/perl

use Test::More;

BEGIN {
    plan skip_all => "DBD::SQLite and SQL::Translator >= 0.11005 are required"
        unless eval "use DBI; use DBD::SQLite; use DBIx::Class::Optional::Dependencies; 1";

    plan skip_all => DBIx::Class::Optional::Dependencies->req_missing_for("deploy")
        unless DBIx::Class::Optional::Dependencies->req_ok_for("deploy");
}

use Test::TempDir;

use ok 'KiokuDB';
use ok 'KiokuDB::Backend::DBI';

use KiokuDB::Test;

use Search::GIN::Extract::Class;

# real file needed for concurrency tests
my $sqlite = "dbi:SQLite:dbname=" . temp_root->file("db");
foreach my $dsn (
    [ $sqlite ],
    #[ "dbi:mysql:test" ],
    #[ "dbi:Pg:dbname=test" ],
) {
    foreach my $serializer (qw(json storable), eval { require YAML::XS; "yaml" }) {
        #diag "testing against $dsn->[0] with $serializer\n";

        my $connect = sub {
            KiokuDB->connect(
                @$dsn,
                create     => 1,
                serializer => $serializer,
                columns    => [
                    name => {
                        is_nullable => 1,
                        data_type   => "varchar",
                    },
                    age => {
                        is_nullable => 1,
                        data_type   => "integer",
                    },
                ],
                extract          => Search::GIN::Extract::Class->new,
                sqlite_sync_mode => 'OFF',
            );
        };

        {
            my $dir = $connect->();
            $dir->txn_do(sub { $dir->backend->clear });
        }

        run_all_fixtures($connect);

        if ( grep { !$_ } Test::Builder->new->summary ) {
            diag "Leaving tables in $dsn->[0] due to test failures";
        }
        else {
            $connect->()->backend->drop_tables;
        }
    }
}

done_testing;

==> MANIFEST.SKIP <==

# Avoid version control files.
\bRCS\b
\bCVS\b
\bSCCS\b
,v$
\B\.svn\b
\B\.git\b
\b_darcs\b

# Avoid Makemaker generated and utility files.
\bMANIFEST\.bak
\bMakefile$
\bblib/
\bMakeMaker-\d
\bpm_to_blib\.ts$
\bpm_to_blib$
\bblibdirs\.ts$         # 6.18 through 6.25 generated this

# Avoid Module::Build generated and utility files.
\bBuild$
\b_build/

# Avoid temp and backup files.
~$
\.old$
\#$
\b\.#
\.bak$

# Avoid Devel::Cover files.
\bcover_db\b

### DEFAULT MANIFEST.SKIP ENDS HERE ####

\.DS_Store$
\.sw.$
(\w+-)*(\w+)-\d\.\d+(?:\.tar\.gz)?$
\.t\.log$
\.prove$

# XS shit
\.(?:bs|c|o)$

==> t/set_query.t <==

#!/usr/bin/perl

use strict;
use warnings;

use Scalar::Util qw(refaddr);

use Test::More;
use Test::Exception;

use Search::GIN::Query::Set;
use Search::GIN::Query::Manual;
use Search::GIN::Extract::Callback;

use KiokuDB;

use constant TESTDB => 't/set_query.db';

sub cleanup { unlink TESTDB }

BEGIN {
    plan skip_all => "DBD::SQLite is required"
        unless eval { require DBI; require DBD::SQLite };
}

END { cleanup; }

{
    package TestClass;
    use Moose;

    has a => (is => 'rw');
    has b => (is => 'rw');
    has c => (is => 'rw');
    has d => (is => 'rw');
}

cleanup;

my $kiokudb = KiokuDB->connect(
    'dbi:SQLite:' . TESTDB,
    create  => 1,
    extract => Search::GIN::Extract::Callback->new(
        extract => sub {
            my ($obj) = @_;
            return { a => $obj->a, b => $obj->b, c => $obj->c, d => $obj->d };
        }
    ),
);

{
    my @a = 'a' .. 'e';
    my @b = 'f' .. 'j';
    my @c = 'k' .. 'o';
    my @d = 'p' .. 't';

    my $s = $kiokudb->new_scope;

    $kiokudb->store(
        map { TestClass->new( a => $a[$_], b => $b[$_], c => $c[$_], d => $d[$_] ) } 0 .. 4
    );
}

# now we can, finally, do some searches...
# we're going to ask for:

lives_ok {
    # 1: (a:a or a:b or a:e) INTERSECT (c:k and d:p)
    # should return just the object with a=a
    my $results = $kiokudb->search(
        Search::GIN::Query::Set->new(
            operation  => 'INTERSECT',
            subqueries => [
                Search::GIN::Query::Manual->new(
                    values => { a => [qw(a b e)] },
                ),
                Search::GIN::Query::Manual->new(
                    values => { c => 'k', d => 'p' },
                    method => 'all',
                ),
            ],
        )
    );

    my $item = $results->next;
    my @objects = @$item;

    is( scalar @objects, 1, 'one object in the bulk' );
    is( $objects[0]->a, 'a', 'found the correct object' );

    ok( !$results->next, 'no more results' );
};

done_testing();

==> inc/DistMeta.pm <==

package inc::DistMeta;
use Moose;

has metadata => (
    is       => 'ro',
    isa      => 'HashRef',
    required => 1,
);

with 'Dist::Zilla::Role::MetaProvider';

around BUILDARGS => sub {
    my $orig = shift;
    my $self = shift;

    my $params = $self->$orig(@_);

    my $zilla       = delete $params->{zilla};
    my $plugin_name = delete $params->{plugin_name};

    return {
        zilla       => $zilla,
        plugin_name => $plugin_name,
        metadata    => $params,
    };
};

__PACKAGE__->meta->make_immutable;
no Moose;

1;

==> t/release-eol.t <==

BEGIN {
  unless ($ENV{RELEASE_TESTING}) {
    require Test::More;
    Test::More::plan(skip_all => 'these tests are for release candidate testing');
  }
}

use strict;
use warnings;

use Test::More;

eval 'use Test::EOL';
plan skip_all => 'Test::EOL required' if $@;

all_perl_files_ok({ trailing_whitespace => 1 });

==> t/release-no-tabs.t <==

BEGIN {
  unless ($ENV{RELEASE_TESTING}) {
    require Test::More;
    Test::More::plan(skip_all => 'these tests are for release candidate testing');
  }
}

use strict;
use warnings;

use Test::More;

eval 'use Test::NoTabs';
plan skip_all => 'Test::NoTabs required' if $@;

all_perl_files_ok();
==> examples/job_queue.pl <==

#!/usr/bin/perl

use strict;
use warnings;

=pod

This script demonstrates using L<KiokuDB> to properly serialize a closure,
including maintaining the proper identity of all the referenced objects in the
captured variables.

This feature is used to implement a simple job queue, where the queue
management is handled by DBIC, but the job body is a closure.

Actual job queue features are missing (e.g. marking a job as in progress,
etc), but the point is to show off KiokuDB, not to write a job queue ;-)
A sketch of a fuller worker loop follows this script.

=cut

use KiokuDB;

{
    # this is just a mock data
    package MyApp::DB::Result::DataPoint;
    use base qw(DBIx::Class);

    __PACKAGE__->load_components(qw(Core));
    __PACKAGE__->table('data_point');
    __PACKAGE__->add_columns(
        id    => { data_type => "integer" },
        value => { data_type => "integer" },
    );
    __PACKAGE__->set_primary_key('id');

    # and a mock result data (the output of a job)
    package MyApp::DB::Result::Output;
    use base qw(DBIx::Class);

    __PACKAGE__->load_components(qw(Core));
    __PACKAGE__->table('output');
    __PACKAGE__->add_columns(
        id    => { data_type => "integer" },
        value => { data_type => "integer" },
    );
    __PACKAGE__->set_primary_key('id');

    # this represents a queued or finished job
    package MyApp::DB::Result::Job;
    use base qw(DBIx::Class);

    __PACKAGE__->load_components(qw(KiokuDB Core));
    __PACKAGE__->table('foo');
    __PACKAGE__->add_columns(
        id          => { data_type => "integer" },
        description => { data_type => "varchar" },
        action      => { data_type => "varchar" },
        finished    => { data_type => "bool", default_value => 0 },
        result      => {
            data_type   => "integer",
            is_nullable => 1,
        },
    );
    __PACKAGE__->set_primary_key('id');
    __PACKAGE__->kiokudb_column('action');

    __PACKAGE__->belongs_to( result => "MyApp::DB::Result::Output" );

    sub run {
        my $self = shift;

        # run the actual action
        $self->action->($self);

        # mark the job as finished
        $self->finished(1);

        $self->update;
    }

    package MyApp::DB;
    use base qw(DBIx::Class::Schema);

    __PACKAGE__->load_components(qw(Schema::KiokuDB));

    __PACKAGE__->register_class( Job       => qw(MyApp::DB::Result::Job) );
    __PACKAGE__->register_class( Output    => qw(MyApp::DB::Result::Output) );
    __PACKAGE__->register_class( DataPoint => qw(MyApp::DB::Result::DataPoint) );
}

my $dir = KiokuDB->connect(
    'dbi:SQLite:dbname=:memory:',
    schema => "MyApp::DB",
    create => 1,
);

my $schema = $dir->backend->schema;

# create some data
$schema->txn_do(sub {
    my $rs = $schema->resultset("DataPoint");

    $rs->create({ value => 4 });
    $rs->create({ value => 3 });
    $rs->create({ value => 2 });
    $rs->create({ value => 50 });
});

# queue a job
$dir->txn_do( scope => 1, body => sub {
    my $small_numbers = $schema->resultset("DataPoint")->search({ value => { "<=", 10 } });

    # create a closure for the job:
    my $action = sub {
        my $self = shift;

        my $sum = 0;

        # small_numbers is a closure variable, which will be saved implicitly
        # as a KiokuDB object
        while ( my $data_point = $small_numbers->next ) {
            $sum += $data_point->value;
        }

        # $schema is also restored properly
        $self->result( $schema->resultset("Output")->create({ value => $sum }) );
    };

    # we can simply store the closure in the DB
    $schema->resultset("Job")->create({
        description => "sum some small numbers",
        action      => $action,
    });
});

# run a job
# this can be done in a worker process, obviously (just change :memory: to a
# real file)
$dir->txn_do( scope => 1, body => sub {
    my $jobs = $schema->resultset("Job")->search({ finished => 0 });

    my $job = $jobs->search(undef, { rows => 1 })->single;

    $job->run();

    my $result = $job->result;

    # ...
});
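As the POD above notes, real queue features (claiming a job so concurrent workers don't race) are deliberately left out. This is a minimal sketch of what a polling worker built on the same pieces might look like; the `in_progress` column and the loop itself are additions for illustration, not part of the schema defined in this distribution:

    #!/usr/bin/perl
    # Sketch: a polling worker for the queue above. Assumes the job table
    # gained an extra 'in_progress' boolean column (NOT defined in the
    # example schema) so a job can be claimed before it is run.
    use strict;
    use warnings;

    sub run_pending_jobs {
        my ($dir) = @_;
        my $schema = $dir->backend->schema;

        while (1) {
            my $job;

            # claim one pending job inside its own transaction, so other
            # workers see in_progress => 1 and skip it
            $dir->txn_do( scope => 1, body => sub {
                $job = $schema->resultset("Job")->search(
                    { finished => 0, in_progress => 0 },
                    { rows => 1 },
                )->single;

                $job->update({ in_progress => 1 }) if $job;
            });

            last unless $job;

            # run outside the claiming transaction; run() marks it finished
            $dir->txn_do( scope => 1, body => sub { $job->run } );
        }
    }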
==> t/autovivify_handle.t <==

#!/usr/bin/perl

use strict;
use warnings;

use Scalar::Util qw(refaddr);

use Test::More;
use Test::Exception;
use Test::TempDir;

use KiokuDB;

BEGIN {
    plan skip_all => "DBD::SQLite and SQL::Translator >= 0.11005 are required"
        unless eval "use DBI; use DBD::SQLite; use DBIx::Class::Optional::Dependencies; 1";

    plan skip_all => DBIx::Class::Optional::Dependencies->req_missing_for("deploy")
        unless DBIx::Class::Optional::Dependencies->req_ok_for("deploy");
}

{
    package MyApp::DB::Result::Gaplonk; # Acme::MetaSyntactic::donmartin ++
    use base qw(DBIx::Class::Core);

    __PACKAGE__->load_components(qw(KiokuDB));
    __PACKAGE__->table('gaplonk');
    __PACKAGE__->add_columns(qw(id name object));
    __PACKAGE__->set_primary_key('id');
    __PACKAGE__->kiokudb_column('object');

    package MyApp::DB;
    use base qw(DBIx::Class::Schema);

    __PACKAGE__->load_components(qw(Schema::KiokuDB));
    __PACKAGE__->define_kiokudb_schema();
    __PACKAGE__->register_class( Gaplonk => qw(MyApp::DB::Result::Gaplonk) );

    package Patawee;
    use Moose;

    has sproing => ( isa => "Str", is => "ro" );

    __PACKAGE__->meta->make_immutable;
}

my $sqlite = "dbi:SQLite:dbname=" . temp_root->file("db");

my $schema = MyApp::DB->connect($sqlite);

{
    my $refaddr;

    {
        isa_ok( my $k = $schema->kiokudb_handle, "KiokuDB" );
        $refaddr = refaddr($k);
    }

    {
        is( refaddr($schema->kiokudb_handle), $refaddr, "KiokuDB handle not weak when autovivified" );
    }
}

my $dir = $schema->kiokudb_handle;
isa_ok( $dir, 'KiokuDB', 'got autovivified directory handle from schema' );

$dir->backend->deploy;

my $id;

lives_ok {
    $dir->txn_do( scope => 1, body => sub {
        my $object = Patawee->new( sproing => 'kalloon' );

        my $thing = $schema->resultset('Gaplonk')->create({
            id     => 1,
            name   => 'VOOMAROOMA',
            object => $object,
        });

        $id = $thing->id;
    });
} 'create row in DB';

$dir->txn_do( scope => 1, body => sub {
    my $fetch_again = $schema->resultset('Gaplonk')->find( $id );

    isa_ok( $fetch_again, 'MyApp::DB::Result::Gaplonk', 'got DBIC row object back' );
    is( $fetch_again->name, 'VOOMAROOMA', 'row->name' );

    my $object = $fetch_again->object;

    isa_ok( $object, 'Patawee' );
    is( $object->sproing, 'kalloon', 'object attribute' );
});

done_testing();

==> t/release-pod-syntax.t <==

#!perl

BEGIN {
  unless ($ENV{RELEASE_TESTING}) {
    require Test::More;
    Test::More::plan(skip_all => 'these tests are for release candidate testing');
  }
}

use Test::More;

eval "use Test::Pod 1.41";
plan skip_all => "Test::Pod 1.41 required for testing POD" if $@;

all_pod_files_ok();

==> inc/DBICOptionalDeps.pm <==

package inc::DBICOptionalDeps;
use Moose;

extends 'Dist::Zilla::Plugin::MakeMaker::Awesome';

override _build_MakeFile_PL_template => sub {
    my ($self) = @_;

    my $template = super();

    my $injected = <<'INJECT';
require DBIx::Class::Optional::Dependencies;
$WriteMakefileArgs{PREREQ_PM} = {
    %{ $WriteMakefileArgs{PREREQ_PM} || {} },
    %{ DBIx::Class::Optional::Dependencies->req_list_for ('deploy') },
};
INJECT

    $template =~ s{(?=WriteMakefile\s*\()}{$injected};

    return $template;
};

__PACKAGE__->meta->make_immutable;
==> t/release-pod-coverage.t <==

#!perl

BEGIN {
  unless ($ENV{RELEASE_TESTING}) {
    require Test::More;
    Test::More::plan(skip_all => 'these tests are for release candidate testing');
  }
}

use Test::More;

eval "use Test::Pod::Coverage 1.08";
plan skip_all => "Test::Pod::Coverage 1.08 required for testing POD coverage" if $@;

eval "use Pod::Coverage::TrustPod";
plan skip_all => "Pod::Coverage::TrustPod required for testing POD coverage" if $@;

all_pod_coverage_ok({ coverage_class => 'Pod::Coverage::TrustPod' });

==> lib/DBIx/Class/KiokuDB.pm <==

package DBIx::Class::KiokuDB;
BEGIN {
  $DBIx::Class::KiokuDB::AUTHORITY = 'cpan:NUFFIN';
}
{
  $DBIx::Class::KiokuDB::VERSION = '1.22';
}

use strict;
use warnings;

use Carp;
use Scalar::Util qw(weaken);

use namespace::clean;

use base qw(DBIx::Class::Core);

sub new {
    my $self = shift->next::method(@_);

    foreach my $key ( $self->result_source->columns ) {
        my $col_info = $self->column_info($key);

        if ( $col_info->{_kiokudb_info} and ref( my $obj = $self->get_column($key) ) ) {
            $self->store_kiokudb_column( $key => $obj );
        }
    }

    return $self;
}

sub insert {
    my ( $self, @args ) = @_;

    my $schema = $self->result_source->schema;
    my $g = $schema->txn_scope_guard;

    my $dir = $schema->kiokudb_handle;
    my $lo = $dir->live_objects;

    if ( my @insert = grep { ref and not $lo->object_to_entry($_) } values %{ $self->{_kiokudb_column} } ) {
        $dir->insert(@insert);
    }

    my $ret = $self->next::method(@args);

    $g->commit;

    return $ret;
}

sub update {
    my ( $self, @args ) = @_;

    my $dir = $self->result_source->schema->kiokudb_handle;
    my $lo = $dir->live_objects;

    if ( my @insert = grep { ref and not $lo->object_to_entry($_) } values %{ $self->{_kiokudb_column} } ) {
        croak("Can't update object, related KiokuDB objects are not in storage");
    }

    $self->next::method(@args);
}

sub store {
    my ( $self, @args ) = @_;

    my $schema = $self->result_source->schema;
    my $g = $schema->txn_scope_guard;

    if ( my @objects = grep { ref } values %{ $self->{_kiokudb_column} } ) {
        $schema->kiokudb_handle->store(@objects);
    }

    my $ret = $self->insert_or_update;

    $g->commit;

    return $ret;
}

sub kiokudb_column {
    my ($self, $rel, $cond, $attrs) = @_;

    # assume a foreign key constraint unless defined otherwise
    $attrs->{is_foreign_key_constraint} = 1
        if not exists $attrs->{is_foreign_key_constraint};
$cond : $rel; $self->add_relationship( $rel, 'entries', { 'foreign.id' => "self.$fk" }, $attrs ); # FIXME hardcoded 'entries' my $col_info = $self->column_info($fk); $col_info->{_kiokudb_info} = {}; my $accessor = $col_info->{accessor}; $accessor = $rel unless defined $accessor; $self->mk_group_accessors('kiokudb_column' => [ $accessor, $fk ]); } sub _kiokudb_id_to_object { my ( $self, $id ) = @_; if ( ref( my $obj = $self->result_source->schema->kiokudb_handle->lookup($id) ) ) { return $obj; } else { croak("No object with ID '$id' found"); } } sub _kiokudb_object_to_id { my ( $self, $object ) = @_; confess unless ref $object; my $dir = $self->result_source->schema->kiokudb_handle; if ( my $id = $dir->object_to_id($object) ) { return $id; } else { # generate an ID my $collapser = $dir->collapser; my $id_method = $collapser->id_method(ref $object); my $id = $collapser->$id_method($object); # register the ID $dir->live_objects->insert( $id => $object ); return $id; } } sub get_kiokudb_column { my ( $self, $col ) = @_; $self->throw_exception("$col is not a KiokuDB column") unless exists $self->column_info($col)->{_kiokudb_info}; return $self->{_kiokudb_column}{$col} if defined $self->{_kiokudb_column}{$col}; if ( defined( my $val = $self->get_column($col) ) ) { my $obj = ref $val ? $val : $self->_kiokudb_id_to_object($val); # weaken by default, in case there are cycles, the live object scope will # take care of this weaken( $self->{_kiokudb_column}{$col} = $obj ); return $obj; } else { return; } } sub _set_kiokudb_column { my ( $self, $method, $col, $obj ) = @_; if ( ref $obj ) { $self->$method( $col, $self->_kiokudb_object_to_id($obj) ); $self->{_kiokudb_column}{$col} = $obj; } else { $self->$method( $col, undef ); delete $self->{_kiokudb_column}{$col}; } return $obj; } sub set_kiokudb_column { my ( $self, @args ) = @_; $self->_set_kiokudb_column( set_column => @args ); } sub store_kiokudb_column { my ( $self, @args ) = @_; $self->_set_kiokudb_column( store_column => @args ); } # ex: set sw=4 et: __PACKAGE__ __END__ =pod =head1 NAME DBIx::Class::KiokuDB - Refer to L<KiokuDB> objects from L<DBIx::Class> tables. =head1 SYNOPSIS See L<DBIx::Class::Schema::KiokuDB>. package MyApp::DB::Result::Album; use base qw(DBIx::Class); __PACKAGE__->load_components(qw(Core KiokuDB)); __PACKAGE__->table('album'); __PACKAGE__->add_columns( id => { data_type => "integer" }, title => { data_type => "varchar" }, # the foreign key for the KiokuDB object: metadata => { data_type => "varchar" }, ); __PACKAGE__->set_primary_key('id'); # enable a KiokuDB rel on the column: __PACKAGE__->kiokudb_column('metadata'); =head1 DESCRIPTION This L<DBIx::Class> component provides the code necessary for L<DBIx::Class::Row> objects to refer to L<KiokuDB> objects stored in L<KiokuDB::Backend::DBI>. =head1 CLASS METHODS =over 4 =item kiokudb_column $rel Declares a relationship to any L<KiokuDB> object. In future versions adding relationships to different sub-collections will be possible as well. =back =head1 METHODS =over 4 =item store A convenience method that calls L<KiokuDB/store> on all referenced L<KiokuDB> objects, and then invokes C<insert_or_update> on C<$self>. =item get_kiokudb_column $col =item set_kiokudb_column $col, $obj =item store_kiokudb_column $col, $obj See L</kiokudb_column>. =back =head1 OVERRIDDEN METHODS =over 4 =item new Recognizes objects passed in as column values, much like standard relationships do. =item insert Also calls L<KiokuDB/insert> on all referenced objects that are not in the L<KiokuDB> storage. =item update Adds a check to ensure that all referenced L<KiokuDB> objects are in storage.
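For example (a sketch; C<MyApp::Metadata> is a hypothetical class, and C<$stored_elsewhere> is assumed to already live in KiokuDB storage):

    # on insert, a brand new KiokuDB object is stored automatically
    my $album = $schema->resultset("Album")->create({
        id       => 1,
        title    => "Blah blah",
        metadata => MyApp::Metadata->new,
    });

    # on update, the referenced object must already be in storage,
    # otherwise the update croaks
    $album->metadata($stored_elsewhere);
    $album->update;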
=back Backend000755001750000144 011775631363 17304 5ustar00doyusers000000000000KiokuDB-Backend-DBI-1.22/lib/KiokuDBDBI.pm100644001750000144 10275211775631363 20447 0ustar00doyusers000000000000KiokuDB-Backend-DBI-1.22/lib/KiokuDB/Backend#!/usr/bin/perl package KiokuDB::Backend::DBI; BEGIN { $KiokuDB::Backend::DBI::AUTHORITY = 'cpan:NUFFIN'; } { $KiokuDB::Backend::DBI::VERSION = '1.22'; } use Moose; use Moose::Util::TypeConstraints; use MooseX::Types -declare => [qw(ValidColumnName SchemaProto)]; use MooseX::Types::Moose qw(ArrayRef HashRef Str Defined); use Moose::Util::TypeConstraints qw(enum); use Try::Tiny; use Data::Stream::Bulk::DBI; use SQL::Abstract; use JSON; use Scalar::Util qw(weaken refaddr); use List::MoreUtils qw(any); use KiokuDB::Backend::DBI::Schema; use KiokuDB::TypeMap; use KiokuDB::TypeMap::Entry::DBIC::Row; use KiokuDB::TypeMap::Entry::DBIC::ResultSource; use KiokuDB::TypeMap::Entry::DBIC::ResultSet; use KiokuDB::TypeMap::Entry::DBIC::Schema; use namespace::clean -except => 'meta'; with qw( KiokuDB::Backend KiokuDB::Backend::Serialize::Delegate KiokuDB::Backend::Role::Clear KiokuDB::Backend::Role::TXN KiokuDB::Backend::Role::Scan KiokuDB::Backend::Role::Query::Simple KiokuDB::Backend::Role::Query::GIN KiokuDB::Backend::Role::Concurrency::POSIX KiokuDB::Backend::Role::GC Search::GIN::Extract::Delegate ); # KiokuDB::Backend::Role::TXN::Nested is not supported by many DBs # we don't really care though my @std_cols = qw(id class root tied); my @reserved_cols = ( @std_cols, 'data' ); my %reserved_cols = ( map { $_ => 1 } @reserved_cols ); subtype ValidColumnName, as Str, where { not exists $reserved_cols{$_} }; subtype SchemaProto, as Defined, where { Class::MOP::load_class($_) unless ref; !ref($_) || blessed($_) and $_->isa("DBIx::Class::Schema::KiokuDB"); }; sub new_from_dsn { my ( $self, $dsn, @args ) = @_; @args = %{ $args[0] } if @args == 1 and ref $args[0] eq 'HASH'; $self->new( dsn => "dbi:$dsn", @args ); } sub BUILD { my $self = shift; $self->schema; # connect early if ( $self->create ) { $self->create_tables; } } has '+serializer' => ( default => "json" ); # to make dumps readable has json => ( isa => "Object", is => "ro", default => sub { JSON->new }, ); has create => ( isa => "Bool", is => "ro", default => 0, ); has 'dsn' => ( isa => "Str|CodeRef", is => "ro", ); has [qw(user password)] => ( isa => "Str", is => "ro", ); has dbi_attrs => ( isa => HashRef, is => "ro", ); has mysql_strict => ( isa => "Bool", is => "ro", default => 1, ); has sqlite_sync_mode => ( isa => enum([qw(0 1 2 OFF NORMAL FULL off normal full)]), is => "ro", predicate => "has_sqlite_fsync_mode", ); has on_connect_call => ( isa => "ArrayRef", is => "ro", lazy_build => 1, ); sub _build_on_connect_call { my $self = shift; my @call; if ( $self->mysql_strict ) { push @call, sub { my $storage = shift; if ( $storage->can("connect_call_set_strict_mode") ) { $storage->connect_call_set_strict_mode; } }; }; if ( $self->has_sqlite_fsync_mode ) { push @call, sub { my $storage = shift; if ( $storage->sqlt_type eq 'SQLite' ) { $storage->dbh_do(sub { $_[1]->do("PRAGMA synchronous=" . 
$self->sqlite_sync_mode) }); } }; } return \@call; } has dbic_attrs => ( isa => "HashRef", is => "ro", lazy_build => 1, ); sub _build_dbic_attrs { my $self = shift; return { on_connect_call => $self->on_connect_call, }; } has connect_info => ( isa => ArrayRef, is => "ro", lazy_build => 1, ); sub _build_connect_info { my $self = shift; return [ $self->dsn, $self->user, $self->password, $self->dbi_attrs, $self->dbic_attrs ]; } has schema => ( isa => "DBIx::Class::Schema", is => "ro", lazy_build => 1, init_arg => "connected_schema", handles => [qw(deploy kiokudb_entries_source_name)], ); has _schema_proto => ( isa => SchemaProto, is => "ro", init_arg => "schema", default => "KiokuDB::Backend::DBI::Schema", ); has schema_hook => ( isa => "CodeRef|Str", is => "ro", predicate => "has_schema_hook", ); sub _build_schema { my $self = shift; my $schema = $self->_schema_proto->clone; unless ( $schema->kiokudb_entries_source_name ) { $schema->define_kiokudb_schema( extra_entries_columns => $self->columns ); } if ( $self->has_schema_hook ) { my $h = $self->schema_hook; $self->$h($schema); } $schema->connect(@{ $self->connect_info }); } has storage => ( isa => "DBIx::Class::Storage::DBI", is => "rw", lazy_build => 1, handles => [qw(dbh_do)], ); sub _build_storage { shift->schema->storage } has for_update => ( isa => "Bool", is => "ro", default => 1, ); has _for_update => ( isa => "Bool", is => "ro", lazy_build => 1, ); sub _build__for_update { my $self = shift; return ( $self->for_update and $self->storage->sqlt_type =~ /^(?:MySQL|Oracle|PostgreSQL)$/ ); } has columns => ( isa => ArrayRef[ValidColumnName|HashRef], is => "ro", default => sub { [] }, ); has _columns => ( isa => HashRef, is => "ro", lazy_build => 1, ); sub _build__columns { my $self = shift; my $rs = $self->schema->source( $self->kiokudb_entries_source_name ); my @user_cols = grep { not exists $reserved_cols{$_} } $rs->columns; return { map { $_ => $rs->column_info($_)->{extract} || undef } @user_cols }; } has _ordered_columns => ( isa => "ArrayRef", is => "ro", lazy_build => 1, ); sub _build__ordered_columns { my $self = shift; return [ @reserved_cols, sort keys %{ $self->_columns } ]; } has _column_order => ( isa => "HashRef", is => "ro", lazy_build => 1, ); sub _build__column_order { my $self = shift; my $cols = $self->_ordered_columns; return { map { $cols->[$_] => $_ } 0 .. 
$#$cols } } has '+extract' => ( required => 0, ); has sql_abstract => ( isa => "SQL::Abstract", is => "ro", lazy_build => 1, ); sub _build_sql_abstract { my $self = shift; SQL::Abstract->new; } # use a Maybe so we can force undef in the builder has batch_size => ( isa => "Maybe[Int]", is => "ro", lazy => 1, builder => '_build_batch_size', ); sub _build_batch_size { my $self = shift; if ($self->storage->sqlt_type eq 'SQLite') { return 999; } else { return undef; } } sub has_batch_size { defined shift->batch_size } sub register_handle { my ( $self, $kiokudb ) = @_; $self->schema->_kiokudb_handle($kiokudb); } sub default_typemap { KiokuDB::TypeMap->new( isa_entries => { # redirect to schema row 'DBIx::Class::Row' => KiokuDB::TypeMap::Entry::DBIC::Row->new, # actual serialization 'DBIx::Class::ResultSet' => KiokuDB::TypeMap::Entry::DBIC::ResultSet->new, # fake, the entries never get written to the db 'DBIx::Class::ResultSource' => KiokuDB::TypeMap::Entry::DBIC::ResultSource->new, 'DBIx::Class::Schema' => KiokuDB::TypeMap::Entry::DBIC::Schema->new, }, ); } sub insert { my ( $self, @entries ) = @_; return unless @entries; my $g = $self->schema->txn_scope_guard; $self->insert_rows( $self->entries_to_rows(@entries) ); # hopefully we're in a transaction, otherwise this totally sucks if ( $self->extract ) { my %gin_index; foreach my $entry ( @entries ) { my $id = $entry->id; if ( $entry->deleted || !$entry->has_object ) { $gin_index{$id} = []; } else { my $d = $entry->backend_data || $entry->backend_data({}); $gin_index{$id} = [ $self->extract_values( $entry->object, entry => $entry ) ]; } } $self->update_index(\%gin_index); } $g->commit; } sub entries_to_rows { my ( $self, @entries ) = @_; my ( %insert, %update, @dbic ); foreach my $t ( \%insert, \%update ) { foreach my $col ( @{ $self->_ordered_columns } ) { $t->{$col} = []; } } foreach my $entry ( @entries ) { my $id = $entry->id; if ( $id =~ /^dbic:schema/ ) { next; } elsif ( $id =~ /^dbic:row:/ ) { push @dbic, $entry->data; } else { my $targ = $entry->prev ? \%update : \%insert; my $row = $self->entry_to_row($entry, $targ); } } return \( %insert, %update, @dbic ); } sub entry_to_row { my ( $self, $entry, $collector ) = @_; for (qw(id class tied)) { push @{ $collector->{$_} }, $entry->$_; } push @{ $collector->{root} }, $entry->root ? 1 : 0; push @{ $collector->{data} }, $self->serialize($entry); my $cols = $self->_columns; foreach my $column ( keys %$cols ) { my $c = $collector->{$column}; if ( my $extract = $cols->{$column} ) { if ( my $obj = $entry->object ) { push @$c, $obj->$extract($column); next; } } elsif ( ref( my $data = $entry->data ) eq 'HASH' ) { if ( exists $data->{$column} and not ref( my $value = $data->{$column} ) ) { push @$c, $value; next; } } push @$c, undef; } } sub insert_rows { my ( $self, $insert, $update, $dbic ) = @_; my $g = $self->schema->txn_scope_guard; $self->dbh_do(sub { my ( $storage, $dbh ) = @_; if ( $self->extract ) { if ( my @ids = map { @{ $_->{id} || [] } } $insert, $update ) { my $batch_size = $self->batch_size || scalar(@ids); my @ids_copy = @ids; while ( my @batch_ids = splice @ids_copy, 0, $batch_size ) { my $del_gin_sth = $dbh->prepare_cached("DELETE FROM gin_index WHERE id IN (" . join(", ", ('?') x @batch_ids) . 
")"); $del_gin_sth->execute(@batch_ids); $del_gin_sth->finish; } } } my $colinfo = $self->schema->source('entries')->columns_info; my %rows = ( insert => $insert, update => $update ); foreach my $op (qw(insert update)) { my $prepare = "prepare_$op"; my ( $sth, @cols ) = $self->$prepare($dbh); my $i = 1; foreach my $column_name (@cols) { my $attributes = {}; if ( exists $colinfo->{$column_name} ) { my $dt = $colinfo->{$column_name}{data_type}; $attributes = $self->storage->bind_attribute_by_data_type($dt); } $sth->bind_param_array( $i, $rows{$op}->{$column_name}, $attributes ); $i++; } $sth->execute_array({ArrayTupleStatus => []}) or die; $sth->finish; } $_->insert_or_update for @$dbic; }); $g->commit; } sub prepare_select { my ( $self, $dbh, $stmt ) = @_; $dbh->prepare_cached($stmt . ( $self->_for_update ? " FOR UPDATE" : "" ), {}, 3); # 3 = don't use if still Active } sub prepare_insert { my ( $self, $dbh ) = @_; my @cols = @{ $self->_ordered_columns }; my $ins = $dbh->prepare_cached("INSERT INTO entries (" . join(", ", @cols) . ") VALUES (" . join(", ", ('?') x @cols) . ")"); return ( $ins, @cols ); } sub prepare_update { my ( $self, $dbh ) = @_; my ( $id, @cols ) = @{ $self->_ordered_columns }; my $upd = $dbh->prepare_cached("UPDATE entries SET " . join(", ", map { "$_ = ?" } @cols) . " WHERE $id = ?"); return ( $upd, @cols, $id ); } sub update_index { my ( $self, $entries ) = @_; $self->dbh_do(sub { my ( $storage, $dbh ) = @_; my $i_sth = $dbh->prepare_cached("INSERT INTO gin_index (id, value) VALUES (?, ?)"); foreach my $id ( keys %$entries ) { my $rv = $i_sth->execute_array( {ArrayTupleStatus => []}, $id, $entries->{$id}, ); } $i_sth->finish; }); } sub _parse_dbic_key { my ( $self, $key ) = @_; @{ $self->json->decode(substr($key,length('dbic:row:'))) }; } sub _part_rows_and_ids { my ( $self, $rows_and_ids ) = @_; my ( @rows, @ids, @special ); for ( @$rows_and_ids ) { if ( /^dbic:schema/ ) { push @special, $_; } elsif ( /^dbic:row:/ ) { push @rows, $_; } else { push @ids, $_; } } return \( @rows, @ids, @special ); } sub _group_dbic_keys { my ( $self, $keys, $mkey_handler ) = @_; my ( %keys, %ids ); foreach my $id ( @$keys ) { my ( $rs_name, @key ) = $self->_parse_dbic_key($id); if ( @key > 1 ) { $mkey_handler->($id, $rs_name, @key); } else { # for other objects we queue up IDs for a single SELECT push @{ $keys{$rs_name} ||= [] }, $key[0]; push @{ $ids{$rs_name} ||= [] }, $id; } } return \( %keys, %ids ); } sub get { my ( $self, @rows_and_ids ) = @_; return unless @rows_and_ids; my %entries; my ( $rows, $ids, $special ) = $self->_part_rows_and_ids(\@rows_and_ids); if ( @$ids ) { $self->dbh_do(sub { my ( $storage, $dbh ) = @_; my @ids_copy = @$ids; my $batch_size = $self->batch_size || scalar(@$ids); while ( my @batch_ids = splice(@ids_copy, 0, $batch_size) ) { my $sth = $self->prepare_select($dbh, "SELECT id, data FROM entries WHERE id IN (" . join(", ", ('?') x @batch_ids) . 
")"); $sth->execute(@batch_ids); $sth->bind_columns( \my ( $id, $data ) ); # not actually necessary but i'm keeping it around for reference: #my ( $id, $data ); #use DBD::Pg qw(PG_BYTEA); #$sth->bind_col(1, \$id); #$sth->bind_col(2, \$data, { pg_type => PG_BYTEA }); while ( $sth->fetch ) { $entries{$id} = $data; } } }); } if ( @$rows ) { my $schema = $self->schema; my $err = \"foo"; my ( $rs_keys, $rs_ids ) = try { $self->_group_dbic_keys( $rows, sub { my ( $id, $rs_name, @key ) = @_; # multi column primary keys need 'find' my $obj = $schema->resultset($rs_name)->find(@key) or die $err; # die to stop search $entries{$id} = KiokuDB::Entry->new( id => $id, class => ref($obj), data => $obj, ); }); } catch { die $_ if ref $_ and refaddr($_) == refaddr($err); } or return; foreach my $rs_name ( keys %$rs_keys ) { my $rs = $schema->resultset($rs_name); my $ids = $rs_ids->{$rs_name}; my @objs; if ( @$ids == 1 ) { my $id = $ids->[0]; my $obj = $rs->find($rs_keys->{$rs_name}[0]) or return; $entries{$id} = KiokuDB::Entry->new( id => $id, class => ref($obj), data => $obj, ); } else { my ($pk) = $rs->result_source->primary_columns; my $keys = $rs_keys->{$rs_name}; my @objs = $rs->search({ $pk => $keys })->all; return if @objs != @$ids; # this key lookup is because it's not returned in the same order my %pk_to_id; @pk_to_id{@$keys} = @$ids; foreach my $obj ( @objs ) { my $id = $pk_to_id{$obj->id}; $entries{$id} = KiokuDB::Entry->new( id => $id, class => ref($obj), data => $obj, ); } } } } for ( @$special ) { $entries{$_} = KiokuDB::Entry->new( id => $_, $_ eq 'dbic:schema' ? ( data => $self->schema, class => "DBIx::Class::Schema" ) : ( data => undef, class => "DBIx::Class::ResultSource" ) ); } # ->rows only works after we're done return if @rows_and_ids != keys %entries; # case sensitivity differences, possibly? return if any { !defined } @entries{@rows_and_ids}; map { ref($_) ? $_ : $self->deserialize($_) } @entries{@rows_and_ids}; } sub delete { my ( $self, @ids_or_entries ) = @_; # FIXME special DBIC rows my @ids = map { ref($_) ? $_->id : $_ } @ids_or_entries; $self->dbh_do(sub { my ( $storage, $dbh ) = @_; my $g = $self->schema->txn_scope_guard; my $batch_size = $self->batch_size || scalar(@ids); my @ids_copy = @ids; while ( my @batch_ids = splice @ids_copy, 0, $batch_size ) { if ( $self->extract ) { # FIXME rely on cascade delete? my $sth = $dbh->prepare_cached("DELETE FROM gin_index WHERE id IN (" . join(", ", ('?') x @batch_ids) . ")"); $sth->execute(@batch_ids); $sth->finish; } my $sth = $dbh->prepare_cached("DELETE FROM entries WHERE id IN (" . join(", ", ('?') x @batch_ids) . ")"); $sth->execute(@batch_ids); $sth->finish; } $g->commit; }); return; } sub exists { my ( $self, @rows_and_ids ) = @_; return unless @rows_and_ids; my $schema = $self->schema; my %entries; my ( $rows, $ids, $special ) = $self->_part_rows_and_ids(\@rows_and_ids); if ( @$ids ) { $self->dbh_do(sub { my ( $storage, $dbh ) = @_; my $batch_size = $self->batch_size || scalar(@$ids); my @ids_copy = @$ids; while ( my @batch_ids = splice @ids_copy, 0, $batch_size ) { my $sth = $self-> prepare_select ( $dbh, "SELECT id FROM entries WHERE id IN (" . join(", ", ('?') x @batch_ids) . 
")"); $sth->execute(@batch_ids); $sth->bind_columns( \( my $id ) ); $entries{$id} = 1 while $sth->fetch; } }); } if ( @$rows ) { my ( $rs_keys, $rs_ids ) = $self->_group_dbic_keys( $rows, sub { my ( $id, $rs_name, @key ) = @_; $entries{$id} = defined $schema->resultset($rs_name)->find(@key); # FIXME slow }); foreach my $rs_name ( keys %$rs_keys ) { my $rs = $schema->resultset($rs_name); my $ids = $rs_ids->{$rs_name}; my $keys = $rs_keys->{$rs_name}; my ( $pk ) = $rs->result_source->primary_columns; my @exists = $rs->search({ $pk => $keys })->get_column($pk)->all; my %pk_to_id; @pk_to_id{@$keys} = @$ids; @entries{@pk_to_id{@exists}} = ( (1) x @exists ); } } for ( @$special ) { if ( $_ eq 'dbic:schema' ) { $entries{$_} = 1; } elsif ( /^dbic:schema:(.*)/ ) { $entries{$_} = defined try { $schema->source($1) }; } } return @entries{@rows_and_ids}; } sub txn_begin { shift->storage->txn_begin(@_) } sub txn_commit { shift->storage->txn_commit(@_) } sub txn_rollback { shift->storage->txn_rollback(@_) } sub clear { my $self = shift; $self->dbh_do(sub { my ( $storage, $dbh ) = @_; $dbh->do("DELETE FROM gin_index"); $dbh->do("DELETE FROM entries"); }); } sub _sth_stream { my ( $self, $sql, @bind ) = @_; $self->dbh_do(sub { my ( $storage, $dbh ) = @_; my $sth = $self->prepare_select($dbh, $sql); $sth->execute(@bind); Data::Stream::Bulk::DBI->new( sth => $sth ); }); } sub _select_entry_stream { my ( $self, @args ) = @_; my $stream = $self->_sth_stream(@args); return $stream->filter(sub { [ map { $self->deserialize($_->[0]) } @$_ ] }); } sub all_entries { my $self = shift; $self->_select_entry_stream("SELECT data FROM entries"); } sub root_entries { my $self = shift; $self->_select_entry_stream("SELECT data FROM entries WHERE root"); } sub child_entries { my $self = shift; $self->_select_entry_stream("SELECT data FROM entries WHERE not root"); } sub _select_id_stream { my ( $self, @args ) = @_; my $stream = $self->_sth_stream(@args); return $stream->filter(sub {[ map { $_->[0] } @$_ ]}); } sub all_entry_ids { my $self = shift; $self->_select_id_stream("SELECT id FROM entries"); } sub root_entry_ids { my $self = shift; $self->_select_id_stream("SELECT id FROM entries WHERE root"); } sub child_entry_ids { my $self = shift; $self->_select_id_stream("SELECT id FROM entries WHERE not root"); } sub simple_search { my ( $self, $proto ) = @_; my ( $where_clause, @bind ) = $self->sql_abstract->where($proto); $self->_select_entry_stream("SELECT data FROM entries $where_clause", @bind); } sub search { my ( $self, $query, @args ) = @_; my %args = ( distinct => $self->distinct, @args, ); my %spec = $query->extract_values($self); my @binds; my $inner_sql = $self->_search_gin_subquery(\%spec, \@binds); return $self->_select_entry_stream("SELECT data FROM entries WHERE id IN (".$inner_sql.")",@binds); } sub _search_gin_subquery { my ($self, $spec, $binds) = @_; my @v = ref $spec->{values} eq 'ARRAY' ? @{ $spec->{values} } : (); if ( $spec->{method} eq 'set' ) { my $op = $spec->{operation}; die 'gin set query received bad operation' unless $op =~ /^(UNION|INTERSECT|EXCEPT)$/i; die 'gin set query missing subqueries' unless ref $spec->{subqueries} eq 'ARRAY' && scalar @{ $spec->{subqueries} }; return "(". ( join ' '.$op.' ', map { $self->_search_gin_subquery($_, $binds) } @{ $spec->{subqueries} } ).")"; } elsif ( $spec->{method} eq 'all' and @v > 1) { # for some reason count(id) = ? doesn't work push @$binds, @v; return "SELECT id FROM gin_index WHERE value IN ". "(" . join(", ", ('?') x @v) . ")" . 
"GROUP BY id HAVING COUNT(id) = " . scalar(@v); } else { push @$binds, @v; return "SELECT DISTINCT id FROM gin_index WHERE value IN ". "(" . join(", ", ('?') x @v) . ")"; } } sub fetch_entry { die "TODO" } sub remove_ids { my ( $self, @ids ) = @_; die "Deletion the GIN index is handled implicitly"; } sub insert_entry { my ( $self, $id, @keys ) = @_; die "Insertion to the GIN index is handled implicitly"; } sub _table_info { my ( $self, $catalog, $schema, $table ) = @_; $self->dbh_do(sub { my ( $storage, $dbh ) = @_; my $filter = ( $self->storage->sqlt_type eq 'SQLite' ? '%' : '' ); foreach my $arg ( $catalog, $schema, $table ) { $arg = $filter unless defined $arg; } $dbh->table_info($catalog, $schema, $table, 'TABLE')->fetchall_arrayref; }); } sub tables_exist { my $self = shift; return ( @{ $self->_table_info(undef, undef, 'entries') } > 0 ); } sub create_tables { my $self = shift; $self->dbh_do(sub { my ( $storage, $dbh ) = @_; unless ( $self->tables_exist ) { $self->deploy({ producer_args => { mysql_version => 4.1 } }); } }); } sub drop_tables { my $self = shift; $self->dbh_do(sub { my ( $storage, $dbh ) = @_; $dbh->do("DROP TABLE gin_index"); $dbh->do("DROP TABLE entries"); }); } sub DEMOLISH { my $self = shift; return if $_[0]; if ( $self->has_storage ) { $self->storage->disconnect; } } sub new_garbage_collector { my ( $self, %args ) = @_; if ( grep { $_ !~ /^(?:entries|gin_index)/ } map { $_->[2] } @{ $self->_table_info } ) { die "\nRefusing to GC a database with additional tables.\n\nThis is ecause the root set and referencing scheme might be ambiguous (it's not yet clear what garbage collection should actually do on a mixed schema).\n"; } else { my $cmd = $args{command}; my $class = $args{class} || $cmd ? $cmd->class : "KiokuDB::GC::Naive"; Class::MOP::load_class($class); return $class->new( %args, backend => $self, ( $cmd ? ( verbose => $cmd->verbose ) : $cmd ), ); } } __PACKAGE__->meta->make_immutable; __PACKAGE__ __END__ =pod =head1 NAME KiokuDB::Backend::DBI - L backend for L =head1 SYNOPSIS my $dir = KiokuDB->connect( "dbi:mysql:foo", user => "blah", password => "moo', columns => [ # specify extra columns for the 'entries' table # in the same format you pass to DBIC's add_columns name => { data_type => "varchar", is_nullable => 1, # probably important }, ], ); $dir->search({ name => "foo" }); # SQL::Abstract =head1 DESCRIPTION This backend for L leverages existing L accessible databases. The schema is based on two tables, C and C (the latter is only used if a L extractor is specified). The C table has two main columns, C and C (currently in JSPON format, in the future the format will be pluggable), and additional user specified columns. The user specified columns are extracted from inserted objects using a callback (or just copied for simple scalars), allowing SQL where clauses to be used for searching. =head1 COLUMN EXTRACTIONS The columns are specified using a L instance. One additional column info parameter is used, C, which is called as a method on the inserted object with the column name as the only argument. The return value from this callback will be used to populate the column. If the column extractor is omitted then the column will contain a copy of the entry data key by the same name, if it is a plain scalar. Otherwise the column will be C. These columns are only used for lookup purposes, only C is consulted when loading entries. =head1 DBIC INTEGRATION This backend is layered on top of L and reused L for DDL. 
=head1 SUPPORTED DATABASES This driver has been tested with MySQL 5 (4.1 should be the minimal supported version), SQLite 3, and PostgreSQL 8.3. The SQL code is reasonably portable and should work with most databases. Binary column support is required when using the L<KiokuDB::Serializer::Storable> serializer. =head2 Transactions For reasons of performance and ease of use, database vendors ship with read committed transaction isolation by default. This means that read locks are B<not> acquired when data is fetched from the database, allowing it to be updated by another writer. If the current transaction then updates the value it will be silently overwritten. IMHO this is a much bigger problem when the data is unstructured. This is because data is loaded and fetched in potentially smaller chunks, increasing the risk of phantom reads. Unfortunately enabling truly isolated transaction semantics means that C<txn_do> may fail due to lock contention, forcing you to repeat your transaction. Arguably this is more correct than "read committed", which can lead to race conditions. Enabling repeatable read or serializable transaction isolation prevents transactions from interfering with each other, by ensuring all data reads are performed with a shared lock. For more information on isolation see L<http://en.wikipedia.org/wiki/Isolation_(database_systems)>. =head3 SQLite SQLite provides serializable isolation by default. =head3 MySQL MySQL provides read committed isolation by default. Serializable level isolation can be enabled by default by changing the C<transaction-isolation> global variable. =head3 PostgreSQL PostgreSQL provides read committed isolation by default. Repeatable read or serializable isolation can be enabled by setting the default transaction isolation level, or using the C<SET TRANSACTION> SQL statement.
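A sketch of asking for stricter isolation at connect time (the exact SQL here is PostgreSQL specific, and C<on_connect_call> is documented under ATTRIBUTES below; note that supplying it directly replaces the default built from C<mysql_strict> and C<sqlite_sync_mode>):

    my $dir = KiokuDB->connect(
        "dbi:Pg:dbname=foo",
        user            => "blah",
        password        => "moo",
        on_connect_call => [
            [ do_sql => "SET SESSION CHARACTERISTICS AS TRANSACTION ISOLATION LEVEL SERIALIZABLE" ],
        ],
    );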
=head1 ATTRIBUTES =over 4 =item schema Created automatically. This is a L<DBIx::Class::Schema> object that is used for schema deployment, connectivity and transaction handling. =item connect_info An array reference whose contents are passed to L<DBIx::Class::Schema/connect>. If omitted it will be created from the attrs C<dsn>, C<user>, C<password> and C<dbi_attrs>. =item dsn =item user =item password =item dbi_attrs Convenience attrs for connecting using L<KiokuDB/connect>. Used in C<connect_info>'s builder. =item columns Additional columns, see L</COLUMN EXTRACTIONS>. =item serializer L<KiokuDB::Serializer>. Coerces from a string, too: KiokuDB->connect("dbi:...", serializer => "storable"); Defaults to L<KiokuDB::Serializer::JSON>. =item create If true the existence of the tables will be checked for and the DB will be deployed if not. Defaults to false. =item extract An optional L<Search::GIN::Extract> used to create the C<gin_index> entries. Usually L<Search::GIN::Extract::Callback>. =item schema_hook A hook that is called on the backend object as a method with the schema as the argument just before connecting. If you need to modify the schema in some way (adding indexes or constraints) this is where it should be done. =item for_update If true (the default), will cause all select statements to be issued with a C<FOR UPDATE> modifier on MySQL, PostgreSQL and Oracle. This is highly recommended because these databases provide low isolation guarantees as configured out of the box, and highly interlinked graph databases are much more susceptible to corruption from lack of transactional isolation than normalized relational databases. =item sqlite_sync_mode If this attribute is set and the underlying database is SQLite, then C<PRAGMA synchronous> will be issued with this value. Can be C<OFF>, C<NORMAL> or C<FULL> (SQLite's default), or 0, 1, or 2. =item mysql_strict If true (the default), sets MySQL's strict mode. This is B<highly> recommended, or you may enjoy some of MySQL's more interesting features, like automatic data loss when the columns are too narrow. =item on_connect_call See L<DBIx::Class::Storage::DBI/connect_info>. This attribute is constructed based on the values of C<mysql_strict> and C<sqlite_sync_mode>, but may be overridden if you need more control. =item dbic_attrs See L<DBIx::Class::Storage::DBI/connect_info>. Defaults to { on_connect_call => $self->on_connect_call } =item batch_size SQL statements that deal with entries are run in batches of the size provided in C<batch_size>. If it is not provided, the statements will run in a single batch. This solves the issue with SQLite where lists can only handle 999 elements at a time. C<batch_size> will be set to 999 by default if the driver in use is SQLite. =back =head1 METHODS See L<KiokuDB::Backend> and the various roles for more info. =over 4 =item deploy Calls L<DBIx::Class::Schema/deploy>. Deployment to MySQL requires that you specify something like: $dir->backend->deploy({ producer_args => { mysql_version => 4 } }); because MySQL versions before 4 did not have support for boolean types, and the schema emitted by L<SQL::Translator> will not work with the queries used. =item drop_tables Drops the C<entries> and C<gin_index> tables. =back =head1 TROUBLESHOOTING =head2 I get C You are probably using MySQL, which comes with a helpful data compression feature: when your serialized objects are larger than the maximum size of a C<blob> column, MySQL will simply shorten it for you. Why C<blob> defaults to 64k, and how on earth someone would consider silent data truncation a sane default, I could never fathom; but nevertheless MySQL does allow you to disable this by setting the "strict" SQL mode in the configuration. To resolve the actual problem (though this obviously won't repair your lost data), alter the entries table so that the C<data> column uses the nonstandard C<longblob> datatype. =head1 VERSION CONTROL L =head1 AUTHOR Yuval Kogman E<lt>nothingmuch@woobling.orgE<gt> =head1 COPYRIGHT Copyright (c) 2008, 2009 Yuval Kogman, Infinity Interactive. All rights reserved. This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
=begin Pod::Coverage BUILD DEMOLISH all_entries all_entry_ids child_entries child_entry_ids clear create_tables default_typemap delete entries_to_rows entry_to_row exists fetch_entry has_batch_size insert insert_entry insert_rows new_from_dsn new_garbage_collector prepare_insert prepare_select prepare_update register_handle remove_ids root_entries root_entry_ids search simple_search tables_exist txn_begin txn_commit txn_rollback update_index =end Pod::Coverage =cut Schema000755001750000144 011775631363 17520 5ustar00doyusers000000000000KiokuDB-Backend-DBI-1.22/lib/DBIx/ClassKiokuDB.pm100644001750000144 1754411775631363 21541 0ustar00doyusers000000000000KiokuDB-Backend-DBI-1.22/lib/DBIx/Class/Schemapackage DBIx::Class::Schema::KiokuDB; BEGIN { $DBIx::Class::Schema::KiokuDB::AUTHORITY = 'cpan:NUFFIN'; } { $DBIx::Class::Schema::KiokuDB::VERSION = '1.22'; } use strict; use warnings; use Carp qw(croak); use DBIx::Class::KiokuDB::EntryProxy; use DBIx::Class::ResultSource::Table; use Scalar::Util qw(weaken refaddr); use namespace::clean; use base qw(Class::Accessor::Grouped); __PACKAGE__->mk_group_accessors( inherited => "kiokudb_entries_source_name" );
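# Usage sketch (assumes the schema loaded this component and called
# define_kiokudb_schema; see the POD below):
#
#   my $schema = MyApp::DB->connect($dsn);
#   my $dir    = $schema->kiokudb_handle;   # autovivifies a KiokuDB directory
#   $dir->txn_do( scope => 1, body => sub { ... } );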
"Add __PACKAGE__->define_kiokudb_schema() to your schema class" unless $self->kiokudb_entries_source_name; my $dir = KiokuDB->new( backend => my $backend = KiokuDB::Backend::DBI->new( connected_schema => $self, ), ); $backend->meta->get_attribute('schema')->_weaken_value($backend); # FIXME proper MOP api? # not weak $self->{kiokudb_handle} = $dir; } $self->{kiokudb_handle}; } sub _kiokudb_handle { my ( $self, $handle ) = @_; croak "Can't call _kiokudb_handle on unconnected schema" unless ref $self; croak "Can't vivify KiokuDB handle without KiokuDB schema bits. " . "Add __PACKAGE__->define_kiokudb_schema() to your schema class" unless $self->kiokudb_entries_source_name; if ( $self->{kiokudb_handle} ) { if ( refaddr($self->{kiokudb_handle}) != refaddr($handle) ) { croak "KiokuDB directory already registered"; } } else { $self->{kiokudb_handle} = $handle; weaken($self->{kiokudb_handle}); } return $handle; } sub define_kiokudb_schema { my ( $self, @args ) = @_; my %args = ( schema => $self, entries_table => "entries", gin_index_table => "gin_index", result_class => "DBIx::Class::KiokuDB::EntryProxy", gin_index => 1, @args, ); my $entries_source_name = $args{entries_source} ||= $args{entries_table}; my $gin_index_source_name = $args{gin_index_source} ||= $args{gin_index_table}; my $entries = $self->define_kiokudb_entries_resultsource(%args); my $schema = $args{schema}; $schema->register_source( $entries_source_name => $entries ); if ($args{gin_index}) { my $gin_index = $self->define_kiokudb_gin_index_resultsource(%args); $schema->register_source( $gin_index_source_name => $gin_index ); } $schema->kiokudb_entries_source_name($entries_source_name) unless $schema->kiokudb_entries_source_name; } sub define_kiokudb_entries_resultsource { my ( $self, %args ) = @_; my $entries = DBIx::Class::ResultSource::Table->new({ name => $args{entries_table} }); $entries->add_columns( id => { data_type => "varchar" }, data => { data_type => "blob", is_nullable => 0 }, # FIXME longblob for mysql class => { data_type => "varchar", is_nullable => 1 }, root => { data_type => "boolean", is_nullable => 0 }, tied => { data_type => "char", size => 1, is_nullable => 1 }, @{ $args{extra_entries_columns} || [] }, ); $entries->set_primary_key("id"); $entries->sqlt_deploy_callback(sub { my ($source, $sqlt_table) = @_; $sqlt_table->extra->{mysql_table_type} = "InnoDB"; if ( $source->schema->storage->sqlt_type eq 'MySQL' ) { $sqlt_table->get_field('data')->data_type('longblob'); } }); $entries->result_class($args{result_class}); return $entries; } sub define_kiokudb_gin_index_resultsource { my ( $self, %args ) = @_; my $gin_index = DBIx::Class::ResultSource::Table->new({ name => $args{gin_index_table} }); $gin_index->add_columns( id => { data_type => "varchar", is_foreign_key => 1 }, value => { data_type => "varchar" }, ); $gin_index->add_relationship('entry_ids', $args{entries_source}, { 'foreign.id' => 'self.id' }); $gin_index->sqlt_deploy_callback(sub { my ($source, $sqlt_table) = @_; $sqlt_table->extra->{mysql_table_type} = "InnoDB"; $sqlt_table->add_index( name => 'gin_index_ids', fields => ['id'] ) or die $sqlt_table->error; $sqlt_table->add_index( name => 'gin_index_values', fields => ['value'] ) or die $sqlt_table->error; }); return $gin_index; } # ex: set sw=4 et: __PACKAGE__ __END__ =pod =head1 NAME DBIx::Class::Schema::KiokuDB - Hybrid L/L schema support. 
=head1 USAGE AND LIMITATIONS

L managed objects may hold references to row objects, resultsets (treated
as saved searches; no results or cursor state are saved), result source
handles, and the schema.

Foreign L objects, that is, ones that originated from a schema that isn't
the underlying schema, are currently not supported, but this limitation may
be lifted in the future.

All DBIC operations which may implicitly cause a lookup of a L managed
object require live object scope management, just as normal.

It is recommended to use L, because that will invoke the appropriate
transaction hooks on both layers, as opposed to just in L.
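A sketch of such a combined transaction, reusing the classes from the
synopsis (C<$schema>, C<$dir> and C<$album_id> are assumed to already
exist, and the C<metadata> accessor returning the KiokuDB-managed object is
an assumption about the hypothetical C<Album> class):

    $dir->txn_do(scope => 1, body => sub {
        my $album = $schema->resultset("Album")->find($album_id);
        $album->update({ title => "Remastered" });
        $dir->update($album->metadata); # hypothetical KiokuDB-managed object
    });

Both the row update and the KiokuDB update are then committed or rolled
back together.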
=head1 SEE ALSO

L, L.

=begin Pod::Coverage

define_kiokudb_entries_resultsource define_kiokudb_gin_index_resultsource
define_kiokudb_schema kiokudb_handle

=end Pod::Coverage

=cut
DBI000755001750000144 011775631363 17702 5ustar00doyusers000000000000KiokuDB-Backend-DBI-1.22/lib/KiokuDB/Backend
Schema.pm100644001750000144 52711775631363 21564 0ustar00doyusers000000000000KiokuDB-Backend-DBI-1.22/lib/KiokuDB/Backend/DBI
#!/usr/bin/perl

package KiokuDB::Backend::DBI::Schema;
BEGIN {
  $KiokuDB::Backend::DBI::Schema::AUTHORITY = 'cpan:NUFFIN';
}
{
  $KiokuDB::Backend::DBI::Schema::VERSION = '1.22';
}
use Moose;

use namespace::clean -except => 'meta';

extends qw(DBIx::Class::Schema);

__PACKAGE__->load_components(qw(Schema::KiokuDB));

__PACKAGE__

__END__
KiokuDB000755001750000144 011775631363 17610 5ustar00doyusers000000000000KiokuDB-Backend-DBI-1.22/lib/DBIx/Class
EntryProxy.pm100644001750000144 221611775631363 22452 0ustar00doyusers000000000000KiokuDB-Backend-DBI-1.22/lib/DBIx/Class/KiokuDB
package DBIx::Class::KiokuDB::EntryProxy;
BEGIN {
  $DBIx::Class::KiokuDB::EntryProxy::AUTHORITY = 'cpan:NUFFIN';
}
{
  $DBIx::Class::KiokuDB::EntryProxy::VERSION = '1.22';
}
use strict;
use warnings;

use Carp qw(croak); # croak() is used in new() below but was never imported

use namespace::clean;

use base qw(DBIx::Class);

sub inflate_result {
    my ( $self, $source, $data ) = @_;

    my $handle = $source->schema->kiokudb_handle;

    if ( ref( my $obj = $handle->id_to_object( $data->{id} ) ) ) {
        return $obj;
    } else {
        my $entry = $handle->backend->deserialize($data->{data});
        return $handle->linker->expand_object($entry);
    }
}

sub new {
    croak("Creating new rows via the result set makes no sense, insert them with KiokuDB::insert instead");
}

# ex: set sw=4 et:

__PACKAGE__

__END__

=pod

=head1 NAME

DBIx::Class::KiokuDB::EntryProxy - A proxying result class for KiokuDB
objects

=head1 SYNOPSIS

    my $kiokudb_object = $schema->resultset("entries")->find($id);

=head1 DESCRIPTION

This class implements the necessary glue to properly inflate resultsets for
L objects into proper instances using L.
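A short sketch of the intended round trip (C<$dir> and C<$schema> are as in
L<DBIx::Class::Schema::KiokuDB>, and the stored class is hypothetical):

    my $id = $dir->txn_do(scope => 1, body => sub {
        $dir->insert( MyApp::Note->new( text => "hello" ) );
    });

    # the resultset inflates straight to the live object:
    my $note = $schema->resultset("entries")->find($id);

    # but creating rows this way is unsupported and croaks:
    # $schema->resultset("entries")->new({ ... });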
=begin Pod::Coverage

new inflate_result

=end Pod::Coverage

=cut
DBIC000755001750000144 011775631363 21136 5ustar00doyusers000000000000KiokuDB-Backend-DBI-1.22/lib/KiokuDB/TypeMap/Entry
Row.pm100644001750000144 461111775631363 22405 0ustar00doyusers000000000000KiokuDB-Backend-DBI-1.22/lib/KiokuDB/TypeMap/Entry/DBIC
package KiokuDB::TypeMap::Entry::DBIC::Row;
BEGIN {
  $KiokuDB::TypeMap::Entry::DBIC::Row::AUTHORITY = 'cpan:NUFFIN';
}
{
  $KiokuDB::TypeMap::Entry::DBIC::Row::VERSION = '1.22';
}
use Moose;

use JSON;
use Scalar::Util qw(weaken);

use namespace::autoclean;

with qw(KiokuDB::TypeMap::Entry);

has json => (
    isa => "Object",
    is  => "ro",
    default => sub { JSON->new },
);

sub compile {
    my ( $self, $class ) = @_;

    my $json = $self->json;

    return KiokuDB::TypeMap::Entry::Compiled->new(
        collapse_method => sub {
            my ( $collapser, @args ) = @_;

            $collapser->collapse_first_class(
                sub {
                    my ( $collapser, %args ) = @_;

                    my $obj = $args{object};

                    if ( my @objs = values %{ $obj->{_kiokudb_column} } ) {
                        $collapser->visit(@objs);
                    }

                    return $collapser->make_entry(
                        %args,
                        data => $obj,
                    );
                },
                @args,
            );
        },
        expand_method => sub {
            my ( $linker, $entry ) = @_;

            my $obj = $entry->data;

            $linker->register_object( $entry => $obj );

            return $obj;
        },
        id_method => sub {
            my ( $self, $object ) = @_;

            return 'dbic:row:' . $json->encode([ $object->result_source->source_name, $object->id ]);
        },
        refresh_method => sub {
            my ( $linker, $object, $entry, @args ) = @_;
            $object->discard_changes; # FIXME avoid loading '$entry' altogether
        },
        entry => $self,
        class => $class,
    );
}

__PACKAGE__->meta->make_immutable;

# ex: set sw=4 et:

__PACKAGE__

__END__

=pod

=head1 NAME

KiokuDB::TypeMap::Entry::DBIC::Row - L for L objects.

=head1 DESCRIPTION

L objects are resolved symbolically using the special ID format:

    dbic:row:$json

The C<$json> string is a serialization of:

    [ $result_source_name, @primary_key_values ]

The row objects are not actually written to the KiokuDB storage, as they
are already present in the other tables.

Looking up an object with such an ID is a dynamic lookup that delegates to
the L and resultsets.
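For illustration, a row can therefore be fetched through KiokuDB by
constructing that ID by hand. A hedged sketch (the C<Album> source and the
primary key value C<42> are hypothetical):

    use JSON;

    my $id  = 'dbic:row:' . JSON->new->encode([ "Album", 42 ]);
    my $row = $dir->lookup($id); # delegates to the DBIC resultset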
=begin Pod::Coverage

compile

=end Pod::Coverage

=cut
Schema.pm100644001750000144 422711775631363 23041 0ustar00doyusers000000000000KiokuDB-Backend-DBI-1.22/lib/KiokuDB/TypeMap/Entry/DBIC
package KiokuDB::TypeMap::Entry::DBIC::Schema;
BEGIN {
  $KiokuDB::TypeMap::Entry::DBIC::Schema::AUTHORITY = 'cpan:NUFFIN';
}
{
  $KiokuDB::TypeMap::Entry::DBIC::Schema::VERSION = '1.22';
}
use Moose;

use Carp qw(croak); # croak() is used below but was never imported

use Scalar::Util qw(weaken refaddr);

use namespace::autoclean;

with qw(KiokuDB::TypeMap::Entry);

sub compile {
    my ( $self, $class ) = @_;

    return KiokuDB::TypeMap::Entry::Compiled->new(
        collapse_method => sub {
            my ( $collapser, @args ) = @_;

            $collapser->collapse_first_class(
                sub {
                    my ( $collapser, %args ) = @_;

                    if ( refaddr($collapser->backend->schema) == refaddr($args{object}) ) {
                        return $collapser->make_entry(
                            %args,
                            data => undef,
                            meta => {
                                immortal => 1,
                            },
                        );
                    } else {
                        croak("Referring to foreign DBIC schemas is unsupported");
                    }
                },
                @args,
            );
        },
        expand_method => sub {
            my ( $linker, $entry ) = @_;

            my $schema = $linker->backend->schema;

            $linker->register_object( $entry => $schema, immortal => 1 );

            return $schema;
        },
        id_method => sub {
            my ( $self, $object ) = @_;
            return 'dbic:schema'; # singleton
        },
        refresh_method => sub { },
        entry => $self,
        class => $class,
    );
}

__PACKAGE__->meta->make_immutable;

# ex: set sw=4 et:

__PACKAGE__

__END__

=pod

=head1 NAME

KiokuDB::TypeMap::Entry::DBIC::Schema - L for L objects.

=head1 DESCRIPTION

This typemap entry handles references to L as a scoped singleton. The ID of
the schema is always C.

References to L objects which are not a part of the underlying L layout are
currently not supported, but may be in the future.

=begin Pod::Coverage

compile

=end Pod::Coverage

=cut
ResultSet.pm100644001750000144 233711775631363 23573 0ustar00doyusers000000000000KiokuDB-Backend-DBI-1.22/lib/KiokuDB/TypeMap/Entry/DBIC
package KiokuDB::TypeMap::Entry::DBIC::ResultSet;
BEGIN {
  $KiokuDB::TypeMap::Entry::DBIC::ResultSet::AUTHORITY = 'cpan:NUFFIN';
}
{
  $KiokuDB::TypeMap::Entry::DBIC::ResultSet::VERSION = '1.22';
}
use Moose;

use JSON;
use Scalar::Util qw(weaken);

use namespace::autoclean;

extends qw(KiokuDB::TypeMap::Entry::Naive);

sub compile_collapse_body {
    my ( $self, @args ) = @_;

    my $sub = $self->SUPER::compile_collapse_body(@args);

    return sub {
        my ( $self, %args ) = @_;

        my $rs = $args{object};

        my $clone = $rs->search_rs;

        # clear all cached data
        $clone->set_cache;

        $self->$sub( %args, object => $clone );
    };
}

__PACKAGE__->meta->make_immutable;

# ex: set sw=4 et:

__PACKAGE__

__END__

=pod

=head1 NAME

KiokuDB::TypeMap::Entry::DBIC::ResultSet - L for L objects

=head1 DESCRIPTION

The result set is cloned, the clone has its cache cleared, and then it is
serialized normally. This is the only L related object that is literally
stored in the database, as it represents a memory resident object, not a
database resident one.

=begin Pod::Coverage

compile_collapse_body

=end Pod::Coverage

=cut
ResultSource.pm100644001750000144 445611775631363 24304 0ustar00doyusers000000000000KiokuDB-Backend-DBI-1.22/lib/KiokuDB/TypeMap/Entry/DBIC
package KiokuDB::TypeMap::Entry::DBIC::ResultSource;
BEGIN {
  $KiokuDB::TypeMap::Entry::DBIC::ResultSource::AUTHORITY = 'cpan:NUFFIN';
}
{
  $KiokuDB::TypeMap::Entry::DBIC::ResultSource::VERSION = '1.22';
}
use Moose;

use Carp qw(croak); # croak() is used below but was never imported

use Scalar::Util qw(weaken refaddr);

use namespace::autoclean;

with qw(KiokuDB::TypeMap::Entry);

sub compile {
    my ( $self, $class ) = @_;

    return KiokuDB::TypeMap::Entry::Compiled->new(
        collapse_method => sub {
            my ( $collapser, @args ) = @_;

            $collapser->collapse_first_class(
                sub {
                    my ( $collapser, %args ) = @_;

                    if ( refaddr($collapser->backend->schema) == refaddr($args{object}->schema) ) {
                        return $collapser->make_entry(
                            %args,
                            data => undef,
                            meta => {
                                immortal => 1,
                            },
                        );
                    } else {
                        croak("Referring to foreign DBIC schemas is unsupported");
                    }
                },
                @args,
            );
        },
        expand_method => sub {
            my ( $linker, $entry ) = @_;

            my $schema = $linker->backend->schema;

            my $rs = $schema->source(substr($entry->id, length('dbic:schema:rs:')));

            $linker->register_object( $entry => $rs, immortal => 1 );

            return $rs;
        },
        id_method => sub {
            my ( $self, $object ) = @_;
            return 'dbic:schema:rs:' . $object->source_name;
        },
        refresh_method => sub { },
        entry => $self,
        class => $class,
    );
}

__PACKAGE__->meta->make_immutable;

# ex: set sw=4 et:

__PACKAGE__

__END__

=pod

=head1 NAME

KiokuDB::TypeMap::Entry::DBIC::ResultSource - L for L objects.

=head1 DESCRIPTION

This typemap entry resolves result source handles symbolically by name.
References to the handle receive a special ID in the form:

    dbic:schema:rs:$name

and are not actually written to storage.

Looking up such an ID causes the backend to dynamically search for such a
resultset in the L.

=begin Pod::Coverage

compile

=end Pod::Coverage

=cut