pg-1.5.5/0000755000004100000410000000000014563476204012173 5ustar www-datawww-datapg-1.5.5/Manifest.txt0000644000004100000410000000270514563476204014506 0ustar www-datawww-data.gemtest BSDL Contributors.rdoc History.rdoc LICENSE Manifest.txt POSTGRES README-OS_X.rdoc README-Windows.rdoc README.ja.rdoc README.rdoc Rakefile Rakefile.cross ext/errorcodes.def ext/errorcodes.rb ext/errorcodes.txt ext/extconf.rb ext/gvl_wrappers.c ext/gvl_wrappers.h ext/pg.c ext/pg.h ext/pg_binary_decoder.c ext/pg_binary_encoder.c ext/pg_coder.c ext/pg_connection.c ext/pg_copy_coder.c ext/pg_errors.c ext/pg_record_coder.c ext/pg_result.c ext/pg_text_decoder.c ext/pg_text_encoder.c ext/pg_tuple.c ext/pg_type_map.c ext/pg_type_map_all_strings.c ext/pg_type_map_by_class.c ext/pg_type_map_by_column.c ext/pg_type_map_by_mri_type.c ext/pg_type_map_by_oid.c ext/pg_type_map_in_ruby.c ext/pg_util.c ext/pg_util.h ext/vc/pg.sln ext/vc/pg_18/pg.vcproj ext/vc/pg_19/pg_19.vcproj lib/pg.rb lib/pg/basic_type_mapping.rb lib/pg/binary_decoder.rb lib/pg/coder.rb lib/pg/connection.rb lib/pg/constants.rb lib/pg/exceptions.rb lib/pg/result.rb lib/pg/text_decoder.rb lib/pg/text_encoder.rb lib/pg/tuple.rb lib/pg/type_map_by_column.rb spec/data/expected_trace.out spec/data/random_binary_data spec/helpers.rb spec/pg/basic_type_mapping_spec.rb spec/pg/connection_spec.rb spec/pg/connection_sync_spec.rb spec/pg/result_spec.rb spec/pg/tuple_spec.rb spec/pg/type_map_by_class_spec.rb spec/pg/type_map_by_column_spec.rb spec/pg/type_map_by_mri_type_spec.rb spec/pg/type_map_by_oid_spec.rb spec/pg/type_map_in_ruby_spec.rb spec/pg/type_map_spec.rb spec/pg/type_spec.rb spec/pg_spec.rb pg-1.5.5/.hgtags0000644000004100000410000000371214563476204013454 0ustar www-datawww-data7fbe4187e9e53e58baf6cd7c1c21e3a3c5b920e5 0.8.0 da726282493c57b4ef8e5be1a21e98cc028fda4c 0.9.0 1822a169c4fecac402335a64a484b5dc053994a3 0.10.0 1822a169c4fecac402335a64a484b5dc053994a3 v0.10.0 1822a169c4fecac402335a64a484b5dc053994a3 0.10.0 
0000000000000000000000000000000000000000 0.10.0 de10b5d8e4429d22790976ec4de89f209e882906 v0.10.1 3cb8e57c6c80737c714dd7607a144ef12074c4fe v0.11.0 da726282493c57b4ef8e5be1a21e98cc028fda4c v0.9.0 7fbe4187e9e53e58baf6cd7c1c21e3a3c5b920e5 v0.8.0 b767401684d8a4051230874b0686a54537b10e2f v0.12.0 21f84883e5c206a3f2890905af68e08a0046ba1c v0.12.1 88bd78632f86f696dd3fa8904c1d3180216378cc v0.12.2 7b2da7e0815cce834cd60f9747209923952876ec v0.13.0 9e60b2c477cde450a088161ca8f3d72b52531aaf v0.13.1 c79cd308363d614f7ba32fd86294c9aa3117c361 v0.13.2 634e0a42a1010fc1dcd279fb28506873a47090c1 v0.14.0 2d83ce956f971c3aeb145c9ad68f426e78b852dd v0.14.1 065fd1f0e9dda58557de0efb2deb138e93ba7632 v0.15.0 4692c20bcbdeadd8a31283e234464c6e1c43765d v0.15.1 def8f41a76726cf7239ff6dbaa2828a881f93451 v0.16.0 30da9c169efc3985ad0464936483c229faba0e33 v0.17.0 78846e47d87b7ed5bb7397116070692b1cfa87d7 v0.17.1 cfb2bfc0f66181e67768c4313bcce473292a0825 v0.18.0 f97dd6cb4f34da6a62c4339887249115c7c25b9c v0.18.1 22a361201fd1d387d59a066b179124694a446f38 v0.18.2 01c42c68797e724507b76056b98981cb30748a36 v0.18.3 94ef4830540d8fa74b8912118fb8065f4a6a3563 v0.18.4 94ef4830540d8fa74b8912118fb8065f4a6a3563 v0.18.4 0000000000000000000000000000000000000000 v0.18.4 0000000000000000000000000000000000000000 v0.18.4 f61127650cd00a1154c591dcde85ebac01f2be9f v0.18.4 bd2aaa2c5797de78435977a1c60e450d6f22811b v0.19.0 e5eb92cca97abc0c6fc168acfad993c2ad314589 v0.20.0 deae742eacfa985bd20f47a12a8fee6ce2e0447c v0.21.0 9a388d1023ec145cb00e6e16f3a8cabd3cc81d16 v1.0.0 319c00d9d59e24ce06493715cff2701e3a2a8990 v1.1.0 c80083c5e395451d612d43323c40317eb63bcb54 v1.1.1 f54d10c5d98fd06d6fc70896107319901ae374ae v1.1.2 c7035371f972982c1716daf61861b9dde15de03e v1.1.3 11d3487e303cf0fc6af48086f3e9c0b1c8283039 v1.1.4 pg-1.5.5/certs/0000755000004100000410000000000014563476204013313 5ustar www-datawww-datapg-1.5.5/certs/kanis@comcard.de.pem0000644000004100000410000000221414563476204017142 0ustar www-datawww-data-----BEGIN CERTIFICATE----- 
MIIDLjCCAhagAwIBAgIBCzANBgkqhkiG9w0BAQsFADA9MQ4wDAYDVQQDDAVrYW5p czEXMBUGCgmSJomT8ixkARkWB2NvbWNhcmQxEjAQBgoJkiaJk/IsZAEZFgJkZTAe Fw0yMzA0MjgwOTI0NDhaFw0yNDA0MjcwOTI0NDhaMD0xDjAMBgNVBAMMBWthbmlz MRcwFQYKCZImiZPyLGQBGRYHY29tY2FyZDESMBAGCgmSJomT8ixkARkWAmRlMIIB IjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEApop+rNmg35bzRugZ21VMGqI6 HGzPLO4VHYncWn/xmgPU/ZMcZdfj6MzIaZJ/czXyt4eHpBk1r8QOV3gBXnRXEjVW 9xi+EdVOkTV2/AVFKThcbTAQGiF/bT1n2M+B1GTybRzMg6hyhOJeGPqIhLfJEpxn lJi4+ENAVT4MpqHEAGB8yFoPC0GqiOHQsdHxQV3P3c2OZqG+yJey74QtwA2tLcLn Q53c63+VLGsOjODl1yPn/2ejyq8qWu6ahfTxiIlSar2UbwtaQGBDFdb2CXgEufXT L7oaPxlmj+Q2oLOfOnInd2Oxop59HoJCQPsg8f921J43NCQGA8VHK6paxIRDLQID AQABozkwNzAJBgNVHRMEAjAAMAsGA1UdDwQEAwIEsDAdBgNVHQ4EFgQUvgTdT7fe x17ugO3IOsjEJwW7KP4wDQYJKoZIhvcNAQELBQADggEBACAxNXwfMGG7paZjnG/c smdi/ocW2GmCNtILaSzDZqlD5LoA68MiO7u5vwWyBaDJ6giUB330VJoGRbWMxvxN JU6Bnwa4yYp9YtF91wYIi7FXwIrCPKd9bk3bf4M5wECdsv+zvVceq2zRXqD7fci8 1LRG8ort/f4TgaT7B4aNwOaabA2UT6u0FGeglqxLkhir86MY3QQyBfJZUoTKWGkz S9a7GXsYpe+8HMOaE4+SZp8SORKPgATND5m/4VdzuO59VXjE5UP7QpXigbxAt7H7 ciK5Du2ZDhowmWzZwNzR7VvVmfAK6RQJlRB03VkkQRWGld5yApOrYDne6WbD8kE0 uM8= -----END CERTIFICATE----- pg-1.5.5/certs/ged.pem0000644000004100000410000000263414563476204014562 0ustar www-datawww-data-----BEGIN CERTIFICATE----- MIID+DCCAmCgAwIBAgIBBDANBgkqhkiG9w0BAQsFADAiMSAwHgYDVQQDDBdnZWQv REM9RmFlcmllTVVEL0RDPW9yZzAeFw0yMjAxMDcyMzU4MTRaFw0yMzAxMDcyMzU4 MTRaMCIxIDAeBgNVBAMMF2dlZC9EQz1GYWVyaWVNVUQvREM9b3JnMIIBojANBgkq hkiG9w0BAQEFAAOCAY8AMIIBigKCAYEAvyVhkRzvlEs0fe7145BYLfN6njX9ih5H L60U0p0euIurpv84op9CNKF9tx+1WKwyQvQP7qFGuZxkSUuWcP/sFhDXL1lWUuIl M4uHbGCRmOshDrF4dgnBeOvkHr1fIhPlJm5FO+Vew8tSQmlDsosxLUx+VB7DrVFO 5PU2AEbf04GGSrmqADGWXeaslaoRdb1fu/0M5qfPTRn5V39sWD9umuDAF9qqil/x Sl6phTvgBrG8GExHbNZpLARd3xrBYLEFsX7RvBn2UPfgsrtvpdXjsHGfpT3IPN+B vQ66lts4alKC69TE5cuKasWBm+16A4aEe3XdZBRNmtOu/g81gvwA7fkJHKllJuaI dXzdHqq+zbGZVSQ7pRYHYomD0IiDe1DbIouFnPWmagaBnGHwXkDT2bKKP+s2v21m ozilJg4aar2okb/RA6VS87o+d7g6LpDDMMQjH4G9OPnJENLdhu8KnPw/ivSVvQw7 
N2I4L/ZOIe2DIVuYH7aLHfjZDQv/mNgpAgMBAAGjOTA3MAkGA1UdEwQCMAAwCwYD VR0PBAQDAgSwMB0GA1UdDgQWBBRyjf55EbrHagiRLqt5YAd3yb8k4DANBgkqhkiG 9w0BAQsFAAOCAYEASrm1AbEoxACZ9WXJH3R5axV3U0CA4xaETlL2YT+2nOfVBMQ9 0ZlkPx6j4ghKJgAIi1TMfDM2JyPJsppQh8tiNccDjWc62UZRY/dq26cMqf/lcI+a 6YBuEYvzZfearwVs8tHnXtwYV3WSCoCOQaB+nq2lA1O+nkKNl41WOsVbNama5jx3 8cQtVSEEmZy6jIDJ8c5TmBJ7BQUDEUEWA/A3V42Xyctoj7DvUXWE0lP+X6ypAVSr lFh3TS64D7NTvxkmg7natUoCvobl6kGl4yMaqE4YRTlfuzhpf91TSOntClqrAOsS K1s56WndQj3IoBocdY9mQhDZLtLHofSkymoP8btBlj5SsN24TiF0VMSZlctSCYZg GKyHim/MMlIfGOWsgfioq5jzwmql7W4CDubbb8Lkg70v+hN2E/MnNVAcNE3gyaGc P5YP5BAbNW+gvd3QHRiWTTuhgHrdDnGdXg93N2M5KHn1ug8BtPLQwlcFwEpKnlLn btEP+7EplFuoiMfd -----END CERTIFICATE----- pg-1.5.5/certs/larskanis-2023.pem0000644000004100000410000000265414563476204016400 0ustar www-datawww-data-----BEGIN CERTIFICATE----- MIIEBDCCAmygAwIBAgIBAjANBgkqhkiG9w0BAQsFADAoMSYwJAYDVQQDDB1sYXJz L0RDPWdyZWl6LXJlaW5zZG9yZi9EQz1kZTAeFw0yMzAyMTUxNzQxMTVaFw0yNDAy MTUxNzQxMTVaMCgxJjAkBgNVBAMMHWxhcnMvREM9Z3JlaXotcmVpbnNkb3JmL0RD PWRlMIIBojANBgkqhkiG9w0BAQEFAAOCAY8AMIIBigKCAYEAwum6Y1KznfpzXOT/ mZgJTBbxZuuZF49Fq3K0WA67YBzNlDv95qzSp7V/7Ek3NCcnT7G+2kSuhNo1FhdN eSDO/moYebZNAcu3iqLsuzuULXPLuoU0GsMnVMqV9DZPh7cQHE5EBZ7hlzDBK7k/ 8nBMvR0mHo77kIkapHc26UzVq/G0nKLfDsIHXVylto3PjzOumjG6GhmFN4r3cP6e SDfl1FSeRYVpt4kmQULz/zdSaOH3AjAq7PM2Z91iGwQvoUXMANH2v89OWjQO/NHe JMNDFsmHK/6Ji4Kk48Z3TyscHQnipAID5GhS1oD21/WePdj7GhmbF5gBzkV5uepd eJQPgWGwrQW/Z2oPjRuJrRofzWfrMWqbOahj9uth6WSxhNexUtbjk6P8emmXOJi5 chQPnWX+N3Gj+jjYxqTFdwT7Mj3pv1VHa+aNUbqSPpvJeDyxRIuo9hvzDaBHb/Cg 9qRVcm8a96n4t7y2lrX1oookY6bkBaxWOMtWlqIprq8JZXM9AgMBAAGjOTA3MAkG A1UdEwQCMAAwCwYDVR0PBAQDAgSwMB0GA1UdDgQWBBQ4h1tIyvdUWtMI739xMzTR 7EfMFzANBgkqhkiG9w0BAQsFAAOCAYEAQAcuTARfiiVUVx5KURICfdTM2Kd7LhOn qt3Vs4ANGvT226LEp3RnQ+kWGQYMRb3cw3LY2TNQRPlnZxE994mgjBscN4fbjXqO T0JbVpeszRZa5k1goggbnWT7CO7yU7WcHh13DaSubY7HUpAJn2xz9w2stxQfN/EE VMlnDJ1P7mUHAvpK8X9j9h7Xlc1niViT18MYwux8mboVTryrLr+clATUkkM3yBF0 RV+c34ReW5eXO9Tr6aKTxh/pFC9ggDT6jOxuJgSvG8HWJzVf4NDvMavIas4KYjiI 
BU6CpWaG5NxicqL3BERi52U43HV08br+LNVpb7Rekgve/PJuSFnAR015bhSRXe5U vBioD1qW2ZW9tXg8Ww2IfDaO5a1So5Xby51rhNlyo6ATj2NkuLWZUKPKHhAz0TKm Dzx/gFSOrRoCt2mXNgrmcAfr386AfaMvCh7cXqdxZwmVo7ILZCYXck0pajvubsDd NUIIFkVXvd1odFyK9LF1RFAtxn/iAmpx -----END CERTIFICATE----- pg-1.5.5/certs/larskanis-2022.pem0000644000004100000410000000302214563476204016365 0ustar www-datawww-data-----BEGIN CERTIFICATE----- MIIETTCCArWgAwIBAgIBATANBgkqhkiG9w0BAQsFADAoMSYwJAYDVQQDDB1sYXJz L0RDPWdyZWl6LXJlaW5zZG9yZi9EQz1kZTAeFw0yMjAyMTQxMzMwNTZaFw0yMzAy MTQxMzMwNTZaMCgxJjAkBgNVBAMMHWxhcnMvREM9Z3JlaXotcmVpbnNkb3JmL0RD PWRlMIIBojANBgkqhkiG9w0BAQEFAAOCAY8AMIIBigKCAYEAwum6Y1KznfpzXOT/ mZgJTBbxZuuZF49Fq3K0WA67YBzNlDv95qzSp7V/7Ek3NCcnT7G+2kSuhNo1FhdN eSDO/moYebZNAcu3iqLsuzuULXPLuoU0GsMnVMqV9DZPh7cQHE5EBZ7hlzDBK7k/ 8nBMvR0mHo77kIkapHc26UzVq/G0nKLfDsIHXVylto3PjzOumjG6GhmFN4r3cP6e SDfl1FSeRYVpt4kmQULz/zdSaOH3AjAq7PM2Z91iGwQvoUXMANH2v89OWjQO/NHe JMNDFsmHK/6Ji4Kk48Z3TyscHQnipAID5GhS1oD21/WePdj7GhmbF5gBzkV5uepd eJQPgWGwrQW/Z2oPjRuJrRofzWfrMWqbOahj9uth6WSxhNexUtbjk6P8emmXOJi5 chQPnWX+N3Gj+jjYxqTFdwT7Mj3pv1VHa+aNUbqSPpvJeDyxRIuo9hvzDaBHb/Cg 9qRVcm8a96n4t7y2lrX1oookY6bkBaxWOMtWlqIprq8JZXM9AgMBAAGjgYEwfzAJ BgNVHRMEAjAAMAsGA1UdDwQEAwIEsDAdBgNVHQ4EFgQUOIdbSMr3VFrTCO9/cTM0 0exHzBcwIgYDVR0RBBswGYEXbGFyc0BncmVpei1yZWluc2RvcmYuZGUwIgYDVR0S BBswGYEXbGFyc0BncmVpei1yZWluc2RvcmYuZGUwDQYJKoZIhvcNAQELBQADggGB AFWP7F/y3Oq3NgrqUOnjKOeDaBa7AqNhHS+PZg+C90lnJzMgOs4KKgZYxqSQVSab SCEmzIO/StkXY4NpJ4fYLrHemf/fJy1wPyu+fNdp5SEEUwEo+2toRFlzTe4u4LdS QC636nPPTMt8H3xz2wf/lUIUeo2Qc95Qt2BQM465ibbG9kmA3c7Sopx6yOabYOAl KPRbOSEPiWYcF9Suuz8Gdf8jxEtPlnZiwRvnYJ+IHMq3XQCJWPpMzdDMbtlgHbXE vq1zOTLMSYAS0UB3uionR4yo1hLz60odwkCm7qf0o2Ci/5OjtB0a89VuyqRU2vUJ QH95WBjDJ6lCCW7J0mrMPnJQSUFTmufsU6jOChvPaCeAzW1YwrsP/YKnvwueG7ip VOdW6RitjtFxhS7evRL0201+KUvLz12zZWWjOcujlQs64QprxOtiv/MiisKb1Ng+ oL1mUdzB8KrZL4/WbG5YNX6UTtJbIOu9qEFbBAy4/jtIkJX+dlNoFwd4GXQW1YNO nA== -----END CERTIFICATE----- pg-1.5.5/misc/0000755000004100000410000000000014563476204013126 5ustar 
www-datawww-datapg-1.5.5/misc/ruby-pg/0000755000004100000410000000000014563476204014513 5ustar www-datawww-datapg-1.5.5/misc/ruby-pg/Manifest.txt0000644000004100000410000000007414563476204017023 0ustar www-datawww-dataHistory.txt Manifest.txt README.txt Rakefile lib/ruby/pg.rb pg-1.5.5/misc/ruby-pg/README.txt0000644000004100000410000000105014563476204016205 0ustar www-datawww-data= ruby-pg * https://github.com/ged/ruby-pg == Description This is an old, deprecated version of the 'pg' gem that hasn't been maintained or supported since early 2008. You should install/require 'pg' instead. If you need ruby-pg for legacy code that can't be converted, you can still install it using an explicit version, like so: gem install ruby-pg -v '0.7.9.2008.01.28' gem uninstall ruby-pg -v '>0.7.9.2008.01.28' If you have any questions, the nice folks in the Google group can help: http://goo.gl/OjOPP / ruby-pg@googlegroups.com pg-1.5.5/misc/ruby-pg/lib/0000755000004100000410000000000014563476204015261 5ustar www-datawww-datapg-1.5.5/misc/ruby-pg/lib/ruby/0000755000004100000410000000000014563476204016242 5ustar www-datawww-datapg-1.5.5/misc/ruby-pg/lib/ruby/pg.rb0000644000004100000410000000036214563476204017176 0ustar www-datawww-data# -*- ruby -*- require 'pathname' module Pg VERSION = '0.8.0' gemdir = Pathname( __FILE__ ).dirname.parent.parent readme = gemdir + 'README.txt' header, message = readme.read.split( /^== Description/m ) abort( message.strip ) end pg-1.5.5/misc/ruby-pg/History.txt0000644000004100000410000000027014563476204016714 0ustar www-datawww-data== v0.8.0 [2012-02-09] Michael Granger This placeholder version. == v0.7.9.2008.01.28 [2008-01-28] Jeff Davis <> The last actual version. 
pg-1.5.5/misc/ruby-pg/Rakefile0000644000004100000410000000066214563476204016164 0ustar www-datawww-data# -*- ruby -*- require 'date' require 'rubygems' require 'hoe' require 'pp' Hoe.spec 'ruby-pg' do self.developer 'Michael Granger', 'ged@FaerieMUD.org' self.dependency 'pg', '~> 0' self.spec_extras[:date] = Date.parse( '2008/01/30' ) line = '-' * 75 msg = paragraphs_of( 'README.txt', 3..-1 ) msg.unshift( line ) msg.push( line ) self.spec_extras[:post_install_message] = msg.join( "\n\n" ) + "\n" end # vim: syntax=ruby pg-1.5.5/misc/openssl-pg-segfault.rb0000644000004100000410000000143614563476204017356 0ustar www-datawww-data# -*- ruby -*- PGHOST = 'localhost' PGDB = 'test' #SOCKHOST = 'github.com' SOCKHOST = 'it-trac.laika.com' # Load pg first, so the libssl.so that libpq is linked against is loaded. require 'pg' $stderr.puts "connecting to postgres://#{PGHOST}/#{PGDB}" conn = PG.connect( PGHOST, :dbname => PGDB ) # Now load OpenSSL, which might be linked against a different libssl. require 'socket' require 'openssl' $stderr.puts "Connecting to #{SOCKHOST}" sock = TCPSocket.open( SOCKHOST, 443 ) ctx = OpenSSL::SSL::SSLContext.new sock = OpenSSL::SSL::SSLSocket.new( sock, ctx ) sock.sync_close = true # The moment of truth... $stderr.puts "Attempting to connect..." begin sock.connect rescue Errno $stderr.puts "Got an error connecting, but no segfault." else $stderr.puts "Nope, no segfault!" end pg-1.5.5/misc/postgres/0000755000004100000410000000000014563476204014774 5ustar www-datawww-datapg-1.5.5/misc/postgres/Manifest.txt0000644000004100000410000000007514563476204017305 0ustar www-datawww-dataHistory.txt Manifest.txt README.txt Rakefile lib/postgres.rb pg-1.5.5/misc/postgres/README.txt0000644000004100000410000000110414563476204016466 0ustar www-datawww-data= postgres * https://github.com/ged/ruby-pg == Description This is an old, deprecated version of the Ruby PostgreSQL driver that hasn't been maintained or supported since early 2008. 
You should install/require 'pg' instead. If you need the 'postgres' gem for legacy code that can't be converted, you can still install it using an explicit version, like so: gem install postgres -v '0.7.9.2008.01.28' gem uninstall postgres -v '>0.7.9.2008.01.28' If you have any questions, the nice folks in the Google group can help: http://goo.gl/OjOPP / ruby-pg@googlegroups.com pg-1.5.5/misc/postgres/lib/0000755000004100000410000000000014563476204015542 5ustar www-datawww-datapg-1.5.5/misc/postgres/lib/postgres.rb0000644000004100000410000000036114563476204017735 0ustar www-datawww-data# -*- ruby -*- require 'pathname' module Postgres VERSION = '0.8.1' gemdir = Pathname( __FILE__ ).dirname.parent readme = gemdir + 'README.txt' header, message = readme.read.split( /^== Description/m ) abort( message.strip ) end pg-1.5.5/misc/postgres/History.txt0000644000004100000410000000027014563476204017175 0ustar www-datawww-data== v0.8.0 [2012-02-09] Michael Granger This placeholder version. == v0.7.9.2008.01.28 [2008-01-28] Jeff Davis <> The last actual version. 
pg-1.5.5/misc/postgres/Rakefile0000644000004100000410000000066314563476204016446 0ustar www-datawww-data# -*- ruby -*- require 'date' require 'rubygems' require 'hoe' require 'pp' Hoe.spec 'postgres' do self.developer 'Michael Granger', 'ged@FaerieMUD.org' self.dependency 'pg', '~> 0' self.spec_extras[:date] = Date.parse( '2008/01/30' ) line = '-' * 75 msg = paragraphs_of( 'README.txt', 3..-1 ) msg.unshift( line ) msg.push( line ) self.spec_extras[:post_install_message] = msg.join( "\n\n" ) + "\n" end # vim: syntax=ruby pg-1.5.5/.gitignore0000644000004100000410000000041014563476204014156 0ustar www-datawww-data*.lock *.orig *_BACKUP_* *_BASE_* *_LOCAL_* *_REMOTE_* /.test_symlink /build/ /ext/Makefile /ext/mkmf.log /ext/postgresql_lib_path.rb /doc/ /lib/*.bundle /lib/*.so /lib/2.?/ /lib/3.?/ /pkg/ /tmp/ /tmp_test_*/ /vendor/ /lib/libpq.dll /lib/pg/postgresql_lib_path.rb pg-1.5.5/Rakefile.cross0000644000004100000410000002334114563476204014773 0ustar www-datawww-data# -*- rake -*- require 'uri' require 'tempfile' require 'rbconfig' require 'rake/clean' require 'rake/extensiontask' require 'rake/extensioncompiler' require 'ostruct' require_relative 'rakelib/task_extension' MISCDIR = BASEDIR + 'misc' NUM_CPUS = if File.exist?('/proc/cpuinfo') File.read('/proc/cpuinfo').scan('processor').length elsif RUBY_PLATFORM.include?( 'darwin' ) `system_profiler SPHardwareDataType | grep 'Cores' | awk '{print $5}'`.chomp else 1 end class CrossLibrary < OpenStruct include Rake::DSL prepend TaskExtension def initialize(for_platform, openssl_config, toolchain) super() self.for_platform = for_platform self.openssl_config = openssl_config self.host_platform = toolchain # Cross-compilation constants self.openssl_version = ENV['OPENSSL_VERSION'] || '3.2.1' self.postgresql_version = ENV['POSTGRESQL_VERSION'] || '16.2' # Check if symlinks work in the current working directory. # This fails, if rake-compiler-dock is running on a Windows box. 
begin FileUtils.rm_f '.test_symlink' FileUtils.ln_s '/', '.test_symlink' rescue NotImplementedError, SystemCallError # Symlinks don't work -> use home directory instead self.compile_home = Pathname( "~/.ruby-pg-build" ).expand_path else self.compile_home = Pathname( "./build" ).expand_path end self.static_sourcesdir = compile_home + 'sources' self.static_builddir = compile_home + 'builds' + for_platform CLOBBER.include( static_sourcesdir ) CLEAN.include( static_builddir ) # Static OpenSSL build vars self.static_openssl_builddir = static_builddir + "openssl-#{openssl_version}" self.openssl_source_uri = URI( "http://www.openssl.org/source/openssl-#{openssl_version}.tar.gz" ) self.openssl_tarball = static_sourcesdir + File.basename( openssl_source_uri.path ) self.openssl_makefile = static_openssl_builddir + 'Makefile' self.libssl = static_openssl_builddir + 'libssl.a' self.libcrypto = static_openssl_builddir + 'libcrypto.a' self.openssl_patches = Rake::FileList[ (MISCDIR + "openssl-#{openssl_version}.*.patch").to_s ] # Static PostgreSQL build vars self.static_postgresql_builddir = static_builddir + "postgresql-#{postgresql_version}" self.postgresql_source_uri = begin uristring = "http://ftp.postgresql.org/pub/source/v%s/postgresql-%s.tar.bz2" % [ postgresql_version, postgresql_version ] URI( uristring ) end self.postgresql_tarball = static_sourcesdir + File.basename( postgresql_source_uri.path ) self.static_postgresql_srcdir = static_postgresql_builddir + 'src' self.static_postgresql_libdir = static_postgresql_srcdir + 'interfaces/libpq' self.static_postgresql_incdir = static_postgresql_srcdir + 'include' self.postgresql_global_makefile = static_postgresql_srcdir + 'Makefile.global' self.postgresql_shlib_makefile = static_postgresql_srcdir + 'Makefile.shlib' self.postgresql_shlib_mf_orig = static_postgresql_srcdir + 'Makefile.shlib.orig' self.postgresql_lib = static_postgresql_libdir + 'libpq.dll' self.postgresql_patches = Rake::FileList[ (MISCDIR + 
"postgresql-#{postgresql_version}.*.patch").to_s ] # clean intermediate files and folders CLEAN.include( static_builddir.to_s ) ##################################################################### ### C R O S S - C O M P I L A T I O N - T A S K S ##################################################################### directory static_sourcesdir.to_s # # Static OpenSSL build tasks # directory static_openssl_builddir.to_s # openssl source file should be stored there file openssl_tarball => static_sourcesdir do |t| download( openssl_source_uri, t.name ) end # Extract the openssl builds file static_openssl_builddir => openssl_tarball do |t| puts "extracting %s to %s" % [ openssl_tarball, static_openssl_builddir.parent ] static_openssl_builddir.mkpath run 'tar', '-xzf', openssl_tarball.to_s, '-C', static_openssl_builddir.parent.to_s openssl_makefile.unlink if openssl_makefile.exist? openssl_patches.each do |patchfile| puts " applying patch #{patchfile}..." run 'patch', '-Np1', '-d', static_openssl_builddir.to_s, '-i', File.expand_path( patchfile, BASEDIR ) end end self.cmd_prelude = [ "env", "CROSS_COMPILE=#{host_platform}-", "CFLAGS=-DDSO_WIN32", ] # generate the makefile in a clean build location file openssl_makefile => static_openssl_builddir do |t| chdir( static_openssl_builddir ) do cmd = cmd_prelude.dup cmd << "./Configure" << "-static" << openssl_config run( *cmd ) end end desc "compile static openssl libraries" task "openssl_libs:#{for_platform}" => [ libssl, libcrypto ] task "compile_static_openssl:#{for_platform}" => openssl_makefile do |t| chdir( static_openssl_builddir ) do cmd = cmd_prelude.dup cmd << 'make' << "-j#{NUM_CPUS}" << 'build_libs' run( *cmd ) end end desc "compile static #{libssl}" file libssl => "compile_static_openssl:#{for_platform}" desc "compile static #{libcrypto}" file libcrypto => "compile_static_openssl:#{for_platform}" # # Static PostgreSQL build tasks # directory static_postgresql_builddir.to_s # postgresql source file should be 
stored there file postgresql_tarball => static_sourcesdir do |t| download( postgresql_source_uri, t.name ) end # Extract the postgresql sources file static_postgresql_builddir => postgresql_tarball do |t| puts "extracting %s to %s" % [ postgresql_tarball, static_postgresql_builddir.parent ] static_postgresql_builddir.mkpath run 'tar', '-xjf', postgresql_tarball.to_s, '-C', static_postgresql_builddir.parent.to_s postgresql_patches.each do |patchfile| puts " applying patch #{patchfile}..." run 'patch', '-Np1', '-d', static_postgresql_builddir.to_s, '-i', File.expand_path( patchfile, BASEDIR ) end end # generate the makefile in a clean build location file postgresql_global_makefile => [ static_postgresql_builddir, "openssl_libs:#{for_platform}" ] do |t| options = [ "--target=#{host_platform}", "--host=#{host_platform}", '--with-openssl', '--without-zlib', '--without-icu', ] chdir( static_postgresql_builddir ) do configure_path = static_postgresql_builddir + 'configure' cmd = [ configure_path.to_s, *options ] cmd << "CFLAGS=-L#{static_openssl_builddir}" cmd << "LDFLAGS=-L#{static_openssl_builddir}" cmd << "LDFLAGS_SL=-L#{static_openssl_builddir}" cmd << "LIBS=-lwsock32 -lgdi32 -lws2_32 -lcrypt32" cmd << "CPPFLAGS=-I#{static_openssl_builddir}/include" run( *cmd ) end end # make libpq.dll task postgresql_lib => [ postgresql_global_makefile ] do |t| # Work around missing dependency to libcommon in PostgreSQL-9.4.0 chdir( static_postgresql_srcdir + "common" ) do sh 'make', "-j#{NUM_CPUS}" end chdir( static_postgresql_srcdir + "port" ) do sh 'make', "-j#{NUM_CPUS}" end chdir( postgresql_lib.dirname ) do sh 'make', "-j#{NUM_CPUS}", postgresql_lib.basename.to_s, 'SHLIB_LINK=-lssl -lcrypto -lcrypt32 -lgdi32 -lsecur32 -lwsock32 -lws2_32' end end #desc 'compile libpg.a' task "native:#{for_platform}" => postgresql_lib # copy libpq.dll to lib dir dest_libpq = "lib/#{for_platform}/#{postgresql_lib.basename}" directory File.dirname(dest_libpq) file dest_libpq => [postgresql_lib, 
File.dirname(dest_libpq)] do cp postgresql_lib, dest_libpq end stage_libpq = "tmp/#{for_platform}/stage/#{dest_libpq}" directory File.dirname(stage_libpq) file stage_libpq => [postgresql_lib, File.dirname(stage_libpq)] do |t| cp postgresql_lib, stage_libpq end end def download(url, save_to) part = save_to+".part" sh "wget #{url.to_s.inspect} -O #{part.inspect} || curl #{url.to_s.inspect} -o #{part.inspect}" FileUtils.mv part, save_to end def run(*args) sh(*args) end end CrossLibraries = [ ['x64-mingw-ucrt', 'mingw64', 'x86_64-w64-mingw32'], ['x86-mingw32', 'mingw', 'i686-w64-mingw32'], ['x64-mingw32', 'mingw64', 'x86_64-w64-mingw32'], ].map do |platform, openssl_config, toolchain| CrossLibrary.new platform, openssl_config, toolchain end desc 'cross compile pg for win32' task :cross => [ :mingw32 ] task :mingw32 do # Use Rake::ExtensionCompiler helpers to find the proper host unless Rake::ExtensionCompiler.mingw_host then warn "You need to install mingw32 cross compile functionality to be able to continue." warn "Please refer to your distribution/package manager documentation about installation." 
fail end end task 'gem:windows:prepare' do require 'io/console' require 'rake_compiler_dock' # Copy gem signing key and certs to be accessible from the docker container mkdir_p 'build/gem' sh "cp ~/.gem/gem-*.pem build/gem/ || true" sh "bundle package" begin OpenSSL::PKey.read(File.read(File.expand_path("~/.gem/gem-private_key.pem")), ENV["GEM_PRIVATE_KEY_PASSPHRASE"] || "") rescue OpenSSL::PKey::PKeyError ENV["GEM_PRIVATE_KEY_PASSPHRASE"] = STDIN.getpass("Enter passphrase of gem signature key: ") retry end end CrossLibraries.each do |xlib| platform = xlib.for_platform desc "Build fat binary gem for platform #{platform}" task "gem:windows:#{platform}" => ['gem:windows:prepare', xlib.openssl_tarball, xlib.postgresql_tarball] do RakeCompilerDock.sh <<-EOT, platform: platform (cp build/gem/gem-*.pem ~/.gem/ || true) && bundle install --local && rake native:#{platform} pkg/#{$gem_spec.full_name}-#{platform}.gem MAKE="make -j`nproc`" RUBY_CC_VERSION=3.3.0:3.2.0:3.1.0:3.0.0:2.7.0:2.6.0:2.5.0 EOT end desc "Build the windows binary gems" multitask 'gem:windows' => "gem:windows:#{platform}" end pg-1.5.5/README.ja.md0000644000004100000410000004232714563476204014053 0ustar www-datawww-data# pg * ホーム :: https://github.com/ged/ruby-pg * ドキュメント :: http://deveiate.org/code/pg (英語)、 https://deveiate.org/code/pg/README_ja_md.html (日本語) * 変更履歴 :: link:/History.md [![https://gitter.im/ged/ruby-pg でチャットに参加](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/ged/ruby-pg?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge) ## 説明 Pgは[PostgreSQL RDBMS](http://www.postgresql.org/)へのRubyのインターフェースです。[PostgreSQL 9.3以降](http://www.postgresql.org/support/versioning/)で動作します。 簡単な使用例は次の通りです。 ```ruby #!/usr/bin/env ruby require 'pg' # データベースへの現在の接続を表に出力します conn = PG.connect( dbname: 'sales' ) conn.exec( "SELECT * FROM pg_stat_activity" ) do |result| puts " PID | User | Query" result.each do |row| puts " %7d | %-16s | %s " % row.values_at('pid', 'usename', 'query') 
end end ``` ## ビルド状況 [![Github Actionsのビルド状況](https://github.com/ged/ruby-pg/actions/workflows/source-gem.yml/badge.svg?branch=master)](https://github.com/ged/ruby-pg/actions/workflows/source-gem.yml) [![バイナリgem](https://github.com/ged/ruby-pg/actions/workflows/binary-gems.yml/badge.svg?branch=master)](https://github.com/ged/ruby-pg/actions/workflows/binary-gems.yml) [![Appveyorのビルド状況](https://ci.appveyor.com/api/projects/status/gjx5axouf3b1wicp?svg=true)](https://ci.appveyor.com/project/ged/ruby-pg-9j8l3) ## 要件 * Ruby 2.5かそれより新しいバージョン * PostgreSQL 9.3.xかそれ以降のバージョン(ヘッダー付属のもの、例えば-devの名前のパッケージ)。 それより前のバージョンのRubyやPostgreSQLでも通常は同様に動作しますが、定期的なテストはされていません。 ## バージョン管理 [セマンティックバージョニング](http://semver.org/)の原則にしたがってgemをタグ付けしてリリースしています。 この方針の結果として、2つの数字を指定する[悲観的バージョン制約](http://guides.rubygems.org/patterns/#pessimistic-version-constraint)を使ってこのgemへの依存関係を指定することができます(またそうすべきです)。 例えば次の通りです。 ```ruby spec.add_dependency 'pg', '~> 1.0' ``` ## インストール方法 RubyGemsを経由してインストールするには以下とします。 gem install pg Postgresと一緒にインストールされた'pg_config'プログラムへのパスを指定する必要があるかもしれません。 gem install pg -- --with-pg-config= Bundlerを介してインストールした場合は次のようにコンパイルのためのヒントを与えられます。 bundle config build.pg --with-pg-config= MacOS Xへインストールする詳しい情報については README-OS_X.rdoc を、Windows用のビルドやインストールの説明については README-Windows.rdoc を参照してください。 詰まったときやただ何か喋りたいときのために[Google+グループ](http://goo.gl/TFy1U)と[メーリングリスト](http://groups.google.com/group/ruby-pg)もあります。 署名されたgemとしてインストールしたい場合は、リポジトリの[`certs`ディレクトリ](https://github.com/ged/ruby-pg/tree/master/certs)にgemの署名をする公開証明書があります。 ## 型変換 Pgでは任意でRubyと素のCコードにある結果の値やクエリ引数の型変換ができます。 こうすることでデータベースとのデータの往来を加速させられます。 なぜなら文字列のアロケーションが減り、(比較的遅い)Rubyのコードでの変換部分が省かれるからです。 とても基本的な型変換は次のようにできます。 ```ruby conn.type_map_for_results = PG::BasicTypeMapForResults.new conn # ……これは結果の値の対応付けに作用します。 conn.exec("select 1, now(), '{2,3}'::int[]").values # => [[1, 2014-09-21 20:51:56 +0200, [2, 3]]] conn.type_map_for_queries = PG::BasicTypeMapForQueries.new conn # ……そしてこれは引数値の対応付けのためのものです。 conn.exec_params("SELECT $1::text, $2::text, 
$3::text", [1, 1.23, [2,3]]).values # => [["1", "1.2300000000000000E+00", "{2,3}"]] ``` しかしPgの型変換はかなり調整が効きます。2層に分かれているのがその理由です。 ### エンコーダーとデコーダー (ext/pg_*coder.c, lib/pg/*coder.rb) こちらはより低層で、DBMSへ転送するためにRubyのオブジェクトを変換するエンコーディングクラスと、取得してきたデータをRubyのオブジェクトに変換し戻すデコーディングクラスが含まれています。 クラスはそれぞれの形式によって名前空間 PG::TextEncoder, PG::TextDecoder, PG::BinaryEncoder, そして PG::BinaryDecoder に分かれています。 エンコーダーないしデコーダーオブジェクトにOIDデータ型や形式コード(テキストないしバイナリ)や任意で名前を割り当てることができます。 要素のエンコーダーないしデコーダーを割り当てることによって複合型を構築することもできます。 PG::Coder オブジェクトは PG::TypeMap をセットアップしたり、その代わりに単一の値と文字列表現とを相互に変換したりするのに使えます。 ruby-pgでは以下のPostgreSQLカラム型に対応しています(TE = Text Encoder、TD = Text Decoder、BE = Binary Encoder、BD = Binary Decoder)。 * Integer: [TE](rdoc-ref:PG::TextEncoder::Integer)、[TD](rdoc-ref:PG::TextDecoder::Integer)、[BD](rdoc-ref:PG::BinaryDecoder::Integer) 💡 リンクがないでしょうか。[こちら](https://deveiate.org/code/pg/README_ja_md.html#label-E5-9E-8B-E5-A4-89-E6-8F-9B)を代わりに見てください 💡 * BE: [Int2](rdoc-ref:PG::BinaryEncoder::Int2)、[Int4](rdoc-ref:PG::BinaryEncoder::Int4)、[Int8](rdoc-ref:PG::BinaryEncoder::Int8) * Float: [TE](rdoc-ref:PG::TextEncoder::Float)、[TD](rdoc-ref:PG::TextDecoder::Float)、[BD](rdoc-ref:PG::BinaryDecoder::Float) * BE: [Float4](rdoc-ref:PG::BinaryEncoder::Float4), [Float8](rdoc-ref:PG::BinaryEncoder::Float8) * Numeric: [TE](rdoc-ref:PG::TextEncoder::Numeric)、[TD](rdoc-ref:PG::TextDecoder::Numeric) * Boolean: [TE](rdoc-ref:PG::TextEncoder::Boolean)、[TD](rdoc-ref:PG::TextDecoder::Boolean)、[BE](rdoc-ref:PG::BinaryEncoder::Boolean)、[BD](rdoc-ref:PG::BinaryDecoder::Boolean) * String: [TE](rdoc-ref:PG::TextEncoder::String)、[TD](rdoc-ref:PG::TextDecoder::String)、[BE](rdoc-ref:PG::BinaryEncoder::String)、[BD](rdoc-ref:PG::BinaryDecoder::String) * Bytea: [TE](rdoc-ref:PG::TextEncoder::Bytea)、[TD](rdoc-ref:PG::TextDecoder::Bytea)、[BE](rdoc-ref:PG::BinaryEncoder::Bytea)、[BD](rdoc-ref:PG::BinaryDecoder::Bytea) * Base64: 
[TE](rdoc-ref:PG::TextEncoder::ToBase64)、[TD](rdoc-ref:PG::TextDecoder::FromBase64)、[BE](rdoc-ref:PG::BinaryEncoder::FromBase64)、[BD](rdoc-ref:PG::BinaryDecoder::ToBase64) * Timestamp: * TE: [現地時間](rdoc-ref:PG::TextEncoder::TimestampWithoutTimeZone)、[UTC](rdoc-ref:PG::TextEncoder::TimestampUtc)、[タイムゾーン付き](rdoc-ref:PG::TextEncoder::TimestampWithTimeZone) * TD: [現地時間](rdoc-ref:PG::TextDecoder::TimestampLocal)、[UTC](rdoc-ref:PG::TextDecoder::TimestampUtc)、[UTCから現地時間へ](rdoc-ref:PG::TextDecoder::TimestampUtcToLocal) * BE: [現地時間](rdoc-ref:PG::BinaryEncoder::TimestampLocal)、[UTC](rdoc-ref:PG::BinaryEncoder::TimestampUtc) * BD: [現地時間](rdoc-ref:PG::BinaryDecoder::TimestampLocal)、[UTC](rdoc-ref:PG::BinaryDecoder::TimestampUtc)、[UTCから現地時間へ](rdoc-ref:PG::BinaryDecoder::TimestampUtcToLocal) * 日付:[TE](rdoc-ref:PG::TextEncoder::Date)、[TD](rdoc-ref:PG::TextDecoder::Date)、[BE](rdoc-ref:PG::BinaryEncoder::Date)、[BD](rdoc-ref:PG::BinaryDecoder::Date) * JSONとJSONB: [TE](rdoc-ref:PG::TextEncoder::JSON)、[TD](rdoc-ref:PG::TextDecoder::JSON) * Inet: [TE](rdoc-ref:PG::TextEncoder::Inet)、[TD](rdoc-ref:PG::TextDecoder::Inet) * Array: [TE](rdoc-ref:PG::TextEncoder::Array)、[TD](rdoc-ref:PG::TextDecoder::Array) * 複合型(「行」や「レコード」などとも言います):[TE](rdoc-ref:PG::TextEncoder::Record)、[TD](rdoc-ref:PG::TextDecoder::Record) カラム型として使われていませんが、以下のテキスト形式とバイナリ形式もエンコードできます。 * COPYの入出力データ:[TE](rdoc-ref:PG::TextEncoder::CopyRow)、[TD](rdoc-ref:PG::TextDecoder::CopyRow), [BE](rdoc-ref:PG::BinaryEncoder::CopyRow), [BD](rdoc-ref:PG::BinaryDecoder::CopyRow) * SQL文字列に挿入するリテラル:[TE](rdoc-ref:PG::TextEncoder::QuotedLiteral) * SQLの識別子: [TE](rdoc-ref:PG::TextEncoder::Identifier)、[TD](rdoc-ref:PG::TextDecoder::Identifier) ### PG::TypeMap とその派生 (ext/pg_type_map*.c, lib/pg/type_map*.rb) TypeMapはエンコーダーまたはデコーダーのどちらによってどの値を変換するかを定義します。 様々な型の対応付け戦略があるので、このクラスにはいくつかの派生が実装されています。 型変換の特有の需要に合わせてそれらの派生から選んで調整を加えることができます。 既定の型の対応付けは PG::TypeMapAllStrings です。 型の対応付けは、結果の集合それぞれに対し、接続毎ないしクエリ毎に割り当てることができます。 
型の対応付けはCOPYの入出力データストリーミングでも使うことができます。 PG::Connection#copy_data を参照してください。 以下の基底となる型の対応付けが使えます。 * PG::TypeMapAllStrings - 全ての値と文字列について相互にエンコードとデコードを行います(既定) * PG::TypeMapByClass - 送信する値のクラスに基づいてエンコーダーを選択します * PG::TypeMapByColumn - カラムの順番によってエンコーダーとデコーダーを選択します * PG::TypeMapByOid - PostgreSQLのOIDデータ型によってデコーダーを選択します * PG::TypeMapInRuby - Rubyで独自の型の対応付けを定義します 以下の型の対応付けは PG::BasicTypeRegistry 由来の型の対応付けが入った状態になっています。 * PG::BasicTypeMapForResults - PG::TypeMapByOid によくあるPostgreSQLカラム型用にデコーダーが入った状態になっています * PG::BasicTypeMapBasedOnResult - PG::TypeMapByOid によくあるPostgreSQLカラム型用のエンコーダーが入った状態になっています * PG::BasicTypeMapForQueries - PG::TypeMapByClass によくあるRubyの値クラス用にエンコーダーが入った状態になっています ## スレッド対応 PGには個々のスレッドが別々の PG::Connection オブジェクトを同時に使えるという点でスレッド安全性があります。 しかし1つ以上のスレッドから同時にPgのオブジェクトにアクセスすると安全ではありません。 そのため必ず、毎回新しいスレッドを作るときに新しいデータベースサーバー接続を開くか、スレッド安全性のある方法で接続を管理するActiveRecordのようなラッパーライブラリを使うようにしてください。 以下のようなメッセージが標準エラー出力に表示された場合、恐らく複数のスレッドが1つの接続を使っています。 message type 0x31 arrived from server while idle message type 0x32 arrived from server while idle message type 0x54 arrived from server while idle message type 0x43 arrived from server while idle message type 0x5a arrived from server while idle ## Fiber IOスケジューラー対応 pg-1.3.0以降で、PgはRuby-3.0で導入された`Fiber.scheduler`に完全に対応しています。 Windowsでは、`Fiber.scheduler`対応はRuby-3.1以降で使えます。 `Fiber.scheduler`が走らせているスレッドに登録されている場合、起こりうる全てのブロッキングIO操作はそのスケジューラーを経由します。 同期的であったりブロックしたりするメソッド呼び出しについてもpgが内部的に非同期のlibpqインターフェースを使っているのはそれが理由です。 またlibpqの組み込み関数に代えてRubyのDNS解決を使っています。 内部的にPgは常にlibpqのノンブロッキング接続モードを使います。 それからブロッキングモードで走っているように振舞いますが、もし`Fiber.scheduler`が登録されていれば全てのブロッキングIOはそのスケジューラーを通じてRubyで制御されます。 `PG::Connection.setnonblocking(true)`が呼ばれたらノンブロッキング状態が有効になったままになりますが、それ以降のブロッキング状態の制御が無効になるので、呼び出しているプログラムはブロッキング状態を自力で制御しなければなりません。 この規則の1つの例外には、`PG::Connection#lo_create`や外部ライブラリを使う認証メソッド(GSSAPI認証など)のような、大きめのオブジェクト用のメソッドがあります。これらは`Fiber.scheduler`と互換性がないため、ブロッキング状態は登録されたIOスケジューラに渡されません。つまり操作は適切に実行されますが、IO待ち状態に別のIOを扱うFiberから使用を切り替えてくることができなくなります。 ## Ractor対応 
Since pg-1.5.0, Pg is fully compatible with Ractor introduced with Ruby-3.0. All type encoders and decoders as well as type maps can be shared between ractors if they are frozen by `Ractor.make_shareable`. Frozen PG::Result and PG::Tuple objects can be shared as well. At least all frozen objects (with the exception of PG::Connection) can be used to communicate with the PostgreSQL server or to read retrieved data. A PG::Connection can not be shared; it must be created within each Ractor to establish a dedicated connection.

## Contributing

To report bugs, suggest features, or check out the source with Git, [check out the project page](https://github.com/ged/ruby-pg).

After checking out the source, install all dependencies:

    $ bundle install

To clean up extension files, packaging files and the test database, and to switch the PostgreSQL version, run:

    $ rake clean

To compile the extension:

    $ rake compile

To run the tests/specs on the PostgreSQL version that `pg_config --bindir` points to:

    $ rake test

Or to run a specific test by file and line number on a specific PostgreSQL version:

    $ PATH=/usr/lib/postgresql/14/bin:$PATH rspec -Ilib -fd spec/pg/connection_spec.rb:455

To generate the API documentation:

    $ rake docs

Make sure that all bugs and new features are verified by tests.

The current maintainers are Michael Granger and Lars Kanis.

## Copying

Copyright (c) 1997-2022 by the authors.

* Jeff Davis
* Guy Decoux (ts)
* Michael Granger
* Lars Kanis
* Dave Lee
* Eiji Matsumoto
* Yukihiro Matsumoto
* Noboru Saitou

You may redistribute this software under the same terms as Ruby itself; see https://www.ruby-lang.org/en/about/license.txt or the BSDL file in the source for details.

Portions of the code are from the PostgreSQL project, and are distributed under the terms of the PostgreSQL license, included in the file POSTGRES.

Portions copyright LAIKA, Inc.
## Acknowledgments

See Contributors.rdoc for the many people who have contributed over the years. Thanks to the people on the ruby-list and ruby-dev mailing lists, and to the people who developed PostgreSQL.

pg-1.5.5/rakelib/0000755000004100000410000000000014563476204013604 5ustar www-datawww-datapg-1.5.5/rakelib/task_extension.rb0000644000004100000410000000334314563476204017172 0ustar www-datawww-data# This source code is borrowed from: # https://github.com/oneclick/rubyinstaller2/blob/b3dcbf69f131e44c78ea3a1c5e0041c223f266ce/lib/ruby_installer/build/utils.rb#L104-L144 module TaskExtension # Extend rake's file task to be defined only once and to check the expected file is indeed generated # # The same as #task, but for #file. # In addition this file task raises an error, if the file that is expected to be generated is not present after the block was executed. def file(name, *args, &block) task_once(name, block) do super(name, *args) do |ta| block&.call(ta).tap do raise "file #{ta.name} is missing after task executed" unless File.exist?(ta.name) end end end end # Extend rake's task definition to be defined only once, even if called several times # # This allows to define common tasks next to specific tasks. # It is expected that any variation of the task's block is reflected in the task name or namespace. # If the task name is identical, the task block is executed only once, even if the file task definition is executed twice.
def task(name, *args, &block) task_once(name, block) do super end end private def task_once(name, block) name = name.keys.first if name.is_a?(Hash) if block && Rake::Task.task_defined?(name) && Rake::Task[name].instance_variable_get('@task_block_location') == block.source_location # task is already defined for this target and the same block # So skip double definition of the same action Rake::Task[name] elsif block yield.tap do Rake::Task[name].instance_variable_set('@task_block_location', block.source_location) end else yield end end end pg-1.5.5/.github/0000755000004100000410000000000014563476204013533 5ustar www-datawww-datapg-1.5.5/.github/workflows/0000755000004100000410000000000014563476204015570 5ustar www-datawww-datapg-1.5.5/.github/workflows/binary-gems.yml0000644000004100000410000001024514563476204020532 0ustar www-datawww-dataname: Binary gems on: push: pull_request: workflow_dispatch: schedule: - cron: "0 5 * * 3" # At 05:00 on Wednesday # https://crontab.guru/#0_5_*_*_3 jobs: job_build_x64: name: Build runs-on: ubuntu-latest strategy: fail-fast: false matrix: include: - platform: "x64-mingw-ucrt" - platform: "x64-mingw32" - platform: "x86-mingw32" steps: - uses: actions/checkout@v3 - name: Set up Ruby uses: ruby/setup-ruby@v1 with: ruby-version: "3.3" - run: bundle install - name: Create a dummy cert to satisfy the build run: | mkdir -p ~/.gem/ ruby -ropenssl -e "puts OpenSSL::PKey::RSA.new(2048).to_pem" > ~/.gem/gem-private_key.pem gem cert --build travis-ci@dummy.org --private-key ~/.gem/gem-private_key.pem cp gem-public_cert.pem ~/.gem/gem-public_cert.pem - name: Build binary gem run: bundle exec rake gem:windows:${{ matrix.platform }} - name: Upload binary gem uses: actions/upload-artifact@v3 with: name: binary-gem path: pkg/*.gem job_test_binary: name: Test needs: job_build_x64 strategy: fail-fast: false matrix: include: - os: windows-latest ruby: "3.3" platform: "x64-mingw-ucrt" PGVERSION: 16.0-1-windows-x64 - os: windows-latest ruby: "3.1.4-1" 
platform: "x86-mingw32" PGVERSION: 10.20-1-windows - os: windows-latest ruby: "2.5" platform: "x64-mingw32" PGVERSION: 10.20-1-windows runs-on: ${{ matrix.os }} env: PGVERSION: ${{ matrix.PGVERSION }} steps: - uses: actions/checkout@v3 - name: Set up Ruby if: matrix.platform != 'x86-mingw32' uses: ruby/setup-ruby@v1 with: ruby-version: ${{ matrix.ruby }} - name: Set up 32 bit x86 Ruby if: matrix.platform == 'x86-mingw32' run: | $(new-object net.webclient).DownloadFile("https://github.com/oneclick/rubyinstaller2/releases/download/RubyInstaller-${{ matrix.ruby }}/rubyinstaller-${{ matrix.ruby }}-x86.exe", "$pwd/ruby-setup.exe") cmd /c ruby-setup.exe /currentuser /verysilent /dir=C:/Ruby-${{ matrix.ruby }} echo "c:/ruby-${{ matrix.ruby }}/bin" | Out-File -FilePath $env:GITHUB_PATH -Encoding utf8 -Append c:/ruby-${{ matrix.ruby }}/bin/ridk enable c:/msys64/usr/bin/bash -lc "pacman -S --noconfirm --needed make `${MINGW_PACKAGE_PREFIX}-pkgconf `${MINGW_PACKAGE_PREFIX}-libyaml `${MINGW_PACKAGE_PREFIX}-gcc `${MINGW_PACKAGE_PREFIX}-make" echo "C:/msys64/$env:MSYSTEM_PREFIX/bin" | Out-File -FilePath $env:GITHUB_PATH -Encoding utf8 -Append - name: Download gem from build job uses: actions/download-artifact@v3 with: name: binary-gem - name: Download PostgreSQL run: | Add-Type -AssemblyName System.IO.Compression.FileSystem function Unzip { param([string]$zipfile, [string]$outpath) [System.IO.Compression.ZipFile]::ExtractToDirectory($zipfile, $outpath) } $(new-object net.webclient).DownloadFile("http://get.enterprisedb.com/postgresql/postgresql-$env:PGVERSION-binaries.zip", "postgresql-binaries.zip") Unzip "postgresql-binaries.zip" "." 
echo "$pwd/pgsql/bin" | Out-File -FilePath $env:GITHUB_PATH -Encoding utf8 -Append echo "PGUSER=$env:USERNAME" | Out-File -FilePath $env:GITHUB_ENV -Encoding utf8 -Append echo "PGPASSWORD=" | Out-File -FilePath $env:GITHUB_ENV -Encoding utf8 -Append - run: echo $env:PATH - run: gem update --system 3.3.26 - run: bundle install - run: gem install --local pg-*${{ matrix.platform }}.gem --verbose - name: Run specs run: ruby -rpg -S rspec -fd spec/**/*_spec.rb - name: Print logs if job failed if: ${{ failure() && matrix.os == 'windows-latest' }} run: | ridk enable find "$(ruby -e"puts RbConfig::CONFIG[%q[libdir]]")" -name mkmf.log -print0 | xargs -0 cat pg-1.5.5/.github/workflows/source-gem.yml0000644000004100000410000001064114563476204020363 0ustar www-datawww-dataname: Source gem on: push: pull_request: workflow_dispatch: schedule: - cron: "0 5 * * 3" # At 05:00 on Wednesday # https://crontab.guru/#0_5_*_*_3 jobs: job_build_gem: name: Build runs-on: ubuntu-latest steps: - uses: actions/checkout@v3 - name: Set up Ruby uses: ruby/setup-ruby@v1 with: ruby-version: "3.2" - name: Build source gem run: gem build pg.gemspec - name: Upload source gem uses: actions/upload-artifact@v3 with: name: source-gem path: "*.gem" job_test_gem: name: Test needs: job_build_gem strategy: fail-fast: false matrix: include: - os: windows ruby: "head" PGVERSION: 16.0-1-windows-x64 PGVER: "16" - os: windows ruby: "2.5" PGVERSION: 9.4.26-1-windows-x64 PGVER: "9.4" - os: windows ruby: "mswin" PGVERSION: 16.0-1-windows-x64 PGVER: "16" - os: ubuntu ruby: "head" PGVER: "16" - os: ubuntu ruby: "3.2" PGVER: "12" - os: ubuntu os_ver: "20.04" ruby: "2.5" PGVER: "9.3" - os: ubuntu ruby: "truffleruby" PGVER: "13" - os: ubuntu ruby: "truffleruby-head" PGVER: "16" - os: macos ruby: "head" PGVERSION: 16.0-1-osx PGVER: "16" runs-on: ${{ matrix.os }}-${{ matrix.os_ver || 'latest' }} env: PGVERSION: ${{ matrix.PGVERSION }} PGVER: ${{ matrix.PGVER }} MAKE: make -j2 V=1 steps: - uses: actions/checkout@v3 - name: 
Set up Ruby uses: ruby/setup-ruby@v1 with: ruby-version: ${{ matrix.ruby }} - name: Download gem from build job uses: actions/download-artifact@v3 with: name: source-gem - name: Install required packages Windows if: matrix.os == 'windows' && matrix.ruby != 'mswin' shell: cmd run: ridk exec sh -c "pacman --sync --needed --noconfirm ${MINGW_PACKAGE_PREFIX}-gcc" - name: Download PostgreSQL Windows if: matrix.os == 'windows' run: | Add-Type -AssemblyName System.IO.Compression.FileSystem function Unzip { param([string]$zipfile, [string]$outpath) [System.IO.Compression.ZipFile]::ExtractToDirectory($zipfile, $outpath) } $(new-object net.webclient).DownloadFile("http://get.enterprisedb.com/postgresql/postgresql-$env:PGVERSION-binaries.zip", "postgresql-binaries.zip") Unzip "postgresql-binaries.zip" "." echo "$pwd/pgsql/bin" | Out-File -FilePath $env:GITHUB_PATH -Encoding utf8 -Append echo "PGUSER=$env:USERNAME" | Out-File -FilePath $env:GITHUB_ENV -Encoding utf8 -Append echo "PGPASSWORD=" | Out-File -FilePath $env:GITHUB_ENV -Encoding utf8 -Append - name: Download PostgreSQL Ubuntu if: matrix.os == 'ubuntu' run: | echo "deb http://apt.postgresql.org/pub/repos/apt/ $(lsb_release -cs)-pgdg main $PGVER" | sudo tee -a /etc/apt/sources.list.d/pgdg.list wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add - sudo apt-get -y update sudo apt-get -y --allow-downgrades install postgresql-$PGVER libpq5=$PGVER* libpq-dev=$PGVER* echo /usr/lib/postgresql/$PGVER/bin >> $GITHUB_PATH - name: Download PostgreSQL Macos if: matrix.os == 'macos' run: | wget https://get.enterprisedb.com/postgresql/postgresql-$PGVERSION-binaries.zip && \ sudo mkdir -p /Library/PostgreSQL && \ sudo unzip postgresql-$PGVERSION-binaries.zip -d /Library/PostgreSQL/$PGVER && \ echo /Library/PostgreSQL/$PGVER/bin >> $GITHUB_PATH - run: gem update --system 3.3.26 - run: bundle install - run: gem install --local *.gem --verbose - name: Run specs env: PG_DEBUG: 0 run: ruby -rpg -S rspec 
spec/**/*_spec.rb -cfdoc - name: Print logs if job failed if: ${{ failure() && matrix.os == 'windows' }} run: ridk exec cat tmp_test_specs/*.log - name: Print logs if job failed if: ${{ failure() && matrix.os != 'windows' }} run: cat tmp_test_specs/*.log pg-1.5.5/lib/0000755000004100000410000000000014563476204012741 5ustar www-datawww-datapg-1.5.5/lib/pg/0000755000004100000410000000000014563476204013347 5ustar www-datawww-datapg-1.5.5/lib/pg/coder.rb0000644000004100000410000000401314563476204014766 0ustar www-datawww-data# -*- ruby -*- # frozen_string_literal: true module PG class Coder module BinaryFormatting def initialize(hash={}, **kwargs) warn("PG::Coder.new(hash) is deprecated. Please use keyword arguments instead! Called from #{caller.first}", category: :deprecated) unless hash.empty? super(format: 1, **hash, **kwargs) end end # Create a new coder object based on the attribute Hash. def initialize(hash=nil, **kwargs) warn("PG::Coder.new(hash) is deprecated. Please use keyword arguments instead! Called from #{caller.first}", category: :deprecated) if hash (hash || kwargs).each do |key, val| send("#{key}=", val) end end def dup self.class.new(**to_h) end # Returns coder attributes as Hash. 
def to_h { oid: oid, format: format, flags: flags, name: name, } end def ==(v) self.class == v.class && to_h == v.to_h end def marshal_dump Marshal.dump(to_h) end def marshal_load(str) initialize(**Marshal.load(str)) end def inspect str = self.to_s oid_str = " oid=#{oid}" unless oid==0 format_str = " format=#{format}" unless format==0 name_str = " #{name.inspect}" if name str[-1,0] = "#{name_str} #{oid_str}#{format_str}" str end def inspect_short str = case format when 0 then "T" when 1 then "B" else format.to_s end str += "E" if respond_to?(:encode) str += "D" if respond_to?(:decode) "#{name || self.class.name}:#{str}" end end class CompositeCoder < Coder def to_h { **super, elements_type: elements_type, needs_quotation: needs_quotation?, delimiter: delimiter, } end def inspect str = super str[-1,0] = " elements_type=#{elements_type.inspect} #{needs_quotation? ? 'needs' : 'no'} quotation" str end end class CopyCoder < Coder def to_h { **super, type_map: type_map, delimiter: delimiter, null_string: null_string, } end end class RecordCoder < Coder def to_h { **super, type_map: type_map, } end end end # module PG pg-1.5.5/lib/pg/text_decoder/0000755000004100000410000000000014563476204016020 5ustar www-datawww-datapg-1.5.5/lib/pg/text_decoder/date.rb0000644000004100000410000000047314563476204017266 0ustar www-datawww-data# -*- ruby -*- # frozen_string_literal: true require 'date' module PG module TextDecoder class Date < SimpleDecoder def decode(string, tuple=nil, field=nil) if string =~ /\A(\d{4})-(\d\d)-(\d\d)\z/ ::Date.new $1.to_i, $2.to_i, $3.to_i else string end end end end end # module PG pg-1.5.5/lib/pg/text_decoder/json.rb0000644000004100000410000000036314563476204017320 0ustar www-datawww-data# -*- ruby -*- # frozen_string_literal: true require 'json' module PG module TextDecoder class JSON < SimpleDecoder def decode(string, tuple=nil, field=nil) ::JSON.parse(string, quirks_mode: true) end end end end # module PG 
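The JSON decoder above parses with `quirks_mode: true`, so bare top-level scalars (as a PostgreSQL `json` column holding `5` or `"text"` would produce) decode as well as objects and arrays. A minimal standalone sketch of that behavior using only the stdlib `json` gem — `decode_json_field` is a hypothetical helper, not part of the pg API:

```ruby
require 'json'

# Mimics what PG::TextDecoder::JSON#decode does internally: parse any
# top-level JSON value, including bare scalars that strict JSON
# documents would reject.
def decode_json_field(string)
  ::JSON.parse(string, quirks_mode: true)
end

p decode_json_field('{"a": [1, 2]}')  # => {"a"=>[1, 2]}
p decode_json_field('5')              # => 5
p decode_json_field('"text"')         # => "text"
```

In a real application this per-field parsing is wired up automatically by registering the coder for the `json`/`jsonb` types, as `register_default_types` does below.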
pg-1.5.5/lib/pg/text_decoder/timestamp.rb0000644000004100000410000000236514563476204020356 0ustar www-datawww-data# -*- ruby -*- # frozen_string_literal: true module PG module TextDecoder # Convenience classes for timezone options class TimestampUtc < Timestamp def initialize(hash={}, **kwargs) warn("PG::Coder.new(hash) is deprecated. Please use keyword arguments instead! Called from #{caller.first}", category: :deprecated) unless hash.empty? super(**hash, **kwargs, flags: PG::Coder::TIMESTAMP_DB_UTC | PG::Coder::TIMESTAMP_APP_UTC) end end class TimestampUtcToLocal < Timestamp def initialize(hash={}, **kwargs) warn("PG::Coder.new(hash) is deprecated. Please use keyword arguments instead! Called from #{caller.first}", category: :deprecated) unless hash.empty? super(**hash, **kwargs, flags: PG::Coder::TIMESTAMP_DB_UTC | PG::Coder::TIMESTAMP_APP_LOCAL) end end class TimestampLocal < Timestamp def initialize(hash={}, **kwargs) warn("PG::Coder.new(hash) is deprecated. Please use keyword arguments instead! Called from #{caller.first}", category: :deprecated) unless hash.empty? 
super(**hash, **kwargs, flags: PG::Coder::TIMESTAMP_DB_LOCAL | PG::Coder::TIMESTAMP_APP_LOCAL) end end # For backward compatibility: TimestampWithoutTimeZone = TimestampLocal TimestampWithTimeZone = Timestamp end end # module PG pg-1.5.5/lib/pg/text_decoder/inet.rb0000644000004100000410000000021414563476204017301 0ustar www-datawww-data# -*- ruby -*- # frozen_string_literal: true module PG module TextDecoder # Init C part of the decoder init_inet end end # module PG pg-1.5.5/lib/pg/text_decoder/numeric.rb0000644000004100000410000000021714563476204020007 0ustar www-datawww-data# -*- ruby -*- # frozen_string_literal: true module PG module TextDecoder # Init C part of the decoder init_numeric end end # module PG pg-1.5.5/lib/pg/type_map_by_column.rb0000644000004100000410000000051614563476204017563 0ustar www-datawww-data# -*- ruby -*- # frozen_string_literal: true require 'pg' unless defined?( PG ) class PG::TypeMapByColumn # Returns the type oids of the assigned coders. def oids coders.map{|c| c.oid if c } end def inspect type_strings = coders.map{|c| c ? c.inspect_short : 'nil' } "#<#{self.class} #{type_strings.join(' ')}>" end end pg-1.5.5/lib/pg/binary_decoder/0000755000004100000410000000000014563476204016320 5ustar www-datawww-datapg-1.5.5/lib/pg/binary_decoder/date.rb0000644000004100000410000000021614563476204017561 0ustar www-datawww-data# -*- ruby -*- # frozen_string_literal: true module PG module BinaryDecoder # Init C part of the decoder init_date end end # module PG pg-1.5.5/lib/pg/binary_decoder/timestamp.rb0000644000004100000410000000220614563476204020650 0ustar www-datawww-data# -*- ruby -*- # frozen_string_literal: true module PG module BinaryDecoder # Convenience classes for timezone options class TimestampUtc < Timestamp def initialize(hash={}, **kwargs) warn("PG::Coder.new(hash) is deprecated. Please use keyword arguments instead! Called from #{caller.first}", category: :deprecated) unless hash.empty? 
super(**hash, **kwargs, flags: PG::Coder::TIMESTAMP_DB_UTC | PG::Coder::TIMESTAMP_APP_UTC) end end class TimestampUtcToLocal < Timestamp def initialize(hash={}, **kwargs) warn("PG::Coder.new(hash) is deprecated. Please use keyword arguments instead! Called from #{caller.first}", category: :deprecated) unless hash.empty? super(**hash, **kwargs, flags: PG::Coder::TIMESTAMP_DB_UTC | PG::Coder::TIMESTAMP_APP_LOCAL) end end class TimestampLocal < Timestamp def initialize(hash={}, **kwargs) warn("PG::Coder.new(hash) is deprecated. Please use keyword arguments instead! Called from #{caller.first}", category: :deprecated) unless hash.empty? super(**hash, **kwargs, flags: PG::Coder::TIMESTAMP_DB_LOCAL | PG::Coder::TIMESTAMP_APP_LOCAL) end end end end # module PG pg-1.5.5/lib/pg/basic_type_map_for_results.rb0000644000004100000410000000756614563476204021320 0ustar www-datawww-data# -*- ruby -*- # frozen_string_literal: true require 'pg' unless defined?( PG ) # Simple set of rules for type casting common PostgreSQL types to Ruby. # # OIDs of supported type casts are not hard-coded in the sources, but are retrieved from the # PostgreSQL's +pg_type+ table in PG::BasicTypeMapForResults.new . # # Result values are type casted based on the type OID of the given result column. # # Higher level libraries will most likely not make use of this class, but use their # own set of rules to choose suitable encoders and decoders. # # Example: # conn = PG::Connection.new # # Assign a default ruleset for type casts of output values. # conn.type_map_for_results = PG::BasicTypeMapForResults.new(conn) # # Execute a query. # res = conn.exec_params( "SELECT $1::INT", ['5'] ) # # Retrieve and cast the result value. Value format is 0 (text) and OID is 20. Therefore typecasting # # is done by PG::TextDecoder::Integer internally for all value retrieval methods. 
# res.values # => [[5]] # # PG::TypeMapByOid#build_column_map(result) can be used to generate # a result independent PG::TypeMapByColumn type map, which can subsequently be used # to cast #get_copy_data fields: # # For the following table: # conn.exec( "CREATE TABLE copytable AS VALUES('a', 123, '{5,4,3}'::INT[])" ) # # # Retrieve table OIDs per empty result set. # res = conn.exec( "SELECT * FROM copytable LIMIT 0" ) # # Build a type map for common database to ruby type decoders. # btm = PG::BasicTypeMapForResults.new(conn) # # Build a PG::TypeMapByColumn with decoders suitable for copytable. # tm = btm.build_column_map( res ) # row_decoder = PG::TextDecoder::CopyRow.new type_map: tm # # conn.copy_data( "COPY copytable TO STDOUT", row_decoder ) do |res| # while row=conn.get_copy_data # p row # end # end # This prints the rows with type casted columns: # ["a", 123, [5, 4, 3]] # # Very similar with binary format: # # conn.exec( "CREATE TABLE copytable AS VALUES('a', 123, '2023-03-19 18:39:44'::TIMESTAMP)" ) # # # Retrieve table OIDs per empty result set in binary format. # res = conn.exec_params( "SELECT * FROM copytable LIMIT 0", [], 1 ) # # Build a type map for common database to ruby type decoders. # btm = PG::BasicTypeMapForResults.new(conn) # # Build a PG::TypeMapByColumn with decoders suitable for copytable. # tm = btm.build_column_map( res ) # row_decoder = PG::BinaryDecoder::CopyRow.new type_map: tm # # conn.copy_data( "COPY copytable TO STDOUT WITH (FORMAT binary)", row_decoder ) do |res| # while row=conn.get_copy_data # p row # end # end # This prints the rows with type casted columns: # ["a", 123, 2023-03-19 18:39:44 UTC] # # See also PG::BasicTypeMapBasedOnResult for the encoder direction and PG::BasicTypeRegistry for the definition of additional types. 
class PG::BasicTypeMapForResults < PG::TypeMapByOid include PG::BasicTypeRegistry::Checker class WarningTypeMap < PG::TypeMapInRuby def initialize(typenames) @already_warned = {} @typenames_by_oid = typenames end def typecast_result_value(result, _tuple, field) format = result.fformat(field) oid = result.ftype(field) unless @already_warned.dig(format, oid) warn "Warning: no type cast defined for type #{@typenames_by_oid[oid].inspect} format #{format} with oid #{oid}. Please cast this type explicitly to TEXT to be safe for future changes." unless frozen? @already_warned[format] ||= {} @already_warned[format][oid] = true end end super end end def initialize(connection_or_coder_maps, registry: nil) @coder_maps = build_coder_maps(connection_or_coder_maps, registry: registry) # Populate TypeMapByOid hash with decoders @coder_maps.each_format(:decoder).flat_map{|f| f.coders }.each do |coder| add_coder(coder) end typenames = @coder_maps.typenames_by_oid self.default_type_map = WarningTypeMap.new(typenames) end end pg-1.5.5/lib/pg/exceptions.rb0000644000004100000410000000061714563476204016061 0ustar www-datawww-data# -*- ruby -*- # frozen_string_literal: true require 'pg' unless defined?( PG ) module PG class Error < StandardError def initialize(msg=nil, connection: nil, result: nil) @connection = connection @result = result super(msg) end end class NotAllCopyDataRetrieved < PG::Error end class LostCopyState < PG::Error end class NotInBlockingMode < PG::Error end end # module PG pg-1.5.5/lib/pg/text_encoder/0000755000004100000410000000000014563476204016032 5ustar www-datawww-datapg-1.5.5/lib/pg/text_encoder/date.rb0000644000004100000410000000034614563476204017277 0ustar www-datawww-data# -*- ruby -*- # frozen_string_literal: true module PG module TextEncoder class Date < SimpleEncoder def encode(value) value.respond_to?(:strftime) ? 
value.strftime("%Y-%m-%d") : value end end end end # module PG pg-1.5.5/lib/pg/text_encoder/json.rb0000644000004100000410000000033614563476204017332 0ustar www-datawww-data# -*- ruby -*- # frozen_string_literal: true require 'json' module PG module TextEncoder class JSON < SimpleEncoder def encode(value) ::JSON.generate(value, quirks_mode: true) end end end end # module PG pg-1.5.5/lib/pg/text_encoder/timestamp.rb0000644000004100000410000000110614563476204020360 0ustar www-datawww-data# -*- ruby -*- # frozen_string_literal: true module PG module TextEncoder class TimestampWithoutTimeZone < SimpleEncoder def encode(value) value.respond_to?(:strftime) ? value.strftime("%Y-%m-%d %H:%M:%S.%N") : value end end class TimestampUtc < SimpleEncoder def encode(value) value.respond_to?(:utc) ? value.utc.strftime("%Y-%m-%d %H:%M:%S.%N") : value end end class TimestampWithTimeZone < SimpleEncoder def encode(value) value.respond_to?(:strftime) ? value.strftime("%Y-%m-%d %H:%M:%S.%N %:z") : value end end end end # module PG pg-1.5.5/lib/pg/text_encoder/inet.rb0000644000004100000410000000112014563476204017310 0ustar www-datawww-data# -*- ruby -*- # frozen_string_literal: true require 'ipaddr' module PG module TextEncoder class Inet < SimpleEncoder def encode(value) case value when IPAddr default_prefix = (value.family == Socket::AF_INET ? 
32 : 128) s = value.to_s if value.respond_to?(:prefix) prefix = value.prefix else range = value.to_range prefix = default_prefix - Math.log(((range.end.to_i - range.begin.to_i) + 1), 2).to_i end s << "/" << prefix.to_s if prefix != default_prefix s else value end end end end end # module PG pg-1.5.5/lib/pg/text_encoder/numeric.rb0000644000004100000410000000021714563476204020021 0ustar www-datawww-data# -*- ruby -*- # frozen_string_literal: true module PG module TextEncoder # Init C part of the decoder init_numeric end end # module PG pg-1.5.5/lib/pg/basic_type_registry.rb0000644000004100000410000002467014563476204017757 0ustar www-datawww-data# -*- ruby -*- # frozen_string_literal: true require 'pg' unless defined?( PG ) # This class defines the mapping between PostgreSQL types and encoder/decoder classes for PG::BasicTypeMapForResults, PG::BasicTypeMapForQueries and PG::BasicTypeMapBasedOnResult. # # Additional types can be added like so: # # require 'pg' # require 'ipaddr' # # class InetDecoder < PG::SimpleDecoder # def decode(string, tuple=nil, field=nil) # IPAddr.new(string) # end # end # class InetEncoder < PG::SimpleEncoder # def encode(ip_addr) # ip_addr.to_s # end # end # # conn = PG.connect # regi = PG::BasicTypeRegistry.new.register_default_types # regi.register_type(0, 'inet', InetEncoder, InetDecoder) # conn.type_map_for_results = PG::BasicTypeMapForResults.new(conn, registry: regi) class PG::BasicTypeRegistry # An instance of this class stores the coders that should be used for a particular wire format (text or binary) # and type cast direction (encoder or decoder). # # Each coder object is filled with the PostgreSQL type name, OID, wire format and array coders are filled with the base elements_type. class CoderMap # Hash of text types that don't require quotation, when used within composite types. 
# type.name => true DONT_QUOTE_TYPES = %w[ int2 int4 int8 float4 float8 oid bool date timestamp timestamptz ].inject({}){|h,e| h[e] = true; h }.freeze private_constant :DONT_QUOTE_TYPES def initialize(result, coders_by_name, format, arraycoder) coder_map = {} arrays, nodes = result.partition { |row| row['typinput'] == 'array_in' } # populate the base types nodes.find_all { |row| coders_by_name.key?(row['typname']) }.each do |row| coder = coders_by_name[row['typname']].dup coder.oid = row['oid'].to_i coder.name = row['typname'] coder.format = format coder_map[coder.oid] = coder.freeze end if arraycoder # populate array types arrays.each do |row| elements_coder = coder_map[row['typelem'].to_i] next unless elements_coder coder = arraycoder.new coder.oid = row['oid'].to_i coder.name = row['typname'] coder.format = format coder.elements_type = elements_coder coder.needs_quotation = !DONT_QUOTE_TYPES[elements_coder.name] coder_map[coder.oid] = coder.freeze end end @coders = coder_map.values.freeze @coders_by_name = @coders.inject({}){|h, t| h[t.name] = t; h }.freeze @coders_by_oid = @coders.inject({}){|h, t| h[t.oid] = t; h }.freeze freeze end attr_reader :coders attr_reader :coders_by_oid attr_reader :coders_by_name def coder_by_name(name) @coders_by_name[name] end def coder_by_oid(oid) @coders_by_oid[oid] end end # An instance of this class stores CoderMap instances to be used for text and binary wire formats # as well as encoder and decoder directions. # # A PG::BasicTypeRegistry::CoderMapsBundle instance retrieves all type definitions from the PostgreSQL server and matches them with the coder definitions of the global PG::BasicTypeRegistry . # It provides 4 separate CoderMap instances for the combinations of the two formats and directions. 
# # A PG::BasicTypeRegistry::CoderMapsBundle instance can be used to initialize an instance of # * PG::BasicTypeMapForResults # * PG::BasicTypeMapForQueries # * PG::BasicTypeMapBasedOnResult # by passing it instead of the connection object like so: # # conn = PG::Connection.new # maps = PG::BasicTypeRegistry::CoderMapsBundle.new(conn) # conn.type_map_for_results = PG::BasicTypeMapForResults.new(maps) # class CoderMapsBundle attr_reader :typenames_by_oid def initialize(connection, registry: nil) registry ||= DEFAULT_TYPE_REGISTRY result = connection.exec(<<-SQL).to_a SELECT t.oid, t.typname, t.typelem, t.typdelim, ti.proname AS typinput FROM pg_type as t JOIN pg_proc as ti ON ti.oid = t.typinput SQL init_maps(registry, result.freeze) freeze end private def init_maps(registry, result) @maps = [ [0, :encoder, PG::TextEncoder::Array], [0, :decoder, PG::TextDecoder::Array], [1, :encoder, nil], [1, :decoder, nil], ].inject([]) do |h, (format, direction, arraycoder)| coders = registry.coders_for(format, direction) || {} h[format] ||= {} h[format][direction] = CoderMap.new(result, coders, format, arraycoder) h end.each{|h| h.freeze }.freeze @typenames_by_oid = result.inject({}){|h, t| h[t['oid'].to_i] = t['typname']; h }.freeze end def each_format(direction) @maps.map { |f| f[direction] } end def map_for(format, direction) @maps[format][direction] end end module Checker ValidFormats = { 0 => true, 1 => true }.freeze ValidDirections = { :encoder => true, :decoder => true }.freeze private_constant :ValidFormats, :ValidDirections protected def check_format_and_direction(format, direction) raise(ArgumentError, "Invalid format value %p" % format) unless ValidFormats[format] raise(ArgumentError, "Invalid direction %p" % direction) unless ValidDirections[direction] end protected def build_coder_maps(conn_or_maps, registry: nil) if conn_or_maps.is_a?(PG::BasicTypeRegistry::CoderMapsBundle) raise ArgumentError, "registry argument must be given to CoderMapsBundle" if registry 
conn_or_maps else PG::BasicTypeRegistry::CoderMapsBundle.new(conn_or_maps, registry: registry).freeze end end end include Checker def initialize # The key of these hashes maps to the `typname` column from the table pg_type. @coders_by_name = [] end # Retrieve a Hash of all en- or decoders for a given wire format. # The hash key is the name as defined in table +pg_type+. # The hash value is the registered coder object. def coders_for(format, direction) check_format_and_direction(format, direction) @coders_by_name[format]&.[](direction) end # Register an encoder or decoder instance for casting a PostgreSQL type. # # Coder#name must correspond to the +typname+ column in the +pg_type+ table. # Coder#format can be 0 for text format and 1 for binary. def register_coder(coder) h = @coders_by_name[coder.format] ||= { encoder: {}, decoder: {} } name = coder.name || raise(ArgumentError, "name of #{coder.inspect} must be defined") h[:encoder][name] = coder if coder.respond_to?(:encode) h[:decoder][name] = coder if coder.respond_to?(:decode) self end # Register the given +encoder_class+ and/or +decoder_class+ for casting a PostgreSQL type. # # +name+ must correspond to the +typname+ column in the +pg_type+ table. # +format+ can be 0 for text format and 1 for binary. def register_type(format, name, encoder_class, decoder_class) register_coder(encoder_class.new(name: name, format: format).freeze) if encoder_class register_coder(decoder_class.new(name: name, format: format).freeze) if decoder_class self end # Alias the +old+ type to the +new+ type.
def alias_type(format, new, old) [:encoder, :decoder].each do |ende| enc = @coders_by_name[format][ende][old] if enc @coders_by_name[format][ende][new] = enc else @coders_by_name[format][ende].delete(new) end end self end # Populate the registry with all builtin types of ruby-pg def register_default_types register_type 0, 'int2', PG::TextEncoder::Integer, PG::TextDecoder::Integer alias_type 0, 'int4', 'int2' alias_type 0, 'int8', 'int2' alias_type 0, 'oid', 'int2' register_type 0, 'numeric', PG::TextEncoder::Numeric, PG::TextDecoder::Numeric register_type 0, 'text', PG::TextEncoder::String, PG::TextDecoder::String alias_type 0, 'varchar', 'text' alias_type 0, 'char', 'text' alias_type 0, 'bpchar', 'text' alias_type 0, 'xml', 'text' alias_type 0, 'name', 'text' # FIXME: why are we keeping these types as strings? # alias_type 'tsvector', 'text' # alias_type 'interval', 'text' # alias_type 'macaddr', 'text' # alias_type 'uuid', 'text' # # register_type 'money', OID::Money.new register_type 0, 'bytea', PG::TextEncoder::Bytea, PG::TextDecoder::Bytea register_type 0, 'bool', PG::TextEncoder::Boolean, PG::TextDecoder::Boolean # register_type 'bit', OID::Bit.new # register_type 'varbit', OID::Bit.new register_type 0, 'float4', PG::TextEncoder::Float, PG::TextDecoder::Float alias_type 0, 'float8', 'float4' # For compatibility reason the timestamp in text format is encoded as local time (TimestampWithoutTimeZone) instead of UTC register_type 0, 'timestamp', PG::TextEncoder::TimestampWithoutTimeZone, PG::TextDecoder::TimestampWithoutTimeZone register_type 0, 'timestamptz', PG::TextEncoder::TimestampWithTimeZone, PG::TextDecoder::TimestampWithTimeZone register_type 0, 'date', PG::TextEncoder::Date, PG::TextDecoder::Date # register_type 'time', OID::Time.new # # register_type 'path', OID::Text.new # register_type 'point', OID::Point.new # register_type 'polygon', OID::Text.new # register_type 'circle', OID::Text.new # register_type 'hstore', OID::Hstore.new register_type 0, 
		              'json', PG::TextEncoder::JSON, PG::TextDecoder::JSON
		alias_type 0, 'jsonb', 'json'
		# register_type 'citext', OID::Text.new
		# register_type 'ltree', OID::Text.new
		#
		register_type 0, 'inet', PG::TextEncoder::Inet, PG::TextDecoder::Inet
		alias_type 0, 'cidr', 'inet'

		register_type 1, 'int2', PG::BinaryEncoder::Int2, PG::BinaryDecoder::Integer
		register_type 1, 'int4', PG::BinaryEncoder::Int4, PG::BinaryDecoder::Integer
		register_type 1, 'int8', PG::BinaryEncoder::Int8, PG::BinaryDecoder::Integer
		alias_type 1, 'oid', 'int2'

		register_type 1, 'text', PG::BinaryEncoder::String, PG::BinaryDecoder::String
		alias_type 1, 'varchar', 'text'
		alias_type 1, 'char', 'text'
		alias_type 1, 'bpchar', 'text'
		alias_type 1, 'xml', 'text'
		alias_type 1, 'name', 'text'

		register_type 1, 'bytea', PG::BinaryEncoder::Bytea, PG::BinaryDecoder::Bytea
		register_type 1, 'bool', PG::BinaryEncoder::Boolean, PG::BinaryDecoder::Boolean
		register_type 1, 'float4', PG::BinaryEncoder::Float4, PG::BinaryDecoder::Float
		register_type 1, 'float8', PG::BinaryEncoder::Float8, PG::BinaryDecoder::Float
		register_type 1, 'timestamp', PG::BinaryEncoder::TimestampUtc, PG::BinaryDecoder::TimestampUtc
		register_type 1, 'timestamptz', PG::BinaryEncoder::TimestampUtc, PG::BinaryDecoder::TimestampUtcToLocal
		register_type 1, 'date', PG::BinaryEncoder::Date, PG::BinaryDecoder::Date

		self
	end
	alias define_default_types register_default_types

	DEFAULT_TYPE_REGISTRY = PG.make_shareable(PG::BasicTypeRegistry.new.register_default_types)
	private_constant :DEFAULT_TYPE_REGISTRY
end

# ==== pg-1.5.5/lib/pg/version.rb ====

module PG
	# Library version
	VERSION = '1.5.5'
end

# ==== pg-1.5.5/lib/pg/tuple.rb ====

# -*- ruby -*-
# frozen_string_literal: true

require 'pg' unless defined?( PG )

class PG::Tuple

	### Return a String representation of the object suitable for debugging.
	def inspect
		"#<#{self.class} #{self.map{|k,v| "#{k}: #{v.inspect}" }.join(", ") }>"
	end

	def has_key?(key)
		field_map.has_key?(key)
	end
	alias key? has_key?

	def keys
		field_names || field_map.keys.freeze
	end

	def each_key(&block)
		if fn=field_names
			fn.each(&block)
		else
			field_map.each_key(&block)
		end
	end
end

# ==== pg-1.5.5/lib/pg/basic_type_map_based_on_result.rb ====

# -*- ruby -*-
# frozen_string_literal: true

require 'pg' unless defined?( PG )

# Simple set of rules for type casting common PostgreSQL types from Ruby
# to PostgreSQL.
#
# OIDs of supported type casts are not hard-coded in the sources, but are retrieved from
# PostgreSQL's +pg_type+ table in PG::BasicTypeMapBasedOnResult.new .
#
# This class works like PG::BasicTypeMapForResults, but defines encoders instead of
# decoders for the given result OIDs. So it can be used to type cast field values based on
# the type OID retrieved by a separate SQL query.
#
# PG::TypeMapByOid#build_column_map(result) can be used to generate a result independent
# PG::TypeMapByColumn type map, which can subsequently be used to cast query bind parameters
# or #put_copy_data fields.
#
# Example:
#   conn.exec( "CREATE TEMP TABLE copytable (t TEXT, i INT, ai INT[])" )
#
#   # Retrieve table OIDs per empty result set.
#   res = conn.exec( "SELECT * FROM copytable LIMIT 0" )
#   # Build a type map for common ruby to database type encoders.
#   btm = PG::BasicTypeMapBasedOnResult.new(conn)
#   # Build a PG::TypeMapByColumn with encoders suitable for copytable.
#   tm = btm.build_column_map( res )
#   row_encoder = PG::TextEncoder::CopyRow.new type_map: tm
#
#   conn.copy_data( "COPY copytable FROM STDIN", row_encoder ) do |res|
#     conn.put_copy_data ['a', 123, [5,4,3]]
#   end
# This inserts a single row into copytable with type casts from ruby to
# database types using text format.
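The per-column encoding that `build_column_map` plus `CopyRow` perform can be sketched in plain Ruby. This is a hypothetical, simplified illustration only — the real encoders are C implementations that handle quoting, escaping and NULLs per the COPY text format, and the encoder per column is selected by the column's type OID rather than hard-coded:

```ruby
# Simplified sketch of a column-wise type map for COPY text format.
# One encoder proc per column (in the real library, chosen via the
# column's type OID); a row is encoded field by field and tab-joined.
ENCODERS = [
  ->(v) { v.to_s },              # t TEXT
  ->(v) { Integer(v).to_s },     # i INT
  ->(v) { "{#{v.join(',')}}" },  # ai INT[] (no escaping, sketch only)
].freeze

def encode_copy_row(row, encoders)
  row.zip(encoders).map { |value, enc| enc.call(value) }.join("\t") + "\n"
end

puts encode_copy_row(['a', 123, [5, 4, 3]], ENCODERS)
# a	123	{5,4,3}
```

This mirrors what `PG::TextEncoder::CopyRow` with a `type_map` does before handing the line to `put_copy_data`.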
#
# Very similar with binary format:
#
#   conn.exec( "CREATE TEMP TABLE copytable (t TEXT, i INT, blob bytea, created_at timestamp)" )
#   # Retrieve table OIDs per empty result set in binary format.
#   res = conn.exec_params( "SELECT * FROM copytable LIMIT 0", [], 1 )
#   # Build a type map for common ruby to database type encoders.
#   btm = PG::BasicTypeMapBasedOnResult.new(conn)
#   # Build a PG::TypeMapByColumn with encoders suitable for copytable.
#   tm = btm.build_column_map( res )
#   row_encoder = PG::BinaryEncoder::CopyRow.new type_map: tm
#
#   conn.copy_data( "COPY copytable FROM STDIN WITH (FORMAT binary)", row_encoder ) do |res|
#     conn.put_copy_data ['a', 123, "\xff\x00".b, Time.now]
#   end
#
# This inserts a single row into copytable with type casts from ruby to
# database types using binary copy and value format.
# Binary COPY is faster than text format, but less portable, less readable, and pg offers fewer en-/decoders for database types.
#
class PG::BasicTypeMapBasedOnResult < PG::TypeMapByOid
	include PG::BasicTypeRegistry::Checker

	def initialize(connection_or_coder_maps, registry: nil)
		@coder_maps = build_coder_maps(connection_or_coder_maps, registry: registry)

		# Populate TypeMapByOid hash with encoders
		@coder_maps.each_format(:encoder).flat_map{|f| f.coders }.each do |coder|
			add_coder(coder)
		end
	end
end

# ==== pg-1.5.5/lib/pg/result.rb ====

# -*- ruby -*-
# frozen_string_literal: true

require 'pg' unless defined?( PG )

class PG::Result

	# Apply a type map for all value retrieving methods.
	#
	# +type_map+: a PG::TypeMap instance.
	#
	# This method is equal to #type_map= , but returns self, so that calls can be chained.
	#
	# See also PG::BasicTypeMapForResults
	def map_types!(type_map)
		self.type_map = type_map
		return self
	end

	# Set the data type for all field name returning methods.
	#
	# +type+: a Symbol defining the field name type.
	#
	# This method is equal to #field_name_type= , but returns self, so that calls can be chained.
	def field_names_as(type)
		self.field_name_type = type
		return self
	end

	### Return a String representation of the object suitable for debugging.
	def inspect
		str = self.to_s
		str[-1,0] = if cleared?
			" cleared"
		else
			" status=#{res_status(result_status)} ntuples=#{ntuples} nfields=#{nfields} cmd_tuples=#{cmd_tuples}"
		end
		return str
	end
end # class PG::Result

# ==== pg-1.5.5/lib/pg/connection.rb ====

# -*- ruby -*-
# frozen_string_literal: true

require 'pg' unless defined?( PG )
require 'io/wait' unless ::IO.public_instance_methods(false).include?(:wait_readable)
require 'socket'

# The PostgreSQL connection class. The interface for this class is based on
# {libpq}[http://www.postgresql.org/docs/current/libpq.html], the C
# application programmer's interface to PostgreSQL. Some familiarity with libpq
# is recommended, but not necessary.
#
# For example, to send a query to the database on the localhost:
#
#   require 'pg'
#   conn = PG::Connection.open(:dbname => 'test')
#   res = conn.exec_params('SELECT $1 AS a, $2 AS b, $3 AS c', [1, 2, nil])
#   # Equivalent to:
#   #  res = conn.exec('SELECT 1 AS a, 2 AS b, NULL AS c')
#
# See the PG::Result class for information on working with the results of a query.
#
# Many methods of this class come in three variants:
# 1. #exec - the base method, which is an alias to #async_exec .
#    This is the method that should be used in general.
# 2. #async_exec - the async aware version of the method, implemented by libpq's async API.
# 3. #sync_exec - the method version that is implemented by blocking function(s) of libpq.
#
# The sync and async versions of a method can be switched via Connection.async_api= , however it is not recommended to change the default.
class PG::Connection

	# The order the options are passed to the ::connect method.
	CONNECT_ARGUMENT_ORDER = %w[host port options tty dbname user password].freeze
	private_constant :CONNECT_ARGUMENT_ORDER

	### Quote a single +value+ for use in a connection-parameter string.
	def self.quote_connstr( value )
		return "'" + value.to_s.gsub( /[\\']/ ) {|m| '\\' + m } + "'"
	end

	# Convert Hash options to a connection String
	#
	# Values are properly quoted and escaped.
	def self.connect_hash_to_string( hash )
		hash.map { |k,v| "#{k}=#{quote_connstr(v)}" }.join( ' ' )
	end

	# Shareable program name for Ractor
	PROGRAM_NAME = $PROGRAM_NAME.dup.freeze
	private_constant :PROGRAM_NAME

	# Parse the connection +args+ into a connection-parameter string.
	# See PG::Connection.new for valid arguments.
	#
	# It accepts:
	# * an option String like "host=name port=5432"
	# * an option Hash like {host: "name", port: 5432}
	# * a URI string
	# * a URI object
	# * positional arguments
	#
	# The method adds the option "fallback_application_name" if it isn't already set.
	# It returns a connection string with "key=value" pairs.
	def self.parse_connect_args( *args )
		hash_arg = args.last.is_a?( Hash ) ?
				args.pop.transform_keys(&:to_sym) : {}
		iopts = {}

		if args.length == 1
			case args.first.to_s
			when /=/, /:\/\//
				# Option or URL string style
				conn_string = args.first.to_s
				iopts = PG::Connection.conninfo_parse(conn_string).each_with_object({}){|h, o| o[h[:keyword].to_sym] = h[:val] if h[:val] }
			else
				# Positional parameters (only host given)
				iopts[CONNECT_ARGUMENT_ORDER.first.to_sym] = args.first
			end
		else
			# Positional parameters with host and more
			max = CONNECT_ARGUMENT_ORDER.length
			raise ArgumentError, "Extra positional parameter %d: %p" % [ max + 1, args[max] ] if args.length > max

			CONNECT_ARGUMENT_ORDER.zip( args ) do |(k,v)|
				iopts[ k.to_sym ] = v if v
			end
			iopts.delete(:tty) # ignore obsolete tty parameter
		end

		iopts.merge!( hash_arg )

		if !iopts[:fallback_application_name]
			iopts[:fallback_application_name] = PROGRAM_NAME.sub( /^(.{30}).{4,}(.{30})$/ ){ $1+"..."+$2 }
		end

		return connect_hash_to_string(iopts)
	end

	# Return a String representation of the object suitable for debugging.
	def inspect
		str = self.to_s
		str[-1,0] = if finished?
" finished" else stats = [] stats << " status=#{ PG.constants.grep(/CONNECTION_/).find{|c| PG.const_get(c) == status} }" if status != CONNECTION_OK stats << " transaction_status=#{ PG.constants.grep(/PQTRANS_/).find{|c| PG.const_get(c) == transaction_status} }" if transaction_status != PG::PQTRANS_IDLE stats << " nonblocking=#{ isnonblocking }" if isnonblocking stats << " pipeline_status=#{ PG.constants.grep(/PQ_PIPELINE_/).find{|c| PG.const_get(c) == pipeline_status} }" if respond_to?(:pipeline_status) && pipeline_status != PG::PQ_PIPELINE_OFF stats << " client_encoding=#{ get_client_encoding }" if get_client_encoding != "UTF8" stats << " type_map_for_results=#{ type_map_for_results.to_s }" unless type_map_for_results.is_a?(PG::TypeMapAllStrings) stats << " type_map_for_queries=#{ type_map_for_queries.to_s }" unless type_map_for_queries.is_a?(PG::TypeMapAllStrings) stats << " encoder_for_put_copy_data=#{ encoder_for_put_copy_data.to_s }" if encoder_for_put_copy_data stats << " decoder_for_get_copy_data=#{ decoder_for_get_copy_data.to_s }" if decoder_for_get_copy_data " host=#{host} port=#{port} user=#{user}#{stats.join}" end return str end BinarySignature = "PGCOPY\n\377\r\n\0".b private_constant :BinarySignature # call-seq: # conn.copy_data( sql [, coder] ) {|sql_result| ... } -> PG::Result # # Execute a copy process for transferring data to or from the server. # # This issues the SQL COPY command via #exec. The response to this # (if there is no error in the command) is a PG::Result object that # is passed to the block, bearing a status code of PGRES_COPY_OUT or # PGRES_COPY_IN (depending on the specified copy direction). # The application should then use #put_copy_data or #get_copy_data # to receive or transmit data rows and should return from the block # when finished. # # #copy_data returns another PG::Result object when the data transfer # is complete. An exception is raised if some problem was encountered, # so it isn't required to make use of any of them. 
	# At this point further SQL commands can be issued via #exec.
	# (It is not possible to execute other SQL commands using the same
	# connection while the COPY operation is in progress.)
	#
	# This method ensures that the copy process is properly terminated
	# in case of client side or server side failures. Therefore, in case
	# of blocking mode of operation, #copy_data is preferred to raw calls
	# of #put_copy_data, #get_copy_data and #put_copy_end.
	#
	# _coder_ can be a PG::Coder derivation
	# (typically PG::TextEncoder::CopyRow or PG::TextDecoder::CopyRow).
	# This enables encoding of data fields given to #put_copy_data
	# or decoding of fields received by #get_copy_data.
	#
	# Example with CSV input format:
	#   conn.exec "create table my_table (a text,b text,c text,d text)"
	#   conn.copy_data "COPY my_table FROM STDIN CSV" do
	#     conn.put_copy_data "some,data,to,copy\n"
	#     conn.put_copy_data "more,data,to,copy\n"
	#   end
	# This creates +my_table+ and inserts two CSV rows.
	#
	# The same with text format encoder PG::TextEncoder::CopyRow
	# and Array input:
	#   enco = PG::TextEncoder::CopyRow.new
	#   conn.copy_data "COPY my_table FROM STDIN", enco do
	#     conn.put_copy_data ['some', 'data', 'to', 'copy']
	#     conn.put_copy_data ['more', 'data', 'to', 'copy']
	#   end
	#
	# Also PG::BinaryEncoder::CopyRow can be used to send data in binary format to the server.
	# In this case copy_data generates the header and trailer data automatically:
	#   enco = PG::BinaryEncoder::CopyRow.new
	#   conn.copy_data "COPY my_table FROM STDIN (FORMAT binary)", enco do
	#     conn.put_copy_data ['some', 'data', 'to', 'copy']
	#     conn.put_copy_data ['more', 'data', 'to', 'copy']
	#   end
	#
	# Example with CSV output format:
	#   conn.copy_data "COPY my_table TO STDOUT CSV" do
	#     while row=conn.get_copy_data
	#       p row
	#     end
	#   end
	# This prints all rows of +my_table+ to stdout:
	#   "some,data,to,copy\n"
	#   "more,data,to,copy\n"
	#
	# The same with text format decoder PG::TextDecoder::CopyRow
	# and Array output:
	#   deco = PG::TextDecoder::CopyRow.new
	#   conn.copy_data "COPY my_table TO STDOUT", deco do
	#     while row=conn.get_copy_data
	#       p row
	#     end
	#   end
	# This receives all rows of +my_table+ as ruby array:
	#   ["some", "data", "to", "copy"]
	#   ["more", "data", "to", "copy"]
	#
	# Also PG::BinaryDecoder::CopyRow can be used to retrieve data in binary format from the server.
	# In this case the header and trailer data is processed by the decoder and the remaining +nil+ from get_copy_data is processed by copy_data, so that binary data can be processed equally to text data:
	#   deco = PG::BinaryDecoder::CopyRow.new
	#   conn.copy_data "COPY my_table TO STDOUT (FORMAT binary)", deco do
	#     while row=conn.get_copy_data
	#       p row
	#     end
	#   end
	# This receives all rows of +my_table+ as ruby array:
	#   ["some", "data", "to", "copy"]
	#   ["more", "data", "to", "copy"]
	def copy_data( sql, coder=nil )
		raise PG::NotInBlockingMode.new("copy_data can not be used in nonblocking mode", connection: self) if nonblocking?
		res = exec( sql )
		case res.result_status
		when PGRES_COPY_IN
			begin
				if coder && res.binary_tuples == 1
					# Binary file header (11 byte signature, 32 bit flags and 32 bit extension length)
					put_copy_data(BinarySignature + ("\x00" * 8))
				end

				if coder
					old_coder = self.encoder_for_put_copy_data
					self.encoder_for_put_copy_data = coder
				end

				yield res
			rescue Exception => err
				errmsg = "%s while copy data: %s" % [ err.class.name, err.message ]
				begin
					put_copy_end( errmsg )
				rescue PG::Error
					# Ignore error in cleanup to avoid losing original exception
				end
				discard_results
				raise err
			else
				begin
					self.encoder_for_put_copy_data = old_coder if coder
					if coder && res.binary_tuples == 1
						put_copy_data("\xFF\xFF") # Binary file trailer 16 bit "-1"
					end
					put_copy_end
				rescue PG::Error => err
					raise PG::LostCopyState.new("#{err} (probably by executing another SQL query while running a COPY command)", connection: self)
				end
				get_last_result
			ensure
				self.encoder_for_put_copy_data = old_coder if coder
			end

		when PGRES_COPY_OUT
			begin
				if coder
					old_coder = self.decoder_for_get_copy_data
					self.decoder_for_get_copy_data = coder
				end

				yield res
			rescue Exception
				cancel
				discard_results
				raise
			else
				if coder && res.binary_tuples == 1
					# There are two end markers in binary mode: file trailer and the final nil.
					# The file trailer is expected to be processed by BinaryDecoder::CopyRow and already returns nil, so that the remaining NULL from PQgetCopyData is retrieved here:
					if get_copy_data
						discard_results
						raise PG::NotAllCopyDataRetrieved.new("Not all binary COPY data retrieved", connection: self)
					end
				end
				res = get_last_result
				if !res
					discard_results
					raise PG::LostCopyState.new("Lost COPY state (probably by executing another SQL query while running a COPY command)", connection: self)
				elsif res.result_status != PGRES_COMMAND_OK
					discard_results
					raise PG::NotAllCopyDataRetrieved.new("Not all COPY data retrieved", connection: self)
				end
				res
			ensure
				self.decoder_for_get_copy_data = old_coder if coder
			end

		else
			raise ArgumentError, "SQL command is no COPY statement: #{sql}"
		end
	end

	# Backward-compatibility aliases for stuff that's moved into PG.
	class << self
		define_method( :isthreadsafe, &PG.method(:isthreadsafe) )
	end

	#
	# call-seq:
	#    conn.transaction { |conn| ... } -> result of the block
	#
	# Executes a +BEGIN+ at the start of the block,
	# and a +COMMIT+ at the end of the block, or
	# +ROLLBACK+ if any exception occurs.
	def transaction
		rollback = false
		exec "BEGIN"
		yield(self)
	rescue Exception
		rollback = true
		cancel if transaction_status == PG::PQTRANS_ACTIVE
		block
		exec "ROLLBACK"
		raise
	ensure
		exec "COMMIT" unless rollback
	end

	### Returns an array of Hashes with connection defaults. See ::conndefaults
	### for details.
	def conndefaults
		return self.class.conndefaults
	end

	### Return the Postgres connection defaults structure as a Hash keyed by option
	### keyword (as a Symbol).
	###
	### See also #conndefaults
	def self.conndefaults_hash
		return self.conndefaults.each_with_object({}) do |info, hash|
			hash[ info[:keyword].to_sym ] = info[:val]
		end
	end

	### Returns a Hash with connection defaults. See ::conndefaults_hash
	### for details.
def conndefaults_hash return self.class.conndefaults_hash end ### Return the Postgres connection info structure as a Hash keyed by option ### keyword (as a Symbol). ### ### See also #conninfo def conninfo_hash return self.conninfo.each_with_object({}) do |info, hash| hash[ info[:keyword].to_sym ] = info[:val] end end # Method 'ssl_attribute' was introduced in PostgreSQL 9.5. if self.instance_methods.find{|m| m.to_sym == :ssl_attribute } # call-seq: # conn.ssl_attributes -> Hash # # Returns SSL-related information about the connection as key/value pairs # # The available attributes varies depending on the SSL library being used, # and the type of connection. # # See also #ssl_attribute def ssl_attributes ssl_attribute_names.each.with_object({}) do |n,h| h[n] = ssl_attribute(n) end end end # Read all pending socket input to internal memory and raise an exception in case of errors. # # This verifies that the connection socket is in a usable state and not aborted in any way. # No communication is done with the server. # Only pending data is read from the socket - the method doesn't wait for any outstanding server answers. # # Raises a kind of PG::Error if there was an error reading the data or if the socket is in a failure state. # # The method doesn't verify that the server is still responding. # To verify that the communication to the server works, it is recommended to use something like conn.exec('') instead. def check_socket while socket_io.wait_readable(0) consume_input end nil end # call-seq: # conn.get_result() -> PG::Result # conn.get_result() {|pg_result| block } # # Blocks waiting for the next result from a call to # #send_query (or another asynchronous command), and returns # it. Returns +nil+ if no more results are available. # # Note: call this function repeatedly until it returns +nil+, or else # you will not be able to issue further commands. 
# # If the optional code block is given, it will be passed result as an argument, # and the PG::Result object will automatically be cleared when the block terminates. # In this instance, conn.exec returns the value of the block. def get_result block sync_get_result end alias async_get_result get_result # call-seq: # conn.get_copy_data( [ nonblock = false [, decoder = nil ]] ) -> Object # # Return one row of data, +nil+ # if the copy is done, or +false+ if the call would # block (only possible if _nonblock_ is true). # # If _decoder_ is not set or +nil+, data is returned as binary string. # # If _decoder_ is set to a PG::Coder derivation, the return type depends on this decoder. # PG::TextDecoder::CopyRow decodes the received data fields from one row of PostgreSQL's # COPY text format to an Array of Strings. # Optionally the decoder can type cast the single fields to various Ruby types in one step, # if PG::TextDecoder::CopyRow#type_map is set accordingly. # # See also #copy_data. # def get_copy_data(async=false, decoder=nil) if async return sync_get_copy_data(async, decoder) else while (res=sync_get_copy_data(true, decoder)) == false socket_io.wait_readable consume_input end return res end end alias async_get_copy_data get_copy_data # In async_api=true mode (default) all send calls run nonblocking. # The difference is that setnonblocking(true) disables automatic handling of would-block cases. # In async_api=false mode all send calls run directly on libpq. # Blocking vs. nonblocking state can be changed in libpq. # call-seq: # conn.setnonblocking(Boolean) -> nil # # Sets the nonblocking status of the connection. # In the blocking state, calls to #send_query # will block until the message is sent to the server, # but will not wait for the query results. # In the nonblocking state, calls to #send_query # will return an error if the socket is not ready for # writing. 
# Note: This function does not affect #exec, because # that function doesn't return until the server has # processed the query and returned the results. # # Returns +nil+. def setnonblocking(enabled) singleton_class.async_send_api = !enabled self.flush_data = !enabled sync_setnonblocking(true) end alias async_setnonblocking setnonblocking # sync/async isnonblocking methods are switched by async_setnonblocking() # call-seq: # conn.isnonblocking() -> Boolean # # Returns the blocking status of the database connection. # Returns +true+ if the connection is set to nonblocking mode and +false+ if blocking. def isnonblocking false end alias async_isnonblocking isnonblocking alias nonblocking? isnonblocking # call-seq: # conn.put_copy_data( buffer [, encoder] ) -> Boolean # # Transmits _buffer_ as copy data to the server. # Returns true if the data was sent, false if it was # not sent (false is only possible if the connection # is in nonblocking mode, and this command would block). # # _encoder_ can be a PG::Coder derivation (typically PG::TextEncoder::CopyRow). # This encodes the data fields given as _buffer_ from an Array of Strings to # PostgreSQL's COPY text format inclusive proper escaping. Optionally # the encoder can type cast the fields from various Ruby types in one step, # if PG::TextEncoder::CopyRow#type_map is set accordingly. # # Raises an exception if an error occurs. # # See also #copy_data. # def put_copy_data(buffer, encoder=nil) # sync_put_copy_data does a non-blocking attept to flush data. until res=sync_put_copy_data(buffer, encoder) # It didn't flush immediately and allocation of more buffering memory failed. # Wait for all data sent by doing a blocking flush. res = flush end # And do a blocking flush every 100 calls. # This is to avoid memory bloat, when sending the data is slower than calls to put_copy_data happen. 
if (@calls_to_put_copy_data += 1) > 100 @calls_to_put_copy_data = 0 res = flush end res end alias async_put_copy_data put_copy_data # call-seq: # conn.put_copy_end( [ error_message ] ) -> Boolean # # Sends end-of-data indication to the server. # # _error_message_ is an optional parameter, and if set, # forces the COPY command to fail with the string # _error_message_. # # Returns true if the end-of-data was sent, #false* if it was # not sent (*false* is only possible if the connection # is in nonblocking mode, and this command would block). def put_copy_end(*args) until sync_put_copy_end(*args) flush end @calls_to_put_copy_data = 0 flush end alias async_put_copy_end put_copy_end if method_defined? :sync_encrypt_password # call-seq: # conn.encrypt_password( password, username, algorithm=nil ) -> String # # This function is intended to be used by client applications that wish to send commands like ALTER USER joe PASSWORD 'pwd'. # It is good practice not to send the original cleartext password in such a command, because it might be exposed in command logs, activity displays, and so on. # Instead, use this function to convert the password to encrypted form before it is sent. # # The +password+ and +username+ arguments are the cleartext password, and the SQL name of the user it is for. # +algorithm+ specifies the encryption algorithm to use to encrypt the password. # Currently supported algorithms are +md5+ and +scram-sha-256+ (+on+ and +off+ are also accepted as aliases for +md5+, for compatibility with older server versions). # Note that support for +scram-sha-256+ was introduced in PostgreSQL version 10, and will not work correctly with older server versions. # If algorithm is omitted or +nil+, this function will query the server for the current value of the +password_encryption+ setting. # That can block, and will fail if the current transaction is aborted, or if the connection is busy executing another query. 
# If you wish to use the default algorithm for the server but want to avoid blocking, query +password_encryption+ yourself before calling #encrypt_password, and pass that value as the algorithm. # # Return value is the encrypted password. # The caller can assume the string doesn't contain any special characters that would require escaping. # # Available since PostgreSQL-10. # See also corresponding {libpq function}[https://www.postgresql.org/docs/current/libpq-misc.html#LIBPQ-PQENCRYPTPASSWORDCONN]. def encrypt_password( password, username, algorithm=nil ) algorithm ||= exec("SHOW password_encryption").getvalue(0,0) sync_encrypt_password(password, username, algorithm) end alias async_encrypt_password encrypt_password end # call-seq: # conn.reset() # # Resets the backend connection. This method closes the # backend connection and tries to re-connect. def reset reset_start async_connect_or_reset(:reset_poll) self end alias async_reset reset # call-seq: # conn.cancel() -> String # # Requests cancellation of the command currently being # processed. # # Returns +nil+ on success, or a string containing the # error message if a failure occurs. def cancel be_pid = backend_pid be_key = backend_key cancel_request = [0x10, 1234, 5678, be_pid, be_key].pack("NnnNN") if Fiber.respond_to?(:scheduler) && Fiber.scheduler && RUBY_PLATFORM =~ /mingw|mswin/ # Ruby's nonblocking IO is not really supported on Windows. # We work around by using threads and explicit calls to wait_readable/wait_writable. 
cl = Thread.new(socket_io.remote_address) { |ra| ra.connect }.value begin cl.write_nonblock(cancel_request) rescue IO::WaitReadable, Errno::EINTR cl.wait_writable retry end begin cl.read_nonblock(1) rescue IO::WaitReadable, Errno::EINTR cl.wait_readable retry rescue EOFError end elsif RUBY_ENGINE == 'truffleruby' begin cl = socket_io.remote_address.connect rescue NotImplementedError # Workaround for truffleruby < 21.3.0 cl2 = Socket.for_fd(socket_io.fileno) cl2.autoclose = false adr = cl2.remote_address if adr.ip? cl = TCPSocket.new(adr.ip_address, adr.ip_port) cl.autoclose = false else cl = UNIXSocket.new(adr.unix_path) cl.autoclose = false end end cl.write(cancel_request) cl.read(1) else cl = socket_io.remote_address.connect # Send CANCEL_REQUEST_CODE and parameters cl.write(cancel_request) # Wait for the postmaster to close the connection, which indicates that it's processed the request. cl.read(1) end cl.close nil rescue SystemCallError => err err.to_s end alias async_cancel cancel private def async_connect_or_reset(poll_meth) # Track the progress of the connection, waiting for the socket to become readable/writable before polling it if (timeo = conninfo_hash[:connect_timeout].to_i) && timeo > 0 # Lowest timeout is 2 seconds - like in libpq timeo = [timeo, 2].max host_count = conninfo_hash[:host].to_s.count(",") + 1 stop_time = timeo * host_count + Process.clock_gettime(Process::CLOCK_MONOTONIC) end poll_status = PG::PGRES_POLLING_WRITING until poll_status == PG::PGRES_POLLING_OK || poll_status == PG::PGRES_POLLING_FAILED # Set single timeout to parameter "connect_timeout" but # don't exceed total connection time of number-of-hosts * connect_timeout. 
timeout = [timeo, stop_time - Process.clock_gettime(Process::CLOCK_MONOTONIC)].min if stop_time event = if !timeout || timeout >= 0 # If the socket needs to read, wait 'til it becomes readable to poll again case poll_status when PG::PGRES_POLLING_READING if defined?(IO::READABLE) # ruby-3.0+ socket_io.wait(IO::READABLE | IO::PRIORITY, timeout) else IO.select([socket_io], nil, [socket_io], timeout) end # ...and the same for when the socket needs to write when PG::PGRES_POLLING_WRITING if defined?(IO::WRITABLE) # ruby-3.0+ # Use wait instead of wait_readable, since connection errors are delivered as # exceptional/priority events on Windows. socket_io.wait(IO::WRITABLE | IO::PRIORITY, timeout) else # io#wait on ruby-2.x doesn't wait for priority, so fallback to IO.select IO.select(nil, [socket_io], [socket_io], timeout) end end end # connection to server at "localhost" (127.0.0.1), port 5433 failed: timeout expired (PG::ConnectionBad) # connection to server on socket "/var/run/postgresql/.s.PGSQL.5433" failed: No such file or directory unless event if self.class.send(:host_is_named_pipe?, host) connhost = "on socket \"#{host}\"" elsif respond_to?(:hostaddr) connhost = "at \"#{host}\" (#{hostaddr}), port #{port}" else connhost = "at \"#{host}\", port #{port}" end raise PG::ConnectionBad.new("connection to server #{connhost} failed: timeout expired", connection: self) end # Check to see if it's finished or failed yet poll_status = send( poll_meth ) end unless status == PG::CONNECTION_OK msg = error_message finish raise PG::ConnectionBad.new(msg, connection: self) end # Set connection to nonblocking to handle all blocking states in ruby. # That way a fiber scheduler is able to handle IO requests. 
sync_setnonblocking(true) self.flush_data = true set_default_encoding end class << self # call-seq: # PG::Connection.new -> conn # PG::Connection.new(connection_hash) -> conn # PG::Connection.new(connection_string) -> conn # PG::Connection.new(host, port, options, tty, dbname, user, password) -> conn # # Create a connection to the specified server. # # +connection_hash+ must be a ruby Hash with connection parameters. # See the {list of valid parameters}[https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-PARAMKEYWORDS] in the PostgreSQL documentation. # # There are two accepted formats for +connection_string+: plain keyword = value strings and URIs. # See the documentation of {connection strings}[https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNSTRING]. # # The positional parameter form has the same functionality except that the missing parameters will always take on default values. The parameters are: # [+host+] # server hostname # [+port+] # server port number # [+options+] # backend options # [+tty+] # (ignored in all versions of PostgreSQL) # [+dbname+] # connecting database name # [+user+] # login user name # [+password+] # login password # # Examples: # # # Connect using all defaults # PG::Connection.new # # # As a Hash # PG::Connection.new( dbname: 'test', port: 5432 ) # # # As a String # PG::Connection.new( "dbname=test port=5432" ) # # # As an Array # PG::Connection.new( nil, 5432, nil, nil, 'test', nil, nil ) # # # As an URI # PG::Connection.new( "postgresql://user:pass@pgsql.example.com:5432/testdb?sslmode=require" ) # # If the Ruby default internal encoding is set (i.e., Encoding.default_internal != nil), the # connection will have its +client_encoding+ set accordingly. # # Raises a PG::Error if the connection fails. def new(*args) conn = connect_to_hosts(*args) if block_given? 
begin return yield conn ensure conn.finish end end conn end alias async_connect new alias connect new alias open new alias setdb new alias setdblogin new private def connect_to_hosts(*args) option_string = parse_connect_args(*args) iopts = PG::Connection.conninfo_parse(option_string).each_with_object({}){|h, o| o[h[:keyword].to_sym] = h[:val] if h[:val] } iopts = PG::Connection.conndefaults.each_with_object({}){|h, o| o[h[:keyword].to_sym] = h[:val] if h[:val] }.merge(iopts) if iopts[:hostaddr] # hostaddr is provided -> no need to resolve hostnames elsif iopts[:host] && !iopts[:host].empty? && PG.library_version >= 100000 # Resolve DNS in Ruby to avoid blocking state while connecting. # Multiple comma-separated values are generated, if the hostname resolves to both IPv4 and IPv6 addresses. # This requires PostgreSQL-10+, so no DNS resolving is done on earlier versions. ihosts = iopts[:host].split(",", -1) iports = iopts[:port].split(",", -1) iports = [nil] if iports.size == 0 iports = iports * ihosts.size if iports.size == 1 raise PG::ConnectionBad, "could not match #{iports.size} port numbers to #{ihosts.size} hosts" if iports.size != ihosts.size dests = ihosts.each_with_index.flat_map do |mhost, idx| unless host_is_named_pipe?(mhost) if Fiber.respond_to?(:scheduler) && Fiber.scheduler && RUBY_VERSION < '3.1.' # Use a second thread to avoid blocking of the scheduler. # `TCPSocket.gethostbyname` isn't fiber aware before ruby-3.1. 
hostaddrs = Thread.new{ Addrinfo.getaddrinfo(mhost, nil, nil, :STREAM).map(&:ip_address) rescue [''] }.value else hostaddrs = Addrinfo.getaddrinfo(mhost, nil, nil, :STREAM).map(&:ip_address) rescue [''] end else # No hostname to resolve (UnixSocket) hostaddrs = [nil] end hostaddrs.map { |hostaddr| [hostaddr, mhost, iports[idx]] } end iopts.merge!( hostaddr: dests.map{|d| d[0] }.join(","), host: dests.map{|d| d[1] }.join(","), port: dests.map{|d| d[2] }.join(",")) else # No host given end conn = self.connect_start(iopts) or raise(PG::Error, "Unable to create a new connection") raise PG::ConnectionBad, conn.error_message if conn.status == PG::CONNECTION_BAD conn.send(:async_connect_or_reset, :connect_poll) conn end private def host_is_named_pipe?(host_string) host_string.empty? || host_string.start_with?("/") || # it's UnixSocket? host_string.start_with?("@") || # it's UnixSocket in the abstract namespace? # it's a path on Windows? (RUBY_PLATFORM =~ /mingw|mswin/ && host_string =~ /\A([\/\\]|\w:[\/\\])/) end # call-seq: # PG::Connection.ping(connection_hash) -> Integer # PG::Connection.ping(connection_string) -> Integer # PG::Connection.ping(host, port, options, tty, dbname, login, password) -> Integer # # PQpingParams reports the status of the server. # # It accepts connection parameters identical to those of PG::Connection.new . # It is not necessary to supply correct user name, password, or database name values to obtain the server status; however, if incorrect values are provided, the server will log a failed connection attempt. # # See PG::Connection.new for a description of the parameters. # # Returns one of: # [+PQPING_OK+] # server is accepting connections # [+PQPING_REJECT+] # server is alive but rejecting connections # [+PQPING_NO_RESPONSE+] # could not establish connection # [+PQPING_NO_ATTEMPT+] # connection not attempted (bad params) # # See also check_socket for a way to check the connection without doing any server communication.
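`connect_to_hosts` expands comma-separated `host`/`port` lists into pairs before resolving each name: a single port is applied to every host, and mismatched counts raise. A self-contained sketch of that pairing rule (the method name is illustrative, not pg's API):

```ruby
# Sketch of libpq-style host/port list matching as done in connect_to_hosts:
# one port may serve all hosts, otherwise the counts must match exactly.
def match_hosts_ports(host_str, port_str)
  hosts = host_str.split(",", -1)
  ports = port_str.split(",", -1)
  ports = [nil] if ports.empty?
  ports = ports * hosts.size if ports.size == 1
  if ports.size != hosts.size
    raise ArgumentError, "could not match #{ports.size} port numbers to #{hosts.size} hosts"
  end
  hosts.zip(ports)
end
```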
def ping(*args) if Fiber.respond_to?(:scheduler) && Fiber.scheduler # Run PQping in a second thread to avoid blocking of the scheduler. # Unfortunately there's no nonblocking way to run ping. Thread.new { sync_ping(*args) }.value else sync_ping(*args) end end alias async_ping ping REDIRECT_CLASS_METHODS = PG.make_shareable({ :new => [:async_connect, :sync_connect], :connect => [:async_connect, :sync_connect], :open => [:async_connect, :sync_connect], :setdb => [:async_connect, :sync_connect], :setdblogin => [:async_connect, :sync_connect], :ping => [:async_ping, :sync_ping], }) private_constant :REDIRECT_CLASS_METHODS # These methods are affected by PQsetnonblocking REDIRECT_SEND_METHODS = PG.make_shareable({ :isnonblocking => [:async_isnonblocking, :sync_isnonblocking], :nonblocking? => [:async_isnonblocking, :sync_isnonblocking], :put_copy_data => [:async_put_copy_data, :sync_put_copy_data], :put_copy_end => [:async_put_copy_end, :sync_put_copy_end], :flush => [:async_flush, :sync_flush], }) private_constant :REDIRECT_SEND_METHODS REDIRECT_METHODS = { :exec => [:async_exec, :sync_exec], :query => [:async_exec, :sync_exec], :exec_params => [:async_exec_params, :sync_exec_params], :prepare => [:async_prepare, :sync_prepare], :exec_prepared => [:async_exec_prepared, :sync_exec_prepared], :describe_portal => [:async_describe_portal, :sync_describe_portal], :describe_prepared => [:async_describe_prepared, :sync_describe_prepared], :setnonblocking => [:async_setnonblocking, :sync_setnonblocking], :get_result => [:async_get_result, :sync_get_result], :get_last_result => [:async_get_last_result, :sync_get_last_result], :get_copy_data => [:async_get_copy_data, :sync_get_copy_data], :reset => [:async_reset, :sync_reset], :set_client_encoding => [:async_set_client_encoding, :sync_set_client_encoding], :client_encoding= => [:async_set_client_encoding, :sync_set_client_encoding], :cancel => [:async_cancel, :sync_cancel], } private_constant :REDIRECT_METHODS if 
PG::Connection.instance_methods.include? :async_encrypt_password REDIRECT_METHODS.merge!({ :encrypt_password => [:async_encrypt_password, :sync_encrypt_password], }) end PG.make_shareable(REDIRECT_METHODS) def async_send_api=(enable) REDIRECT_SEND_METHODS.each do |ali, (async, sync)| undef_method(ali) if method_defined?(ali) alias_method( ali, enable ? async : sync ) end end # Switch between sync and async libpq API. # # PG::Connection.async_api = true # this is the default. # It sets an alias from #exec to #async_exec, #reset to #async_reset and so on. # # PG::Connection.async_api = false # sets an alias from #exec to #sync_exec, #reset to #sync_reset and so on. # # pg-1.1.0+ defaults to libpq's async API for query related blocking methods. # pg-1.3.0+ defaults to libpq's async API for all possibly blocking methods. # # _PLEASE_ _NOTE_: This method is not part of the public API and is for debug and development use only. # Do not use this method in production code. # Any issues with the default setting of async_api=true should be reported to the maintainers instead. # def async_api=(enable) self.async_send_api = enable REDIRECT_METHODS.each do |ali, (async, sync)| remove_method(ali) if method_defined?(ali) alias_method( ali, enable ? async : sync ) end REDIRECT_CLASS_METHODS.each do |ali, (async, sync)| singleton_class.remove_method(ali) if method_defined?(ali) singleton_class.alias_method(ali, enable ? async : sync ) end end end self.async_api = true end # class PG::Connection pg-1.5.5/lib/pg/basic_type_map_for_queries.rb0000644000004100000410000001450714563476204021265 0ustar www-datawww-data# -*- ruby -*- # frozen_string_literal: true require 'pg' unless defined?( PG ) # Simple set of rules for type casting common Ruby types to PostgreSQL. # # OIDs of supported type casts are not hard-coded in the sources, but are retrieved from the # PostgreSQL's pg_type table in PG::BasicTypeMapForQueries.new . 
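`async_api=` and `async_send_api=` above rewire each public name to either its `async_*` or its `sync_*` implementation with `alias_method`. A minimal sketch of the same redirection technique on a toy class (`Greeter` is illustrative, not pg's code):

```ruby
# Sketch: redirect a public name to one of two implementations via
# alias_method, the technique PG::Connection.async_api= uses.
class Greeter
  REDIRECTS = { greet: [:async_greet, :sync_greet] }.freeze

  def async_greet; "async"; end
  def sync_greet;  "sync";  end

  def self.api=(async_enabled)
    REDIRECTS.each do |name, (async, sync)|
      remove_method(name) if method_defined?(name)
      alias_method(name, async_enabled ? async : sync)
    end
  end
  self.api = true  # default to the async implementations
end
```

Because the alias is installed on the class, flipping the switch retargets every existing instance at once.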
# # Query params are type casted based on the class of the given value. # # Higher level libraries will most likely not make use of this class, but use their # own derivation of PG::TypeMapByClass or another set of rules to choose suitable # encoders and decoders for the values to be sent. # # Example: # conn = PG::Connection.new # # Assign a default ruleset for type casts of input and output values. # conn.type_map_for_queries = PG::BasicTypeMapForQueries.new(conn) # # Execute a query. The Integer param value is typecasted internally by PG::BinaryEncoder::Int8. # # The format of the parameter is set to 0 (text) and the OID of this parameter is set to 20 (int8). # res = conn.exec_params( "SELECT $1", [5] ) class PG::BasicTypeMapForQueries < PG::TypeMapByClass # Helper class for submission of binary strings into bytea columns. # # Since PG::BasicTypeMapForQueries chooses the encoder to be used by the class of the submitted value, # it's necessary to send binary strings as BinaryData. # That way they're distinct from text strings. # Please note however that PG::BasicTypeMapForResults delivers bytea columns as plain String # with binary encoding. # # conn.type_map_for_queries = PG::BasicTypeMapForQueries.new(conn) # conn.exec("CREATE TEMP TABLE test (data bytea)") # bd = PG::BasicTypeMapForQueries::BinaryData.new("ab\xff\0cd") # conn.exec_params("INSERT INTO test (data) VALUES ($1)", [bd]) class BinaryData < String end class UndefinedEncoder < RuntimeError end include PG::BasicTypeRegistry::Checker # Create a new type map for query submission # # Options: # * +registry+: Custom type registry, nil for default global registry # * +if_undefined+: Optional +Proc+ object which is called if no type for a parameter class is defined in the registry. # The +Proc+ object is called with the name and format of the missing type. # Its return value is not used.
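The `if_undefined` option documented above replaces the default raise with a caller-supplied callback. A self-contained sketch of that lookup-or-callback pattern (`TinyRegistry` is illustrative, not pg's class):

```ruby
# Sketch: fall back to a user-supplied proc when a lookup fails, defaulting
# to raise, mirroring PG::BasicTypeMapForQueries' if_undefined option.
class TinyRegistry
  class UndefinedEncoder < RuntimeError; end

  def initialize(coders, if_undefined: nil)
    @coders = coders
    @if_undefined = if_undefined || method(:raise_undefined).to_proc
  end

  def coder_for(name, format = 0)
    @coders[name] || @if_undefined.call(name, format)
  end

  private

  def raise_undefined(name, format)
    raise UndefinedEncoder, "no encoder defined for type #{name.inspect} format #{format}"
  end
end
```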
def initialize(connection_or_coder_maps, registry: nil, if_undefined: nil) @coder_maps = build_coder_maps(connection_or_coder_maps, registry: registry) @array_encoders_by_klass = array_encoders_by_klass @encode_array_as = :array @if_undefined = if_undefined || method(:raise_undefined_type).to_proc init_encoders end private def raise_undefined_type(oid_name, format) raise UndefinedEncoder, "no encoder defined for type #{oid_name.inspect} format #{format}" end # Change the mechanism that is used to encode ruby array values # # Possible values: # * +:array+ : Encode the ruby array as a PostgreSQL array. # The array element type is inferred from the class of the first array element. This is the default. # * +:json+ : Encode the ruby array as a JSON document. # * +:record+ : Encode the ruby array as a composite type row. # * "_type" : Encode the ruby array as a particular PostgreSQL type. # All PostgreSQL array types are supported. # If there's an encoder registered for the elements +type+, it will be used. # Otherwise a string conversion (by +value.to_s+) is done. 
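`encode_array_as=` accepts one of three symbols or a `"_"`-prefixed PostgreSQL array type name and rejects everything else. A sketch of that dispatch (the returned strategy names are illustrative, not pg's):

```ruby
# Sketch of the argument check behind encode_array_as=: symbols select a
# strategy, a "_type" string selects a concrete PostgreSQL array type.
def array_strategy(pg_type)
  case pg_type
  when :array  then :per_element      # element type inferred from first element
  when :json   then :json_document
  when :record then :composite_row
  when /\A_/   then :named_array_type # e.g. "_int8"
  else raise ArgumentError, "invalid pg_type #{pg_type.inspect}"
  end
end
```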
def encode_array_as=(pg_type) case pg_type when :array when :json when :record when /\A_/ else raise ArgumentError, "invalid pg_type #{pg_type.inspect}" end @encode_array_as = pg_type init_encoders end attr_reader :encode_array_as private def init_encoders coders.each { |kl, c| self[kl] = nil } # Clear type map populate_encoder_list @textarray_encoder = coder_by_name(0, :encoder, '_text') end def coder_by_name(format, direction, name) check_format_and_direction(format, direction) @coder_maps.map_for(format, direction).coder_by_name(name) end def undefined(name, format) @if_undefined.call(name, format) end def populate_encoder_list DEFAULT_TYPE_MAP.each do |klass, selector| if Array === selector format, name, oid_name = selector coder = coder_by_name(format, :encoder, name).dup if coder if oid_name oid_coder = coder_by_name(format, :encoder, oid_name) if oid_coder coder.oid = oid_coder.oid else undefined(oid_name, format) end else coder.oid = 0 end self[klass] = coder else undefined(name, format) end else case @encode_array_as when :array self[klass] = selector when :json self[klass] = PG::TextEncoder::JSON.new when :record self[klass] = PG::TextEncoder::Record.new type_map: self when /\A_/ coder = coder_by_name(0, :encoder, @encode_array_as) if coder self[klass] = coder else undefined(@encode_array_as, format) end else raise ArgumentError, "invalid pg_type #{@encode_array_as.inspect}" end end end end def array_encoders_by_klass DEFAULT_ARRAY_TYPE_MAP.inject({}) do |h, (klass, (format, name))| h[klass] = coder_by_name(format, :encoder, name) h end end def get_array_type(value) elem = value while elem.kind_of?(Array) elem = elem.first end @array_encoders_by_klass[elem.class] || elem.class.ancestors.lazy.map{|ancestor| @array_encoders_by_klass[ancestor] }.find{|a| a } || @textarray_encoder end DEFAULT_TYPE_MAP = PG.make_shareable({ TrueClass => [1, 'bool', 'bool'], FalseClass => [1, 'bool', 'bool'], # We use text format and no type OID for numbers, because setting the 
OID can lead # to unnecessary type conversions on server side. Integer => [0, 'int8'], Float => [0, 'float8'], BigDecimal => [0, 'numeric'], Time => [0, 'timestamptz'], # We use text format and no type OID for IPAddr, because setting the OID can lead # to unnecessary inet/cidr conversions on the server side. IPAddr => [0, 'inet'], Hash => [0, 'json'], Array => :get_array_type, BinaryData => [1, 'bytea'], }) private_constant :DEFAULT_TYPE_MAP DEFAULT_ARRAY_TYPE_MAP = PG.make_shareable({ TrueClass => [0, '_bool'], FalseClass => [0, '_bool'], Integer => [0, '_int8'], String => [0, '_text'], Float => [0, '_float8'], BigDecimal => [0, '_numeric'], Time => [0, '_timestamptz'], IPAddr => [0, '_inet'], }) private_constant :DEFAULT_ARRAY_TYPE_MAP end pg-1.5.5/lib/pg/binary_encoder/0000755000004100000410000000000014563476204016332 5ustar www-datawww-datapg-1.5.5/lib/pg/binary_encoder/timestamp.rb0000644000004100000410000000136014563476204020662 0ustar www-datawww-data# -*- ruby -*- # frozen_string_literal: true module PG module BinaryEncoder # Convenience classes for timezone options class TimestampUtc < Timestamp def initialize(hash={}, **kwargs) warn("PG::Coder.new(hash) is deprecated. Please use keyword arguments instead! Called from #{caller.first}", category: :deprecated) unless hash.empty? super(**hash, **kwargs, flags: PG::Coder::TIMESTAMP_DB_UTC) end end class TimestampLocal < Timestamp def initialize(hash={}, **kwargs) warn("PG::Coder.new(hash) is deprecated. Please use keyword arguments instead! Called from #{caller.first}", category: :deprecated) unless hash.empty? super(**hash, **kwargs, flags: PG::Coder::TIMESTAMP_DB_LOCAL) end end end end # module PG pg-1.5.5/lib/pg.rb0000644000004100000410000000730214563476204013676 0ustar www-datawww-data # -*- ruby -*- # frozen_string_literal: true # The top-level PG namespace. module PG # Is this file part of a fat binary gem with bundled libpq? 
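`get_array_type` picks an array encoder by drilling down to the first non-array element, then trying its class, its class's ancestors, and finally the plain text-array fallback. A self-contained sketch with a hypothetical encoder table (`ENCODERS` and `FALLBACK` are illustrative, not pg's constants):

```ruby
# Sketch of get_array_type's lookup order: first leaf element's class,
# then its ancestors, then a catch-all text encoder.
ENCODERS = { Integer => :_int8, String => :_text, Float => :_float8 }.freeze
FALLBACK = :_text_fallback

def array_encoder_for(value)
  elem = value
  elem = elem.first while elem.kind_of?(Array)
  ENCODERS[elem.class] ||
    elem.class.ancestors.lazy.map { |a| ENCODERS[a] }.find { |e| e } ||
    FALLBACK
end
```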
bundled_libpq_path = File.join(__dir__, RUBY_PLATFORM.gsub(/^i386-/, "x86-")) if File.exist?(bundled_libpq_path) POSTGRESQL_LIB_PATH = bundled_libpq_path else bundled_libpq_path = nil # Try to load libpq path as found by extconf.rb begin require "pg/postgresql_lib_path" rescue LoadError # rake-compiler doesn't use regular "make install", but uses its own install tasks. # It therefore doesn't copy pg/postgresql_lib_path.rb in case of "rake compile". POSTGRESQL_LIB_PATH = false end end add_dll_path = proc do |path, &block| if RUBY_PLATFORM =~/(mswin|mingw)/i && path && File.exist?(path) begin require 'ruby_installer/runtime' RubyInstaller::Runtime.add_dll_directory(path, &block) rescue LoadError old_path = ENV['PATH'] ENV['PATH'] = "#{path};#{old_path}" block.call ENV['PATH'] = old_path end else # No need to set a load path manually - it's set as library rpath. block.call end end # Add a load path to the one retrieved from pg_config add_dll_path.call(POSTGRESQL_LIB_PATH) do if bundled_libpq_path # It's a Windows binary gem, try the <major>.<minor> subdirectory major_minor = RUBY_VERSION[ /^(\d+\.\d+)/ ] or raise "Oops, can't extract the major/minor version from #{RUBY_VERSION.dump}" require "#{major_minor}/pg_ext" else require 'pg_ext' end end # Get the PG library version. # # +include_buildnum+ is no longer used and any value passed will be ignored. def self.version_string( include_buildnum=nil ) "%s %s" % [ self.name, VERSION ] end ### Convenience alias for PG::Connection.new.
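On Windows without `ruby_installer`, `add_dll_path` falls back to temporarily prepending the libpq directory to `ENV['PATH']` around the block. A portable sketch of that save/prepend/restore pattern (`with_prepended_path` is illustrative; the `ensure` here is a small hardening, while the original restores the path after a successful call):

```ruby
# Sketch: temporarily prepend a directory to PATH for the duration of a
# block and restore the old value afterwards, as pg's add_dll_path
# fallback does on Windows (which hardcodes ";" as the separator).
def with_prepended_path(dir, sep = File::PATH_SEPARATOR)
  old_path = ENV['PATH']
  ENV['PATH'] = "#{dir}#{sep}#{old_path}"
  yield
ensure
  ENV['PATH'] = old_path
end
```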
def self.connect( *args, &block ) Connection.new( *args, &block ) end if defined?(Ractor.make_shareable) def self.make_shareable(obj) Ractor.make_shareable(obj) end else def self.make_shareable(obj) obj.freeze end end module BinaryDecoder %i[ TimestampUtc TimestampUtcToLocal TimestampLocal ].each do |klass| autoload klass, 'pg/binary_decoder/timestamp' end autoload :Date, 'pg/binary_decoder/date' end module BinaryEncoder %i[ TimestampUtc TimestampLocal ].each do |klass| autoload klass, 'pg/binary_encoder/timestamp' end end module TextDecoder %i[ TimestampUtc TimestampUtcToLocal TimestampLocal TimestampWithoutTimeZone TimestampWithTimeZone ].each do |klass| autoload klass, 'pg/text_decoder/timestamp' end autoload :Date, 'pg/text_decoder/date' autoload :Inet, 'pg/text_decoder/inet' autoload :JSON, 'pg/text_decoder/json' autoload :Numeric, 'pg/text_decoder/numeric' end module TextEncoder %i[ TimestampUtc TimestampWithoutTimeZone TimestampWithTimeZone ].each do |klass| autoload klass, 'pg/text_encoder/timestamp' end autoload :Date, 'pg/text_encoder/date' autoload :Inet, 'pg/text_encoder/inet' autoload :JSON, 'pg/text_encoder/json' autoload :Numeric, 'pg/text_encoder/numeric' end autoload :BasicTypeMapBasedOnResult, 'pg/basic_type_map_based_on_result' autoload :BasicTypeMapForQueries, 'pg/basic_type_map_for_queries' autoload :BasicTypeMapForResults, 'pg/basic_type_map_for_results' autoload :BasicTypeRegistry, 'pg/basic_type_registry' require 'pg/exceptions' require 'pg/coder' require 'pg/type_map_by_column' require 'pg/connection' require 'pg/result' require 'pg/tuple' autoload :VERSION, 'pg/version' # Avoid "uninitialized constant Truffle::WarningOperations" on Truffleruby up to 22.3.1 if RUBY_ENGINE=="truffleruby" && !defined?(Truffle::WarningOperations) module TruffleFixWarn def warn(str, category=nil) super(str) end end Warning.extend(TruffleFixWarn) end end # module PG pg-1.5.5/sample/0000755000004100000410000000000014563476204013454 5ustar 
www-datawww-datapg-1.5.5/sample/minimal-testcase.rb0000644000004100000410000000062414563476204017242 0ustar www-datawww-data# -*- ruby -*- require 'pg' conn = PG.connect( :dbname => 'test' ) $stderr.puts '---', RUBY_DESCRIPTION, PG.version_string( true ), "Server version: #{conn.server_version}", "Client version: #{PG.library_version}", '---' result = conn.exec( "SELECT * from pg_stat_activity" ) $stderr.puts %Q{Expected this to return: ["select * from pg_stat_activity"]} p result.field_values( 'current_query' ) pg-1.5.5/sample/array_insert.rb0000644000004100000410000000071014563476204016501 0ustar www-datawww-data# -*- ruby -*- require 'pg' c = PG.connect( dbname: 'test' ) # this one works: c.exec( "DROP TABLE IF EXISTS foo" ) c.exec( "CREATE TABLE foo (strings character varying[]);" ) # But using a prepared statement works: c.set_error_verbosity( PG::PQERRORS_VERBOSE ) c.prepare( 'stmt', "INSERT INTO foo VALUES ($1);" ) # This won't work #c.exec_prepared( 'stmt', ["ARRAY['this','that']"] ) # but this will: c.exec_prepared( 'stmt', ["{'this','that'}"] ) pg-1.5.5/sample/async_copyto.rb0000644000004100000410000000147414563476204016521 0ustar www-datawww-data# -*- ruby -*- require 'pg' require 'stringio' # Using COPY asynchronously $stderr.puts "Opening database connection ..." conn = PG.connect( :dbname => 'test' ) conn.setnonblocking( true ) socket = conn.socket_io $stderr.puts "Running COPY command ..." buf = '' conn.transaction do conn.send_query( "COPY logs TO STDOUT WITH csv" ) buf = nil # #get_copy_data returns a row if there's a whole one to return, false # if there isn't one but the COPY is still running, or nil when it's # finished. begin $stderr.puts "COPY loop" conn.consume_input while conn.is_busy $stderr.puts " ready loop" select( [socket], nil, nil, 5.0 ) or raise "Timeout (5s) waiting for query response." conn.consume_input end buf = conn.get_copy_data $stdout.puts( buf ) if buf end until buf.nil? 
end conn.finish pg-1.5.5/sample/replication_monitor.rb0000644000004100000410000001253114563476204020063 0ustar www-datawww-data# -*- ruby -*- # vim: set noet nosta sw=4 ts=4 : # # Get the current WAL segment and offset from a master postgresql # server, and compare slave servers to see how far behind they # are in MB. This script should be easily modified for use with # Nagios/Mon/Monit/Zabbix/whatever, or wrapping it in a display loop, # and is suitable for both WAL shipping or streaming forms of replication. # # Mahlon E. Smith # # First argument is the master server, all other arguments are treated # as slave machines. # # db_replication.monitor db-master.example.com ... # require 'ostruct' require 'optparse' require 'pathname' require 'etc' require 'pg' require 'pp' ### A class to encapsulate the PG handles. ### class PGMonitor VERSION = %q$Id$ # When to consider a slave as 'behind', measured in WAL segments. # The default WAL segment size is 16, so we'll alert after # missing two WAL files worth of data. # LAG_ALERT = 32 ### Create a new PGMonitor object. ### def initialize( opts, hosts ) @opts = opts @master = hosts.shift @slaves = hosts @current_wal = {} @failures = [] end attr_reader :opts, :current_wal, :master, :slaves, :failures ### Perform the connections and check the lag. ### def check # clear prior failures, get current xlog info @failures = [] return unless self.get_current_wal # check all slaves self.slaves.each do |slave| begin slave_db = PG.connect( :dbname => self.opts.database, :host => slave, :port => self.opts.port, :user => self.opts.user, :password => self.opts.pass, :sslmode => 'prefer' ) xlog = slave_db.exec( 'SELECT pg_last_xlog_receive_location()' ).getvalue( 0, 0 ) slave_db.close lag_in_megs = ( self.find_lag( xlog ).to_f / 1024 / 1024 ).abs if lag_in_megs >= LAG_ALERT failures << { :host => slave, :error => "%0.2fMB behind the master." 
% [ lag_in_megs ] } end rescue => err failures << { :host => slave, :error => err.message } end end end ######### protected ######### ### Ask the master for the current xlog information, to compare ### to slaves. Returns true on success. On failure, populates ### the failures array and returns false. ### def get_current_wal master_db = PG.connect( :dbname => self.opts.database, :host => self.master, :port => self.opts.port, :user => self.opts.user, :password => self.opts.pass, :sslmode => 'prefer' ) self.current_wal[ :segbytes ] = master_db.exec( 'SHOW wal_segment_size' ). getvalue( 0, 0 ).sub( /\D+/, '' ).to_i << 20 current = master_db.exec( 'SELECT pg_current_xlog_location()' ).getvalue( 0, 0 ) self.current_wal[ :segment ], self.current_wal[ :offset ] = current.split( /\// ) master_db.close return true # If we can't get any of the info from the master, then there is no # point in a comparison with slaves. # rescue => err self.failures << { :host => self.master, :error => 'Unable to retrieve required info from the master (%s)' % [ err.message ] } return false end ### Given an +xlog+ position from a slave server, return ### the number of bytes the slave needs to replay before it ### is caught up to the master. ### def find_lag( xlog ) s_segment, s_offset = xlog.split( /\// ) m_segment = self.current_wal[ :segment ] m_offset = self.current_wal[ :offset ] m_segbytes = self.current_wal[ :segbytes ] return (( m_segment.hex - s_segment.hex ) * m_segbytes) + ( m_offset.hex - s_offset.hex ) end end ### Parse command line arguments. Return a struct of global options. 
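`find_lag` turns two xlog positions of the form `"SEGMENT/OFFSET"` (hex) into a byte delta, and `get_current_wal` derives the segment size from `SHOW wal_segment_size` by stripping non-digits and shifting left by 20 (megabytes to bytes). A self-contained sketch of both calculations (method names are illustrative):

```ruby
# Sketch of replication_monitor's find_lag arithmetic:
# lag = (master_seg - slave_seg) * wal_segment_bytes + (master_off - slave_off)
def wal_lag_bytes(master_xlog, slave_xlog, segbytes)
  m_seg, m_off = master_xlog.split("/")
  s_seg, s_off = slave_xlog.split("/")
  (m_seg.hex - s_seg.hex) * segbytes + (m_off.hex - s_off.hex)
end

# Sketch of parsing "SHOW wal_segment_size" output such as "16MB":
# keep the digits, then shift by 20 to convert MB to bytes.
def parse_segment_size(show_value)
  show_value.sub(/\D+/, "").to_i << 20
end
```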
### def parse_args( args ) options = OpenStruct.new options.database = 'postgres' options.port = 5432 options.user = Etc.getpwuid( Process.uid ).name options.sslmode = 'prefer' opts = OptionParser.new do |opts| opts.banner = "Usage: #{$0} [options] [slave2, slave3...]" opts.separator '' opts.separator 'Connection options:' opts.on( '-d', '--database DBNAME', "specify the database to connect to (default: \"#{options.database}\")" ) do |db| options.database = db end opts.on( '-h', '--host HOSTNAME', 'database server host' ) do |host| options.host = host end opts.on( '-p', '--port PORT', Integer, "database server port (default: \"#{options.port}\")" ) do |port| options.port = port end opts.on( '-U', '--user NAME', "database user name (default: \"#{options.user}\")" ) do |user| options.user = user end opts.on( '-W', 'force password prompt' ) do |pw| print 'Password: ' begin system 'stty -echo' options.pass = $stdin.gets.chomp ensure system 'stty echo' puts end end opts.separator '' opts.separator 'Other options:' opts.on_tail( '--help', 'show this help, then exit' ) do $stderr.puts opts exit end opts.on_tail( '--version', 'output version information, then exit' ) do puts PGMonitor::VERSION exit end end opts.parse!( args ) return options end if __FILE__ == $0 opts = parse_args( ARGV ) raise ArgumentError, "At least two PostgreSQL servers are required." if ARGV.length < 2 mon = PGMonitor.new( opts, ARGV ) mon.check if mon.failures.empty? puts "All is well!" exit 0 else puts "Database replication delayed or broken." 
mon.failures.each do |bad| puts "%s: %s" % [ bad[ :host ], bad[ :error ] ] end exit 1 end end pg-1.5.5/sample/test_binary_values.rb0000644000004100000410000000144114563476204017703 0ustar www-datawww-data# -*- ruby -*-1.9.1 require 'pg' db = PG.connect( :dbname => 'test' ) db.exec "DROP TABLE IF EXISTS test" db.exec "CREATE TABLE test (a INTEGER, b BYTEA)" a = 42 b = [1, 2, 3] db.exec "INSERT INTO test(a, b) VALUES($1::int, $2::bytea)", [a, {:value => b.pack('N*'), :format => 1}] db.exec( "SELECT a::int, b::bytea FROM test LIMIT 1", [], 1 ) do |res| res.nfields.times do |i| puts "Field %d is: %s, a %s (%s) column from table %p" % [ i, res.fname( i ), db.exec( "SELECT format_type($1,$2)", [res.ftype(i), res.fmod(1)] ).getvalue(0,0), res.fformat( i ).zero? ? "string" : "binary", res.ftable( i ), ] end res.each do |row| puts "a = #{row['a'].inspect}" puts "a (unpacked) = #{row['a'].unpack('N*').inspect}" puts "b = #{row['b'].unpack('N*').inspect}" end end pg-1.5.5/sample/wal_shipper.rb0000644000004100000410000002775214563476204016333 0ustar www-datawww-data# -*- ruby -*- # # A script to wrap ssh and rsync for PostgreSQL WAL files shipping. # Mahlon E. Smith # # Based off of Joshua Drake's PITRTools concept, but with some important # differences: # # - Only supports PostgreSQL >= 8.3 # - No support for rsync version < 3 # - Only shipping, no client side sync (too much opportunity for failure, # and it's easy to get a base backup manually) # - WAL files are only stored once, regardless of how many # slaves are configured or not responding, and are removed from # the master when they are no longer needed. # - Each slave can have completely distinct settings, instead # of a single set of options applied to all slaves # - slave sync can be individually paused from the master # - can run synchronously, or if you have a lot of slaves, threaded async mode # - It's ruby, instead of python. 
:) # # wal_shipper is configurable via an external YAML file, and will create # a template on its first run -- you'll need to modify it! It expects # a directory structure like so: # # postgres/ # data/... # bin/wal_shipper.rb # etc/wal_shipper.conf <-- YAML settings! # wal/ # # It should be loaded from the PostgreSQL master's postgresql.conf # as such, after putting it into your postgres user homedir under 'bin': # # archive_command = '/path/to/postgres_home/bin/wal_shipper.rb %p' # # Passwordless ssh keys need to be set up for the postgres user on all # participating masters and slaves. # # You can use any replay method of your choosing on the slaves. # Here's a nice example using pg_standby, to be put in data/recovery.conf: # # restore_command = 'pg_standby -t /tmp/pgrecovery.done -s5 -w0 -c /path/to/postgres_home/wal_files/ %f %p %r' # # Or, here's another simple alternative data/recovery.conf, for using WAL shipping # alongside streaming replication: # # standby_mode = 'on' # primary_conninfo = 'host=master.example.com port=5432 user=repl password=XXXXXXX' # restore_command = 'cp /usr/local/pgsql/wal/%f %p' # trigger_file = '/usr/local/pgsql/pg.become_primary' # archive_cleanup_command = '/usr/local/bin/pg_archivecleanup /usr/local/pgsql/wal %r' # #======================================================================================== require 'pathname' require 'yaml' require 'fileutils' require 'ostruct' ### Encapsulate WAL shipping functionality. ### module WalShipper ### Send messages to the PostgreSQL log files. ### def log( msg ) return unless @debug puts "WAL Shipper: %s" % [ msg ] end ### An object that represents a single destination from the ### configuration file. ### class Destination < OpenStruct include WalShipper ### Create a new WalShipper::Destination object. def initialize( dest, debug=false ) @debug = debug super( dest ) self.validate end ######### protected ######### ### Check for required keys and normalize various keys. 
### def validate # Check for required destination keys %w[ label kind ].each do |key| if self.send( key.to_sym ).nil? self.log "Destination %p missing required '%s' key." % [ self, key ] self.invalid = true end end # Ensure paths are Pathnames for the 'file' destination type. self.path = Pathname.new( self.path ) if self.kind == 'file' if self.kind == 'rsync-ssh' self.port ||= 22 self.user = self.user ? "#{self.user}@" : '' end end end # Class Destination ### Class for creating new Destination objects and determining how to ### ship WAL files to them. ### class Dispatcher include WalShipper ### Create a new Shipper object, given a +conf+ hash and a +wal+ file ### Pathname object. ### def initialize( wal, conf ) # Make the config keys instance variables. conf.each_pair {|key, val| self.instance_variable_set( "@#{key}", val ) } # Spool directory check. # @spool = Pathname.new( @spool ) @spool.exist? or raise "The configured spool directory (%s) doesn't exist." % [ @spool ] # Stop right away if we have disabled shipping. # unless @enabled self.log "WAL shipping is disabled, queuing segment %s" % [ wal.basename ] exit 1 end # Instantiate Destination objects, creating new spool directories # for each. # @destinations. collect!{|dest| WalShipper::Destination.new( dest, @debug ) }. reject {|dest| dest.invalid }. collect do |dest| dest.spool = @spool + dest.label dest.spool.mkdir( 0711 ) unless dest.spool.exist? dest end # Put the WAL file into the spool for processing! # @waldir = @spool + 'wal_segments' @waldir.mkdir( 0711 ) unless @waldir.exist? self.log "Copying %s to %s" % [ wal.basename, @waldir ] FileUtils::cp wal, @waldir # 'wal' now references the copy. The original is managed and auto-expired # by PostgreSQL when a new checkpoint segment it reached. @wal = @waldir + wal.basename end ### Create hardlinks for the WAL file into each of the destination directories ### for separate queueing and recording of what was shipped successfully. 
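With `async_max` set, the dispatcher ships to destinations in thread batches, joining each batch before the next starts. A sketch of that bounded-concurrency pattern (`each_slice` here replaces the manual `slice!` loop of the original; all names are illustrative):

```ruby
# Sketch: process items in thread batches of at most +max+ at a time,
# like WalShipper::Dispatcher#dispatch with async_max set.
def dispatch_in_batches(items, max)
  done = Queue.new  # thread-safe collector
  items.each_slice(max) do |chunk|
    threads = chunk.map do |item|
      Thread.new do
        Thread.current.abort_on_exception = true
        done << item  # stand-in for dispatch_dest(item)
      end
    end
    threads.each(&:join)  # wait for the whole batch before the next one
  end
  Array.new(items.size) { done.pop }
end
```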
### def link @destinations.each do |dest| self.log "Linking %s into %s" % [ @wal.basename, dest.spool.basename ] FileUtils::ln @wal, dest.spool, :force => true end end ### Decide to be synchronous or threaded, and delegate each destination ### to the proper ship method. ### def dispatch # Synchronous mode. # unless @async self.log "Performing a synchronous dispatch." @destinations.each {|dest| self.dispatch_dest( dest ) } return end tg = ThreadGroup.new # Async, one thread per destination # if @async_max.nil? || @async_max.to_i.zero? self.log "Performing an asynchronous dispatch: one thread per destination." @destinations.each do |dest| t = Thread.new do Thread.current.abort_on_exception = true self.dispatch_dest( dest ) end tg.add( t ) end tg.list.each {|t| t.join } return end # Async, one thread per destination, in groups of asynx_max size. # self.log "Performing an asynchronous dispatch: one thread per destination, %d at a time." % [ @async_max ] all_dests = @destinations.dup dest_chunks = [] until all_dests.empty? do dest_chunks << all_dests.slice!( 0, @async_max ) end dest_chunks.each do |chunk| chunk.each do |dest| t = Thread.new do Thread.current.abort_on_exception = true self.dispatch_dest( dest ) end tg.add( t ) end tg.list.each {|t| t.join } end return end ### Remove any WAL segments no longer needed by slaves. ### def clean_spool total = 0 @waldir.children.each do |wal| if wal.stat.nlink == 1 total += wal.unlink end end self.log "Removed %d WAL segment%s." % [ total, total == 1 ? '' : 's' ] end ######### protected ######### ### Send WAL segments to remote +dest+ via rsync+ssh. ### Passwordless keys between the user running this script (postmaster owner) ### and remote user need to be set up in advance. ### def ship_rsync_ssh( dest ) if dest.host.nil? self.log "Destination %p missing required 'host' key. WAL is queued." 
% [ dest.host ] return end rsync_flags = '-zc' ssh_string = "%s -o ConnectTimeout=%d -o StrictHostKeyChecking=no -p %d" % [ @ssh, @ssh_timeout || 10, dest.port ] src_string = '' dst_string = "%s%s:%s/" % [ dest.user, dest.host, dest.path ] # If there are numerous files in the spool dir, it means there was # an error transferring to this host in the past. Try and ship all # WAL segments, instead of just the new one. PostgreSQL on the slave # side will "do the right thing" as they come in, regardless of # ordering. # if dest.spool.children.length > 1 src_string = dest.spool.to_s + '/' rsync_flags << 'r' else src_string = dest.spool + @wal.basename end ship_wal_cmd = [ @rsync, @debug ? (rsync_flags << 'vh') : (rsync_flags << 'q'), '--remove-source-files', '-e', ssh_string, src_string, dst_string ] self.log "Running command '%s'" % [ ship_wal_cmd.join(' ') ] system *ship_wal_cmd # Run external notification program on error, if one is configured. # unless $?.success? self.log "Ack! Error while shipping to %p, WAL is queued." % [ dest.label ] system @error_cmd, dest.label if @error_cmd end end ### Copy WAL segments to remote path as set in +dest+. ### This is useful for longer term PITR, copying to NFS shares, etc. ### def ship_file( dest ) if dest.path.nil? self.log "Destination %p missing required 'path' key. WAL is queued." % [ dest ] return end dest.path.mkdir( 0711 ) unless dest.path.exist? # If there are numerous files in the spool dir, it means there was # an error transferring to this host in the past. Try and ship all # WAL segments, instead of just the new one. PostgreSQL on the slave # side will "do the right thing" as they come in, regardless of # ordering. 
# if dest.spool.children.length > 1 dest.spool.children.each do |wal| wal.unlink if self.copy_file( wal, dest.path, dest.label, dest.compress ) end else wal = dest.spool + @wal.basename wal.unlink if self.copy_file( wal, dest.path, dest.label, dest.compress ) end end ### Given a +wal+ Pathname, a +path+ destination, and the destination ### label, copy and optionally compress a WAL file. ### def copy_file( wal, path, label, compress=false ) dest_file = path + wal.basename FileUtils::cp wal, dest_file if compress system *[ 'gzip', '-f', dest_file ] raise "Error while compressing: %s" % [ wal.basename ] unless $?.success? end self.log "Copied %s%s to %s." % [ wal.basename, compress ? ' (and compressed)' : '', path ] return true rescue => err self.log "Ack! Error while copying '%s' (%s) to %p, WAL is queued." % [ wal.basename, err.message, path ] system @error_cmd, label if @error_cmd return false end ### Figure out how to send the WAL file to its intended destination +dest+. ### def dispatch_dest( dest ) if ! dest.enabled.nil? && ! dest.enabled self.log "Skipping explicitly disabled destination %p, WAL is queued." % [ dest.label ] return end # Send to the appropriate method. ( rsync-ssh --> ship_rsync_ssh ) # meth = ( 'ship_' + dest.kind.gsub(/-/, '_') ).to_sym if WalShipper::Dispatcher.method_defined?( meth ) self.send( meth, dest ) else self.log "Unknown destination kind %p for %p. WAL is queued." % [ dest.kind, dest.label ] end end end end # Ship the WAL file! # if __FILE__ == $0 CONFIG_DIR = Pathname.new( __FILE__ ).dirname.parent + 'etc' CONFIG = CONFIG_DIR + 'wal_shipper.conf' unless CONFIG.exist? CONFIG_DIR.mkdir( 0711 ) unless CONFIG_DIR.exist? CONFIG.open('w') {|conf| conf.print(DATA.read) } CONFIG.chmod( 0644 ) puts "No WAL shipping configuration found, default file created." end wal = ARGV[0] or raise "No WAL file was specified on the command line." 
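wal_shipper does all of its spool bookkeeping through Pathname objects (`exist?`, `mkdir`, `children`, `unlink`). A minimal stdlib-only sketch of those same idioms, using a throwaway temp directory rather than a real WAL spool:

```ruby
require 'pathname'
require 'tmpdir'

# Exercise the Pathname idioms wal_shipper uses for its spool
# directories, inside a temporary directory instead of a real spool.
Dir.mktmpdir do |dir|
	spool = Pathname.new( dir ) + 'spool'
	spool.mkdir( 0711 ) unless spool.exist?     # same guard used per destination
	( spool + 'segment_1' ).write( 'wal data' ) # stand-in for a linked WAL file
	count = spool.children.length
	puts "spool has %d entr%s" % [ count, count == 1 ? 'y' : 'ies' ]
end
```

This prints "spool has 1 entry"; `segment_1` is a hypothetical filename used only for illustration.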
wal = Pathname.new( wal ) conf = YAML.load( CONFIG.read ) shipper = WalShipper::Dispatcher.new( wal, conf ) shipper.link shipper.dispatch shipper.clean_spool end __END__ --- # Spool from pg_xlog to the working area? # This must be set to 'true' for wal shipping to function! enabled: false # Log everything to the PostgreSQL log files? debug: true # The working area for WAL segments. spool: /opt/local/var/db/postgresql84/wal # With multiple slaves, ship WAL in parallel, or be synchronous? async: false # Put a ceiling on the parallel threads? # '0' or removing this option uses a thread for each destination, # regardless of how many you have. Keep in mind that's 16 * destination # count megs of simultaneous bandwidth. async_max: 5 # Paths and settings for various binaries. rsync: /usr/bin/rsync ssh: /usr/bin/ssh ssh_timeout: 10 destinations: - label: rsync-example port: 2222 kind: rsync-ssh host: localhost user: postgres path: wal # relative to the user's homedir on the remote host enabled: false - label: file-example kind: file compress: true enabled: true path: /tmp/someplace pg-1.5.5/sample/async_api.rb0000644000004100000410000000643314563476204015755 0ustar www-datawww-data# -*- ruby -*- require 'pg' # This is an example of how to use the asynchronous API to query the # server without blocking other threads. It's intentionally low-level; # if you hooked up the PG::Connection#socket to some kind of reactor, you # could make this much nicer. TIMEOUT = 5.0 # seconds to wait for an async operation to complete # Print 'x' continuously to demonstrate that other threads aren't # blocked while waiting for the connection, for the query to be sent, # for results, etc. You might want to sleep inside the loop or # comment this out entirely for cleaner output. progress_thread = Thread.new { loop { print 'x' } } # Output progress messages def output_progress( msg ) puts "\n>>> #{msg}\n" end # Start the connection output_progress "Starting connection..."
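Before the polling loop below, the readiness-wait idiom is worth seeing on its own. This stdlib-only sketch substitutes an in-process pipe for PG::Connection#socket_io; the select-with-timeout shape is the same:

```ruby
# Wait for an IO to become readable before consuming from it, guarding
# with a timeout -- the same shape as the socket waits in this sample.
# An in-process pipe stands in for the connection's socket here.
reader, writer = IO.pipe
writer.write( "ready\n" )  # simulate the server making data available

# IO.select returns nil if nothing is readable before the timeout expires.
IO.select( [reader], nil, nil, 1.0 ) or raise "Timed out waiting for the socket!"
puts reader.gets           # => prints "ready"
```

The 1.0-second timeout is arbitrary; the sample itself uses its TIMEOUT constant for the same purpose.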
conn = PG::Connection.connect_start( :dbname => 'test' ) or abort "Unable to create a new connection!" abort "Connection failed: %s" % [ conn.error_message ] if conn.status == PG::CONNECTION_BAD # Track the progress of the connection, waiting for the socket to become readable/writable # before polling it poll_status = PG::PGRES_POLLING_WRITING until poll_status == PG::PGRES_POLLING_OK || poll_status == PG::PGRES_POLLING_FAILED # If the socket needs to read, wait 'til it becomes readable to poll again case poll_status when PG::PGRES_POLLING_READING output_progress " waiting for socket to become readable" select( [conn.socket_io], nil, nil, TIMEOUT ) or raise "Asynchronous connection timed out!" # ...and the same for when the socket needs to write when PG::PGRES_POLLING_WRITING output_progress " waiting for socket to become writable" select( nil, [conn.socket_io], nil, TIMEOUT ) or raise "Asynchronous connection timed out!" end # Output a status message about the progress case conn.status when PG::CONNECTION_STARTED output_progress " waiting for connection to be made." when PG::CONNECTION_MADE output_progress " connection OK; waiting to send." when PG::CONNECTION_AWAITING_RESPONSE output_progress " waiting for a response from the server." when PG::CONNECTION_AUTH_OK output_progress " received authentication; waiting for backend start-up to finish." when PG::CONNECTION_SSL_STARTUP output_progress " negotiating SSL encryption." when PG::CONNECTION_SETENV output_progress " negotiating environment-driven parameter settings." when PG::CONNECTION_NEEDED output_progress " internal state: connect() needed." 
end # Check to see if it's finished or failed yet poll_status = conn.connect_poll end abort "Connect failed: %s" % [ conn.error_message ] unless conn.status == PG::CONNECTION_OK output_progress "Sending query" conn.send_query( "SELECT * FROM pg_stat_activity" ) # Fetch results until there aren't any more loop do output_progress " waiting for a response" # Buffer any incoming data on the socket until a full result is ready. conn.consume_input while conn.is_busy select( [conn.socket_io], nil, nil, TIMEOUT ) or raise "Timeout waiting for query response." conn.consume_input end # Fetch the next result. If there isn't one, the query is finished result = conn.get_result or break puts "\n\nQuery result:\n%p\n" % [ result.values ] end output_progress "Done." conn.finish if defined?( progress_thread ) progress_thread.kill progress_thread.join end pg-1.5.5/sample/warehouse_partitions.rb0000644000004100000410000001627414563476204020271 0ustar www-datawww-data# -*- ruby -*- # vim: set nosta noet ts=4 sw=4: # # Script to automatically move partitioned tables and their indexes # to a separate area on disk. # # Mahlon E. Smith # # Example use case: # # - You've got a heavy insert table, such as syslog data. # - This table has a partitioning trigger (or is manually partitioned) # by date, to separate incoming stuff from archival/report stuff. # - You have a tablespace on cheap or slower disk (maybe even # ZFS compressed, or some such!) # # The only assumption this script makes is that your tables are dated, and # the tablespace they're moving into already exists. # # A full example, using the syslog idea from above, where each child # table is date partitioned by a convention of "syslog_YEAR-WEEKOFYEAR": # # syslog # <--- parent # syslog_2012_06 # <--- inherited # syslog_2012_07 # <--- inherited # syslog_2012_08 # <--- inherited # ... 
# # You'd run this script like so: # # ./warehouse_partitions.rb -F syslog_%Y_%U # # Assuming this was week 12 of the year, tables syslog_2012_06 through # syslog_2012_11 would start sequentially migrating into the tablespace # called 'warehouse'. # require 'date' require 'ostruct' require 'optparse' require 'pathname' require 'etc' require 'pg' ### A tablespace migration class. ### class PGWarehouse def initialize( opts ) @opts = opts @db = PG.connect( :dbname => opts.database, :host => opts.host, :port => opts.port, :user => opts.user, :password => opts.pass, :sslmode => 'prefer' ) @db.exec "SET search_path TO %s" % [ opts.schema ] if opts.schema @relations = self.relations end attr_reader :db ###### public ###### ### Perform the tablespace moves. ### def migrate if @relations.empty? $stderr.puts 'No tables were found for warehousing.' return end $stderr.puts "Found %d relation%s to move." % [ relations.length, relations.length == 1 ? '' : 's' ] @relations.sort_by{|_,v| v[:name] }.each do |_, val| $stderr.print " - Moving table '%s' to '%s'... " % [ val[:name], @opts.tablespace ] if @opts.dryrun $stderr.puts '(not really)' else age = self.timer do db.exec "ALTER TABLE %s SET TABLESPACE %s;" % [ val[:name], @opts.tablespace ] end puts age end val[ :indexes ].each do |idx| $stderr.print " - Moving index '%s' to '%s'... " % [ idx, @opts.tablespace ] if @opts.dryrun $stderr.puts '(not really)' else age = self.timer do db.exec "ALTER INDEX %s SET TABLESPACE %s;" % [ idx, @opts.tablespace ] end puts age end end end end ######### protected ######### ### Get OIDs and current tablespaces for everything under the ### specified schema. 
### def relations return @relations if @relations relations = {} query = %q{ SELECT c.oid AS oid, c.relname AS name, c.relkind AS kind, t.spcname AS tspace FROM pg_class AS c LEFT JOIN pg_namespace n ON n.oid = c.relnamespace LEFT JOIN pg_tablespace t ON t.oid = c.reltablespace WHERE c.relkind = 'r' } query << "AND n.nspname='#{@opts.schema}'" if @opts.schema # Get the relations list, along with each element's current tablespace. # self.db.exec( query ) do |res| res.each do |row| relations[ row['oid'] ] = { :name => row['name'], :tablespace => row['tspace'], :indexes => [], :parent => nil } end end # Add table inheritance information. # db.exec 'SELECT inhrelid AS oid, inhparent AS parent FROM pg_inherits' do |res| res.each do |row| relations[ row['oid'] ][ :parent ] = row['parent'] end end # Remove tables that don't qualify for warehousing. # # - Tables that are not children of a parent # - Tables that are already in the warehouse tablespace # - The currently active child (it's likely being written to!) # - Any table that can't be parsed into the specified format # relations.reject! do |oid, val| begin val[:parent].nil? || val[:tablespace] == @opts.tablespace || val[:name] == Time.now.strftime( @opts.format ) || ! DateTime.strptime( val[:name], @opts.format ) rescue ArgumentError true end end query = %q{ SELECT c.oid AS oid, i.indexname AS name FROM pg_class AS c INNER JOIN pg_indexes AS i ON i.tablename = c.relname } query << "AND i.schemaname='#{@opts.schema}'" if @opts.schema # Attach index names to tables. # db.exec( query ) do |res| res.each do |row| relations[ row['oid'] ][ :indexes ] << row['name'] if relations[ row['oid'] ] end end return relations end ### Wrap arbitrary commands in a human readable timer. 
### def timer start = Time.now yield age = Time.now - start diff = age secs = diff % 60 diff = ( diff - secs ) / 60 mins = diff % 60 diff = ( diff - mins ) / 60 hour = diff % 24 return "%02d:%02d:%02d" % [ hour, mins, secs ] end end ### Parse command line arguments. Return a struct of global options. ### def parse_args( args ) options = OpenStruct.new options.database = Etc.getpwuid( Process.uid ).name options.host = '127.0.0.1' options.port = 5432 options.user = Etc.getpwuid( Process.uid ).name options.sslmode = 'prefer' options.tablespace = 'warehouse' opts = OptionParser.new do |opts| opts.banner = "Usage: #{$0} [options]" opts.separator '' opts.separator 'Connection options:' opts.on( '-d', '--database DBNAME', "specify the database to connect to (default: \"#{options.database}\")" ) do |db| options.database = db end opts.on( '-h', '--host HOSTNAME', 'database server host' ) do |host| options.host = host end opts.on( '-p', '--port PORT', Integer, "database server port (default: \"#{options.port}\")" ) do |port| options.port = port end opts.on( '-n', '--schema SCHEMA', String, "operate on the named schema only (default: none)" ) do |schema| options.schema = schema end opts.on( '-T', '--tablespace SPACE', String, "move old tables to this tablespace (default: \"#{options.tablespace}\")" ) do |tb| options.tablespace = tb end opts.on( '-F', '--tableformat FORMAT', String, "The naming format (strftime) for the inherited tables (default: none)" ) do |format| options.format = format end opts.on( '-U', '--user NAME', "database user name (default: \"#{options.user}\")" ) do |user| options.user = user end opts.on( '-W', 'force password prompt' ) do |pw| print 'Password: ' begin system 'stty -echo' options.pass = gets.chomp ensure system 'stty echo' puts end end opts.separator '' opts.separator 'Other options:' opts.on_tail( '--dry-run', "don't actually do anything" ) do options.dryrun = true end opts.on_tail( '--help', 'show this help, then exit' ) do $stderr.puts opts 
exit end opts.on_tail( '--version', 'output version information, then exit' ) do puts Stats::VERSION exit end end opts.parse!( args ) return options end if __FILE__ == $0 opts = parse_args( ARGV ) raise ArgumentError, "A naming format (-F) is required." unless opts.format $stdout.sync = true PGWarehouse.new( opts ).migrate end pg-1.5.5/sample/copydata.rb0000644000004100000410000000627314563476204015615 0ustar www-datawww-data# -*- ruby -*- require 'pg' require 'stringio' $stderr.puts "Opening database connection ..." conn = PG.connect( dbname: 'test' ) conn.exec( < 'localhost', :dbname => 'test', } # Output progress messages def output_progress( msg ) puts ">>> #{msg}\n" end # Start the (synchronous) connection output_progress "Starting connection..." conn = PG.connect( CONN_OPTS ) or abort "Unable to create a new connection!" abort "Connect failed: %s" % [ conn.error_message ] unless conn.status == PG::CONNECTION_OK # Now grab a reference to the underlying socket to select() on while the query is running socket = conn.socket_io # Send the (asynchronous) query output_progress "Sending query" conn.send_query( "SELECT * FROM pg_stat_activity" ) # Fetch results until there aren't any more loop do output_progress " waiting for a response" # Buffer any incoming data on the socket until a full result is ready. conn.consume_input while conn.is_busy output_progress " waiting for data to be available on %p..." % [ socket ] select( [socket], nil, nil, TIMEOUT ) or raise "Timeout waiting for query response." conn.consume_input end # Fetch the next result. If there isn't one, the query is finished result = conn.get_result or break output_progress "Query result:\n%p\n" % [ result.values ] end output_progress "Done." 
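The fetch loop above drains one result at a time until get_result returns nil. The control flow can be sketched without a server by letting an Array stand in for the stream of results:

```ruby
# Drain a result stream until nil marks the end -- the same control
# flow as calling conn.get_result until it returns nil.
results = [ %w[row1], %w[row2], nil ]  # stand-in for successive get_result values
fetched = []
loop do
	result = results.shift or break    # nil ends the stream
	fetched << result
end
puts "fetched %d results" % [ fetched.length ]  # => prints "fetched 2 results"
```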
conn.finish pg-1.5.5/sample/issue-119.rb0000644000004100000410000000425214563476204015444 0ustar www-datawww-data# -*- ruby -*- require 'pg' # This is another example of how to use COPY FROM, this time as a # minimal test case used to try to figure out what was going on in # an issue submitted from a user: # # https://bitbucket.org/ged/ruby-pg/issue/119 # conn = PG.connect( dbname: 'test' ) table_name = 'issue_119' field_list = %w[name body_weight brain_weight] method = 0 options = { truncate: true } sql_parameters = '' conn.set_error_verbosity( PG::PQERRORS_VERBOSE ) conn.exec( "DROP TABLE IF EXISTS #{table_name}" ) conn.exec( "CREATE TABLE #{table_name} ( id SERIAL, name TEXT, body_weight REAL, brain_weight REAL )" ) text = <<-END_DATA Mountain beaver 1.35 465 Cow 465 423 Grey wolf 36.33 119.5 Goat 27.66 115 Guinea pig 1.04 5.5 Dipliodocus 11700 50 Asian elephant 2547 4603 Donkey 187.1 419 Horse 521 655 Potar monkey 10 115 Cat 3.3 25.6 Giraffe 529 680 Gorilla 207 406 Human 62 1320 African elephant 6654 5712 Triceratops 9400 70 Rhesus monkey 6.8 179 Kangaroo 35 56 Golden hamster 0.12 1 Mouse 0.023 0.4 Rabbit 2.5 12.1 Sheep 55.5 175 Jaguar 100 157 Chimpanzee 52.16 440 Brachiosaurus 87000 154.5 Mole 0.122 3 Pig 192 18 END_DATA #ActiveRecord::Base.connection_pool.with_connection do |conn| conn.transaction do rc = conn #.raw_connection rc.exec "TRUNCATE TABLE #{table_name};" if options[:truncate] sql = "COPY #{table_name} (#{field_list.join(',')}) FROM STDIN #{sql_parameters} " p sql rc.exec(sql) errmsg = nil # scope this outside of the rescue below so it's visible later begin if method == 1 rc.put_copy_data text + "\\.\n" else text.each_line { |line| rc.put_copy_data(line) } end rescue Errno => err errmsg = "%s while reading copy data: %s" % [err.class.name, err.message] puts "an error occurred" end if errmsg rc.put_copy_end(errmsg) puts "ERROR #{errmsg}" else rc.put_copy_end end while res = rc.get_result st = res.res_status( res.result_status ) puts "Result of COPY 
is: %s" % [ st ] if res.result_status != PG::PGRES_COPY_IN puts res.error_message end end puts "end" end #transaction #end #connection conn.exec( "SELECT name, brain_weight FROM #{table_name}" ) do |res| p res.values end pg-1.5.5/sample/copyfrom.rb0000644000004100000410000000701114563476204015636 0ustar www-datawww-data# -*- ruby -*- require 'pg' require 'stringio' $stderr.puts "Opening database connection ..." conn = PG.connect( :dbname => 'test' ) conn.exec( < err errmsg = "%s while reading copy data: %s" % [ err.class.name, err.message ] conn.put_copy_end( errmsg ) else conn.put_copy_end while res = conn.get_result $stderr.puts "Result of COPY is: %s" % [ res.res_status(res.result_status) ] end end end conn.finish pg-1.5.5/sample/pg_statistics.rb0000644000004100000410000001674314563476204016674 0ustar www-datawww-data# -*- ruby -*- # vim: set noet nosta sw=4 ts=4 : # # PostgreSQL statistic gatherer. # Mahlon E. Smith # # Based on queries by Kenny Gorman. # http://www.kennygorman.com/wordpress/?page_id=491 # # An example gnuplot input script is included in the __END__ block # of this script. Using it, you can feed the output this script # generates to gnuplot (after removing header lines) to generate # some nice performance charts. # require 'ostruct' require 'optparse' require 'etc' require 'pg' ### PostgreSQL Stats. Fetch information from pg_stat_* tables. ### Optionally run in a continuous loop, displaying deltas. ### class Stats VERSION = %q$Id$ def initialize( opts ) @opts = opts @db = PG.connect( :dbname => opts.database, :host => opts.host, :port => opts.port, :user => opts.user, :password => opts.pass, :sslmode => 'prefer' ) @last = nil end ###### public ###### ### Primary loop. Gather statistics and generate deltas. ### def run run_count = 0 loop do current_stat = self.get_stats # First run, store and continue # if @last.nil? 
@last = current_stat sleep @opts.interval next end # headers # if run_count == 0 || run_count % 50 == 0 puts "%-20s%12s%12s%12s%12s%12s%12s%12s%12s%12s%12s%12s%12s%12s%12s" % %w[ time commits rollbks blksrd blkshit bkends seqscan seqtprd idxscn idxtrd ins upd del locks activeq ] end # calculate deltas # delta = current_stat.inject({}) do |h, pair| stat, val = *pair if %w[ activeq locks bkends ].include?( stat ) h[stat] = current_stat[stat].to_i else h[stat] = current_stat[stat].to_i - @last[stat].to_i end h end delta[ 'time' ] = Time.now.strftime('%F %T') # new values # puts "%-20s%12s%12s%12s%12s%12s%12s%12s%12s%12s%12s%12s%12s%12s%12s" % [ delta['time'], delta['commits'], delta['rollbks'], delta['blksrd'], delta['blkshit'], delta['bkends'], delta['seqscan'], delta['seqtprd'], delta['idxscn'], delta['idxtrd'], delta['ins'], delta['upd'], delta['del'], delta['locks'], delta['activeq'] ] @last = current_stat run_count += 1 sleep @opts.interval end end ### Query the database for performance measurements. Returns a hash. ### def get_stats res = @db.exec %Q{ SELECT MAX(stat_db.xact_commit) AS commits, MAX(stat_db.xact_rollback) AS rollbks, MAX(stat_db.blks_read) AS blksrd, MAX(stat_db.blks_hit) AS blkshit, MAX(stat_db.numbackends) AS bkends, SUM(stat_tables.seq_scan) AS seqscan, SUM(stat_tables.seq_tup_read) AS seqtprd, SUM(stat_tables.idx_scan) AS idxscn, SUM(stat_tables.idx_tup_fetch) AS idxtrd, SUM(stat_tables.n_tup_ins) AS ins, SUM(stat_tables.n_tup_upd) AS upd, SUM(stat_tables.n_tup_del) AS del, MAX(stat_locks.locks) AS locks, MAX(activity.sess) AS activeq FROM pg_stat_database AS stat_db, pg_stat_user_tables AS stat_tables, (SELECT COUNT(*) AS locks FROM pg_locks ) AS stat_locks, (SELECT COUNT(*) AS sess FROM pg_stat_activity WHERE current_query <> '') AS activity WHERE stat_db.datname = '%s'; } % [ @opts.database ] return res[0] end end ### Parse command line arguments. Return a struct of global options. 
### def parse_args( args ) options = OpenStruct.new options.database = Etc.getpwuid( Process.uid ).name options.host = '127.0.0.1' options.port = 5432 options.user = Etc.getpwuid( Process.uid ).name options.sslmode = 'disable' options.interval = 5 opts = OptionParser.new do |opts| opts.banner = "Usage: #{$0} [options]" opts.separator '' opts.separator 'Connection options:' opts.on( '-d', '--database DBNAME', "specify the database to connect to (default: \"#{options.database}\")" ) do |db| options.database = db end opts.on( '-h', '--host HOSTNAME', 'database server host' ) do |host| options.host = host end opts.on( '-p', '--port PORT', Integer, "database server port (default: \"#{options.port}\")" ) do |port| options.port = port end opts.on( '-U', '--user NAME', "database user name (default: \"#{options.user}\")" ) do |user| options.user = user end opts.on( '-W', 'force password prompt' ) do |pw| print 'Password: ' begin system 'stty -echo' options.pass = gets.chomp ensure system 'stty echo' puts end end opts.separator '' opts.separator 'Other options:' opts.on( '-i', '--interval SECONDS', Integer, "refresh interval in seconds (default: \"#{options.interval}\")") do |seconds| options.interval = seconds end opts.on_tail( '--help', 'show this help, then exit' ) do $stderr.puts opts exit end opts.on_tail( '--version', 'output version information, then exit' ) do puts Stats::VERSION exit end end opts.parse!( args ) return options end ### Go! 
### if __FILE__ == $0 $stdout.sync = true Stats.new( parse_args( ARGV ) ).run end __END__ ###################################################################### ### T E R M I N A L O P T I O N S ###################################################################### #set terminal png nocrop enhanced font arial 8 size '800x600' x000000 xffffff x444444 #set output 'graph.png' set terminal pdf linewidth 4 size 11,8 set output 'graph.pdf' #set terminal aqua ###################################################################### ### O P T I O N S F O R A L L G R A P H S ###################################################################### set multiplot layout 2,1 title "PostgreSQL Statistics\n5 second sample rate (smoothed)" set grid x y set key right vertical outside set key nobox set xdata time set timefmt "%Y-%m-%d.%H:%M:%S" set format x "%l%p" set xtic rotate by -45 input_file = "database_stats.txt" # edit to taste! set xrange ["2012-04-16.00:00:00":"2012-04-17.00:00:00"] ###################################################################### ### G R A P H 1 ###################################################################### set title "Database Operations and Connection Totals" set yrange [0:200] plot \ input_file using 1:2 title "Commits" with lines smooth bezier, \ input_file using 1:3 title "Rollbacks" with lines smooth bezier, \ input_file using 1:11 title "Inserts" with lines smooth bezier, \ input_file using 1:12 title "Updates" with lines smooth bezier, \ input_file using 1:13 title "Deletes" with lines smooth bezier, \ input_file using 1:6 title "Backends (total)" with lines, \ input_file using 1:15 title "Active queries (total)" with lines smooth bezier ###################################################################### ### G R A P H 2 ###################################################################### set title "Backend Performance" set yrange [0:10000] plot \ input_file using 1:4 title "Block (cache) reads" with lines smooth bezier, \ input_file 
using 1:5 title "Block (cache) hits" with lines smooth bezier, \ input_file using 1:7 title "Sequence scans" with lines smooth bezier, \ input_file using 1:8 title "Sequence tuple reads" with lines smooth bezier, \ input_file using 1:9 title "Index scans" with lines smooth bezier, \ input_file using 1:10 title "Index tuple reads" with lines smooth bezier ###################################################################### ### C L E A N U P ###################################################################### unset multiplot reset pg-1.5.5/sample/losample.rb0000644000004100000410000000352314563476204015620 0ustar www-datawww-data# -*- ruby -*- require 'pg' SAMPLE_WRITE_DATA = 'some sample data' SAMPLE_EXPORT_NAME = 'lowrite.txt' conn = PG.connect( :dbname => 'test', :host => 'localhost', :port => 5432 ) puts "dbname: " + conn.db + "\thost: " + conn.host + "\tuser: " + conn.user # Start a transaction, as all large object functions require one. puts "Beginning transaction" conn.exec( 'BEGIN' ) # Test importing from a file puts "Import test:" puts " importing %s" % [ __FILE__ ] oid = conn.lo_import( __FILE__ ) puts " imported as large object %d" % [ oid ] # Read back 50 bytes of the imported data puts "Read test:" fd = conn.lo_open( oid, PG::INV_READ|PG::INV_WRITE ) conn.lo_lseek( fd, 0, PG::SEEK_SET ) buf = conn.lo_read( fd, 50 ) puts " read: %p" % [ buf ] puts " read was ok!" if buf =~ /require 'pg'/ # Append some test data onto the end of the object puts "Write test:" conn.lo_lseek( fd, 0, PG::SEEK_END ) buf = SAMPLE_WRITE_DATA.dup totalbytes = 0 until buf.empty? bytes = conn.lo_write( fd, buf ) buf.slice!( 0, bytes ) totalbytes += bytes end puts " appended %d bytes" % [ totalbytes ] # Now export it puts "Export test:" File.unlink( SAMPLE_EXPORT_NAME ) if File.exist?( SAMPLE_EXPORT_NAME ) conn.lo_export( oid, SAMPLE_EXPORT_NAME ) puts " success!" 
if File.exist?( SAMPLE_EXPORT_NAME ) puts " exported as %s (%d bytes)" % [ SAMPLE_EXPORT_NAME, File.size(SAMPLE_EXPORT_NAME) ] conn.exec( 'COMMIT' ) puts "End of transaction." puts 'Testing read and delete from a new transaction:' puts ' starting a new transaction' conn.exec( 'BEGIN' ) fd = conn.lo_open( oid, PG::INV_READ ) puts ' reopened okay.' conn.lo_lseek( fd, 50, PG::SEEK_END ) buf = conn.lo_read( fd, 50 ) puts ' read okay.' if buf == SAMPLE_WRITE_DATA puts 'Closing and unlinking:' conn.lo_close( fd ) puts ' closed.' conn.lo_unlink( oid ) puts ' unlinked.' conn.exec( 'COMMIT' ) puts 'Done.' pg-1.5.5/sample/notify_wait.rb0000644000004100000410000000257214563476204016343 0ustar www-datawww-data# -*- ruby -*- # # Test script, demonstrating a non-poll notification for a table event. # BEGIN { require 'pathname' basedir = Pathname.new( __FILE__ ).expand_path.dirname.parent libdir = basedir + 'lib' $LOAD_PATH.unshift( libdir.to_s ) unless $LOAD_PATH.include?( libdir.to_s ) } require 'pg' TRIGGER_TABLE = %{ CREATE TABLE IF NOT EXISTS test ( message text ); } TRIGGER_FUNCTION = %{ CREATE OR REPLACE FUNCTION notify_test() RETURNS TRIGGER LANGUAGE plpgsql AS $$ BEGIN NOTIFY woo; RETURN NULL; END $$ } DROP_TRIGGER = %{ DROP TRIGGER IF EXISTS notify_trigger ON test } TRIGGER = %{ CREATE TRIGGER notify_trigger AFTER UPDATE OR INSERT OR DELETE ON test FOR EACH STATEMENT EXECUTE PROCEDURE notify_test(); } conn = PG.connect( :dbname => 'test' ) conn.exec( TRIGGER_TABLE ) conn.exec( TRIGGER_FUNCTION ) conn.exec( DROP_TRIGGER ) conn.exec( TRIGGER ) conn.exec( 'LISTEN woo' ) # register interest in the 'woo' event notifications = [] puts "Now switch to a different term and run:", '', %{ psql test -c "insert into test values ('A message.')"}, '' puts "Waiting up to 30 seconds for for an event!" conn.wait_for_notify( 30 ) do |notify, pid| notifications << [ pid, notify ] end if notifications.empty? puts "Awww, I didn't see any events." 
else puts "I got one from pid %d: %s" % notifications.first end pg-1.5.5/sample/disk_usage_report.rb0000644000004100000410000000711214563476204017513 0ustar www-datawww-data# -*- ruby -*- # vim: set noet nosta sw=4 ts=4 : # # Quickly dump size information for a given database. # Top twenty objects, and size per schema. # # Mahlon E. Smith # # Based on work by Jeff Davis . # require 'ostruct' require 'optparse' require 'etc' require 'pg' SCRIPT_VERSION = %q$Id$ ### Gather data and output it to $stdout. ### def report( opts ) db = PG.connect( :dbname => opts.database, :host => opts.host, :port => opts.port, :user => opts.user, :password => opts.pass, :sslmode => 'prefer' ) # ----------------------------------------- db_info = db.exec %Q{ SELECT count(oid) AS num_relations, pg_size_pretty(pg_database_size('#{opts.database}')) AS dbsize FROM pg_class } puts '=' * 70 puts "Disk usage information for %s: (%d relations, %s total)" % [ opts.database, db_info[0]['num_relations'], db_info[0]['dbsize'] ] puts '=' * 70 # ----------------------------------------- top_twenty = db.exec %q{ SELECT relname AS name, relkind AS kind, pg_size_pretty(pg_relation_size(pg_class.oid)) AS size FROM pg_class ORDER BY pg_relation_size(pg_class.oid) DESC LIMIT 20 } puts 'Top twenty objects by size:' puts '-' * 70 top_twenty.each do |row| type = case row['kind'] when 'i'; 'index' when 't'; 'toast' when 'r'; 'table' when 'S'; 'sequence' else; '???' end puts "%40s %10s (%s)" % [ row['name'], row['size'], type ] end puts '-' * 70 # ----------------------------------------- schema_sizes = db.exec %q{ SELECT table_schema, pg_size_pretty( CAST( SUM(pg_total_relation_size(table_schema || '.' || table_name)) AS bigint)) AS size FROM information_schema.tables GROUP BY table_schema ORDER BY CAST( SUM(pg_total_relation_size(table_schema || '.' 
|| table_name)) AS bigint ) DESC } puts 'Size per schema:' puts '-' * 70 schema_sizes.each do |row| puts "%20s %10s" % [ row['table_schema'], row['size'] ] end puts '-' * 70 puts db.finish end ### Parse command line arguments. Return a struct of global options. ### def parse_args( args ) options = OpenStruct.new options.database = Etc.getpwuid( Process.uid ).name options.host = '127.0.0.1' options.port = 5432 options.user = Etc.getpwuid( Process.uid ).name options.sslmode = 'prefer' options.interval = 5 opts = OptionParser.new do |opts| opts.banner = "Usage: #{$0} [options]" opts.separator '' opts.separator 'Connection options:' opts.on( '-d', '--database DBNAME', "specify the database to connect to (default: \"#{options.database}\")" ) do |db| options.database = db end opts.on( '-h', '--host HOSTNAME', 'database server host' ) do |host| options.host = host end opts.on( '-p', '--port PORT', Integer, "database server port (default: \"#{options.port}\")" ) do |port| options.port = port end opts.on( '-U', '--user NAME', "database user name (default: \"#{options.user}\")" ) do |user| options.user = user end opts.on( '-W', 'force password prompt' ) do |pw| print 'Password: ' begin system 'stty -echo' options.pass = gets.chomp ensure system 'stty echo' puts end end opts.separator '' opts.separator 'Other options:' opts.on_tail( '--help', 'show this help, then exit' ) do $stderr.puts opts exit end opts.on_tail( '--version', 'output version information, then exit' ) do puts SCRIPT_VERSION exit end end opts.parse!( args ) return options end if __FILE__ == $0 opts = parse_args( ARGV ) report( opts ) end pg-1.5.5/sample/copyto.rb0000644000004100000410000000062114563476204015315 0ustar www-datawww-data# -*- ruby -*- require 'pg' require 'stringio' # An example of how to stream data to your local host from the database as CSV. $stderr.puts "Opening database connection ..." conn = PG.connect( :dbname => 'test' ) $stderr.puts "Running COPY command ..." 
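Each chunk handed back by the COPY loop below is one CSV-formatted row. A stdlib-only sketch of parsing such lines as they stream in (the sample itself just writes them to stdout); the Array and the column names are stand-ins for illustration, not part of the sample:

```ruby
require 'csv'

# Parse CSV lines of the shape `COPY ... WITH csv` emits, one at a time.
# The Array is a stand-in for successive conn.get_copy_data results.
incoming = [ "1,alpha\n", "2,\"beta, quoted\"\n" ]
incoming.each do |line|
	row = CSV.parse_line( line )  # handles quoting/embedded commas correctly
	puts "id=%s value=%s" % row
end
```

Parsing with CSV rather than a bare split keeps quoted fields with embedded commas intact.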
buf = '' conn.transaction do conn.exec( "COPY logs TO STDOUT WITH csv" ) $stdout.puts( buf ) while buf = conn.get_copy_data end conn.finish pg-1.5.5/sample/check_conn.rb0000644000004100000410000000067414563476204016102 0ustar www-datawww-data# -*- ruby -*- # vim: set nosta noet ts=4 sw=4: # encoding: utf-8 require 'pg' # This is a minimal example of a function that can test an existing PG::Connection and # reset it if necessary. def check_connection( conn ) begin conn.exec( "SELECT 1" ) rescue PG::Error => err $stderr.puts "%p while testing connection: %s" % [ err.class, err.message ] conn.reset end end conn = PG.connect( dbname: 'test' ) check_connection( conn ) pg-1.5.5/sample/cursor.rb0000644000004100000410000000107114563476204015315 0ustar www-datawww-data# -*- ruby -*- require 'pg' # An example of how to use SQL cursors. This is mostly a straight port of # the cursor portion of testlibpq.c from src/test/examples. $stderr.puts "Opening database connection ..." conn = PG.connect( :dbname => 'test' ) # conn.transaction do conn.exec( "DECLARE myportal CURSOR FOR select * from pg_database" ) res = conn.exec( "FETCH ALL IN myportal" ) puts res.fields.collect {|fname| "%-15s" % [fname] }.join( '' ) res.values.collect do |row| puts row.collect {|col| "%-15s" % [col] }.join( '' ) end end pg-1.5.5/.appveyor.yml0000644000004100000410000000327714563476204014652 0ustar www-datawww-dataimage: Visual Studio 2022 init: - set PATH=C:/Ruby%ruby_version%/bin;c:/Program Files/Git/cmd;c:/Windows/system32;C:/Windows/System32/WindowsPowerShell/v1.0;C:/Program Files/Mercurial - set RUBYOPT=--verbose install: - ps: | if ($env:RUBYDOWNLOAD -ne $null) { $(new-object net.webclient).DownloadFile("https://github.com/oneclick/rubyinstaller2/releases/download/rubyinstaller-head/rubyinstaller-head-$env:RUBYDOWNLOAD.exe", "$pwd/ruby-setup.exe") cmd /c ruby-setup.exe /currentuser /verysilent /dir=C:/Ruby$env:ruby_version } - cmd: | ridk enable c:/msys64/usr/bin/bash -lc "pacman -S --noconfirm 
--needed ${MINGW_PACKAGE_PREFIX}-pkgconf ${MINGW_PACKAGE_PREFIX}-libyaml ${MINGW_PACKAGE_PREFIX}-gcc" - ruby --version - gem --version - gem install bundler --conservative - bundle install - ps: | if ($env:PGVERSION -ne $null) { $(new-object net.webclient).DownloadFile('http://get.enterprisedb.com/postgresql/postgresql-' + $env:PGVERSION + '.exe', 'C:/postgresql-setup.exe') cmd /c "C:/postgresql-setup.exe" --mode unattended --extract-only 1 $env:PATH = 'C:/Program Files/PostgreSQL/' + $env:PGVER + '/bin;' + $env:PATH $env:PATH = 'C:/Program Files (x86)/PostgreSQL/' + $env:PGVER + '/bin;' + $env:PATH } else { c:/msys64/usr/bin/bash -lc "pacman -S --noconfirm --needed `${MINGW_PACKAGE_PREFIX}-postgresql" } - echo %PATH% - pg_config build_script: - bundle exec rake -rdevkit compile --trace test_script: - bundle exec rake test PG_DEBUG=0 on_failure: - find -name mkmf.log | xargs cat environment: matrix: - ruby_version: "head" RUBYDOWNLOAD: x86 - ruby_version: "30-x64" pg-1.5.5/.pryrc0000644000004100000410000000067314563476204013341 0ustar www-datawww-data#!/usr/bin/ruby -*- ruby -*- BEGIN { require 'pathname' require 'rbconfig' basedir = Pathname.new( __FILE__ ).dirname.expand_path libdir = basedir + "lib" puts ">>> Adding #{libdir} to load path..." $LOAD_PATH.unshift( libdir.to_s ) } # Try to require the 'pg' library begin $stderr.puts "Loading pg..." require 'pg' rescue => e $stderr.puts "Ack! 
pg library failed to load: #{e.message}\n\t" + e.backtrace.join( "\n\t" ) end pg-1.5.5/.tm_properties0000644000004100000410000000070414563476204015071 0ustar www-datawww-data# Settings projectDirectory = "$CWD" windowTitle = "${CWD/^.*\///} «$TM_DISPLAYNAME»" excludeInFileChooser = {$exclude,.hg} exclude = {$exclude,tmp,tmp_test_specs} TM_MAKE = 'rake' TM_MAKE_FILE = '${projectDirectory}/Rakefile' [ source ] softTabs = false tabSize = 4 [ source.ruby ] softTabs = false tabSize = 4 [ source.ruby.rspec ] softTabs = false tabSize = 4 pg-1.5.5/Rakefile0000644000004100000410000000622514563476204013645 0ustar www-datawww-data# -*- rake -*- # Enable english error messages, as some specs depend on them ENV["LANG"] = "C" require 'rbconfig' require 'pathname' require 'tmpdir' require 'rake/extensiontask' require 'rake/clean' require 'rspec/core/rake_task' require 'bundler' require 'bundler/gem_helper' # Build directory constants BASEDIR = Pathname( __FILE__ ).dirname SPECDIR = BASEDIR + 'spec' LIBDIR = BASEDIR + 'lib' EXTDIR = BASEDIR + 'ext' PKGDIR = BASEDIR + 'pkg' TMPDIR = BASEDIR + 'tmp' TESTDIR = BASEDIR + "tmp_test_*" DLEXT = RbConfig::CONFIG['DLEXT'] EXT = LIBDIR + "pg_ext.#{DLEXT}" GEMSPEC = 'pg.gemspec' CLEAN.include( TESTDIR.to_s ) CLEAN.include( PKGDIR.to_s, TMPDIR.to_s ) CLEAN.include "lib/*/libpq.dll" CLEAN.include "lib/pg_ext.*" CLEAN.include "lib/pg/postgresql_lib_path.rb" load 'Rakefile.cross' Bundler::GemHelper.install_tasks $gem_spec = Bundler.load_gemspec(GEMSPEC) desc "Turn on warnings and debugging in the build."
task :maint do ENV['MAINTAINER_MODE'] = 'yes' end # Rake-compiler task Rake::ExtensionTask.new do |ext| ext.name = 'pg_ext' ext.gem_spec = $gem_spec ext.ext_dir = 'ext' ext.lib_dir = 'lib' ext.source_pattern = "*.{c,h}" ext.cross_compile = true ext.cross_platform = CrossLibraries.map(&:for_platform) ext.cross_config_options += CrossLibraries.map do |lib| { lib.for_platform => [ "--enable-windows-cross", "--with-pg-include=#{lib.static_postgresql_incdir}", "--with-pg-lib=#{lib.static_postgresql_libdir}", # libpq-fe.h resides in src/interfaces/libpq/ before make install "--with-opt-include=#{lib.static_postgresql_libdir}", ] } end # Add libpq.dll to windows binary gemspec ext.cross_compiling do |spec| spec.files << "lib/#{spec.platform}/libpq.dll" end end RSpec::Core::RakeTask.new(:spec).rspec_opts = "--profile -cfdoc" task :test => :spec # Use the fivefish formatter for docs generated from development checkout require 'rdoc/task' RDoc::Task.new( 'docs' ) do |rdoc| rdoc.options = $gem_spec.rdoc_options rdoc.rdoc_files = $gem_spec.extra_rdoc_files rdoc.generator = :fivefish rdoc.rdoc_dir = 'doc' end desc "Build the source gem #{$gem_spec.full_name}.gem into the pkg directory" task :gem => :build task :clobber do puts "Stop any Postmaster instances that remain after testing." 
require_relative 'spec/helpers' PG::TestingHelpers.stop_existing_postmasters() end desc "Update list of server error codes" task :update_error_codes do URL_ERRORCODES_TXT = "http://git.postgresql.org/gitweb/?p=postgresql.git;a=blob_plain;f=src/backend/utils/errcodes.txt;hb=refs/tags/REL_16_0" ERRORCODES_TXT = "ext/errorcodes.txt" sh "wget #{URL_ERRORCODES_TXT.inspect} -O #{ERRORCODES_TXT.inspect} || curl #{URL_ERRORCODES_TXT.inspect} -o #{ERRORCODES_TXT.inspect}" ruby 'ext/errorcodes.rb', 'ext/errorcodes.txt', 'ext/errorcodes.def' end file 'ext/pg_errors.c' => ['ext/errorcodes.def'] do # trigger compilation of changed errorcodes.def touch 'ext/pg_errors.c' end desc "Translate readme" task :translate do cd "translation" do # po4a's lexer might change, so record its version for reference sh "LANG=C po4a --version > .po4a-version" sh "po4a po4a.cfg" end end pg-1.5.5/Contributors.rdoc0000644000004100000410000000326114563476204015543 0ustar www-datawww-data Thanks to all the great people that have contributed code, suggestions, and patches through the years. If you contribute a patch, please include a patch for this file that adds your name to the list. * Dennis Vshivkov * Gabriel Emerson * Noboru Saitou * Akinori MUSHA * Andy Yu * Ceri Storey * Gavin Kistner * Henry T. So Jr. * Jeremy Henty * * Leon Brooks * Martin Hedenfalk * Yukihiro Matsumoto * Eiji Matsumoto * MoonWolf * * Nate Haggard * Neil Conway * Noboru Matui * Okada Jun * Shirai,Kaoru * Riley * shibata * * ts * Yuta TSUBOI * Lugovoi Nikolai * Jeff Davis * Bertram Scharpf * Michael Granger * Mahlon E. Smith * Lars Kanis * Jason Yanowitz * Charlie Savage * Rafał Bigaj * Jason Yanowitz * Greg Hazel * Chris White * Aaron Patterson * Tim Felgentreff pg-1.5.5/BSDL0000644000004100000410000000240114563476204012637 0ustar www-datawww-dataCopyright (C) 1993-2013 Yukihiro Matsumoto. All rights reserved. 
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.pg-1.5.5/POSTGRES0000644000004100000410000000225014563476204013363 0ustar www-datawww-dataPostgreSQL Database Management System (formerly known as Postgres, then as Postgres95) Portions Copyright (c) 1996-2008, PostgreSQL Global Development Group Portions Copyright (c) 1994, The Regents of the University of California Permission to use, copy, modify, and distribute this software and its documentation for any purpose, without fee, and without a written agreement is hereby granted, provided that the above copyright notice and this paragraph and the following two paragraphs appear in all copies. 
IN NO EVENT SHALL THE UNIVERSITY OF CALIFORNIA BE LIABLE TO ANY PARTY FOR DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, INCLUDING LOST PROFITS, ARISING OUT OF THE USE OF THIS SOFTWARE AND ITS DOCUMENTATION, EVEN IF THE UNIVERSITY OF CALIFORNIA HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. THE UNIVERSITY OF CALIFORNIA SPECIFICALLY DISCLAIMS ANY WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE SOFTWARE PROVIDED HEREUNDER IS ON AN "AS IS" BASIS, AND THE UNIVERSITY OF CALIFORNIA HAS NO OBLIGATIONS TO PROVIDE MAINTENANCE, SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS. pg-1.5.5/.irbrc0000644000004100000410000000067314563476204013303 0ustar www-datawww-data#!/usr/bin/ruby -*- ruby -*- BEGIN { require 'pathname' require 'rbconfig' basedir = Pathname.new( __FILE__ ).dirname.expand_path libdir = basedir + "lib" puts ">>> Adding #{libdir} to load path..." $LOAD_PATH.unshift( libdir.to_s ) } # Try to require the 'pg' library begin $stderr.puts "Loading pg..." require 'pg' rescue => e $stderr.puts "Ack! pg library failed to load: #{e.message}\n\t" + e.backtrace.join( "\n\t" ) end pg-1.5.5/README-Windows.rdoc0000644000004100000410000000426014563476204015433 0ustar www-datawww-data= Compiling 'pg' on MS Windows In order to build this extension on MS Windows you will need a couple things. First, a compiler. For the one click installer this means you should use the DevKit or the compiler that comes with cygwin if you're building on that platform. If you've built Ruby yourself, you should use the same compiler to build this library that you used to build Ruby. Second, PostgreSQL. Be sure you installed it with the development header files if you installed it using the standard PostgreSQL installer for Windows. If you didn't, you can run the installer again, select "modify", and then select the 'development headers' option to install them. 
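You can quickly check whether pg_config is visible to the build before running rake. The following Ruby snippet is only an illustrative sketch, not part of the gem's build machinery:

```ruby
# Search each PATH entry for pg_config (pg_config.exe on Windows).
exe_names = ["pg_config", "pg_config.exe"]
dirs = ENV.fetch("PATH", "").split(File::PATH_SEPARATOR)
found = dirs.find do |dir|
	exe_names.any? {|exe| File.file?(File.join(dir, exe)) }
end
if found
	puts "pg_config found in #{found}"
else
	puts "pg_config not found on PATH"
end
```

If it isn't found, you can either add the PostgreSQL bin directory to your PATH or point the build at your installation explicitly, as described below.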
I recommend making sure that 'pg_config.exe' is in your PATH. The PostgreSQL installer for Windows does not necessarily update your PATH when it installs itself, so you may need to do this manually. This isn't strictly necessary, however. In order to build ruby-pg, just run 'rake'. If the pg_config.exe executable is not in your PATH, you'll need to explicitly point ruby-pg to where your PostgreSQL headers and libraries are with something like this: rake --with-pg-dir=c:/progra~1/postgr~1/8.3 Adjust your path accordingly. BE SURE TO USE THE SHORT PATH NAMES! If you try to use a path with spaces in it, the nmake.exe program will choke. == Building binary 'pg' gems for MS Windows Binary gems for windows can be built on Linux, OS-X and even on Windows with the help of docker. This is how regular windows gems are built for rubygems.org . To do this, install boot2docker {on Windows}[https://github.com/boot2docker/windows-installer/releases] or {on OS X}[https://github.com/boot2docker/osx-installer/releases] and make sure it is started. A native Docker installation is best on Linux. Then run: rake gem:windows This will download a docker image suited for building windows gems, and it will download and build OpenSSL and PostgreSQL. Finally the gem is built containing binaries for all supported ruby versions. == Reporting Problems If you have any problems you can submit them via {the project's issue-tracker}[https://github.com/ged/ruby-pg/issues]. And submit questions, problems, or solutions, so that it can be improved. pg-1.5.5/data.tar.gz.sig0000444000004100000410000000040014563476204015004 0ustar www-datawww-datau 6" Il10Tn<ʨ4v8U&Oϻtb_$,3}V3'= 1.16", "< 3.0" gem "rake-compiler", "~> 1.0" gem "rake-compiler-dock", "~> 1.0" gem "rdoc", "~> 6.4" gem "rspec", "~> 3.5" end pg-1.5.5/LICENSE0000644000004100000410000000471014563476204013202 0ustar www-datawww-dataRuby is copyrighted free software by Yukihiro Matsumoto . 
You can redistribute it and/or modify it under either the terms of the 2-clause BSDL (see the file BSDL), or the conditions below: 1. You may make and give away verbatim copies of the source form of the software without restriction, provided that you duplicate all of the original copyright notices and associated disclaimers. 2. You may modify your copy of the software in any way, provided that you do at least ONE of the following: a) place your modifications in the Public Domain or otherwise make them Freely Available, such as by posting said modifications to Usenet or an equivalent medium, or by allowing the author to include your modifications in the software. b) use the modified software only within your corporation or organization. c) give non-standard binaries non-standard names, with instructions on where to get the original software distribution. d) make other distribution arrangements with the author. 3. You may distribute the software in object code or binary form, provided that you do at least ONE of the following: a) distribute the binaries and library files of the software, together with instructions (in the manual page or equivalent) on where to get the original distribution. b) accompany the distribution with the machine-readable source of the software. c) give non-standard binaries non-standard names, with instructions on where to get the original software distribution. d) make other distribution arrangements with the author. 4. You may modify and include the part of the software into any other software (possibly commercial). But some files in the distribution are not written by the author, so that they are not under these terms. For the list of those files and their copying conditions, see the file LEGAL. 5. 
The scripts and library files supplied as input to or produced as output from the software do not automatically fall under the copyright of the software, but belong to whomever generated them, and may be sold commercially, and may be aggregated with this software. 6. THIS SOFTWARE IS PROVIDED "AS IS" AND WITHOUT ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. pg-1.5.5/ext/0000755000004100000410000000000014563476204012773 5ustar www-datawww-datapg-1.5.5/ext/gvl_wrappers.h0000644000004100000410000002247114563476204015665 0ustar www-datawww-data/* * gvl_wrappers.h - Wrapper functions for locking/unlocking the Ruby GVL * * These are some obscure preprocessor directives that allow to generate * drop-in replacement wrapper functions in a declarative manner. * These wrapper functions ensure that ruby's GVL is released on each * function call and reacquired at the end of the call or in callbacks. * This way blocking functions calls don't block concurrent ruby threads. * * The wrapper of each function is prefixed by "gvl_". * * Use "gcc -E" to retrieve the generated code. 
*/ #ifndef __gvl_wrappers_h #define __gvl_wrappers_h #include #ifdef RUBY_EXTCONF_H # include RUBY_EXTCONF_H #endif #define DEFINE_PARAM_LIST1(type, name) \ name, #define DEFINE_PARAM_LIST2(type, name) \ p->params.name, #define DEFINE_PARAM_LIST3(type, name) \ type name, #define DEFINE_PARAM_DECL(type, name) \ type name; #define DEFINE_GVL_WRAPPER_STRUCT(name, when_non_void, rettype, lastparamtype, lastparamname) \ struct gvl_wrapper_##name##_params { \ struct { \ FOR_EACH_PARAM_OF_##name(DEFINE_PARAM_DECL) \ lastparamtype lastparamname; \ } params; \ when_non_void( rettype retval; ) \ }; #define DEFINE_GVL_SKELETON(name, when_non_void, rettype, lastparamtype, lastparamname) \ static void * gvl_##name##_skeleton( void *data ){ \ struct gvl_wrapper_##name##_params *p = (struct gvl_wrapper_##name##_params*)data; \ when_non_void( p->retval = ) \ name( FOR_EACH_PARAM_OF_##name(DEFINE_PARAM_LIST2) p->params.lastparamname ); \ return NULL; \ } #ifdef ENABLE_GVL_UNLOCK #define DEFINE_GVL_STUB(name, when_non_void, rettype, lastparamtype, lastparamname) \ rettype gvl_##name(FOR_EACH_PARAM_OF_##name(DEFINE_PARAM_LIST3) lastparamtype lastparamname){ \ struct gvl_wrapper_##name##_params params = { \ {FOR_EACH_PARAM_OF_##name(DEFINE_PARAM_LIST1) lastparamname}, when_non_void((rettype)0) \ }; \ rb_thread_call_without_gvl(gvl_##name##_skeleton, ¶ms, RUBY_UBF_IO, 0); \ when_non_void( return params.retval; ) \ } #else #define DEFINE_GVL_STUB(name, when_non_void, rettype, lastparamtype, lastparamname) \ rettype gvl_##name(FOR_EACH_PARAM_OF_##name(DEFINE_PARAM_LIST3) lastparamtype lastparamname){ \ when_non_void( return ) \ name( FOR_EACH_PARAM_OF_##name(DEFINE_PARAM_LIST1) lastparamname ); \ } #endif #define DEFINE_GVL_STUB_DECL(name, when_non_void, rettype, lastparamtype, lastparamname) \ rettype gvl_##name(FOR_EACH_PARAM_OF_##name(DEFINE_PARAM_LIST3) lastparamtype lastparamname); #define DEFINE_GVLCB_SKELETON(name, when_non_void, rettype, lastparamtype, lastparamname) \ static 
void * gvl_##name##_skeleton( void *data ){ \ struct gvl_wrapper_##name##_params *p = (struct gvl_wrapper_##name##_params*)data; \ when_non_void( p->retval = ) \ name( FOR_EACH_PARAM_OF_##name(DEFINE_PARAM_LIST2) p->params.lastparamname ); \ return NULL; \ } #ifdef ENABLE_GVL_UNLOCK #define DEFINE_GVLCB_STUB(name, when_non_void, rettype, lastparamtype, lastparamname) \ rettype gvl_##name(FOR_EACH_PARAM_OF_##name(DEFINE_PARAM_LIST3) lastparamtype lastparamname){ \ struct gvl_wrapper_##name##_params params = { \ {FOR_EACH_PARAM_OF_##name(DEFINE_PARAM_LIST1) lastparamname}, when_non_void((rettype)0) \ }; \ rb_thread_call_with_gvl(gvl_##name##_skeleton, ¶ms); \ when_non_void( return params.retval; ) \ } #else #define DEFINE_GVLCB_STUB(name, when_non_void, rettype, lastparamtype, lastparamname) \ rettype gvl_##name(FOR_EACH_PARAM_OF_##name(DEFINE_PARAM_LIST3) lastparamtype lastparamname){ \ when_non_void( return ) \ name( FOR_EACH_PARAM_OF_##name(DEFINE_PARAM_LIST1) lastparamname ); \ } #endif #define GVL_TYPE_VOID(string) #define GVL_TYPE_NONVOID(string) string /* * Definitions of blocking functions and their parameters */ #define FOR_EACH_PARAM_OF_PQconnectdb(param) #define FOR_EACH_PARAM_OF_PQconnectStart(param) #define FOR_EACH_PARAM_OF_PQconnectPoll(param) #define FOR_EACH_PARAM_OF_PQreset(param) #define FOR_EACH_PARAM_OF_PQresetStart(param) #define FOR_EACH_PARAM_OF_PQresetPoll(param) #define FOR_EACH_PARAM_OF_PQping(param) #define FOR_EACH_PARAM_OF_PQexec(param) \ param(PGconn *, conn) #define FOR_EACH_PARAM_OF_PQexecParams(param) \ param(PGconn *, conn) \ param(const char *, command) \ param(int, nParams) \ param(const Oid *, paramTypes) \ param(const char * const *, paramValues) \ param(const int *, paramLengths) \ param(const int *, paramFormats) #define FOR_EACH_PARAM_OF_PQexecPrepared(param) \ param(PGconn *, conn) \ param(const char *, stmtName) \ param(int, nParams) \ param(const char * const *, paramValues) \ param(const int *, paramLengths) \ param(const 
int *, paramFormats) #define FOR_EACH_PARAM_OF_PQprepare(param) \ param(PGconn *, conn) \ param(const char *, stmtName) \ param(const char *, query) \ param(int, nParams) #define FOR_EACH_PARAM_OF_PQdescribePrepared(param) \ param(PGconn *, conn) #define FOR_EACH_PARAM_OF_PQdescribePortal(param) \ param(PGconn *, conn) #define FOR_EACH_PARAM_OF_PQgetResult(param) #define FOR_EACH_PARAM_OF_PQputCopyData(param) \ param(PGconn *, conn) \ param(const char *, buffer) #define FOR_EACH_PARAM_OF_PQputCopyEnd(param) \ param(PGconn *, conn) #define FOR_EACH_PARAM_OF_PQgetCopyData(param) \ param(PGconn *, conn) \ param(char **, buffer) #define FOR_EACH_PARAM_OF_PQnotifies(param) #define FOR_EACH_PARAM_OF_PQsendQuery(param) \ param(PGconn *, conn) #define FOR_EACH_PARAM_OF_PQsendQueryParams(param) \ param(PGconn *, conn) \ param(const char *, command) \ param(int, nParams) \ param(const Oid *, paramTypes) \ param(const char *const *, paramValues) \ param(const int *, paramLengths) \ param(const int *, paramFormats) #define FOR_EACH_PARAM_OF_PQsendPrepare(param) \ param(PGconn *, conn) \ param(const char *, stmtName) \ param(const char *, query) \ param(int, nParams) #define FOR_EACH_PARAM_OF_PQsendQueryPrepared(param) \ param(PGconn *, conn) \ param(const char *, stmtName) \ param(int, nParams) \ param(const char *const *, paramValues) \ param(const int *, paramLengths) \ param(const int *, paramFormats) #define FOR_EACH_PARAM_OF_PQsendDescribePrepared(param) \ param(PGconn *, conn) #define FOR_EACH_PARAM_OF_PQsendDescribePortal(param) \ param(PGconn *, conn) #define FOR_EACH_PARAM_OF_PQsetClientEncoding(param) \ param(PGconn *, conn) #define FOR_EACH_PARAM_OF_PQisBusy(param) #define FOR_EACH_PARAM_OF_PQencryptPasswordConn(param) \ param(PGconn *, conn) \ param(const char *, passwd) \ param(const char *, user) #define FOR_EACH_PARAM_OF_PQcancel(param) \ param(PGcancel *, cancel) \ param(char *, errbuf) /* function( name, void_or_nonvoid, returntype, lastparamtype, 
lastparamname ) */ #define FOR_EACH_BLOCKING_FUNCTION(function) \ function(PQconnectdb, GVL_TYPE_NONVOID, PGconn *, const char *, conninfo) \ function(PQconnectStart, GVL_TYPE_NONVOID, PGconn *, const char *, conninfo) \ function(PQconnectPoll, GVL_TYPE_NONVOID, PostgresPollingStatusType, PGconn *, conn) \ function(PQreset, GVL_TYPE_VOID, void, PGconn *, conn) \ function(PQresetStart, GVL_TYPE_NONVOID, int, PGconn *, conn) \ function(PQresetPoll, GVL_TYPE_NONVOID, PostgresPollingStatusType, PGconn *, conn) \ function(PQping, GVL_TYPE_NONVOID, PGPing, const char *, conninfo) \ function(PQexec, GVL_TYPE_NONVOID, PGresult *, const char *, command) \ function(PQexecParams, GVL_TYPE_NONVOID, PGresult *, int, resultFormat) \ function(PQexecPrepared, GVL_TYPE_NONVOID, PGresult *, int, resultFormat) \ function(PQprepare, GVL_TYPE_NONVOID, PGresult *, const Oid *, paramTypes) \ function(PQdescribePrepared, GVL_TYPE_NONVOID, PGresult *, const char *, stmtName) \ function(PQdescribePortal, GVL_TYPE_NONVOID, PGresult *, const char *, portalName) \ function(PQgetResult, GVL_TYPE_NONVOID, PGresult *, PGconn *, conn) \ function(PQputCopyData, GVL_TYPE_NONVOID, int, int, nbytes) \ function(PQputCopyEnd, GVL_TYPE_NONVOID, int, const char *, errormsg) \ function(PQgetCopyData, GVL_TYPE_NONVOID, int, int, async) \ function(PQnotifies, GVL_TYPE_NONVOID, PGnotify *, PGconn *, conn) \ function(PQsendQuery, GVL_TYPE_NONVOID, int, const char *, query) \ function(PQsendQueryParams, GVL_TYPE_NONVOID, int, int, resultFormat) \ function(PQsendPrepare, GVL_TYPE_NONVOID, int, const Oid *, paramTypes) \ function(PQsendQueryPrepared, GVL_TYPE_NONVOID, int, int, resultFormat) \ function(PQsendDescribePrepared, GVL_TYPE_NONVOID, int, const char *, stmt) \ function(PQsendDescribePortal, GVL_TYPE_NONVOID, int, const char *, portal) \ function(PQsetClientEncoding, GVL_TYPE_NONVOID, int, const char *, encoding) \ function(PQisBusy, GVL_TYPE_NONVOID, int, PGconn *, conn) \ 
function(PQencryptPasswordConn, GVL_TYPE_NONVOID, char *, const char *, algorithm) \ function(PQcancel, GVL_TYPE_NONVOID, int, int, errbufsize); FOR_EACH_BLOCKING_FUNCTION( DEFINE_GVL_STUB_DECL ); /* * Definitions of callback functions and their parameters */ #define FOR_EACH_PARAM_OF_notice_processor_proxy(param) \ param(void *, arg) #define FOR_EACH_PARAM_OF_notice_receiver_proxy(param) \ param(void *, arg) /* function( name, void_or_nonvoid, returntype, lastparamtype, lastparamname ) */ #define FOR_EACH_CALLBACK_FUNCTION(function) \ function(notice_processor_proxy, GVL_TYPE_VOID, void, const char *, message) \ function(notice_receiver_proxy, GVL_TYPE_VOID, void, const PGresult *, result) \ FOR_EACH_CALLBACK_FUNCTION( DEFINE_GVL_STUB_DECL ); #endif /* end __gvl_wrappers_h */ pg-1.5.5/ext/pg_record_coder.c0000644000004100000410000004500614563476204016264 0ustar www-datawww-data/* * pg_record_coder.c - PG::Coder class extension * */ #include "pg.h" VALUE rb_cPG_RecordCoder; VALUE rb_cPG_RecordEncoder; VALUE rb_cPG_RecordDecoder; typedef struct { t_pg_coder comp; VALUE typemap; } t_pg_recordcoder; static void pg_recordcoder_mark( void *_this ) { t_pg_recordcoder *this = (t_pg_recordcoder *)_this; rb_gc_mark_movable(this->typemap); } static size_t pg_recordcoder_memsize( const void *_this ) { const t_pg_recordcoder *this = (const t_pg_recordcoder *)_this; return sizeof(*this); } static void pg_recordcoder_compact( void *_this ) { t_pg_recordcoder *this = (t_pg_recordcoder *)_this; pg_coder_compact(&this->comp); pg_gc_location(this->typemap); } static const rb_data_type_t pg_recordcoder_type = { "PG::RecordCoder", { pg_recordcoder_mark, RUBY_TYPED_DEFAULT_FREE, pg_recordcoder_memsize, pg_compact_callback(pg_recordcoder_compact), }, &pg_coder_type, 0, RUBY_TYPED_FREE_IMMEDIATELY | RUBY_TYPED_WB_PROTECTED | PG_RUBY_TYPED_FROZEN_SHAREABLE, }; static VALUE pg_recordcoder_encoder_allocate( VALUE klass ) { t_pg_recordcoder *this; VALUE self = TypedData_Make_Struct( klass, 
t_pg_recordcoder, &pg_recordcoder_type, this ); pg_coder_init_encoder( self ); RB_OBJ_WRITE(self, &this->typemap, pg_typemap_all_strings); return self; } static VALUE pg_recordcoder_decoder_allocate( VALUE klass ) { t_pg_recordcoder *this; VALUE self = TypedData_Make_Struct( klass, t_pg_recordcoder, &pg_recordcoder_type, this ); pg_coder_init_decoder( self ); RB_OBJ_WRITE(self, &this->typemap, pg_typemap_all_strings); return self; } /* * call-seq: * coder.type_map = map * * Defines how single columns are encoded or decoded. * +map+ must be a kind of PG::TypeMap . * * Defaults to a PG::TypeMapAllStrings , so that PG::TextEncoder::String respectively * PG::TextDecoder::String is used for encoding/decoding of each column. * */ static VALUE pg_recordcoder_type_map_set(VALUE self, VALUE type_map) { t_pg_recordcoder *this = RTYPEDDATA_DATA( self ); rb_check_frozen(self); if ( !rb_obj_is_kind_of(type_map, rb_cTypeMap) ){ rb_raise( rb_eTypeError, "wrong elements type %s (expected some kind of PG::TypeMap)", rb_obj_classname( type_map ) ); } RB_OBJ_WRITE(self, &this->typemap, type_map); return type_map; } /* * call-seq: * coder.type_map -> PG::TypeMap * * The PG::TypeMap that will be used for encoding and decoding of columns. */ static VALUE pg_recordcoder_type_map_get(VALUE self) { t_pg_recordcoder *this = RTYPEDDATA_DATA( self ); return this->typemap; } /* * Document-class: PG::TextEncoder::Record < PG::RecordEncoder * * This class encodes one record of columns for transmission as query parameter in text format. * See PostgreSQL {Composite Types}[https://www.postgresql.org/docs/current/rowtypes.html] for a description of the format and how it can be used. * * PostgreSQL allows composite types to be used in many of the same ways that simple types can be used. * For example, a column of a table can be declared to be of a composite type. * * The encoder expects the record columns as array of values. * The single values are encoded as defined in the assigned #type_map. 
* If no type_map was assigned, all values are converted to strings by PG::TextEncoder::String. * * It is possible to manually assign a type encoder for each column per PG::TypeMapByColumn, * or to make use of PG::BasicTypeMapBasedOnResult to assign them based on the table OIDs. * * Encode a record from an Array to a +String+ in PostgreSQL Composite Type format (uses default type map TypeMapAllStrings): * PG::TextEncoder::Record.new.encode([1, 2]) # => "(\"1\",\"2\")" * * Encode a record from Array to +String+ : * # Build a type map for two Floats * tm = PG::TypeMapByColumn.new([PG::TextEncoder::Float.new]*2) * # Use this type map to encode the record: * PG::TextEncoder::Record.new(type_map: tm).encode([1,2]) * # => "(\"1.0\",\"2.0\")" * * Records can also be encoded and decoded directly to and from the database. * This avoids intermediate string allocations and is very fast. * Take the following type and table definitions: * conn.exec("CREATE TYPE complex AS (r float, i float) ") * conn.exec("CREATE TABLE my_table (v1 complex, v2 complex) ") * * A record can be encoded by adding a type map to Connection#exec_params and siblings: * # Build a type map for the two floats "r" and "i" as in our "complex" type * tm = PG::TypeMapByColumn.new([PG::TextEncoder::Float.new]*2) * # Build a record encoder to encode this type as a record: * enco = PG::TextEncoder::Record.new(type_map: tm) * # Insert table data and use the encoder to cast the complex value "v1" from ruby array: * conn.exec_params("INSERT INTO my_table VALUES ($1) RETURNING v1", [[1,2]], 0, PG::TypeMapByColumn.new([enco])).to_a * # => [{"v1"=>"(1,2)"}] * * Alternatively the typemap can be build based on database OIDs rather than manually assigning encoders. 
* # Fetch a NULL record of our type to retrieve the OIDs of the two fields "r" and "i" * oids = conn.exec( "SELECT (NULL::complex).*" ) * # Build a type map (PG::TypeMapByColumn) for encoding the "complex" type * etm = PG::BasicTypeMapBasedOnResult.new(conn).build_column_map( oids ) * * It's also possible to use the BasicTypeMapForQueries to send records to the database server. * In contrast to ORM libraries, PG doesn't have information regarding the type of data the server is expecting. * So BasicTypeMapForQueries works based on the class of the values to be sent and it has to be instructed that a ruby array shall be casted to a record. * # Retrieve OIDs of all basic types from the database * etm = PG::BasicTypeMapForQueries.new(conn) * etm.encode_array_as = :record * # Apply the basic type registry to all values sent to the server * conn.type_map_for_queries = etm * # Send a complex number as an array of two integers * conn.exec_params("INSERT INTO my_table VALUES ($1) RETURNING v1", [[1,2]]).to_a * # => [{"v1"=>"(1,2)"}] * * Records can also be nested or further wrapped into other encoders like PG::TextEncoder::CopyRow. * * See also PG::TextDecoder::Record for the decoding direction. */ static int pg_text_enc_record(t_pg_coder *conv, VALUE value, char *out, VALUE *intermediate, int enc_idx) { t_pg_recordcoder *this = (t_pg_recordcoder *)conv; t_pg_coder_enc_func enc_func; static t_pg_coder *p_elem_coder; int i; t_typemap *p_typemap; char *current_out; char *end_capa_ptr; p_typemap = RTYPEDDATA_DATA( this->typemap ); p_typemap->funcs.fit_to_query( this->typemap, value ); /* Allocate a new string with embedded capacity and realloc exponential when needed. 
*/ PG_RB_STR_NEW( *intermediate, current_out, end_capa_ptr ); PG_ENCODING_SET_NOCHECK(*intermediate, enc_idx); PG_RB_STR_ENSURE_CAPA( *intermediate, 1, current_out, end_capa_ptr ); *current_out++ = '('; for( i=0; i<RARRAY_LEN(value); i++){ char *ptr1; char *ptr2; int strlen; int backslashs; VALUE subint; VALUE entry = rb_ary_entry(value, i); if( i > 0 ){ PG_RB_STR_ENSURE_CAPA( *intermediate, 1, current_out, end_capa_ptr ); *current_out++ = ','; } switch(TYPE(entry)){ case T_NIL: /* emit nothing... */ break; default: p_elem_coder = p_typemap->funcs.typecast_query_param(p_typemap, entry, i); enc_func = pg_coder_enc_func(p_elem_coder); /* 1st pass for retrieving the required memory space */ strlen = enc_func(p_elem_coder, entry, NULL, &subint, enc_idx); if( strlen == -1 ){ /* we can directly use String value in subint */ strlen = RSTRING_LEN(subint); /* size of string assuming the worst case, that every character must be escaped. */ PG_RB_STR_ENSURE_CAPA( *intermediate, strlen * 2 + 2, current_out, end_capa_ptr ); *current_out++ = '"'; /* Copy the string from subint, doubling quotes and backslashes */ for(ptr1 = RSTRING_PTR(subint); ptr1 < RSTRING_PTR(subint) + strlen; ptr1++) { if (*ptr1 == '"' || *ptr1 == '\\') { *current_out++ = *ptr1; } *current_out++ = *ptr1; } *current_out++ = '"'; } else { /* 2nd pass for writing the data to the prepared buffer */ /* size of string assuming the worst case, that every character must be escaped. */ PG_RB_STR_ENSURE_CAPA( *intermediate, strlen * 2 + 2, current_out, end_capa_ptr ); *current_out++ = '"'; /* Place the unescaped string at current output position. */ strlen = enc_func(p_elem_coder, entry, current_out, &subint, enc_idx); ptr1 = current_out; ptr2 = current_out + strlen; /* count required backslashes */ for(backslashs = 0; ptr1 != ptr2; ptr1++) { /* Quote and backslash characters must be doubled in the record format.
*/ if(*ptr1 == '"' || *ptr1 == '\\'){ backslashs++; } } ptr1 = current_out + strlen; ptr2 = current_out + strlen + backslashs; current_out = ptr2; /* Then store the escaped string at the final position, walking * right to left, until all backslashes are placed. */ while( ptr1 != ptr2 ) { *--ptr2 = *--ptr1; if(*ptr1 == '"' || *ptr1 == '\\'){ *--ptr2 = *ptr1; } } *current_out++ = '"'; } } } PG_RB_STR_ENSURE_CAPA( *intermediate, 1, current_out, end_capa_ptr ); *current_out++ = ')'; rb_str_set_len( *intermediate, current_out - RSTRING_PTR(*intermediate) ); return -1; } /* * record_isspace() --- a non-locale-dependent isspace() * * We used to use isspace() for parsing array values, but that has * undesirable results: an array value might be silently interpreted * differently depending on the locale setting. Now we just hard-wire * the traditional ASCII definition of isspace(). */ static int record_isspace(char ch) { if (ch == ' ' || ch == '\t' || ch == '\n' || ch == '\r' || ch == '\v' || ch == '\f') return 1; return 0; } /* * Document-class: PG::TextDecoder::Record < PG::RecordDecoder * * This class decodes one record of values received from a composite type column in text format. * See PostgreSQL {Composite Types}[https://www.postgresql.org/docs/current/rowtypes.html] for a description of the format and how it can be used. * * PostgreSQL allows composite types to be used in many of the same ways that simple types can be used. * For example, a column of a table can be declared to be of a composite type. * * The columns are returned from the decoder as an array of values. * The single values are decoded as defined in the assigned #type_map. * If no type_map was assigned, all values are converted to strings by PG::TextDecoder::String.
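The quoting performed by pg_text_enc_record above can be modeled in plain Ruby. This is a simplified illustration, not the gem's API; `encode_record_field` and `encode_record` are made-up names. Each non-NULL field is wrapped in double quotes, with quote and backslash characters doubled, and NULL is sent as a completely empty field:

```ruby
# Plain-Ruby sketch of the record text format produced by pg_text_enc_record
# (illustration only, not part of the pg gem).
def encode_record_field(value)
  return "" if value.nil?                              # NULL: completely empty field
  '"' + value.to_s.gsub(/["\\]/) { |c| c * 2 } + '"'   # double quotes and backslashes
end

def encode_record(fields)
  "(" + fields.map { |f| encode_record_field(f) }.join(",") + ")"
end

encode_record([1, 2])  # => "(\"1\",\"2\")"
```

The output for `[1, 2]` matches the example in the encoder documentation above.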
* * Decode a record in Composite Type format from +String+ to Array (uses default type map TypeMapAllStrings): * PG::TextDecoder::Record.new.decode("(1,2)") # => ["1", "2"] * * Decode a record from +String+ to Array : * # Build a type map for two Floats * tm = PG::TypeMapByColumn.new([PG::TextDecoder::Float.new]*2) * # Use this type map to decode the record: * PG::TextDecoder::Record.new(type_map: tm).decode("(1,2)") * # => [1.0, 2.0] * * Records can also be encoded and decoded directly to and from the database. * This avoids intermediate String allocations and is very fast. * Take the following type and table definitions: * conn.exec("CREATE TYPE complex AS (r float, i float) ") * conn.exec("CREATE TABLE my_table (v1 complex, v2 complex) ") * conn.exec("INSERT INTO my_table VALUES((2,3), (4,5)), ((6,7), (8,9)) ") * * The record can be decoded by applying a type map to the PG::Result object: * # Build a type map for two floats "r" and "i" * tm = PG::TypeMapByColumn.new([PG::TextDecoder::Float.new]*2) * # Build a record decoder to decode this two-value type: * deco = PG::TextDecoder::Record.new(type_map: tm) * # Fetch table data and use the decoder to cast the two complex values "v1" and "v2": * conn.exec("SELECT * FROM my_table").map_types!(PG::TypeMapByColumn.new([deco]*2)).to_a * # => [{"v1"=>[2.0, 3.0], "v2"=>[4.0, 5.0]}, {"v1"=>[6.0, 7.0], "v2"=>[8.0, 9.0]}] * * It's even more convenient to use the PG::BasicTypeRegistry, which is based on database OIDs.
* # Fetch a NULL record of our type to retrieve the OIDs of the two fields "r" and "i" * oids = conn.exec( "SELECT (NULL::complex).*" ) * # Build a type map (PG::TypeMapByColumn) for decoding the "complex" type * dtm = PG::BasicTypeMapForResults.new(conn).build_column_map( oids ) * # Build a type map and populate with basic types * btr = PG::BasicTypeRegistry.new.register_default_types * # Register a new record decoder for decoding our type "complex" * btr.register_coder(PG::TextDecoder::Record.new(type_map: dtm, name: "complex")) * # Apply our basic type registry to all results retrieved from the server * conn.type_map_for_results = PG::BasicTypeMapForResults.new(conn, registry: btr) * # Now queries decode the "complex" type (and many basic types) automatically * conn.exec("SELECT * FROM my_table").to_a * # => [{"v1"=>[2.0, 3.0], "v2"=>[4.0, 5.0]}, {"v1"=>[6.0, 7.0], "v2"=>[8.0, 9.0]}] * * Records can also be nested or further wrapped into other decoders like PG::TextDecoder::CopyRow. * * See also PG::TextEncoder::Record for the encoding direction (data sent to the server). */ /* * Parse the current line into separate attributes (fields), * performing de-escaping as needed. * * All fields are gathered into a ruby Array. The de-escaped field data is written * into a ruby String. This object is reused for non-string columns. * For String columns the field value is directly used as the return value and no * reuse of the memory is done.
* * The parser is thankfully borrowed from the PostgreSQL sources: * src/backend/utils/adt/rowtypes.c */ static VALUE pg_text_dec_record(t_pg_coder *conv, char *input_line, int len, int _tuple, int _field, int enc_idx) { t_pg_recordcoder *this = (t_pg_recordcoder *)conv; /* Return value: array */ VALUE array; /* Current field */ VALUE field_str; int fieldno; int expected_fields; char *output_ptr; char *cur_ptr; char *end_capa_ptr; t_typemap *p_typemap; p_typemap = RTYPEDDATA_DATA( this->typemap ); expected_fields = p_typemap->funcs.fit_to_copy_get( this->typemap ); /* The received input string will probably have expected_fields fields. */ array = rb_ary_new2(expected_fields); /* Allocate a new string with embedded capacity and realloc later with * exponentially growing size when needed. */ PG_RB_STR_NEW( field_str, output_ptr, end_capa_ptr ); /* set pointer variables for loop */ cur_ptr = input_line; /* * Scan the string. We use field_str to accumulate the de-quoted data for * each column, which is then fed to the appropriate input converter.
*/ /* Allow leading whitespace */ while (*cur_ptr && record_isspace(*cur_ptr)) cur_ptr++; if (*cur_ptr++ != '(') rb_raise( rb_eArgError, "malformed record literal: \"%s\" - Missing left parenthesis.", input_line ); for (fieldno = 0; ; fieldno++) { /* Check for null: completely empty input means null */ if (*cur_ptr == ',' || *cur_ptr == ')') { rb_ary_push(array, Qnil); } else { /* Extract string for this column */ int inquote = 0; VALUE field_value; while (inquote || !(*cur_ptr == ',' || *cur_ptr == ')')) { char ch = *cur_ptr++; if (ch == '\0') rb_raise( rb_eArgError, "malformed record literal: \"%s\" - Unexpected end of input.", input_line ); if (ch == '\\') { if (*cur_ptr == '\0') rb_raise( rb_eArgError, "malformed record literal: \"%s\" - Unexpected end of input.", input_line ); PG_RB_STR_ENSURE_CAPA( field_str, 1, output_ptr, end_capa_ptr ); *output_ptr++ = *cur_ptr++; } else if (ch == '"') { if (!inquote) inquote = 1; else if (*cur_ptr == '"') { /* doubled quote within quote sequence */ PG_RB_STR_ENSURE_CAPA( field_str, 1, output_ptr, end_capa_ptr ); *output_ptr++ = *cur_ptr++; } else inquote = 0; } else { PG_RB_STR_ENSURE_CAPA( field_str, 1, output_ptr, end_capa_ptr ); /* Add ch to output string */ *output_ptr++ = ch; } } /* Convert the column value */ rb_str_set_len( field_str, output_ptr - RSTRING_PTR(field_str) ); field_value = p_typemap->funcs.typecast_copy_get( p_typemap, field_str, fieldno, 0, enc_idx ); rb_ary_push(array, field_value); if( field_value == field_str ){ /* Our output string will be sent to the user, so we cannot reuse * it for the next field. */ PG_RB_STR_NEW( field_str, output_ptr, end_capa_ptr ); } /* Reset the pointer to the start of the output/buffer string.
*/ output_ptr = RSTRING_PTR(field_str); } /* Skip comma that separates prior field from this one */ if (*cur_ptr == ',') { cur_ptr++; } else if (*cur_ptr == ')') { cur_ptr++; /* Done if we hit closing parenthesis */ break; } else { rb_raise( rb_eArgError, "malformed record literal: \"%s\" - Too few columns.", input_line ); } } /* Allow trailing whitespace */ while (*cur_ptr && record_isspace(*cur_ptr)) cur_ptr++; if (*cur_ptr) rb_raise( rb_eArgError, "malformed record literal: \"%s\" - Junk after right parenthesis.", input_line ); return array; } void init_pg_recordcoder(void) { /* Document-class: PG::RecordCoder < PG::Coder * * This is the base class for all type cast classes for record data. */ rb_cPG_RecordCoder = rb_define_class_under( rb_mPG, "RecordCoder", rb_cPG_Coder ); rb_define_method( rb_cPG_RecordCoder, "type_map=", pg_recordcoder_type_map_set, 1 ); rb_define_method( rb_cPG_RecordCoder, "type_map", pg_recordcoder_type_map_get, 0 ); /* Document-class: PG::RecordEncoder < PG::RecordCoder */ rb_cPG_RecordEncoder = rb_define_class_under( rb_mPG, "RecordEncoder", rb_cPG_RecordCoder ); rb_define_alloc_func( rb_cPG_RecordEncoder, pg_recordcoder_encoder_allocate ); /* Document-class: PG::RecordDecoder < PG::RecordCoder */ rb_cPG_RecordDecoder = rb_define_class_under( rb_mPG, "RecordDecoder", rb_cPG_RecordCoder ); rb_define_alloc_func( rb_cPG_RecordDecoder, pg_recordcoder_decoder_allocate ); /* Make RDoc aware of the encoder classes...
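The parsing loop of pg_text_dec_record above can be sketched in plain Ruby. This is a simplified illustration, not the gem's API; `parse_record` is a made-up name, and type casting via the type map is omitted (every field comes back as a String or nil):

```ruby
# Plain-Ruby sketch of the record parser borrowed from PostgreSQL's rowtypes.c
# (illustration only, not part of the pg gem).
def parse_record(line)
  s = line.strip
  raise ArgumentError, "missing left parenthesis" unless s[0] == "("
  i = 1
  fields = []
  loop do
    if s[i] == "," || s[i] == ")"
      fields << nil                       # completely empty field means NULL
    else
      buf = +""
      inquote = false
      while inquote || !(s[i] == "," || s[i] == ")")
        ch = s[i]
        raise ArgumentError, "unexpected end of input" if ch.nil?
        i += 1
        if ch == "\\"                     # backslash escapes the next character
          raise ArgumentError, "unexpected end of input" if s[i].nil?
          buf << s[i]
          i += 1
        elsif ch == '"'
          if !inquote
            inquote = true
          elsif s[i] == '"'               # doubled quote within quote sequence
            buf << s[i]
            i += 1
          else
            inquote = false
          end
        else
          buf << ch
        end
      end
      fields << buf
    end
    if s[i] == ","
      i += 1                              # skip comma separating fields
    else
      break                               # must be ")" here
    end
  end
  fields
end

parse_record('(1,"a,b",)')  # => ["1", "a,b", nil]
```

Note that, as in the C parser, a completely empty field is NULL, while an empty quoted field (`""`) decodes to an empty string.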
*/ /* rb_mPG_TextEncoder = rb_define_module_under( rb_mPG, "TextEncoder" ); */ /* dummy = rb_define_class_under( rb_mPG_TextEncoder, "Record", rb_cPG_RecordEncoder ); */ pg_define_coder( "Record", pg_text_enc_record, rb_cPG_RecordEncoder, rb_mPG_TextEncoder ); /* rb_mPG_TextDecoder = rb_define_module_under( rb_mPG, "TextDecoder" ); */ /* dummy = rb_define_class_under( rb_mPG_TextDecoder, "Record", rb_cPG_RecordDecoder ); */ pg_define_coder( "Record", pg_text_dec_record, rb_cPG_RecordDecoder, rb_mPG_TextDecoder ); } pg-1.5.5/ext/pg_errors.c/* * pg_errors.c - Definition and lookup of error classes. * */ #include "pg.h" VALUE rb_hErrors; VALUE rb_ePGerror; VALUE rb_eServerError; VALUE rb_eUnableToSend; VALUE rb_eConnectionBad; VALUE rb_eInvalidResultStatus; VALUE rb_eNoResultError; VALUE rb_eInvalidChangeOfResultFields; static VALUE define_error_class(const char *name, const char *baseclass_code) { VALUE baseclass = rb_eServerError; if(baseclass_code) { baseclass = rb_hash_aref( rb_hErrors, rb_str_new2(baseclass_code) ); } return rb_define_class_under( rb_mPG, name, baseclass ); } static void register_error_class(const char *code, VALUE klass) { rb_hash_aset( rb_hErrors, rb_str_new2(code), klass ); } /* Find a proper error class for the given SQLSTATE string */ VALUE lookup_error_class(const char *sqlstate) { VALUE klass; if(sqlstate) { /* Find the proper error class by the 5-character SQLSTATE. */ klass = rb_hash_aref( rb_hErrors, rb_str_new2(sqlstate) ); if(NIL_P(klass)) { /* The given SQLSTATE couldn't be found. This might happen if * the server side uses a newer version than the client. * Try to find an error class by using the 2-character SQLSTATE. */ klass = rb_hash_aref( rb_hErrors, rb_str_new(sqlstate, 2) ); if(NIL_P(klass)) { /* Also the 2-character SQLSTATE is unknown. * Use the generic server error instead.
*/ klass = rb_eServerError; } } } else { /* Unable to retrieve the PG_DIAG_SQLSTATE. * Use the generic error instead. */ klass = rb_eUnableToSend; } return klass; } void init_pg_errors(void) { rb_hErrors = rb_hash_new(); rb_define_const( rb_mPG, "ERROR_CLASSES", rb_hErrors ); rb_ePGerror = rb_define_class_under( rb_mPG, "Error", rb_eStandardError ); /************************* * PG::Error *************************/ rb_define_alias( rb_ePGerror, "error", "message" ); rb_define_attr( rb_ePGerror, "connection", 1, 0 ); rb_define_attr( rb_ePGerror, "result", 1, 0 ); rb_eServerError = rb_define_class_under( rb_mPG, "ServerError", rb_ePGerror ); rb_eUnableToSend = rb_define_class_under( rb_mPG, "UnableToSend", rb_ePGerror ); rb_eConnectionBad = rb_define_class_under( rb_mPG, "ConnectionBad", rb_ePGerror ); rb_eInvalidResultStatus = rb_define_class_under( rb_mPG, "InvalidResultStatus", rb_ePGerror ); rb_eNoResultError = rb_define_class_under( rb_mPG, "NoResultError", rb_ePGerror ); rb_eInvalidChangeOfResultFields = rb_define_class_under( rb_mPG, "InvalidChangeOfResultFields", rb_ePGerror ); #include "errorcodes.def" } pg-1.5.5/ext/pg_util.h0000644000004100000410000000427514563476204014617 0ustar www-datawww-data/* * utils.h * */ #ifndef __utils_h #define __utils_h #define write_nbo16(l,c) ( \ *((unsigned char*)(c)+0)=(unsigned char)(((l)>>8)&0xff), \ *((unsigned char*)(c)+1)=(unsigned char)(((l) )&0xff)\ ) #define write_nbo32(l,c) ( \ *((unsigned char*)(c)+0)=(unsigned char)(((l)>>24L)&0xff), \ *((unsigned char*)(c)+1)=(unsigned char)(((l)>>16L)&0xff), \ *((unsigned char*)(c)+2)=(unsigned char)(((l)>> 8L)&0xff), \ *((unsigned char*)(c)+3)=(unsigned char)(((l) )&0xff)\ ) #define write_nbo64(l,c) ( \ *((unsigned char*)(c)+0)=(unsigned char)(((l)>>56LL)&0xff), \ *((unsigned char*)(c)+1)=(unsigned char)(((l)>>48LL)&0xff), \ *((unsigned char*)(c)+2)=(unsigned char)(((l)>>40LL)&0xff), \ *((unsigned char*)(c)+3)=(unsigned char)(((l)>>32LL)&0xff), \ *((unsigned 
char*)(c)+4)=(unsigned char)(((l)>>24LL)&0xff), \ *((unsigned char*)(c)+5)=(unsigned char)(((l)>>16LL)&0xff), \ *((unsigned char*)(c)+6)=(unsigned char)(((l)>> 8LL)&0xff), \ *((unsigned char*)(c)+7)=(unsigned char)(((l) )&0xff)\ ) #define read_nbo16(c) ((int16_t)( \ (((uint16_t)(*((unsigned char*)(c)+0)))<< 8L) | \ (((uint16_t)(*((unsigned char*)(c)+1))) ) \ )) #define read_nbo32(c) ((int32_t)( \ (((uint32_t)(*((unsigned char*)(c)+0)))<<24L) | \ (((uint32_t)(*((unsigned char*)(c)+1)))<<16L) | \ (((uint32_t)(*((unsigned char*)(c)+2)))<< 8L) | \ (((uint32_t)(*((unsigned char*)(c)+3))) ) \ )) #define read_nbo64(c) ((int64_t)( \ (((uint64_t)(*((unsigned char*)(c)+0)))<<56LL) | \ (((uint64_t)(*((unsigned char*)(c)+1)))<<48LL) | \ (((uint64_t)(*((unsigned char*)(c)+2)))<<40LL) | \ (((uint64_t)(*((unsigned char*)(c)+3)))<<32LL) | \ (((uint64_t)(*((unsigned char*)(c)+4)))<<24LL) | \ (((uint64_t)(*((unsigned char*)(c)+5)))<<16LL) | \ (((uint64_t)(*((unsigned char*)(c)+6)))<< 8LL) | \ (((uint64_t)(*((unsigned char*)(c)+7))) ) \ )) #define BASE64_ENCODED_SIZE(strlen) (((strlen) + 2) / 3 * 4) #define BASE64_DECODED_SIZE(base64len) (((base64len) + 3) / 4 * 3) void base64_encode( char *out, const char *in, int len); int base64_decode( char *out, const char *in, unsigned int len); int rbpg_strncasecmp(const char *s1, const char *s2, size_t n); #endif /* end __utils_h */ pg-1.5.5/ext/pg_text_encoder.c0000644000004100000410000006021014563476204016307 0ustar www-datawww-data/* * pg_text_encoder.c - PG::TextEncoder module * $Id$ * */ /* * * Type casts for encoding Ruby objects to PostgreSQL string representations. * * Encoder classes are defined with pg_define_coder(). This creates a new coder class and * assigns an encoder function. The encoder function can decide between two different options * to return the encoded data. It can either return it as a Ruby String object or write the * encoded data to a memory space provided by the caller. 
In the second case, the encoder * function is called twice, once for deciding the encoding option and returning the expected * data length, and a second time when the requested memory space was made available by the * calling function, to do the actual conversion and writing. Parameter intermediate can be * used to store data between these two calls. * * Signature of all type cast encoders is: * int encoder_function(t_pg_coder *this, VALUE value, char *out, VALUE *intermediate, int enc_idx) * * Params: * this - The data part of the coder object that belongs to the encoder function. * value - The Ruby object to cast. * out - NULL for the first call, * pointer to a buffer with the requested size for the second call. * intermediate - Pointer to a VALUE that might be set by the encoding function to some * value in the first call that can be retrieved later in the second call. * This VALUE is not yet initialized by the caller. * enc_idx - Index of the output Encoding that strings should be converted to. * * Returns: * >= 0 - If out==NULL the encoder function must return the expected output buffer size. * This can be larger than the size of the second call, but may not be smaller. * If out!=NULL the encoder function must return the actually used output buffer size * without a termination character. * -1 - The encoder function can alternatively return -1 to indicate that no second call * is required, but the String value in *intermediate should be used instead.
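The two-pass protocol described above can be modeled with a toy Ruby function. This is an illustration only; the real encoders are C functions, and `toy_int_encode` is a made-up name. The first call (out == nil) reports the required buffer size and stashes state in `inter`; the second call writes into the provided buffer and returns the length actually used:

```ruby
# Toy model of the two-pass encoder protocol (illustration only, not the C API).
def toy_int_encode(value, out, inter)
  if out.nil?                      # 1st pass: report required buffer size
    inter[:num] = Integer(value)
    inter[:num].to_s.bytesize      # upper bound on the output size (here exact)
  else                             # 2nd pass: write into the provided buffer
    str = inter[:num].to_s
    out[0, str.bytesize] = str
    str.bytesize                   # actually used size, no terminator
  end
end

inter = {}
size = toy_int_encode("-42", nil, inter)   # 1st pass: caller reserves space
buf  = "\0" * size
used = toy_int_encode("-42", buf, inter)   # 2nd pass: conversion and writing
buf[0, used]                               # => "-42"
```

Returning -1 from the first call (not shown) corresponds to handing the caller a finished String in `intermediate` instead of requesting a buffer.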
*/ #include "pg.h" #include "pg_util.h" #ifdef HAVE_INTTYPES_H #include <inttypes.h> #endif #include <math.h> VALUE rb_mPG_TextEncoder; static ID s_id_encode; static ID s_id_to_i; static ID s_id_to_s; static VALUE s_cBigDecimal; static VALUE s_str_F; static int pg_text_enc_integer(t_pg_coder *this, VALUE value, char *out, VALUE *intermediate, int enc_idx); VALUE pg_obj_to_i( VALUE value ) { switch (TYPE(value)) { case T_FIXNUM: case T_FLOAT: case T_BIGNUM: return value; default: return rb_funcall(value, s_id_to_i, 0); } } /* * Document-class: PG::TextEncoder::Boolean < PG::SimpleEncoder * * This is the encoder class for the PostgreSQL bool type. * * Ruby value false is encoded as SQL +FALSE+ value. * Ruby value true is encoded as SQL +TRUE+ value. * Any other value is sent as its string representation. * */ static int pg_text_enc_boolean(t_pg_coder *this, VALUE value, char *out, VALUE *intermediate, int enc_idx) { switch( TYPE(value) ){ case T_FALSE: if(out) *out = 'f'; return 1; case T_TRUE: if(out) *out = 't'; return 1; case T_FIXNUM: case T_BIGNUM: if( NUM2LONG(value) == 0 ){ if(out) *out = '0'; return 1; } else if( NUM2LONG(value) == 1 ){ if(out) *out = '1'; return 1; } else { return pg_text_enc_integer(this, value, out, intermediate, enc_idx); } default: return pg_coder_enc_to_s(this, value, out, intermediate, enc_idx); } /* never reached */ return 0; } /* * Document-class: PG::TextEncoder::String < PG::SimpleEncoder * * This is the encoder class for the PostgreSQL text types. * * Non-String values are expected to have method +to_s+ defined.
* */ int pg_coder_enc_to_s(t_pg_coder *this, VALUE value, char *out, VALUE *intermediate, int enc_idx) { VALUE str = rb_obj_as_string(value); if( ENCODING_GET(str) == enc_idx ){ *intermediate = str; }else{ *intermediate = rb_str_export_to_enc(str, rb_enc_from_index(enc_idx)); } return -1; } static int count_leading_zero_bits(unsigned long long x) { #if defined(__GNUC__) || defined(__clang__) return __builtin_clzll(x); #elif defined(_MSC_VER) DWORD r = 0; _BitScanReverse64(&r, x); return (int)(63 - r); #else unsigned int a; for(a=0; a < sizeof(unsigned long long) * 8; a++){ if( x & (1ULL << (sizeof(unsigned long long) * 8 - 1))) return a; x <<= 1; } return a; #endif } /* * Document-class: PG::TextEncoder::Integer < PG::SimpleEncoder * * This is the encoder class for the PostgreSQL integer types. * * Non-Integer values are expected to have method +to_i+ defined. * */ static int pg_text_enc_integer(t_pg_coder *this, VALUE value, char *out, VALUE *intermediate, int enc_idx) { if(out){ if(TYPE(*intermediate) == T_STRING){ return pg_coder_enc_to_s(this, value, out, intermediate, enc_idx); }else{ char *start = out; int len; int neg = 0; long long sll = NUM2LL(*intermediate); unsigned long long ll; if (sll < 0) { /* Avoid problems with the most negative integer not being representable * as a positive integer, by using unsigned long long for encoding. */ ll = -sll; neg = 1; } else { ll = sll; } /* Compute the result string backwards. */ do { unsigned long long remainder; unsigned long long oldval = ll; ll /= 10; remainder = oldval - ll * 10; *out++ = '0' + remainder; } while (ll != 0); if (neg) *out++ = '-'; len = (int)(out - start); /* Reverse string. */ out--; while (start < out) { char swap = *start; *start++ = *out; *out-- = swap; } return len; } }else{ *intermediate = pg_obj_to_i(value); if(TYPE(*intermediate) == T_FIXNUM){ long long sll = NUM2LL(*intermediate); unsigned long long ll = sll < 0 ?
-sll : sll; int len = (sizeof(unsigned long long) * 8 - count_leading_zero_bits(ll)) / 3; return sll < 0 ? len+2 : len+1; }else{ return pg_coder_enc_to_s(this, *intermediate, NULL, intermediate, enc_idx); } } } #define MAX_DOUBLE_DIGITS 16 /* * Document-class: PG::TextEncoder::Float < PG::SimpleEncoder * * This is the encoder class for the PostgreSQL float types. * */ static int pg_text_enc_float(t_pg_coder *conv, VALUE value, char *out, VALUE *intermediate, int enc_idx) { if(out){ double dvalue = NUM2DBL(value); int len = 0; int neg = 0; int exp2i, exp10i, i; unsigned long long ll, remainder, oldval; VALUE intermediate; /* Cast to the same strings as value.to_s . */ if( isinf(dvalue) ){ if( dvalue < 0 ){ memcpy( out, "-Infinity", 9); return 9; } else { memcpy( out, "Infinity", 8); return 8; } } else if (isnan(dvalue)) { memcpy( out, "NaN", 3); return 3; } /* * The following computation is roughly a conversion kind of * sprintf( out, "%.16E", dvalue); */ /* write the algebraic sign */ if( dvalue < 0 ) { dvalue = -dvalue; *out++ = '-'; neg++; } /* retrieve the power of 2 exponent */ frexp(dvalue, &exp2i); /* compute the power of 10 exponent */ exp10i = (int)floor(exp2i * 0.30102999566398114); /* Math.log(2)/Math.log(10) */ /* move the decimal point, so that we get an integer of MAX_DOUBLE_DIGITS decimal digits */ ll = (unsigned long long)(dvalue * pow(10, MAX_DOUBLE_DIGITS - 1 - exp10i) + 0.5); /* avoid leading zeros due to inaccuracy of deriving exp10i from exp2i */ /* otherwise we would print "09.0" instead of "9.0" */ if( ll < 1000000000000000 ){ /* pow(10, MAX_DOUBLE_DIGITS-1) */ exp10i--; ll *= 10; } if( exp10i <= -5 || exp10i >= 15 ) { /* Write the float in exponent format (1.23e45) */ /* write fraction digits from right to left */ for( i = MAX_DOUBLE_DIGITS; i > 1; i--){ oldval = ll; ll /= 10; remainder = oldval - ll * 10; /* omit trailing zeros */ if(remainder != 0 || len ) { out[i] = '0' + remainder; len++; } } /* write decimal point */ if( len ){ out[1] = 
'.'; len++; } /* write the remaining single digit to the left of the decimal point */ oldval = ll; ll /= 10; remainder = oldval - ll * 10; out[0] = '0' + remainder; len++; /* write exponent */ out[len++] = 'e'; intermediate = INT2NUM(exp10i); return neg + len + pg_text_enc_integer(conv, Qnil, out + len, &intermediate, enc_idx); } else { /* write the float in non-exponent format (0.001234 or 123450.0) */ /* write digits from right to left */ int lz = exp10i < 0 ? 0 : exp10i; for( i = MAX_DOUBLE_DIGITS - (exp10i < 0 ? exp10i : 0); i >= 0; i-- ){ oldval = ll; ll /= 10; remainder = oldval - ll * 10; /* write decimal point */ if( i - 1 == lz ){ out[i--] = '.'; len++; } /* if possible then omit trailing zeros */ if(remainder != 0 || len || i - 2 == lz) { out[i] = '0' + remainder; len++; } } return neg + len; } }else{ return 1 /*sign*/ + MAX_DOUBLE_DIGITS + 1 /*dot*/ + 1 /*e*/ + 1 /*exp sign*/ + 3 /*exp digits*/; } } /* * Document-class: PG::TextEncoder::Numeric < PG::SimpleEncoder * * This is the encoder class for the PostgreSQL numeric types. * * It converts Integer, Float and BigDecimal objects. * All other objects are expected to respond to +to_s+.
*/ static int pg_text_enc_numeric(t_pg_coder *this, VALUE value, char *out, VALUE *intermediate, int enc_idx) { switch(TYPE(value)){ case T_FIXNUM: case T_BIGNUM: return pg_text_enc_integer(this, value, out, intermediate, enc_idx); case T_FLOAT: return pg_text_enc_float(this, value, out, intermediate, enc_idx); default: if(out){ /* second pass */ rb_bug("unexpected value type: %d", TYPE(value)); } else { /* first pass */ if( rb_obj_is_kind_of(value, s_cBigDecimal) ){ /* value.to_s('F') */ *intermediate = rb_funcall(value, s_id_to_s, 1, s_str_F); return -1; /* no second pass */ } else { return pg_coder_enc_to_s(this, value, NULL, intermediate, enc_idx); /* no second pass */ } } } } /* called per autoload when TextEncoder::Numeric is used */ static VALUE init_pg_text_encoder_numeric(VALUE rb_mPG_TextDecoder) { s_str_F = rb_str_freeze(rb_str_new_cstr("F")); rb_global_variable(&s_str_F); rb_require("bigdecimal"); s_cBigDecimal = rb_const_get(rb_cObject, rb_intern("BigDecimal")); /* dummy = rb_define_class_under( rb_mPG_TextEncoder, "Numeric", rb_cPG_SimpleEncoder ); */ pg_define_coder( "Numeric", pg_text_enc_numeric, rb_cPG_SimpleEncoder, rb_mPG_TextEncoder ); return Qnil; } static const char hextab[] = { '0', '1', '2', '3', '4', '5', '6', '7', '8', '9', 'a', 'b', 'c', 'd', 'e', 'f' }; /* * Document-class: PG::TextEncoder::Bytea < PG::SimpleEncoder * * This is an encoder class for the PostgreSQL +bytea+ type. * * The binary String is converted to hexadecimal representation for transmission * in text format. For query bind parameters it is recommended to use * PG::BinaryEncoder::Bytea or the hash form {value: binary_string, format: 1} instead, * in order to decrease network traffic and CPU usage. * See PG::Connection#exec_params for using the hash form. * * This encoder is particularly useful when PG::TextEncoder::CopyRow is used with the COPY command.
* In this case there's no way to change the format of a single column to binary, so the data has to be converted to bytea hex representation. * */ static int pg_text_enc_bytea(t_pg_coder *conv, VALUE value, char *out, VALUE *intermediate, int enc_idx) { if(out){ size_t strlen = RSTRING_LEN(*intermediate); char *iptr = RSTRING_PTR(*intermediate); char *eptr = iptr + strlen; char *optr = out; *optr++ = '\\'; *optr++ = 'x'; for( ; iptr < eptr; iptr++ ){ unsigned char c = *iptr; *optr++ = hextab[c >> 4]; *optr++ = hextab[c & 0xf]; } return (int)(optr - out); }else{ *intermediate = rb_obj_as_string(value); /* The output starts with "\x" and each character is converted to hex. */ return 2 + RSTRING_LENINT(*intermediate) * 2; } } typedef int (*t_quote_func)( void *_this, char *p_in, int strlen, char *p_out ); static int quote_array_buffer( void *_this, char *p_in, int strlen, char *p_out ){ t_pg_composite_coder *this = _this; char *ptr1; char *ptr2; int backslashs = 0; int needquote; /* count data plus backslashes; detect chars needing quotes */ if (strlen == 0) needquote = 1; /* force quotes for empty string */ else if (strlen == 4 && rbpg_strncasecmp(p_in, "NULL", strlen) == 0) needquote = 1; /* force quotes for literal NULL */ else needquote = 0; /* count required backslashes */ for(ptr1 = p_in; ptr1 != p_in + strlen; ptr1++) { char ch = *ptr1; if (ch == '"' || ch == '\\'){ needquote = 1; backslashs++; } else if (ch == '{' || ch == '}' || ch == this->delimiter || ch == ' ' || ch == '\t' || ch == '\n' || ch == '\r' || ch == '\v' || ch == '\f'){ needquote = 1; } } if( needquote ){ ptr1 = p_in + strlen; ptr2 = p_out + strlen + backslashs + 2; /* Write end quote */ *--ptr2 = '"'; /* Then store the escaped string at the final position, walking * right to left, until all backslashes are placed.
*/ while( ptr1 != p_in ) { *--ptr2 = *--ptr1; if(*ptr2 == '"' || *ptr2 == '\\'){ *--ptr2 = '\\'; } } /* Write start quote */ *p_out = '"'; return strlen + backslashs + 2; } else { if( p_in != p_out ) memcpy( p_out, p_in, strlen ); return strlen; } } static char * quote_string(t_pg_coder *this, VALUE value, VALUE string, char *current_out, int with_quote, t_quote_func quote_buffer, void *func_data, int enc_idx) { int strlen; VALUE subint; t_pg_coder_enc_func enc_func = pg_coder_enc_func(this); strlen = enc_func(this, value, NULL, &subint, enc_idx); if( strlen == -1 ){ /* we can directly use String value in subint */ strlen = RSTRING_LENINT(subint); if(with_quote){ /* size of string assuming the worst case, that every character must be escaped. */ current_out = pg_rb_str_ensure_capa( string, strlen * 2 + 2, current_out, NULL ); current_out += quote_buffer( func_data, RSTRING_PTR(subint), strlen, current_out ); } else { current_out = pg_rb_str_ensure_capa( string, strlen, current_out, NULL ); memcpy( current_out, RSTRING_PTR(subint), strlen ); current_out += strlen; } } else { if(with_quote){ /* size of string assuming the worst case, that every character must be escaped * plus two bytes for quotation. */ current_out = pg_rb_str_ensure_capa( string, 2 * strlen + 2, current_out, NULL ); /* Place the unescaped string at current output position. 
*/ strlen = enc_func(this, value, current_out, &subint, enc_idx); current_out += quote_buffer( func_data, current_out, strlen, current_out ); }else{ /* size of the unquoted string */ current_out = pg_rb_str_ensure_capa( string, strlen, current_out, NULL ); current_out += enc_func(this, value, current_out, &subint, enc_idx); } } return current_out; } static char * write_array(t_pg_composite_coder *this, VALUE value, char *current_out, VALUE string, int quote, int enc_idx) { int i; /* size of "{}" */ current_out = pg_rb_str_ensure_capa( string, 2, current_out, NULL ); *current_out++ = '{'; for( i=0; i<RARRAY_LEN(value); i++){ VALUE entry = rb_ary_entry(value, i); if( i > 0 ){ current_out = pg_rb_str_ensure_capa( string, 1, current_out, NULL ); *current_out++ = this->delimiter; } switch(TYPE(entry)){ case T_ARRAY: current_out = write_array(this, entry, current_out, string, quote, enc_idx); break; case T_NIL: current_out = pg_rb_str_ensure_capa( string, 4, current_out, NULL ); *current_out++ = 'N'; *current_out++ = 'U'; *current_out++ = 'L'; *current_out++ = 'L'; break; default: current_out = quote_string( this->elem, entry, string, current_out, quote, quote_array_buffer, this, enc_idx ); } } current_out = pg_rb_str_ensure_capa( string, 1, current_out, NULL ); *current_out++ = '}'; return current_out; } /* * Document-class: PG::TextEncoder::Array < PG::CompositeEncoder * * This is the encoder class for PostgreSQL array types. * * All values are encoded according to the #elements_type * accessor. Sub-arrays are encoded recursively. * * This encoder expects an Array of values or sub-arrays as input. * Other values are passed through as text without interpretation.
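The element-quoting rules implemented by quote_array_buffer and write_array above can be sketched in plain Ruby. This is an illustration only, not the gem's API; `quote_array_element` and `encode_text_array` are made-up names. An element is quoted when it is empty, equal to NULL in any case, or contains quotes, backslashes, braces, the delimiter or whitespace; inside quotes, `"` and `\` are backslash-escaped:

```ruby
# Plain-Ruby sketch of PostgreSQL text array quoting (illustration only).
def quote_array_element(str, delimiter = ",")
  special = /["\\{}\s]|#{Regexp.escape(delimiter)}/
  if str.empty? || str.casecmp("NULL").zero? || str.match?(special)
    '"' + str.gsub(/["\\]/) { |c| "\\" + c } + '"'   # backslash-escape " and \
  else
    str
  end
end

def encode_text_array(elems)
  "{" + elems.map { |e| e.nil? ? "NULL" : quote_array_element(e.to_s) }.join(",") + "}"
end

encode_text_array(["a", "b c", nil])  # => "{a,\"b c\",NULL}"
```

Quoting the literal string "NULL" distinguishes it from an actual NULL element, which is written unquoted.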
* */ static int pg_text_enc_array(t_pg_coder *conv, VALUE value, char *out, VALUE *intermediate, int enc_idx) { char *end_ptr; t_pg_composite_coder *this = (t_pg_composite_coder *)conv; if( TYPE(value) == T_ARRAY){ VALUE out_str = rb_str_new(NULL, 0); PG_ENCODING_SET_NOCHECK(out_str, enc_idx); end_ptr = write_array(this, value, RSTRING_PTR(out_str), out_str, this->needs_quotation, enc_idx); rb_str_set_len( out_str, end_ptr - RSTRING_PTR(out_str) ); *intermediate = out_str; return -1; } else { return pg_coder_enc_to_s( conv, value, out, intermediate, enc_idx ); } } static char * quote_identifier( VALUE value, VALUE out_string, char *current_out ){ char *p_in = RSTRING_PTR(value); size_t strlen = RSTRING_LEN(value); char *p_inend = p_in + strlen; char *end_capa = current_out; PG_RB_STR_ENSURE_CAPA( out_string, strlen + 2, current_out, end_capa ); *current_out++ = '"'; for(; p_in != p_inend; p_in++) { char c = *p_in; if (c == '"'){ PG_RB_STR_ENSURE_CAPA( out_string, p_inend - p_in + 2, current_out, end_capa ); *current_out++ = '"'; } else if (c == 0){ rb_raise(rb_eArgError, "string contains null byte"); } *current_out++ = c; } PG_RB_STR_ENSURE_CAPA( out_string, 1, current_out, end_capa ); *current_out++ = '"'; return current_out; } static char * pg_text_enc_array_identifier(VALUE value, VALUE string, char *out, int enc_idx) { long i; long nr_elems; Check_Type(value, T_ARRAY); nr_elems = RARRAY_LEN(value); for( i=0; i<nr_elems; i++){ VALUE entry = rb_ary_entry( value, i ); StringValue(entry); if( ENCODING_GET(entry) != enc_idx ){ entry = rb_str_export_to_enc(entry, rb_enc_from_index(enc_idx)); } out = quote_identifier(entry, string, out); if( i < nr_elems-1 ){ out = pg_rb_str_ensure_capa( string, 1, out, NULL ); *out++ = '.'; } } return out; } /* * Document-class: PG::TextEncoder::Identifier < PG::SimpleEncoder * * This is the encoder class for PostgreSQL identifiers. * * An Array value can be used for identifiers of the kind "schema.table.column". * This ensures that each element is quoted separately: * quote_ident = PG::TextEncoder::Identifier.new * quote_ident.encode(['schema', 'table', 'column']) # => '"schema"."table"."column"' * * This encoder can also be used per PG::Connection#quote_ident .
 */
int
pg_text_enc_identifier(t_pg_coder *this, VALUE value, char *out, VALUE *intermediate, int enc_idx)
{
	VALUE out_str;
	UNUSED( this );

	if( TYPE(value) == T_ARRAY){
		out_str = rb_str_new(NULL, 0);
		out = RSTRING_PTR(out_str);
		out = pg_text_enc_array_identifier(value, out_str, out, enc_idx);
	} else {
		StringValue(value);
		if( ENCODING_GET(value) != enc_idx ){
			value = rb_str_export_to_enc(value, rb_enc_from_index(enc_idx));
		}
		out_str = rb_str_new(NULL, RSTRING_LEN(value) + 2);
		out = RSTRING_PTR(out_str);
		out = quote_identifier(value, out_str, out);
	}
	rb_str_set_len( out_str, out - RSTRING_PTR(out_str) );
	PG_ENCODING_SET_NOCHECK(out_str, enc_idx);
	*intermediate = out_str;
	return -1;
}

static int
quote_literal_buffer( void *_this, char *p_in, int strlen, char *p_out ){
	char *ptr1;
	char *ptr2;
	int backslashs = 0;

	/* count single quotes that must be doubled */
	for(ptr1 = p_in; ptr1 != p_in + strlen; ptr1++) {
		if (*ptr1 == '\''){
			backslashs++;
		}
	}

	ptr1 = p_in + strlen;
	ptr2 = p_out + strlen + backslashs + 2;
	/* Write end quote */
	*--ptr2 = '\'';

	/* Then store the escaped string on the final position, walking
	 * right to left, until all quote characters are doubled.
	 */
	while( ptr1 != p_in ) {
		*--ptr2 = *--ptr1;
		if(*ptr2 == '\''){
			*--ptr2 = '\'';
		}
	}
	/* Write start quote */
	*p_out = '\'';
	return strlen + backslashs + 2;
}

/*
 * Document-class: PG::TextEncoder::QuotedLiteral < PG::CompositeEncoder
 *
 * This is the encoder class for PostgreSQL literals.
 *
 * A literal is quoted and escaped by the ' character, so that it can be inserted into SQL queries.
 * It is equivalent to PG::Connection#escape_literal, but integrates into the type cast system of ruby-pg.
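 *
 * A usage sketch (not part of the original docs; embedded ' characters are
 * doubled and the whole value is wrapped in single quotes):
 *   PG::TextEncoder::QuotedLiteral.new.encode("O'Reilly")
 *   # => "'O''Reilly'"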
 *
 * Both expressions have the same result:
 *   conn.escape_literal(PG::TextEncoder::Array.new.encode(["v1","v2"])) # => "'{v1,v2}'"
 *   PG::TextEncoder::QuotedLiteral.new(elements_type: PG::TextEncoder::Array.new).encode(["v1","v2"]) # => "'{v1,v2}'"
 * While escape_literal requires an intermediate ruby string allocation, QuotedLiteral encodes the values directly to the result string.
 *
 */
static int
pg_text_enc_quoted_literal(t_pg_coder *conv, VALUE value, char *out, VALUE *intermediate, int enc_idx)
{
	t_pg_composite_coder *this = (t_pg_composite_coder *)conv;
	VALUE out_str = rb_str_new(NULL, 0);
	PG_ENCODING_SET_NOCHECK(out_str, enc_idx);

	out = RSTRING_PTR(out_str);
	out = quote_string(this->elem, value, out_str, out, this->needs_quotation, quote_literal_buffer, this, enc_idx);
	rb_str_set_len( out_str, out - RSTRING_PTR(out_str) );
	*intermediate = out_str;
	return -1;
}

/*
 * Document-class: PG::TextEncoder::ToBase64 < PG::CompositeEncoder
 *
 * This is an encoder class for conversion of binary data to base64.
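 *
 * A usage sketch (not part of the original docs; assumes the default
 * element encoder, which passes the string through unchanged):
 *   PG::TextEncoder::ToBase64.new.encode("xyz")
 *   # => "eHl6"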
* */ static int pg_text_enc_to_base64(t_pg_coder *conv, VALUE value, char *out, VALUE *intermediate, int enc_idx) { int strlen; VALUE subint; t_pg_composite_coder *this = (t_pg_composite_coder *)conv; t_pg_coder_enc_func enc_func = pg_coder_enc_func(this->elem); if(out){ /* Second encoder pass, if required */ strlen = enc_func(this->elem, value, out, intermediate, enc_idx); base64_encode( out, out, strlen ); return BASE64_ENCODED_SIZE(strlen); } else { /* First encoder pass */ strlen = enc_func(this->elem, value, NULL, &subint, enc_idx); if( strlen == -1 ){ /* Encoded string is returned in subint */ VALUE out_str; strlen = RSTRING_LENINT(subint); out_str = rb_str_new(NULL, BASE64_ENCODED_SIZE(strlen)); PG_ENCODING_SET_NOCHECK(out_str, enc_idx); base64_encode( RSTRING_PTR(out_str), RSTRING_PTR(subint), strlen); *intermediate = out_str; return -1; } else { *intermediate = subint; return BASE64_ENCODED_SIZE(strlen); } } } void init_pg_text_encoder(void) { s_id_encode = rb_intern("encode"); s_id_to_i = rb_intern("to_i"); s_id_to_s = rb_intern("to_s"); /* This module encapsulates all encoder classes with text output format */ rb_mPG_TextEncoder = rb_define_module_under( rb_mPG, "TextEncoder" ); rb_define_private_method(rb_singleton_class(rb_mPG_TextEncoder), "init_numeric", init_pg_text_encoder_numeric, 0); /* Make RDoc aware of the encoder classes... 
*/ /* dummy = rb_define_class_under( rb_mPG_TextEncoder, "Boolean", rb_cPG_SimpleEncoder ); */ pg_define_coder( "Boolean", pg_text_enc_boolean, rb_cPG_SimpleEncoder, rb_mPG_TextEncoder ); /* dummy = rb_define_class_under( rb_mPG_TextEncoder, "Integer", rb_cPG_SimpleEncoder ); */ pg_define_coder( "Integer", pg_text_enc_integer, rb_cPG_SimpleEncoder, rb_mPG_TextEncoder ); /* dummy = rb_define_class_under( rb_mPG_TextEncoder, "Float", rb_cPG_SimpleEncoder ); */ pg_define_coder( "Float", pg_text_enc_float, rb_cPG_SimpleEncoder, rb_mPG_TextEncoder ); /* dummy = rb_define_class_under( rb_mPG_TextEncoder, "String", rb_cPG_SimpleEncoder ); */ pg_define_coder( "String", pg_coder_enc_to_s, rb_cPG_SimpleEncoder, rb_mPG_TextEncoder ); /* dummy = rb_define_class_under( rb_mPG_TextEncoder, "Bytea", rb_cPG_SimpleEncoder ); */ pg_define_coder( "Bytea", pg_text_enc_bytea, rb_cPG_SimpleEncoder, rb_mPG_TextEncoder ); /* dummy = rb_define_class_under( rb_mPG_TextEncoder, "Identifier", rb_cPG_SimpleEncoder ); */ pg_define_coder( "Identifier", pg_text_enc_identifier, rb_cPG_SimpleEncoder, rb_mPG_TextEncoder ); /* dummy = rb_define_class_under( rb_mPG_TextEncoder, "Array", rb_cPG_CompositeEncoder ); */ pg_define_coder( "Array", pg_text_enc_array, rb_cPG_CompositeEncoder, rb_mPG_TextEncoder ); /* dummy = rb_define_class_under( rb_mPG_TextEncoder, "QuotedLiteral", rb_cPG_CompositeEncoder ); */ pg_define_coder( "QuotedLiteral", pg_text_enc_quoted_literal, rb_cPG_CompositeEncoder, rb_mPG_TextEncoder ); /* dummy = rb_define_class_under( rb_mPG_TextEncoder, "ToBase64", rb_cPG_CompositeEncoder ); */ pg_define_coder( "ToBase64", pg_text_enc_to_base64, rb_cPG_CompositeEncoder, rb_mPG_TextEncoder ); } pg-1.5.5/ext/errorcodes.rb0000644000004100000410000000234714563476204015475 0ustar www-datawww-data# -*- ruby -*- def camelize(lower_case_and_underscored_word) lower_case_and_underscored_word.to_s.gsub(/\/(.?)/) { "::" + $1.upcase }.gsub(/(^|_)(.)/) { $2.upcase } end ec_txt, ec_def = *ARGV 
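# For illustration — a self-contained sketch of what the camelize helper above
# produces. The sample inputs are hypothetical error-condition names in the
# style of errorcodes.txt, not values taken from it:
#
# ```ruby
# # camelize upcases the first letter and each letter following "_",
# # and maps "/" to the Ruby namespace separator "::".
# def camelize(lower_case_and_underscored_word)
#   lower_case_and_underscored_word.to_s.gsub(/\/(.?)/) { "::" + $1.upcase }.gsub(/(^|_)(.)/) { $2.upcase }
# end
#
# camelize("invalid_text_representation")    # => "InvalidTextRepresentation"
# camelize("warning/privilege_not_granted")  # => "Warning::PrivilegeNotGranted"
# ```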
File.open(ec_def, 'w') do |fd_def| fd_def.puts < 0 ){
		int num_tuples = PQntuples(result);
		if( num_tuples > 0 ){
			int pos;
			/* This is a simple heuristic to determine the number of sample fields and subsequently to approximate the memory size taken by all field values of the result set.
			 * Since scanning all field values would have a severe performance impact, only a small subset of fields is retrieved and the result is extrapolated to the whole result set.
			 * The given algorithm has no real scientific background, but is made for speed and typical table layouts.
			 */
			int num_samples = (num_fields < 9 ? num_fields : 39 - count_leading_zero_bits(num_fields-8)) *
					(num_tuples < 8 ? 1 : 30 - count_leading_zero_bits(num_tuples));

			/* start with scanning very last fields, since they are most probably in the cache */
			for( pos = 0; pos < (num_samples+1)/2; pos++ ){
				size += PQgetlength(result, num_tuples - 1 - (pos / num_fields), num_fields - 1 - (pos % num_fields));
			}
			/* scan the very first fields */
			for( pos = 0; pos < num_samples/2; pos++ ){
				size += PQgetlength(result, pos / num_fields, pos % num_fields);
			}
			/* extrapolate sample size to whole result set */
			size = size * num_tuples * num_fields / num_samples;
		}

		/* count metadata */
		size += num_fields * (
				sizeof(PGresAttDesc) + /* column description */
				num_tuples * (
					sizeof(PGresAttValue) + 1 /* ptr, len and zero termination of each value */
				)
		);

		/* Account for free space due to libpq's default block size */
		size = (size + PGRESULT_DATA_BLOCKSIZE - 1) / PGRESULT_DATA_BLOCKSIZE * PGRESULT_DATA_BLOCKSIZE;

		/* count tuple pointers */
		size += sizeof(void*) * ((num_tuples + 128 - 1) / 128 * 128);
	}
	size += 216; /* add PGresult size */

	return size;
}
#endif

/*
 * GC Mark function
 */
static void
pgresult_gc_mark( void *_this )
{
	t_pg_result *this = (t_pg_result *)_this;
	int i;

	rb_gc_mark_movable( this->connection );
	rb_gc_mark_movable( this->typemap );
	rb_gc_mark_movable( this->tuple_hash );
	rb_gc_mark_movable( this->field_map );

	for(
i=0; i < this->nfields; i++ ){ rb_gc_mark_movable( this->fnames[i] ); } } static void pgresult_gc_compact( void *_this ) { t_pg_result *this = (t_pg_result *)_this; int i; pg_gc_location( this->connection ); pg_gc_location( this->typemap ); pg_gc_location( this->tuple_hash ); pg_gc_location( this->field_map ); for( i=0; i < this->nfields; i++ ){ pg_gc_location( this->fnames[i] ); } } /* * GC Free function */ static void pgresult_clear( void *_this ) { t_pg_result *this = (t_pg_result *)_this; if( this->pgresult && !this->autoclear ){ PQclear(this->pgresult); #ifdef HAVE_RB_GC_ADJUST_MEMORY_USAGE rb_gc_adjust_memory_usage(-this->result_size); #endif } this->result_size = 0; this->nfields = -1; this->pgresult = NULL; } static void pgresult_gc_free( void *_this ) { t_pg_result *this = (t_pg_result *)_this; pgresult_clear( this ); xfree(this); } static size_t pgresult_memsize( const void *_this ) { const t_pg_result *this = (const t_pg_result *)_this; /* Ideally the memory 'this' is pointing to should be taken into account as well. * However we don't want to store two memory sizes in t_pg_result just for reporting by ObjectSpace.memsize_of. */ return this->result_size; } static const rb_data_type_t pgresult_type = { "PG::Result", { pgresult_gc_mark, pgresult_gc_free, pgresult_memsize, pg_compact_callback(pgresult_gc_compact), }, 0, 0, RUBY_TYPED_FREE_IMMEDIATELY | RUBY_TYPED_WB_PROTECTED | PG_RUBY_TYPED_FROZEN_SHAREABLE, }; /* Needed by sequel_pg gem, do not delete */ int pg_get_result_enc_idx(VALUE self) { return pgresult_get_this(self)->enc_idx; } /* * Global functions */ /* * Result constructor */ static VALUE pg_new_result2(PGresult *result, VALUE rb_pgconn) { int nfields = result ? PQnfields(result) : 0; VALUE self; t_pg_result *this; this = (t_pg_result *)xmalloc(sizeof(*this) + sizeof(*this->fnames) * nfields); this->pgresult = result; /* Initialize connection and typemap prior to any object allocations, * to make sure valid objects are marked. 
*/ this->connection = rb_pgconn; this->typemap = pg_typemap_all_strings; this->p_typemap = RTYPEDDATA_DATA( this->typemap ); this->nfields = -1; this->tuple_hash = Qnil; this->field_map = Qnil; this->flags = 0; self = TypedData_Wrap_Struct(rb_cPGresult, &pgresult_type, this); if( result ){ t_pg_connection *p_conn = pg_get_connection(rb_pgconn); VALUE typemap = p_conn->type_map_for_results; /* Type check is done when assigned to PG::Connection. */ t_typemap *p_typemap = RTYPEDDATA_DATA(typemap); this->enc_idx = p_conn->enc_idx; typemap = p_typemap->funcs.fit_to_result( typemap, self ); RB_OBJ_WRITE(self, &this->typemap, typemap); this->p_typemap = RTYPEDDATA_DATA( this->typemap ); this->flags = p_conn->flags; } else { this->enc_idx = rb_locale_encindex(); } return self; } VALUE pg_new_result(PGresult *result, VALUE rb_pgconn) { VALUE self = pg_new_result2(result, rb_pgconn); t_pg_result *this = pgresult_get_this(self); this->autoclear = 0; /* Estimate size of underlying pgresult memory storage and account to ruby GC. * There's no need to adjust the GC for xmalloc'ed memory, but libpq is using libc malloc() ruby doesn't know about. */ /* TODO: If someday most systems provide PQresultMemorySize(), it's questionable to store result_size in t_pg_result in addition to the value already stored in PGresult. * For now the memory savings don't justify the ifdefs necessary to support both cases. */ this->result_size = pgresult_approx_size(result); #ifdef HAVE_RB_GC_ADJUST_MEMORY_USAGE rb_gc_adjust_memory_usage(this->result_size); #endif return self; } static VALUE pg_copy_result(t_pg_result *this) { int nfields = this->nfields == -1 ? (this->pgresult ? 
PQnfields(this->pgresult) : 0) : this->nfields;
	size_t len = sizeof(*this) + sizeof(*this->fnames) * nfields;
	t_pg_result *copy;

	copy = (t_pg_result *)xmalloc(len);
	memcpy(copy, this, len);
	this->result_size = 0;

	return TypedData_Wrap_Struct(rb_cPGresult, &pgresult_type, copy);
}

VALUE
pg_new_result_autoclear(PGresult *result, VALUE rb_pgconn)
{
	VALUE self = pg_new_result2(result, rb_pgconn);
	t_pg_result *this = pgresult_get_this(self);

	/* Autocleared results are freed implicitly instead of by PQclear().
	 * So it's not very useful to be accounted by ruby GC.
	 */
	this->result_size = 0;
	this->autoclear = 1;

	return self;
}

/*
 * call-seq:
 *    res.check -> nil
 *
 * Raises an appropriate exception if PG::Result is in a bad state, which is one of:
 * * +PGRES_BAD_RESPONSE+
 * * +PGRES_FATAL_ERROR+
 * * +PGRES_NONFATAL_ERROR+
 * * +PGRES_PIPELINE_ABORTED+
 */
VALUE
pg_result_check( VALUE self )
{
	t_pg_result *this = pgresult_get_this(self);
	VALUE error, exception, klass;
	char * sqlstate;

	if(this->pgresult == NULL)
	{
		PGconn *conn = pg_get_pgconn(this->connection);
		error = rb_str_new2( PQerrorMessage(conn) );
	}
	else
	{
		switch (PQresultStatus(this->pgresult))
		{
			case PGRES_TUPLES_OK:
			case PGRES_COPY_OUT:
			case PGRES_COPY_IN:
			case PGRES_COPY_BOTH:
			case PGRES_SINGLE_TUPLE:
			case PGRES_EMPTY_QUERY:
			case PGRES_COMMAND_OK:
#ifdef HAVE_PQENTERPIPELINEMODE
			case PGRES_PIPELINE_SYNC:
#endif
				return self;

			case PGRES_BAD_RESPONSE:
			case PGRES_FATAL_ERROR:
			case PGRES_NONFATAL_ERROR:
#ifdef HAVE_PQENTERPIPELINEMODE
			case PGRES_PIPELINE_ABORTED:
#endif
				error = rb_str_new2( PQresultErrorMessage(this->pgresult) );
				break;
			default:
				error = rb_str_new2( "internal error : unknown result status." );
		}
	}

	PG_ENCODING_SET_NOCHECK( error, this->enc_idx );

	sqlstate = PQresultErrorField( this->pgresult, PG_DIAG_SQLSTATE );
	klass = lookup_error_class( sqlstate );
	exception = rb_exc_new3( klass, error );
	rb_iv_set( exception, "@connection", this->connection );
	rb_iv_set( exception, "@result", this->pgresult ?
self : Qnil );
	rb_exc_raise( exception );

	/* Not reached */
	return self;
}

/*
 * :TODO: This shouldn't be a global function, but it needs to be as long as pg_new_result
 * doesn't handle blocks, check results, etc. Once connection and result are disentangled
 * a bit more, I can make this a static pgresult_clear() again.
 */

/*
 * call-seq:
 *    res.clear() -> nil
 *
 * Clears the PG::Result object as the result of a query.
 * This frees all underlying memory consumed by the result object.
 * Afterwards access to result methods raises PG::Error "result has been cleared".
 *
 * Explicitly calling #clear can lead to better memory performance, but is not generally necessary.
 * Special care must be taken when PG::Tuple objects are used.
 * In this case #clear must not be called unless all PG::Tuple objects of this result are fully materialized.
 *
 * If PG::Result#autoclear? is +true+ then the result is only marked as cleared but clearing the underlying C struct will happen when the callback returns.
 *
 */
VALUE
pg_result_clear(VALUE self)
{
	t_pg_result *this = pgresult_get_this(self);
	rb_check_frozen(self);
	pgresult_clear( this );
	return Qnil;
}

/*
 * call-seq:
 *    res.freeze
 *
 * Freeze the PG::Result object and unlink the result from the related PG::Connection.
 *
 * A frozen PG::Result object doesn't allow any streaming and it can't be cleared.
 * It also denies setting a type_map or field_name_type.
 *
 */
static VALUE
pg_result_freeze(VALUE self)
{
	t_pg_result *this = pgresult_get_this(self);

	RB_OBJ_WRITE(self, &this->connection, Qnil);
	return rb_call_super(0, NULL);
}

/*
 * call-seq:
 *    res.cleared? -> boolean
 *
 * Returns +true+ if the backend result memory has been freed.
 */
static VALUE
pgresult_cleared_p( VALUE self )
{
	t_pg_result *this = pgresult_get_this(self);
	return this->pgresult ? Qfalse : Qtrue;
}

/*
 * call-seq:
 *    res.autoclear? -> boolean
 *
 * Returns +true+ if the underlying C struct will be cleared at the end of a callback.
 * This applies only to Result objects received by the block of PG::Connection#set_notice_receiver .
 *
 * All other Result objects are automatically cleared by the GC when the object is no longer in use or manually by PG::Result#clear .
 *
 */
static VALUE
pgresult_autoclear_p( VALUE self )
{
	t_pg_result *this = pgresult_get_this(self);
	return this->autoclear ? Qtrue : Qfalse;
}

/*
 * DATA pointer functions
 */

/*
 * Fetch the PG::Result object data pointer and check its
 * PGresult data pointer for sanity.
 */
static t_pg_result *
pgresult_get_this_safe( VALUE self )
{
	t_pg_result *this = pgresult_get_this(self);

	if (this->pgresult == NULL) rb_raise(rb_ePGerror, "result has been cleared");
	return this;
}

/*
 * Fetch the PGresult pointer for the result object and check validity
 *
 * Note: This function is used externally by the sequel_pg gem,
 * so make changes carefully.
 *
 */
PGresult*
pgresult_get(VALUE self)
{
	t_pg_result *this = pgresult_get_this(self);

	if (this->pgresult == NULL) rb_raise(rb_ePGerror, "result has been cleared");
	return this->pgresult;
}

static VALUE pg_cstr_to_sym(char *cstr, unsigned int flags, int enc_idx)
{
	VALUE fname;
#ifdef TRUFFLERUBY
	if( flags & (PG_RESULT_FIELD_NAMES_SYMBOL | PG_RESULT_FIELD_NAMES_STATIC_SYMBOL) ){
#else
	if( flags & PG_RESULT_FIELD_NAMES_SYMBOL ){
		rb_encoding *enc = rb_enc_from_index(enc_idx);
		fname = rb_check_symbol_cstr(cstr, strlen(cstr), enc);
		if( fname == Qnil ){
			fname = rb_str_new2(cstr);
			PG_ENCODING_SET_NOCHECK(fname, enc_idx);
			fname = rb_str_intern(fname);
		}
	} else if( flags & PG_RESULT_FIELD_NAMES_STATIC_SYMBOL ){
#endif
		rb_encoding *enc = rb_enc_from_index(enc_idx);
		fname = ID2SYM(rb_intern3(cstr, strlen(cstr), enc));
	} else {
		fname = rb_str_new2(cstr);
		PG_ENCODING_SET_NOCHECK(fname, enc_idx);
		fname = rb_obj_freeze(fname);
	}
	return fname;
}

static void
pgresult_init_fnames(VALUE self)
{
	t_pg_result *this = pgresult_get_this_safe(self);

	if( this->nfields == -1 ){
		int i;
		int nfields = PQnfields(this->pgresult);

		for( i=0;
i<nfields; i++ ){
			char *cfname = PQfname(this->pgresult, i);
			VALUE fname = pg_cstr_to_sym(cfname, this->flags, this->enc_idx);
			RB_OBJ_WRITE(self, &this->fnames[i], fname);
			this->nfields = i + 1;
		}
		this->nfields = nfields;
	}
}

/********************************************************************
 *
 * Document-class: PG::Result
 *
 * The class to represent the query result tuples (rows).
 * An instance of this class is created as the result of every query.
 * All result rows and columns are stored in a memory block attached to the PG::Result object.
 * Whenever a value is accessed it is cast to a Ruby object by the assigned #type_map .
 *
 * Since pg-1.1 the amount of memory in use by a PG::Result object is estimated and passed to ruby's garbage collector.
 * You can invoke the #clear method to force deallocation of memory of the instance when finished with the result for better memory performance.
 *
 * Example:
 *    require 'pg'
 *    conn = PG.connect(:dbname => 'test')
 *    res = conn.exec('SELECT 1 AS a, 2 AS b, NULL AS c')
 *    res.getvalue(0,0)  # '1'
 *    res[0]['b']        # '2'
 *    res[0]['c']        # nil
 *
 */

/**************************************************************************
 * PG::Result INSTANCE METHODS
 **************************************************************************/

/*
 * call-seq:
 *    res.result_status() -> Integer
 *
 * Returns the status of the query. The status value is one of:
 * * +PGRES_EMPTY_QUERY+
 * * +PGRES_COMMAND_OK+
 * * +PGRES_TUPLES_OK+
 * * +PGRES_COPY_OUT+
 * * +PGRES_COPY_IN+
 * * +PGRES_BAD_RESPONSE+
 * * +PGRES_NONFATAL_ERROR+
 * * +PGRES_FATAL_ERROR+
 * * +PGRES_COPY_BOTH+
 * * +PGRES_SINGLE_TUPLE+
 * * +PGRES_PIPELINE_SYNC+
 * * +PGRES_PIPELINE_ABORTED+
 *
 * Use res.res_status to retrieve the string representation.
 */
static VALUE
pgresult_result_status(VALUE self)
{
	return INT2FIX(PQresultStatus(pgresult_get(self)));
}

/*
 * call-seq:
 *    PG::Result.res_status( status ) -> String
 *
 * Returns the string representation of +status+.
* */ static VALUE pgresult_s_res_status(VALUE self, VALUE status) { return rb_utf8_str_new_cstr(PQresStatus(NUM2INT(status))); } /* * call-seq: * res.res_status -> String * res.res_status( status ) -> String * * Returns the string representation of the status of the result or of the provided +status+. * */ static VALUE pgresult_res_status(int argc, VALUE *argv, VALUE self) { t_pg_result *this = pgresult_get_this_safe(self); VALUE ret; if( argc == 0 ){ ret = rb_str_new2(PQresStatus(PQresultStatus(this->pgresult))); }else if( argc == 1 ){ ret = rb_str_new2(PQresStatus(NUM2INT(argv[0]))); }else{ rb_raise(rb_eArgError, "only 0 or 1 arguments expected"); } PG_ENCODING_SET_NOCHECK(ret, this->enc_idx); return ret; } /* * call-seq: * res.error_message() -> String * * Returns the error message of the command as a string. */ static VALUE pgresult_error_message(VALUE self) { t_pg_result *this = pgresult_get_this_safe(self); VALUE ret = rb_str_new2(PQresultErrorMessage(this->pgresult)); PG_ENCODING_SET_NOCHECK(ret, this->enc_idx); return ret; } #ifdef HAVE_PQRESULTVERBOSEERRORMESSAGE /* * call-seq: * res.verbose_error_message( verbosity, show_context ) -> String * * Returns a reformatted version of the error message associated with a PGresult object. * * Available since PostgreSQL-9.6 */ static VALUE pgresult_verbose_error_message(VALUE self, VALUE verbosity, VALUE show_context) { t_pg_result *this = pgresult_get_this_safe(self); VALUE ret; char *c_str; c_str = PQresultVerboseErrorMessage(this->pgresult, NUM2INT(verbosity), NUM2INT(show_context)); if(!c_str) rb_raise(rb_eNoMemError, "insufficient memory to format error message"); ret = rb_str_new2(c_str); PQfreemem(c_str); PG_ENCODING_SET_NOCHECK(ret, this->enc_idx); return ret; } #endif /* * call-seq: * res.error_field(fieldcode) -> String * * Returns the individual field of an error. 
* * +fieldcode+ is one of: * * +PG_DIAG_SEVERITY+ * * +PG_DIAG_SQLSTATE+ * * +PG_DIAG_MESSAGE_PRIMARY+ * * +PG_DIAG_MESSAGE_DETAIL+ * * +PG_DIAG_MESSAGE_HINT+ * * +PG_DIAG_STATEMENT_POSITION+ * * +PG_DIAG_INTERNAL_POSITION+ * * +PG_DIAG_INTERNAL_QUERY+ * * +PG_DIAG_CONTEXT+ * * +PG_DIAG_SOURCE_FILE+ * * +PG_DIAG_SOURCE_LINE+ * * +PG_DIAG_SOURCE_FUNCTION+ * * An example: * * begin * conn.exec( "SELECT * FROM nonexistant_table" ) * rescue PG::Error => err * p [ * err.result.error_field( PG::Result::PG_DIAG_SEVERITY ), * err.result.error_field( PG::Result::PG_DIAG_SQLSTATE ), * err.result.error_field( PG::Result::PG_DIAG_MESSAGE_PRIMARY ), * err.result.error_field( PG::Result::PG_DIAG_MESSAGE_DETAIL ), * err.result.error_field( PG::Result::PG_DIAG_MESSAGE_HINT ), * err.result.error_field( PG::Result::PG_DIAG_STATEMENT_POSITION ), * err.result.error_field( PG::Result::PG_DIAG_INTERNAL_POSITION ), * err.result.error_field( PG::Result::PG_DIAG_INTERNAL_QUERY ), * err.result.error_field( PG::Result::PG_DIAG_CONTEXT ), * err.result.error_field( PG::Result::PG_DIAG_SOURCE_FILE ), * err.result.error_field( PG::Result::PG_DIAG_SOURCE_LINE ), * err.result.error_field( PG::Result::PG_DIAG_SOURCE_FUNCTION ), * ] * end * * Outputs: * * ["ERROR", "42P01", "relation \"nonexistant_table\" does not exist", nil, nil, * "15", nil, nil, nil, "path/to/parse_relation.c", "857", "parserOpenTable"] */ static VALUE pgresult_error_field(VALUE self, VALUE field) { t_pg_result *this = pgresult_get_this_safe(self); int fieldcode = NUM2INT( field ); char * fieldstr = PQresultErrorField( this->pgresult, fieldcode ); VALUE ret = Qnil; if ( fieldstr ) { ret = rb_str_new2( fieldstr ); PG_ENCODING_SET_NOCHECK( ret, this->enc_idx ); } return ret; } /* * call-seq: * res.ntuples() -> Integer * * Returns the number of tuples in the query result. 
*/ static VALUE pgresult_ntuples(VALUE self) { return INT2FIX(PQntuples(pgresult_get(self))); } static VALUE pgresult_ntuples_for_enum(VALUE self, VALUE args, VALUE eobj) { return pgresult_ntuples(self); } /* * call-seq: * res.nfields() -> Integer * * Returns the number of columns in the query result. */ static VALUE pgresult_nfields(VALUE self) { return INT2NUM(PQnfields(pgresult_get(self))); } /* * call-seq: * res.binary_tuples() -> Integer * * Returns 1 if the PGresult contains binary data and 0 if it contains text data. * * This function is deprecated (except for its use in connection with COPY), because it is possible for a single PGresult to contain text data in some columns and binary data in others. * Result#fformat is preferred. binary_tuples returns 1 only if all columns of the result are binary (format 1). */ static VALUE pgresult_binary_tuples(VALUE self) { return INT2NUM(PQbinaryTuples(pgresult_get(self))); } /* * call-seq: * res.fname( index ) -> String or Symbol * * Returns the name of the column corresponding to _index_. * Depending on #field_name_type= it's a String or Symbol. * */ static VALUE pgresult_fname(VALUE self, VALUE index) { t_pg_result *this = pgresult_get_this_safe(self); int i = NUM2INT(index); char *cfname; if (i < 0 || i >= PQnfields(this->pgresult)) { rb_raise(rb_eArgError,"invalid field number %d", i); } cfname = PQfname(this->pgresult, i); return pg_cstr_to_sym(cfname, this->flags, this->enc_idx); } /* * call-seq: * res.fnumber( name ) -> Integer * * Returns the index of the field specified by the string +name+. * The given +name+ is treated like an identifier in an SQL command, that is, * it is downcased unless double-quoted. 
For example, given a query result
 * generated from the SQL command:
 *
 *   result = conn.exec( %{SELECT 1 AS FOO, 2 AS "BAR"} )
 *
 * we would have the results:
 *
 *   result.fname( 0 )            # => "foo"
 *   result.fname( 1 )            # => "BAR"
 *   result.fnumber( "FOO" )      # => 0
 *   result.fnumber( "foo" )      # => 0
 *   result.fnumber( "BAR" )      # => ArgumentError
 *   result.fnumber( %{"BAR"} )   # => 1
 *
 * Raises an ArgumentError if the specified +name+ isn't one of the field names;
 * raises a TypeError if +name+ is not a String.
 */
static VALUE
pgresult_fnumber(VALUE self, VALUE name)
{
	int n;

	Check_Type(name, T_STRING);

	n = PQfnumber(pgresult_get(self), StringValueCStr(name));
	if (n == -1) {
		rb_raise(rb_eArgError,"Unknown field: %s", StringValueCStr(name));
	}
	return INT2FIX(n);
}

/*
 * call-seq:
 *    res.ftable( column_number ) -> Integer
 *
 * Returns the Oid of the table from which the column _column_number_
 * was fetched.
 *
 * Raises ArgumentError if _column_number_ is out of range or if
 * the Oid is undefined for that column.
 */
static VALUE
pgresult_ftable(VALUE self, VALUE column_number)
{
	Oid n ;
	int col_number = NUM2INT(column_number);
	PGresult *pgresult = pgresult_get(self);

	if( col_number < 0 || col_number >= PQnfields(pgresult))
		rb_raise(rb_eArgError,"Invalid column index: %d", col_number);

	n = PQftable(pgresult, col_number);
	return UINT2NUM(n);
}

/*
 * call-seq:
 *    res.ftablecol( column_number ) -> Integer
 *
 * Returns the column number (within its table) of the column from which
 * the result column _column_number_ was fetched.
 *
 * Raises ArgumentError if _column_number_ is out of range or if
 * the column number from its table is undefined for that column.
*/ static VALUE pgresult_ftablecol(VALUE self, VALUE column_number) { int col_number = NUM2INT(column_number); PGresult *pgresult = pgresult_get(self); int n; if( col_number < 0 || col_number >= PQnfields(pgresult)) rb_raise(rb_eArgError,"Invalid column index: %d", col_number); n = PQftablecol(pgresult, col_number); return INT2FIX(n); } /* * call-seq: * res.fformat( column_number ) -> Integer * * Returns the format (0 for text, 1 for binary) of column * _column_number_. * * Raises ArgumentError if _column_number_ is out of range. */ static VALUE pgresult_fformat(VALUE self, VALUE column_number) { PGresult *result = pgresult_get(self); int fnumber = NUM2INT(column_number); if (fnumber < 0 || fnumber >= PQnfields(result)) { rb_raise(rb_eArgError, "Column number is out of range: %d", fnumber); } return INT2FIX(PQfformat(result, fnumber)); } /* * call-seq: * res.ftype( column_number ) -> Integer * * Returns the data type associated with _column_number_. * * The integer returned is the internal +OID+ number (in PostgreSQL) * of the type. To get a human-readable value for the type, use the * returned OID and the field's #fmod value with the format_type() SQL * function: * * # Get the type of the second column of the result 'res' * typename = conn. * exec( "SELECT format_type($1,$2)", [res.ftype(1), res.fmod(1)] ). * getvalue( 0, 0 ) * * Raises an ArgumentError if _column_number_ is out of range. */ static VALUE pgresult_ftype(VALUE self, VALUE index) { PGresult* result = pgresult_get(self); int i = NUM2INT(index); if (i < 0 || i >= PQnfields(result)) { rb_raise(rb_eArgError, "invalid field number %d", i); } return UINT2NUM(PQftype(result, i)); } /* * call-seq: * res.fmod( column_number ) * * Returns the type modifier associated with column _column_number_. See * the #ftype method for an example of how to use this. * * Raises an ArgumentError if _column_number_ is out of range. 
 */
static VALUE
pgresult_fmod(VALUE self, VALUE column_number)
{
	PGresult *result = pgresult_get(self);
	int fnumber = NUM2INT(column_number);
	int modifier;

	if (fnumber < 0 || fnumber >= PQnfields(result)) {
		rb_raise(rb_eArgError, "Column number is out of range: %d", fnumber);
	}
	modifier = PQfmod(result,fnumber);

	return INT2NUM(modifier);
}

/*
 * call-seq:
 *    res.fsize( index )
 *
 * Returns the size of the field type in bytes.  Returns -1 if the field is variable sized.
 *
 *   res = conn.exec("SELECT myInt, myVarChar50 FROM foo")
 *   res.fsize(0)  => 4
 *   res.fsize(1)  => -1
 */
static VALUE
pgresult_fsize(VALUE self, VALUE index)
{
	PGresult *result;
	int i = NUM2INT(index);

	result = pgresult_get(self);
	if (i < 0 || i >= PQnfields(result)) {
		rb_raise(rb_eArgError,"invalid field number %d", i);
	}
	return INT2NUM(PQfsize(result, i));
}

/*
 * call-seq:
 *    res.getvalue( tup_num, field_num )
 *
 * Returns the value in tuple number _tup_num_, field _field_num_,
 * or +nil+ if the field is +NULL+.
 */
static VALUE
pgresult_getvalue(VALUE self, VALUE tup_num, VALUE field_num)
{
	t_pg_result *this = pgresult_get_this_safe(self);
	int i = NUM2INT(tup_num);
	int j = NUM2INT(field_num);

	if(i < 0 || i >= PQntuples(this->pgresult)) {
		rb_raise(rb_eArgError,"invalid tuple number %d", i);
	}
	if(j < 0 || j >= PQnfields(this->pgresult)) {
		rb_raise(rb_eArgError,"invalid field number %d", j);
	}
	return this->p_typemap->funcs.typecast_result_value(this->p_typemap, self, i, j);
}

/*
 * call-seq:
 *    res.getisnull(tuple_position, field_position) -> boolean
 *
 * Returns +true+ if the specified value is +nil+; +false+ otherwise.
 */
static VALUE
pgresult_getisnull(VALUE self, VALUE tup_num, VALUE field_num)
{
	PGresult *result;
	int i = NUM2INT(tup_num);
	int j = NUM2INT(field_num);

	result = pgresult_get(self);
	if (i < 0 || i >= PQntuples(result)) {
		rb_raise(rb_eArgError,"invalid tuple number %d", i);
	}
	if (j < 0 || j >= PQnfields(result)) {
		rb_raise(rb_eArgError,"invalid field number %d", j);
	}
	return PQgetisnull(result, i, j) ? Qtrue : Qfalse;
}

/*
 * call-seq:
 *    res.getlength( tup_num, field_num ) -> Integer
 *
 * Returns the (String) length of the field in bytes.
 *
 * Equivalent to res.getvalue(tup_num,field_num).length.
 */
static VALUE
pgresult_getlength(VALUE self, VALUE tup_num, VALUE field_num)
{
	PGresult *result;
	int i = NUM2INT(tup_num);
	int j = NUM2INT(field_num);

	result = pgresult_get(self);
	if (i < 0 || i >= PQntuples(result)) {
		rb_raise(rb_eArgError,"invalid tuple number %d", i);
	}
	if (j < 0 || j >= PQnfields(result)) {
		rb_raise(rb_eArgError,"invalid field number %d", j);
	}
	return INT2FIX(PQgetlength(result, i, j));
}

/*
 * call-seq:
 *    res.nparams() -> Integer
 *
 * Returns the number of parameters of a prepared statement.
 * Only useful for the result returned by conn.describePrepared
 */
static VALUE
pgresult_nparams(VALUE self)
{
	PGresult *result;

	result = pgresult_get(self);
	return INT2FIX(PQnparams(result));
}

/*
 * call-seq:
 *    res.paramtype( param_number ) -> Oid
 *
 * Returns the Oid of the data type of parameter _param_number_.
 * Only useful for the result returned by conn.describePrepared
 */
static VALUE
pgresult_paramtype(VALUE self, VALUE param_number)
{
	PGresult *result;

	result = pgresult_get(self);
	return UINT2NUM(PQparamtype(result,NUM2INT(param_number)));
}

/*
 * call-seq:
 *    res.cmd_status() -> String
 *
 * Returns the status string of the last query command.
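 *
 * For example (a sketch; the status string depends on the executed command):
 *   conn.exec("BEGIN").cmd_status   # => "BEGIN"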
*/ static VALUE pgresult_cmd_status(VALUE self) { t_pg_result *this = pgresult_get_this_safe(self); VALUE ret = rb_str_new2(PQcmdStatus(this->pgresult)); PG_ENCODING_SET_NOCHECK(ret, this->enc_idx); return ret; } /* * call-seq: * res.cmd_tuples() -> Integer * * Returns the number of tuples (rows) affected by the SQL command. * * If the SQL command that generated the PG::Result was not one of: * * * SELECT * * CREATE TABLE AS * * INSERT * * UPDATE * * DELETE * * MOVE * * FETCH * * COPY * * an +EXECUTE+ of a prepared query that contains an +INSERT+, +UPDATE+, or +DELETE+ statement * * or if no tuples were affected, 0 is returned. */ static VALUE pgresult_cmd_tuples(VALUE self) { long n; n = strtol(PQcmdTuples(pgresult_get(self)),NULL, 10); return LONG2NUM(n); } /* * call-seq: * res.oid_value() -> Integer * * Returns the +oid+ of the inserted row if applicable, * otherwise +nil+. */ static VALUE pgresult_oid_value(VALUE self) { Oid n = PQoidValue(pgresult_get(self)); if (n == InvalidOid) return Qnil; else return UINT2NUM(n); } /* Utility methods not in libpq */ /* * call-seq: * res[ n ] -> Hash * * Returns tuple _n_ as a hash. */ static VALUE pgresult_aref(VALUE self, VALUE index) { t_pg_result *this = pgresult_get_this_safe(self); int tuple_num = NUM2INT(index); int field_num; int num_tuples = PQntuples(this->pgresult); VALUE tuple; if( this->nfields == -1 ) pgresult_init_fnames( self ); if ( tuple_num < 0 || tuple_num >= num_tuples ) rb_raise( rb_eIndexError, "Index %d is out of range", tuple_num ); /* We reuse the Hash of the previous output for larger row counts. * This is somewhat faster than populating an empty Hash object. */ tuple = NIL_P(this->tuple_hash) ? 
rb_hash_new() : this->tuple_hash; for ( field_num = 0; field_num < this->nfields; field_num++ ) { VALUE val = this->p_typemap->funcs.typecast_result_value(this->p_typemap, self, tuple_num, field_num); rb_hash_aset( tuple, this->fnames[field_num], val ); } /* Store a copy of the filled hash for use at the next row. */ if( num_tuples > 10 ) RB_OBJ_WRITE(self, &this->tuple_hash, rb_hash_dup(tuple)); return tuple; } /* * call-seq: * res.each_row { |row| ... } * * Yields each row of the result. The row is a list of column values. */ static VALUE pgresult_each_row(VALUE self) { t_pg_result *this; int row; int field; int num_rows; int num_fields; RETURN_SIZED_ENUMERATOR(self, 0, NULL, pgresult_ntuples_for_enum); this = pgresult_get_this_safe(self); num_rows = PQntuples(this->pgresult); num_fields = PQnfields(this->pgresult); for ( row = 0; row < num_rows; row++ ) { PG_VARIABLE_LENGTH_ARRAY(VALUE, row_values, num_fields, PG_MAX_COLUMNS) /* populate the row */ for ( field = 0; field < num_fields; field++ ) { row_values[field] = this->p_typemap->funcs.typecast_result_value(this->p_typemap, self, row, field); } rb_yield( rb_ary_new4( num_fields, row_values )); } return Qnil; } /* * call-seq: * res.values -> Array * * Returns all tuples as an array of arrays. */ static VALUE pgresult_values(VALUE self) { t_pg_result *this = pgresult_get_this_safe(self); int row; int field; int num_rows = PQntuples(this->pgresult); int num_fields = PQnfields(this->pgresult); VALUE results = rb_ary_new2( num_rows ); for ( row = 0; row < num_rows; row++ ) { PG_VARIABLE_LENGTH_ARRAY(VALUE, row_values, num_fields, PG_MAX_COLUMNS) /* populate the row */ for ( field = 0; field < num_fields; field++ ) { row_values[field] = this->p_typemap->funcs.typecast_result_value(this->p_typemap, self, row, field); } rb_ary_store( results, row, rb_ary_new4( num_fields, row_values ) ); } return results; } /* * Make a Ruby array out of the encoded values from the specified * column in the given result. 
*/ static VALUE make_column_result_array( VALUE self, int col ) { t_pg_result *this = pgresult_get_this_safe(self); int rows = PQntuples( this->pgresult ); int i; VALUE results = rb_ary_new2( rows ); if ( col >= PQnfields(this->pgresult) ) rb_raise( rb_eIndexError, "no column %d in result", col ); for ( i=0; i < rows; i++ ) { VALUE val = this->p_typemap->funcs.typecast_result_value(this->p_typemap, self, i, col); rb_ary_store( results, i, val ); } return results; } /* * call-seq: * res.column_values( n ) -> array * * Returns an Array of the values from the nth column of each * tuple in the result. * */ static VALUE pgresult_column_values(VALUE self, VALUE index) { int col = NUM2INT( index ); return make_column_result_array( self, col ); } /* * call-seq: * res.field_values( field ) -> array * * Returns an Array of the values from the given _field_ of each tuple in the result. * */ static VALUE pgresult_field_values( VALUE self, VALUE field ) { PGresult *result = pgresult_get( self ); const char *fieldname; int fnum; if( RB_TYPE_P(field, T_SYMBOL) ) field = rb_sym_to_s( field ); fieldname = StringValueCStr( field ); fnum = PQfnumber( result, fieldname ); if ( fnum < 0 ) rb_raise( rb_eIndexError, "no such field '%s' in result", fieldname ); return make_column_result_array( self, fnum ); } /* * call-seq: * res.tuple_values( n ) -> array * * Returns an Array of the field values from the nth row of the result. 
* */ static VALUE pgresult_tuple_values(VALUE self, VALUE index) { int tuple_num = NUM2INT( index ); t_pg_result *this; int field; int num_tuples; int num_fields; this = pgresult_get_this_safe(self); num_tuples = PQntuples(this->pgresult); num_fields = PQnfields(this->pgresult); if ( tuple_num < 0 || tuple_num >= num_tuples ) rb_raise( rb_eIndexError, "Index %d is out of range", tuple_num ); { PG_VARIABLE_LENGTH_ARRAY(VALUE, row_values, num_fields, PG_MAX_COLUMNS) /* populate the row */ for ( field = 0; field < num_fields; field++ ) { row_values[field] = this->p_typemap->funcs.typecast_result_value(this->p_typemap, self, tuple_num, field); } return rb_ary_new4( num_fields, row_values ); } } static void ensure_init_for_tuple(VALUE self) { t_pg_result *this = pgresult_get_this_safe(self); if( this->field_map == Qnil ){ int i; VALUE field_map = rb_hash_new(); if( this->nfields == -1 ) pgresult_init_fnames( self ); for( i = 0; i < this->nfields; i++ ){ rb_hash_aset(field_map, this->fnames[i], INT2FIX(i)); } rb_obj_freeze(field_map); RB_OBJ_WRITE(self, &this->field_map, field_map); } } /* * call-seq: * res.tuple( n ) -> PG::Tuple * * Returns a PG::Tuple from the nth row of the result. * */ static VALUE pgresult_tuple(VALUE self, VALUE index) { int tuple_num = NUM2INT( index ); t_pg_result *this; int num_tuples; this = pgresult_get_this_safe(self); num_tuples = PQntuples(this->pgresult); if ( tuple_num < 0 || tuple_num >= num_tuples ) rb_raise( rb_eIndexError, "Index %d is out of range", tuple_num ); ensure_init_for_tuple(self); return pg_tuple_new(self, tuple_num); } /* * call-seq: * res.each{ |tuple| ... } * * Invokes block for each tuple in the result set. 
*/ static VALUE pgresult_each(VALUE self) { PGresult *result; int tuple_num; RETURN_SIZED_ENUMERATOR(self, 0, NULL, pgresult_ntuples_for_enum); result = pgresult_get(self); for(tuple_num = 0; tuple_num < PQntuples(result); tuple_num++) { rb_yield(pgresult_aref(self, INT2NUM(tuple_num))); } return self; } /* * call-seq: * res.fields() -> Array * * Depending on #field_name_type= returns an array of strings or symbols representing the names of the fields in the result. */ static VALUE pgresult_fields(VALUE self) { t_pg_result *this = pgresult_get_this_safe(self); if( this->nfields == -1 ) pgresult_init_fnames( self ); return rb_ary_new4( this->nfields, this->fnames ); } /* * call-seq: * res.type_map = typemap * * Set the TypeMap that is used for type casts of result values to ruby objects. * * All value retrieval methods will respect the type map and will do the * type casts from PostgreSQL's wire format to Ruby objects on the fly, * according to the rules and decoders defined in the given typemap. * * +typemap+ must be a kind of PG::TypeMap . * */ static VALUE pgresult_type_map_set(VALUE self, VALUE typemap) { t_pg_result *this = pgresult_get_this(self); t_typemap *p_typemap; rb_check_frozen(self); /* Check type of method param */ TypedData_Get_Struct(typemap, t_typemap, &pg_typemap_type, p_typemap); typemap = p_typemap->funcs.fit_to_result( typemap, self ); RB_OBJ_WRITE(self, &this->typemap, typemap); this->p_typemap = RTYPEDDATA_DATA( typemap ); return typemap; } /* * call-seq: * res.type_map -> value * * Returns the TypeMap that is currently set for type casts of result values to ruby objects. 
* */ static VALUE pgresult_type_map_get(VALUE self) { t_pg_result *this = pgresult_get_this(self); return this->typemap; } static int yield_hash(VALUE self, int ntuples, int nfields, void *data) { int tuple_num; UNUSED(nfields); for(tuple_num = 0; tuple_num < ntuples; tuple_num++) { rb_yield(pgresult_aref(self, INT2NUM(tuple_num))); } return 1; /* clear the result */ } static int yield_array(VALUE self, int ntuples, int nfields, void *data) { int row; t_pg_result *this = pgresult_get_this(self); for ( row = 0; row < ntuples; row++ ) { PG_VARIABLE_LENGTH_ARRAY(VALUE, row_values, nfields, PG_MAX_COLUMNS) int field; /* populate the row */ for ( field = 0; field < nfields; field++ ) { row_values[field] = this->p_typemap->funcs.typecast_result_value(this->p_typemap, self, row, field); } rb_yield( rb_ary_new4( nfields, row_values )); } return 1; /* clear the result */ } static int yield_tuple(VALUE self, int ntuples, int nfields, void *data) { int tuple_num; t_pg_result *this = pgresult_get_this(self); VALUE copy; UNUSED(nfields); /* make a copy of the base result, that is bound to the PG::Tuple */ copy = pg_copy_result(this); /* The copy is now owner of the PGresult and is responsible to PQclear it. * We clear the pgresult here, so that it's not double freed on error within yield. 
*/ this->pgresult = NULL; for(tuple_num = 0; tuple_num < ntuples; tuple_num++) { VALUE tuple = pgresult_tuple(copy, INT2FIX(tuple_num)); rb_yield( tuple ); } return 0; /* don't clear the result */ } /* Non-static, and data pointer for use by sequel_pg */ VALUE pgresult_stream_any(VALUE self, int (*yielder)(VALUE, int, int, void*), void* data) { t_pg_result *this; int nfields, nfields2; PGconn *pgconn; PGresult *pgresult; rb_check_frozen(self); RETURN_ENUMERATOR(self, 0, NULL); this = pgresult_get_this_safe(self); pgconn = pg_get_pgconn(this->connection); pgresult = this->pgresult; nfields = PQnfields(pgresult); for(;;){ int ntuples = PQntuples(pgresult); switch( PQresultStatus(pgresult) ){ case PGRES_TUPLES_OK: case PGRES_COMMAND_OK: if( ntuples == 0 ) return self; rb_raise( rb_eInvalidResultStatus, "PG::Result is not in single row mode"); case PGRES_SINGLE_TUPLE: break; default: pg_result_check( self ); } nfields2 = PQnfields(pgresult); if( nfields != nfields2 ){ pgresult_clear( this ); rb_raise( rb_eInvalidChangeOfResultFields, "number of fields changed in single row mode from %d to %d - this is a sign for intersection with another query", nfields, nfields2); } if( yielder( self, ntuples, nfields, data ) ){ pgresult_clear( this ); } if( gvl_PQisBusy(pgconn) ){ /* wait for input (without blocking) before reading each result */ pgconn_block( 0, NULL, this->connection ); } pgresult = gvl_PQgetResult(pgconn); if( pgresult == NULL ) rb_raise( rb_eNoResultError, "no result received - possibly an intersection with another query"); this->pgresult = pgresult; } /* never reached */ return self; } /* * call-seq: * res.stream_each{ |tuple| ... } * * Invokes block for each tuple in the result set in single row mode. * * This is a convenience method for retrieving all result tuples * as they are transferred. 
It is an alternative to repeated calls of * PG::Connection#get_result , but given that it avoids the overhead of * wrapping each row into a dedicated result object, it delivers data at nearly * the same speed as ordinary results. * * The base result must be in status PGRES_SINGLE_TUPLE. * It iterates over all tuples until the status changes to PGRES_TUPLES_OK. * A PG::Error is raised for any errors from the server. * * Row description data does not change during the iteration. All value retrieval * methods refer only to the current row. Result#ntuples returns +1+ during * the iteration and +0+ after all tuples have been yielded. * * Example: * conn.send_query( "first SQL query; second SQL query" ) * conn.set_single_row_mode * conn.get_result.stream_each do |row| * # do something with each received row of the first query * end * conn.get_result.stream_each do |row| * # do something with each received row of the second query * end * conn.get_result # => nil (no more results) */ static VALUE pgresult_stream_each(VALUE self) { return pgresult_stream_any(self, yield_hash, NULL); } /* * call-seq: * res.stream_each_row { |row| ... } * * Yields each row of the result set in single row mode. * The row is a list of column values. * * This method works like #stream_each , but yields an Array of * values. */ static VALUE pgresult_stream_each_row(VALUE self) { return pgresult_stream_any(self, yield_array, NULL); } /* * call-seq: * res.stream_each_tuple { |tuple| ... } * * Yields each row of the result set in single row mode. * * This method works like #stream_each , but yields a PG::Tuple object. */ static VALUE pgresult_stream_each_tuple(VALUE self) { /* allocate VALUEs that are shared between all streamed tuples */ ensure_init_for_tuple(self); return pgresult_stream_any(self, yield_tuple, NULL); } /* * call-seq: * res.field_name_type = Symbol * * Set the type of field names specific to this result.
* It can be set to one of: * * +:string+ to use String based field names * * +:symbol+ to use Symbol based field names * * +:static_symbol+ to use pinned Symbol (can not be garbage collected) - Don't use this, it will probably be removed in future. * * The default is retrieved from PG::Connection#field_name_type , which defaults to +:string+ . * * This setting affects several result methods: * * keys of Hash returned by #[] , #each and #stream_each * * #fields * * #fname * * field names used by #tuple and #stream_each_tuple * * The type of field names can only be changed before any of the affected methods have been called. * */ static VALUE pgresult_field_name_type_set(VALUE self, VALUE sym) { t_pg_result *this = pgresult_get_this(self); rb_check_frozen(self); if( this->nfields != -1 ) rb_raise(rb_eArgError, "field names are already materialized"); this->flags &= ~PG_RESULT_FIELD_NAMES_MASK; if( sym == sym_symbol ) this->flags |= PG_RESULT_FIELD_NAMES_SYMBOL; else if ( sym == sym_static_symbol ) this->flags |= PG_RESULT_FIELD_NAMES_STATIC_SYMBOL; else if ( sym == sym_string ); else rb_raise(rb_eArgError, "invalid argument %+"PRIsVALUE, sym); return sym; } /* * call-seq: * res.field_name_type -> Symbol * * Get type of field names. 
* * See description at #field_name_type= */ static VALUE pgresult_field_name_type_get(VALUE self) { t_pg_result *this = pgresult_get_this(self); if( this->flags & PG_RESULT_FIELD_NAMES_SYMBOL ){ return sym_symbol; } else if( this->flags & PG_RESULT_FIELD_NAMES_STATIC_SYMBOL ){ return sym_static_symbol; } else { return sym_string; } } void init_pg_result(void) { sym_string = ID2SYM(rb_intern("string")); sym_symbol = ID2SYM(rb_intern("symbol")); sym_static_symbol = ID2SYM(rb_intern("static_symbol")); rb_cPGresult = rb_define_class_under( rb_mPG, "Result", rb_cObject ); rb_undef_alloc_func(rb_cPGresult); rb_include_module(rb_cPGresult, rb_mEnumerable); rb_include_module(rb_cPGresult, rb_mPGconstants); /****** PG::Result INSTANCE METHODS: libpq ******/ rb_define_method(rb_cPGresult, "result_status", pgresult_result_status, 0); rb_define_method(rb_cPGresult, "res_status", pgresult_res_status, -1); rb_define_singleton_method(rb_cPGresult, "res_status", pgresult_s_res_status, 1); rb_define_method(rb_cPGresult, "error_message", pgresult_error_message, 0); rb_define_alias( rb_cPGresult, "result_error_message", "error_message"); #ifdef HAVE_PQRESULTVERBOSEERRORMESSAGE rb_define_method(rb_cPGresult, "verbose_error_message", pgresult_verbose_error_message, 2); rb_define_alias( rb_cPGresult, "result_verbose_error_message", "verbose_error_message"); #endif rb_define_method(rb_cPGresult, "error_field", pgresult_error_field, 1); rb_define_alias( rb_cPGresult, "result_error_field", "error_field" ); rb_define_method(rb_cPGresult, "clear", pg_result_clear, 0); rb_define_method(rb_cPGresult, "freeze", pg_result_freeze, 0 ); rb_define_method(rb_cPGresult, "check", pg_result_check, 0); rb_define_alias (rb_cPGresult, "check_result", "check"); rb_define_method(rb_cPGresult, "ntuples", pgresult_ntuples, 0); rb_define_alias(rb_cPGresult, "num_tuples", "ntuples"); rb_define_method(rb_cPGresult, "nfields", pgresult_nfields, 0); rb_define_alias(rb_cPGresult, "num_fields", "nfields"); 
rb_define_method(rb_cPGresult, "binary_tuples", pgresult_binary_tuples, 0); rb_define_method(rb_cPGresult, "fname", pgresult_fname, 1); rb_define_method(rb_cPGresult, "fnumber", pgresult_fnumber, 1); rb_define_method(rb_cPGresult, "ftable", pgresult_ftable, 1); rb_define_method(rb_cPGresult, "ftablecol", pgresult_ftablecol, 1); rb_define_method(rb_cPGresult, "fformat", pgresult_fformat, 1); rb_define_method(rb_cPGresult, "ftype", pgresult_ftype, 1); rb_define_method(rb_cPGresult, "fmod", pgresult_fmod, 1); rb_define_method(rb_cPGresult, "fsize", pgresult_fsize, 1); rb_define_method(rb_cPGresult, "getvalue", pgresult_getvalue, 2); rb_define_method(rb_cPGresult, "getisnull", pgresult_getisnull, 2); rb_define_method(rb_cPGresult, "getlength", pgresult_getlength, 2); rb_define_method(rb_cPGresult, "nparams", pgresult_nparams, 0); rb_define_method(rb_cPGresult, "paramtype", pgresult_paramtype, 1); rb_define_method(rb_cPGresult, "cmd_status", pgresult_cmd_status, 0); rb_define_method(rb_cPGresult, "cmd_tuples", pgresult_cmd_tuples, 0); rb_define_alias(rb_cPGresult, "cmdtuples", "cmd_tuples"); rb_define_method(rb_cPGresult, "oid_value", pgresult_oid_value, 0); /****** PG::Result INSTANCE METHODS: other ******/ rb_define_method(rb_cPGresult, "[]", pgresult_aref, 1); rb_define_method(rb_cPGresult, "each", pgresult_each, 0); rb_define_method(rb_cPGresult, "fields", pgresult_fields, 0); rb_define_method(rb_cPGresult, "each_row", pgresult_each_row, 0); rb_define_method(rb_cPGresult, "values", pgresult_values, 0); rb_define_method(rb_cPGresult, "column_values", pgresult_column_values, 1); rb_define_method(rb_cPGresult, "field_values", pgresult_field_values, 1); rb_define_method(rb_cPGresult, "tuple_values", pgresult_tuple_values, 1); rb_define_method(rb_cPGresult, "tuple", pgresult_tuple, 1); rb_define_method(rb_cPGresult, "cleared?", pgresult_cleared_p, 0); rb_define_method(rb_cPGresult, "autoclear?", pgresult_autoclear_p, 0); rb_define_method(rb_cPGresult, "type_map=", 
pgresult_type_map_set, 1); rb_define_method(rb_cPGresult, "type_map", pgresult_type_map_get, 0); /****** PG::Result INSTANCE METHODS: streaming ******/ rb_define_method(rb_cPGresult, "stream_each", pgresult_stream_each, 0); rb_define_method(rb_cPGresult, "stream_each_row", pgresult_stream_each_row, 0); rb_define_method(rb_cPGresult, "stream_each_tuple", pgresult_stream_each_tuple, 0); rb_define_method(rb_cPGresult, "field_name_type=", pgresult_field_name_type_set, 1 ); rb_define_method(rb_cPGresult, "field_name_type", pgresult_field_name_type_get, 0 ); } pg-1.5.5/ext/pg_type_map.c /* * pg_type_map.c - PG::TypeMap class extension * $Id$ * */ #include "pg.h" void pg_typemap_mark( void *_this ) { t_typemap *this = (t_typemap *)_this; rb_gc_mark_movable(this->default_typemap); } size_t pg_typemap_memsize( const void *_this ) { t_typemap *this = (t_typemap *)_this; return sizeof(*this); } void pg_typemap_compact( void *_this ) { t_typemap *this = (t_typemap *)_this; pg_gc_location(this->default_typemap); } const rb_data_type_t pg_typemap_type = { "PG::TypeMap", { pg_typemap_mark, RUBY_TYPED_DEFAULT_FREE, pg_typemap_memsize, pg_compact_callback(pg_typemap_compact), }, 0, 0, RUBY_TYPED_FREE_IMMEDIATELY | RUBY_TYPED_WB_PROTECTED | PG_RUBY_TYPED_FROZEN_SHAREABLE, }; VALUE rb_cTypeMap; VALUE rb_mDefaultTypeMappable; static ID s_id_fit_to_query; static ID s_id_fit_to_result; NORETURN( VALUE pg_typemap_fit_to_result( VALUE self, VALUE result )); NORETURN( VALUE pg_typemap_fit_to_query( VALUE self, VALUE params )); NORETURN( int pg_typemap_fit_to_copy_get( VALUE self )); NORETURN( VALUE pg_typemap_result_value( t_typemap *p_typemap, VALUE result, int tuple, int field )); NORETURN( t_pg_coder * pg_typemap_typecast_query_param( t_typemap *p_typemap, VALUE param_value, int field )); NORETURN( VALUE pg_typemap_typecast_copy_get( t_typemap *p_typemap, VALUE field_str, int fieldno, int format, int enc_idx ));
VALUE pg_typemap_fit_to_result( VALUE self, VALUE result ) { rb_raise( rb_eNotImpError, "type map %s is not suitable to map result values", rb_obj_classname(self) ); } VALUE pg_typemap_fit_to_query( VALUE self, VALUE params ) { rb_raise( rb_eNotImpError, "type map %s is not suitable to map query params", rb_obj_classname(self) ); } int pg_typemap_fit_to_copy_get( VALUE self ) { rb_raise( rb_eNotImpError, "type map %s is not suitable to map get_copy_data results", rb_obj_classname(self) ); } VALUE pg_typemap_result_value( t_typemap *p_typemap, VALUE result, int tuple, int field ) { rb_raise( rb_eNotImpError, "type map is not suitable to map result values" ); } t_pg_coder * pg_typemap_typecast_query_param( t_typemap *p_typemap, VALUE param_value, int field ) { rb_raise( rb_eNotImpError, "type map is not suitable to map query params" ); } VALUE pg_typemap_typecast_copy_get( t_typemap *p_typemap, VALUE field_str, int fieldno, int format, int enc_idx ) { rb_raise( rb_eNotImpError, "type map is not suitable to map get_copy_data results" ); } const struct pg_typemap_funcs pg_typemap_funcs = { pg_typemap_fit_to_result, pg_typemap_fit_to_query, pg_typemap_fit_to_copy_get, pg_typemap_result_value, pg_typemap_typecast_query_param, pg_typemap_typecast_copy_get }; static VALUE pg_typemap_s_allocate( VALUE klass ) { VALUE self; t_typemap *this; self = TypedData_Make_Struct( klass, t_typemap, &pg_typemap_type, this ); this->funcs = pg_typemap_funcs; return self; } /* * call-seq: * res.default_type_map = typemap * * Set the default TypeMap that is used for values that could not be * cast by this type map.
* * +typemap+ must be a kind of PG::TypeMap * */ static VALUE pg_typemap_default_type_map_set(VALUE self, VALUE typemap) { t_typemap *this = RTYPEDDATA_DATA( self ); t_typemap *tm; UNUSED(tm); rb_check_frozen(self); /* Check type of method param */ TypedData_Get_Struct(typemap, t_typemap, &pg_typemap_type, tm); RB_OBJ_WRITE(self, &this->default_typemap, typemap); return typemap; } /* * call-seq: * res.default_type_map -> TypeMap * * Returns the default TypeMap that is currently set for values that could not be * cast by this type map. * * Returns a kind of PG::TypeMap. * */ static VALUE pg_typemap_default_type_map_get(VALUE self) { t_typemap *this = RTYPEDDATA_DATA( self ); return this->default_typemap; } /* * call-seq: * res.with_default_type_map( typemap ) * * Set the default TypeMap that is used for values that could not be * cast by this type map. * * +typemap+ must be a kind of PG::TypeMap * * Returns self. */ static VALUE pg_typemap_with_default_type_map(VALUE self, VALUE typemap) { pg_typemap_default_type_map_set( self, typemap ); return self; } void init_pg_type_map(void) { s_id_fit_to_query = rb_intern("fit_to_query"); s_id_fit_to_result = rb_intern("fit_to_result"); /* * Document-class: PG::TypeMap < Object * * This is the base class for type maps. * See derived classes for implementations of different type cast strategies * ( PG::TypeMapByColumn, PG::TypeMapByOid ).
* */ rb_cTypeMap = rb_define_class_under( rb_mPG, "TypeMap", rb_cObject ); rb_define_alloc_func( rb_cTypeMap, pg_typemap_s_allocate ); rb_mDefaultTypeMappable = rb_define_module_under( rb_cTypeMap, "DefaultTypeMappable"); rb_define_method( rb_mDefaultTypeMappable, "default_type_map=", pg_typemap_default_type_map_set, 1 ); rb_define_method( rb_mDefaultTypeMappable, "default_type_map", pg_typemap_default_type_map_get, 0 ); rb_define_method( rb_mDefaultTypeMappable, "with_default_type_map", pg_typemap_with_default_type_map, 1 ); } pg-1.5.5/ext/pg_binary_decoder.c /* * pg_binary_decoder.c - PG::BinaryDecoder class extension * $Id$ * */ #include "ruby/version.h" #include "pg.h" #include "pg_util.h" #ifdef HAVE_INTTYPES_H #include <inttypes.h> #endif VALUE rb_mPG_BinaryDecoder; static VALUE s_Date; static ID s_id_new; /* * Document-class: PG::BinaryDecoder::Boolean < PG::SimpleDecoder * * This is a decoder class for conversion of PostgreSQL binary +bool+ type * to Ruby +true+ or +false+ objects. * */ static VALUE pg_bin_dec_boolean(t_pg_coder *conv, const char *val, int len, int tuple, int field, int enc_idx) { if (len < 1) { rb_raise( rb_eTypeError, "wrong data for binary boolean converter in tuple %d field %d", tuple, field); } return *val == 0 ? Qfalse : Qtrue; } /* * Document-class: PG::BinaryDecoder::Integer < PG::SimpleDecoder * * This is a decoder class for conversion of PostgreSQL binary +int2+, +int4+ and +int8+ types * to Ruby Integer objects.
* */ static VALUE pg_bin_dec_integer(t_pg_coder *conv, const char *val, int len, int tuple, int field, int enc_idx) { switch( len ){ case 2: return INT2NUM(read_nbo16(val)); case 4: return LONG2NUM(read_nbo32(val)); case 8: return LL2NUM(read_nbo64(val)); default: rb_raise( rb_eTypeError, "wrong data for binary integer converter in tuple %d field %d length %d", tuple, field, len); } } /* * Document-class: PG::BinaryDecoder::Float < PG::SimpleDecoder * * This is a decoder class for conversion of PostgreSQL binary +float4+ and +float8+ types * to Ruby Float objects. * */ static VALUE pg_bin_dec_float(t_pg_coder *conv, const char *val, int len, int tuple, int field, int enc_idx) { union { float f; int32_t i; } swap4; union { double f; int64_t i; } swap8; switch( len ){ case 4: swap4.i = read_nbo32(val); return rb_float_new(swap4.f); case 8: swap8.i = read_nbo64(val); return rb_float_new(swap8.f); default: rb_raise( rb_eTypeError, "wrong data for BinaryFloat converter in tuple %d field %d length %d", tuple, field, len); } } /* * Document-class: PG::BinaryDecoder::Bytea < PG::SimpleDecoder * * This decoder class delivers the data received from the server as binary String object. * It is therefore suitable for conversion of PostgreSQL +bytea+ data as well as any other * data in binary format. * */ VALUE pg_bin_dec_bytea(t_pg_coder *conv, const char *val, int len, int tuple, int field, int enc_idx) { VALUE ret; ret = rb_str_new( val, len ); PG_ENCODING_SET_NOCHECK( ret, rb_ascii8bit_encindex() ); return ret; } /* * Document-class: PG::BinaryDecoder::ToBase64 < PG::CompositeDecoder * * This is a decoder class for conversion of binary +bytea+ to base64 data. 
* */ static VALUE pg_bin_dec_to_base64(t_pg_coder *conv, const char *val, int len, int tuple, int field, int enc_idx) { t_pg_composite_coder *this = (t_pg_composite_coder *)conv; t_pg_coder_dec_func dec_func = pg_coder_dec_func(this->elem, this->comp.format); int encoded_len = BASE64_ENCODED_SIZE(len); /* create a buffer of the encoded length */ VALUE out_value = rb_str_new(NULL, encoded_len); base64_encode( RSTRING_PTR(out_value), val, len ); /* Is it a pure String conversion? Then we can directly send out_value to the user. */ if( this->comp.format == 0 && dec_func == pg_text_dec_string ){ PG_ENCODING_SET_NOCHECK( out_value, enc_idx ); return out_value; } if( this->comp.format == 1 && dec_func == pg_bin_dec_bytea ){ PG_ENCODING_SET_NOCHECK( out_value, rb_ascii8bit_encindex() ); return out_value; } out_value = dec_func(this->elem, RSTRING_PTR(out_value), encoded_len, tuple, field, enc_idx); return out_value; } #define PG_INT64_MIN (-0x7FFFFFFFFFFFFFFFL - 1) #define PG_INT64_MAX 0x7FFFFFFFFFFFFFFFL /* * Document-class: PG::BinaryDecoder::Timestamp < PG::SimpleDecoder * * This is a decoder class for conversion of PostgreSQL binary timestamps * to Ruby Time objects. 
* * The following flags can be used to specify timezone interpretation: * * +PG::Coder::TIMESTAMP_DB_UTC+ : Interpret timestamp as UTC time (default) * * +PG::Coder::TIMESTAMP_DB_LOCAL+ : Interpret timestamp as local time * * +PG::Coder::TIMESTAMP_APP_UTC+ : Return timestamp as UTC time (default) * * +PG::Coder::TIMESTAMP_APP_LOCAL+ : Return timestamp as local time * * Example: * deco = PG::BinaryDecoder::Timestamp.new(flags: PG::Coder::TIMESTAMP_DB_UTC | PG::Coder::TIMESTAMP_APP_LOCAL) * deco.decode("\0"*8) # => 2000-01-01 01:00:00 +0100 */ static VALUE pg_bin_dec_timestamp(t_pg_coder *conv, const char *val, int len, int tuple, int field, int enc_idx) { int64_t timestamp; int64_t sec; int64_t nsec; VALUE t; if( len != sizeof(timestamp) ){ rb_raise( rb_eTypeError, "wrong data for timestamp converter in tuple %d field %d length %d", tuple, field, len); } timestamp = read_nbo64(val); switch(timestamp){ case PG_INT64_MAX: return rb_str_new2("infinity"); case PG_INT64_MIN: return rb_str_new2("-infinity"); default: /* PostgreSQL's timestamp is based on year 2000 and Ruby's time is based on 1970. * Adjust the 30 years difference. */ sec = (timestamp / 1000000) + 10957L * 24L * 3600L; nsec = (timestamp % 1000000) * 1000; #if (RUBY_API_VERSION_MAJOR > 2 || (RUBY_API_VERSION_MAJOR == 2 && RUBY_API_VERSION_MINOR >= 3)) && defined(NEGATIVE_TIME_T) && defined(SIZEOF_TIME_T) && SIZEOF_TIME_T >= 8 /* Fast path for time conversion */ { struct timespec ts = {sec, nsec}; t = rb_time_timespec_new(&ts, conv->flags & PG_CODER_TIMESTAMP_APP_LOCAL ? 
INT_MAX : INT_MAX-1); } #else t = rb_funcall(rb_cTime, rb_intern("at"), 2, LL2NUM(sec), LL2NUM(nsec / 1000)); if( !(conv->flags & PG_CODER_TIMESTAMP_APP_LOCAL) ) { t = rb_funcall(t, rb_intern("utc"), 0); } #endif if( conv->flags & PG_CODER_TIMESTAMP_DB_LOCAL ) { /* interpret it as local time */ t = rb_funcall(t, rb_intern("-"), 1, rb_funcall(t, rb_intern("utc_offset"), 0)); } return t; } } #define PG_INT32_MIN (-0x7FFFFFFF-1) #define PG_INT32_MAX (0x7FFFFFFF) #define POSTGRES_EPOCH_JDATE 2451545 /* == date2j(2000, 1, 1) */ #define MONTHS_PER_YEAR 12 /* taken from PostgreSQL sources at src/backend/utils/adt/datetime.c */ void j2date(int jd, int *year, int *month, int *day) { unsigned int julian; unsigned int quad; unsigned int extra; int y; julian = jd; julian += 32044; quad = julian / 146097; extra = (julian - quad * 146097) * 4 + 3; julian += 60 + quad * 3 + extra / 146097; quad = julian / 1461; julian -= quad * 1461; y = julian * 4 / 1461; julian = ((y != 0) ? ((julian + 305) % 365) : ((julian + 306) % 366)) + 123; y += quad * 4; *year = y - 4800; quad = julian * 2141 / 65536; *day = julian - 7834 * quad / 256; *month = (quad + 10) % MONTHS_PER_YEAR + 1; } /* j2date() */ /* * Document-class: PG::BinaryDecoder::Date < PG::SimpleDecoder * * This is a decoder class for conversion of PostgreSQL binary date * to Ruby Date objects. 
*/ static VALUE pg_bin_dec_date(t_pg_coder *conv, const char *val, int len, int tuple, int field, int enc_idx) { int year, month, day; int date; if (len != 4) { rb_raise(rb_eTypeError, "unexpected date format != 4 bytes"); } date = read_nbo32(val); switch(date){ case PG_INT32_MAX: return rb_str_new2("infinity"); case PG_INT32_MIN: return rb_str_new2("-infinity"); default: j2date(date + POSTGRES_EPOCH_JDATE, &year, &month, &day); return rb_funcall(s_Date, s_id_new, 3, INT2NUM(year), INT2NUM(month), INT2NUM(day)); } } /* called per autoload when BinaryDecoder::Date is used */ static VALUE init_pg_bin_decoder_date(VALUE rb_mPG_BinaryDecoder) { rb_require("date"); s_Date = rb_const_get(rb_cObject, rb_intern("Date")); rb_gc_register_mark_object(s_Date); s_id_new = rb_intern("new"); /* dummy = rb_define_class_under( rb_mPG_BinaryDecoder, "Date", rb_cPG_SimpleDecoder ); */ pg_define_coder( "Date", pg_bin_dec_date, rb_cPG_SimpleDecoder, rb_mPG_BinaryDecoder ); return Qnil; } /* * Document-class: PG::BinaryDecoder::String < PG::SimpleDecoder * * This is a decoder class for conversion of PostgreSQL text output to * to Ruby String object. The output value will have the character encoding * set with PG::Connection#internal_encoding= . * */ void init_pg_binary_decoder(void) { /* This module encapsulates all decoder classes with binary input format */ rb_mPG_BinaryDecoder = rb_define_module_under( rb_mPG, "BinaryDecoder" ); rb_define_private_method(rb_singleton_class(rb_mPG_BinaryDecoder), "init_date", init_pg_bin_decoder_date, 0); /* Make RDoc aware of the decoder classes... 
*/ /* dummy = rb_define_class_under( rb_mPG_BinaryDecoder, "Boolean", rb_cPG_SimpleDecoder ); */ pg_define_coder( "Boolean", pg_bin_dec_boolean, rb_cPG_SimpleDecoder, rb_mPG_BinaryDecoder ); /* dummy = rb_define_class_under( rb_mPG_BinaryDecoder, "Integer", rb_cPG_SimpleDecoder ); */ pg_define_coder( "Integer", pg_bin_dec_integer, rb_cPG_SimpleDecoder, rb_mPG_BinaryDecoder ); /* dummy = rb_define_class_under( rb_mPG_BinaryDecoder, "Float", rb_cPG_SimpleDecoder ); */ pg_define_coder( "Float", pg_bin_dec_float, rb_cPG_SimpleDecoder, rb_mPG_BinaryDecoder ); /* dummy = rb_define_class_under( rb_mPG_BinaryDecoder, "String", rb_cPG_SimpleDecoder ); */ pg_define_coder( "String", pg_text_dec_string, rb_cPG_SimpleDecoder, rb_mPG_BinaryDecoder ); /* dummy = rb_define_class_under( rb_mPG_BinaryDecoder, "Bytea", rb_cPG_SimpleDecoder ); */ pg_define_coder( "Bytea", pg_bin_dec_bytea, rb_cPG_SimpleDecoder, rb_mPG_BinaryDecoder ); /* dummy = rb_define_class_under( rb_mPG_BinaryDecoder, "Timestamp", rb_cPG_SimpleDecoder ); */ pg_define_coder( "Timestamp", pg_bin_dec_timestamp, rb_cPG_SimpleDecoder, rb_mPG_BinaryDecoder ); /* dummy = rb_define_class_under( rb_mPG_BinaryDecoder, "ToBase64", rb_cPG_CompositeDecoder ); */ pg_define_coder( "ToBase64", pg_bin_dec_to_base64, rb_cPG_CompositeDecoder, rb_mPG_BinaryDecoder ); } pg-1.5.5/ext/pg_type_map_all_strings.c /* * pg_type_map_all_strings.c - PG::TypeMapAllStrings class extension * $Id$ * * This is the default typemap.
* */ #include "pg.h" static const rb_data_type_t pg_tmas_type = { "PG::TypeMapAllStrings", { pg_typemap_mark, RUBY_TYPED_DEFAULT_FREE, pg_typemap_memsize, pg_compact_callback(pg_typemap_compact), }, &pg_typemap_type, 0, RUBY_TYPED_FREE_IMMEDIATELY | RUBY_TYPED_WB_PROTECTED | PG_RUBY_TYPED_FROZEN_SHAREABLE, }; VALUE rb_cTypeMapAllStrings; VALUE pg_typemap_all_strings; static VALUE pg_tmas_fit_to_result( VALUE self, VALUE result ) { return self; } static VALUE pg_tmas_result_value( t_typemap *p_typemap, VALUE result, int tuple, int field ) { VALUE ret; char * val; int len; t_pg_result *p_result = pgresult_get_this(result); if (PQgetisnull(p_result->pgresult, tuple, field)) { return Qnil; } val = PQgetvalue( p_result->pgresult, tuple, field ); len = PQgetlength( p_result->pgresult, tuple, field ); if ( 0 == PQfformat(p_result->pgresult, field) ) { ret = pg_text_dec_string(NULL, val, len, tuple, field, p_result->enc_idx); } else { ret = pg_bin_dec_bytea(NULL, val, len, tuple, field, p_result->enc_idx); } return ret; } static VALUE pg_tmas_fit_to_query( VALUE self, VALUE params ) { return self; } static t_pg_coder * pg_tmas_typecast_query_param( t_typemap *p_typemap, VALUE param_value, int field ) { return NULL; } static int pg_tmas_fit_to_copy_get( VALUE self ) { /* We can not predict the number of columns for copy */ return 0; } static VALUE pg_tmas_typecast_copy_get( t_typemap *p_typemap, VALUE field_str, int fieldno, int format, int enc_idx ) { rb_str_modify(field_str); if( format == 0 ){ PG_ENCODING_SET_NOCHECK( field_str, enc_idx ); } else { PG_ENCODING_SET_NOCHECK( field_str, rb_ascii8bit_encindex() ); } return field_str; } static VALUE pg_tmas_s_allocate( VALUE klass ) { t_typemap *this; VALUE self; self = TypedData_Make_Struct( klass, t_typemap, &pg_tmas_type, this ); this->funcs.fit_to_result = pg_tmas_fit_to_result; this->funcs.fit_to_query = pg_tmas_fit_to_query; this->funcs.fit_to_copy_get = pg_tmas_fit_to_copy_get; this->funcs.typecast_result_value = 
pg_tmas_result_value; this->funcs.typecast_query_param = pg_tmas_typecast_query_param; this->funcs.typecast_copy_get = pg_tmas_typecast_copy_get; return self; } void init_pg_type_map_all_strings(void) { /* * Document-class: PG::TypeMapAllStrings < PG::TypeMap * * This type map casts all values received from the database server to Strings * and sends all values to the server after conversion to String by +#to_s+ . * That means, it is hard coded to PG::TextEncoder::String for value encoding * and to PG::TextDecoder::String for text format respectively PG::BinaryDecoder::Bytea * for binary format received from the server. * * It is suitable for type casting query bind parameters, result values and * COPY IN/OUT data. * * This is the default type map for each PG::Connection . * */ rb_cTypeMapAllStrings = rb_define_class_under( rb_mPG, "TypeMapAllStrings", rb_cTypeMap ); rb_define_alloc_func( rb_cTypeMapAllStrings, pg_tmas_s_allocate ); pg_typemap_all_strings = rb_obj_freeze( rb_funcall( rb_cTypeMapAllStrings, rb_intern("new"), 0 )); rb_gc_register_address( &pg_typemap_all_strings ); } pg-1.5.5/ext/pg_copy_coder.c0000644000004100000410000006731114563476204015763 0ustar www-datawww-data/* * pg_copycoder.c - PG::Coder class extension * */ #include "pg.h" #include "pg_util.h" #define ISOCTAL(c) (((c) >= '0') && ((c) <= '7')) #define OCTVALUE(c) ((c) - '0') VALUE rb_cPG_CopyCoder; VALUE rb_cPG_CopyEncoder; VALUE rb_cPG_CopyDecoder; typedef struct { t_pg_coder comp; VALUE typemap; VALUE null_string; char delimiter; } t_pg_copycoder; static void pg_copycoder_mark( void *_this ) { t_pg_copycoder *this = (t_pg_copycoder *)_this; rb_gc_mark_movable(this->typemap); rb_gc_mark_movable(this->null_string); } static size_t pg_copycoder_memsize( const void *_this ) { const t_pg_copycoder *this = (const t_pg_copycoder *)_this; return sizeof(*this); } static void pg_copycoder_compact( void *_this ) { t_pg_copycoder *this = (t_pg_copycoder *)_this; pg_coder_compact(&this->comp); 
pg_gc_location(this->typemap); pg_gc_location(this->null_string); } static const rb_data_type_t pg_copycoder_type = { "PG::CopyCoder", { pg_copycoder_mark, RUBY_TYPED_DEFAULT_FREE, pg_copycoder_memsize, pg_compact_callback(pg_copycoder_compact), }, &pg_coder_type, 0, RUBY_TYPED_FREE_IMMEDIATELY | RUBY_TYPED_WB_PROTECTED | PG_RUBY_TYPED_FROZEN_SHAREABLE, }; static VALUE pg_copycoder_encoder_allocate( VALUE klass ) { t_pg_copycoder *this; VALUE self = TypedData_Make_Struct( klass, t_pg_copycoder, &pg_copycoder_type, this ); pg_coder_init_encoder( self ); RB_OBJ_WRITE(self, &this->typemap, pg_typemap_all_strings); this->delimiter = '\t'; RB_OBJ_WRITE(self, &this->null_string, rb_str_new_cstr("\\N")); return self; } static VALUE pg_copycoder_decoder_allocate( VALUE klass ) { t_pg_copycoder *this; VALUE self = TypedData_Make_Struct( klass, t_pg_copycoder, &pg_copycoder_type, this ); pg_coder_init_decoder( self ); RB_OBJ_WRITE(self, &this->typemap, pg_typemap_all_strings); this->delimiter = '\t'; RB_OBJ_WRITE(self, &this->null_string, rb_str_new_cstr("\\N")); return self; } /* * call-seq: * coder.delimiter = String * * Specifies the character that separates columns within each row (line) of the file. * The default is a tab character in text format. * This must be a single one-byte character. * * This option is ignored when using binary format. */ static VALUE pg_copycoder_delimiter_set(VALUE self, VALUE delimiter) { t_pg_copycoder *this = RTYPEDDATA_DATA(self); rb_check_frozen(self); StringValue(delimiter); if(RSTRING_LEN(delimiter) != 1) rb_raise( rb_eArgError, "delimiter size must be one byte"); this->delimiter = *RSTRING_PTR(delimiter); return delimiter; } /* * call-seq: * coder.delimiter -> String * * The character that separates columns within each row (line) of the file. 
*/ static VALUE pg_copycoder_delimiter_get(VALUE self) { t_pg_copycoder *this = RTYPEDDATA_DATA(self); return rb_str_new(&this->delimiter, 1); } /* * Specifies the string that represents a null value. * The default is \\N (backslash-N) in text format. * You might prefer an empty string even in text format for cases where you don't want to distinguish nulls from empty strings. * * This option is ignored when using binary format. */ static VALUE pg_copycoder_null_string_set(VALUE self, VALUE null_string) { t_pg_copycoder *this = RTYPEDDATA_DATA(self); rb_check_frozen(self); StringValue(null_string); RB_OBJ_WRITE(self, &this->null_string, null_string); return null_string; } /* * The string that represents a null value. */ static VALUE pg_copycoder_null_string_get(VALUE self) { t_pg_copycoder *this = RTYPEDDATA_DATA(self); return this->null_string; } /* * call-seq: * coder.type_map = map * * Defines how single columns are encoded or decoded. * +map+ must be a kind of PG::TypeMap . * * Defaults to a PG::TypeMapAllStrings , so that PG::TextEncoder::String respectively * PG::TextDecoder::String is used for encoding/decoding of each column. * */ static VALUE pg_copycoder_type_map_set(VALUE self, VALUE type_map) { t_pg_copycoder *this = RTYPEDDATA_DATA( self ); rb_check_frozen(self); if ( !rb_obj_is_kind_of(type_map, rb_cTypeMap) ){ rb_raise( rb_eTypeError, "wrong elements type %s (expected some kind of PG::TypeMap)", rb_obj_classname( type_map ) ); } RB_OBJ_WRITE(self, &this->typemap, type_map); return type_map; } /* * call-seq: * coder.type_map -> PG::TypeMap * * The PG::TypeMap that will be used for encoding and decoding of columns. */ static VALUE pg_copycoder_type_map_get(VALUE self) { t_pg_copycoder *this = RTYPEDDATA_DATA( self ); return this->typemap; } /* * Document-class: PG::TextEncoder::CopyRow < PG::CopyEncoder * * This class encodes one row of arbitrary columns for transmission as COPY data in text format. 
* See the {COPY command}[http://www.postgresql.org/docs/current/static/sql-copy.html] * for description of the format. * * It is intended to be used in conjunction with PG::Connection#put_copy_data . * * The columns are expected as Array of values. The single values are encoded as defined * in the assigned #type_map. If no type_map was assigned, all values are converted to * strings by PG::TextEncoder::String. * * Example with default type map ( TypeMapAllStrings ): * conn.exec "create table my_table (a text,b int,c bool)" * enco = PG::TextEncoder::CopyRow.new * conn.copy_data "COPY my_table FROM STDIN", enco do * conn.put_copy_data ["astring", 7, false] * conn.put_copy_data ["string2", 42, true] * end * This creates +my_table+ and inserts two rows. * * It is possible to manually assign a type encoder for each column per PG::TypeMapByColumn, * or to make use of PG::BasicTypeMapBasedOnResult to assign them based on the table OIDs. * * See also PG::TextDecoder::CopyRow for the decoding direction with * PG::Connection#get_copy_data . */ static int pg_text_enc_copy_row(t_pg_coder *conv, VALUE value, char *out, VALUE *intermediate, int enc_idx) { t_pg_copycoder *this = (t_pg_copycoder *)conv; t_pg_coder_enc_func enc_func; static t_pg_coder *p_elem_coder; int i; t_typemap *p_typemap; char *current_out; char *end_capa_ptr; p_typemap = RTYPEDDATA_DATA( this->typemap ); p_typemap->funcs.fit_to_query( this->typemap, value ); /* Allocate a new string with embedded capacity and realloc exponential when needed. 
*/ PG_RB_STR_NEW( *intermediate, current_out, end_capa_ptr ); PG_ENCODING_SET_NOCHECK(*intermediate, enc_idx); for( i=0; i<RARRAY_LEN(value); i++){ char *ptr1; char *ptr2; int strlen; int backslashs; VALUE subint; VALUE entry; entry = rb_ary_entry(value, i); if( i > 0 ){ PG_RB_STR_ENSURE_CAPA( *intermediate, 1, current_out, end_capa_ptr ); *current_out++ = this->delimiter; } switch(TYPE(entry)){ case T_NIL: PG_RB_STR_ENSURE_CAPA( *intermediate, RSTRING_LEN(this->null_string), current_out, end_capa_ptr ); memcpy( current_out, RSTRING_PTR(this->null_string), RSTRING_LEN(this->null_string) ); current_out += RSTRING_LEN(this->null_string); break; default: p_elem_coder = p_typemap->funcs.typecast_query_param(p_typemap, entry, i); enc_func = pg_coder_enc_func(p_elem_coder); /* 1st pass for retrieving the required memory space */ strlen = enc_func(p_elem_coder, entry, NULL, &subint, enc_idx); if( strlen == -1 ){ /* we can directly use String value in subint */ strlen = RSTRING_LENINT(subint); /* size of string assuming the worst case, that every character must be escaped. */ PG_RB_STR_ENSURE_CAPA( *intermediate, strlen * 2, current_out, end_capa_ptr ); /* Copy string from subint with backslash escaping */ for(ptr1 = RSTRING_PTR(subint); ptr1 < RSTRING_PTR(subint) + strlen; ptr1++) { /* Escape backslash itself, newline, carriage return, and the current delimiter character. */ if(*ptr1 == '\\' || *ptr1 == '\n' || *ptr1 == '\r' || *ptr1 == this->delimiter){ *current_out++ = '\\'; } *current_out++ = *ptr1; } } else { /* 2nd pass for writing the data to prepared buffer */ /* size of string assuming the worst case, that every character must be escaped. */ PG_RB_STR_ENSURE_CAPA( *intermediate, strlen * 2, current_out, end_capa_ptr ); /* Place the unescaped string at current output position. */ strlen = enc_func(p_elem_coder, entry, current_out, &subint, enc_idx); ptr1 = current_out; ptr2 = current_out + strlen; /* count required backslashes */ for(backslashs = 0; ptr1 != ptr2; ptr1++) { /* Escape backslash itself, newline, carriage return, and the current delimiter character.
*/ if(*ptr1 == '\\' || *ptr1 == '\n' || *ptr1 == '\r' || *ptr1 == this->delimiter){ backslashs++; } } ptr1 = current_out + strlen; ptr2 = current_out + strlen + backslashs; current_out = ptr2; /* Then store the escaped string on the final position, walking * right to left, until all backslashs are placed. */ while( ptr1 != ptr2 ) { *--ptr2 = *--ptr1; if(*ptr1 == '\\' || *ptr1 == '\n' || *ptr1 == '\r' || *ptr1 == this->delimiter){ *--ptr2 = '\\'; } } } } } PG_RB_STR_ENSURE_CAPA( *intermediate, 1, current_out, end_capa_ptr ); *current_out++ = '\n'; rb_str_set_len( *intermediate, current_out - RSTRING_PTR(*intermediate) ); return -1; } /* * Document-class: PG::BinaryEncoder::CopyRow < PG::CopyEncoder * * This class encodes one row of arbitrary columns for transmission as COPY data in binary format. * See the {COPY command}[http://www.postgresql.org/docs/current/static/sql-copy.html] * for description of the format. * * It is intended to be used in conjunction with PG::Connection#put_copy_data . * * The columns are expected as Array of values. The single values are encoded as defined * in the assigned #type_map. If no type_map was assigned, all values are converted to * strings by PG::BinaryEncoder::String. * * Example with default type map ( TypeMapAllStrings ): * conn.exec "create table my_table (a text,b int,c bool)" * enco = PG::BinaryEncoder::CopyRow.new * conn.copy_data "COPY my_table FROM STDIN WITH (FORMAT binary)", enco do * conn.put_copy_data ["astring", "\x00\x00\x00\a", "\x00"] * conn.put_copy_data ["string2", "\x00\x00\x00*", "\x01"] * end * This creates +my_table+ and inserts two rows with binary fields. * * The binary format is less portable and less readable than the text format. * It is therefore recommended to either manually assign a type encoder for each column per PG::TypeMapByColumn, * or to make use of PG::BasicTypeMapBasedOnResult to assign them based on the table OIDs. 
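The text-format escape loop above prefixes a backslash before a backslash, newline, carriage return, or the configured delimiter, which is also why the buffer is grown to twice the encoded length (worst case: every byte needs escaping). A simplified pure-Ruby sketch of that escaping rule (for illustration only; not the gem's API):

```ruby
# Escape a single field for COPY text format the way the encoder above
# does: prefix '\' before backslash, newline, carriage return and the
# delimiter. In the worst case the output is twice as long as the input.
def copy_escape(field, delimiter = "\t")
  field.each_char.map { |c|
    ("\\\n\r" + delimiter).include?(c) ? "\\#{c}" : c
  }.join
end
```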
* * Manually assigning a type encoder works per type map like so: * * conn.exec "create table my_table (a text,b int,c bool)" * tm = PG::TypeMapByColumn.new( [ * PG::BinaryEncoder::String.new, * PG::BinaryEncoder::Int4.new, * PG::BinaryEncoder::Boolean.new] ) * enco = PG::BinaryEncoder::CopyRow.new( type_map: tm ) * conn.copy_data "COPY my_table FROM STDIN WITH (FORMAT binary)", enco do * conn.put_copy_data ["astring", 7, false] * conn.put_copy_data ["string2", 42, true] * end * * See also PG::BinaryDecoder::CopyRow for the decoding direction with * PG::Connection#get_copy_data . */ static int pg_bin_enc_copy_row(t_pg_coder *conv, VALUE value, char *out, VALUE *intermediate, int enc_idx) { t_pg_copycoder *this = (t_pg_copycoder *)conv; int i; t_typemap *p_typemap; char *current_out; char *end_capa_ptr; p_typemap = RTYPEDDATA_DATA( this->typemap ); p_typemap->funcs.fit_to_query( this->typemap, value ); /* Allocate a new string with embedded capacity and realloc exponential when needed. */ PG_RB_STR_NEW( *intermediate, current_out, end_capa_ptr ); PG_ENCODING_SET_NOCHECK(*intermediate, enc_idx); /* 2 bytes for number of fields */ PG_RB_STR_ENSURE_CAPA( *intermediate, 2, current_out, end_capa_ptr ); write_nbo16(RARRAY_LEN(value), current_out); current_out += 2; for( i=0; i<RARRAY_LEN(value); i++){ VALUE subint; VALUE entry; t_pg_coder_enc_func enc_func; t_pg_coder *p_elem_coder; int strlen; entry = rb_ary_entry(value, i); switch(TYPE(entry)){ case T_NIL: /* 4 bytes of -1 for NULL */ PG_RB_STR_ENSURE_CAPA( *intermediate, 4, current_out, end_capa_ptr ); write_nbo32(-1, current_out); current_out += 4; break; default: p_elem_coder = p_typemap->funcs.typecast_query_param(p_typemap, entry, i); enc_func = pg_coder_enc_func(p_elem_coder); /* 1st pass for retrieving the required memory space */ strlen = enc_func(p_elem_coder, entry, NULL, &subint, enc_idx); if( strlen == -1 ){ /* we can directly use String value in subint */ strlen = RSTRING_LENINT(subint); PG_RB_STR_ENSURE_CAPA( *intermediate, 4 + strlen, current_out, end_capa_ptr ); /* 4 bytes length */ write_nbo32(strlen, current_out); current_out += 4; memcpy( current_out, RSTRING_PTR(subint), strlen ); current_out += strlen; } else { /* 2nd pass for writing the data to prepared buffer */ PG_RB_STR_ENSURE_CAPA( *intermediate, 4 + strlen, current_out, end_capa_ptr ); /* 4 bytes length */
write_nbo32(strlen, current_out); current_out += 4; /* Place the string at current output position. */ strlen = enc_func(p_elem_coder, entry, current_out, &subint, enc_idx); current_out += strlen; } } } rb_str_set_len( *intermediate, current_out - RSTRING_PTR(*intermediate) ); return -1; } /* * Return decimal value for a hexadecimal digit */ static int GetDecimalFromHex(char hex) { if (hex >= '0' && hex <= '9') return hex - '0'; else if (hex >= 'a' && hex <= 'f') return hex - 'a' + 10; else if (hex >= 'A' && hex <= 'F') return hex - 'A' + 10; else return -1; } /* * Document-class: PG::TextDecoder::CopyRow < PG::CopyDecoder * * This class decodes one row of arbitrary columns received as COPY data in text format. * See the {COPY command}[http://www.postgresql.org/docs/current/static/sql-copy.html] * for description of the format. * * It is intended to be used in conjunction with PG::Connection#get_copy_data . * * The columns are retrieved as Array of values. The single values are decoded as defined * in the assigned #type_map. If no type_map was assigned, all values are converted to * strings by PG::TextDecoder::String. 
* * Example with default type map ( TypeMapAllStrings ): * conn.exec("CREATE TABLE my_table AS VALUES('astring', 7, FALSE), ('string2', 42, TRUE) ") * * deco = PG::TextDecoder::CopyRow.new * conn.copy_data "COPY my_table TO STDOUT", deco do * while row=conn.get_copy_data * p row * end * end * This prints all rows of +my_table+ : * ["astring", "7", "f"] * ["string2", "42", "t"] * * Example with column based type map: * tm = PG::TypeMapByColumn.new( [ * PG::TextDecoder::String.new, * PG::TextDecoder::Integer.new, * PG::TextDecoder::Boolean.new] ) * deco = PG::TextDecoder::CopyRow.new( type_map: tm ) * conn.copy_data "COPY my_table TO STDOUT", deco do * while row=conn.get_copy_data * p row * end * end * This prints the rows with type casted columns: * ["astring", 7, false] * ["string2", 42, true] * * Instead of manually assigning a type decoder for each column, PG::BasicTypeMapForResults * can be used to assign them based on the table OIDs. * * See also PG::TextEncoder::CopyRow for the encoding direction with * PG::Connection#put_copy_data . */ /* * Parse the current line into separate attributes (fields), * performing de-escaping as needed. * * All fields are gathered into a ruby Array. The de-escaped field data is written * into a ruby String. This object is reused for non string columns. * For String columns the field value is directly used as return value and no * reuse of the memory is done.
* * The parser is thankfully borrowed from the PostgreSQL sources: * src/backend/commands/copy.c */ static VALUE pg_text_dec_copy_row(t_pg_coder *conv, const char *input_line, int len, int _tuple, int _field, int enc_idx) { t_pg_copycoder *this = (t_pg_copycoder *)conv; /* Return value: array */ VALUE array; /* Current field */ VALUE field_str; char delimc = this->delimiter; int fieldno; int expected_fields; char *output_ptr; const char *cur_ptr; const char *line_end_ptr; char *end_capa_ptr; t_typemap *p_typemap; p_typemap = RTYPEDDATA_DATA( this->typemap ); expected_fields = p_typemap->funcs.fit_to_copy_get( this->typemap ); /* The received input string will probably have this->nfields fields. */ array = rb_ary_new2(expected_fields); /* Allocate a new string with embedded capacity and realloc later with * exponential growing size when needed. */ PG_RB_STR_NEW( field_str, output_ptr, end_capa_ptr ); /* set pointer variables for loop */ cur_ptr = input_line; line_end_ptr = input_line + len; /* Outer loop iterates over fields */ fieldno = 0; for (;;) { int found_delim = 0; const char *start_ptr; const char *end_ptr; long input_len; /* Remember start of field on input side */ start_ptr = cur_ptr; /* * Scan data for field. * * Note that in this loop, we are scanning to locate the end of field * and also speculatively performing de-escaping. Once we find the * end-of-field, we can match the raw field contents against the null * marker string. Only after that comparison fails do we know that * de-escaping is actually the right thing to do; therefore we *must * not* throw any syntax errors before we've done the null-marker * check. */ for (;;) { /* The current character in the input string. 
*/ char c; end_ptr = cur_ptr; if (cur_ptr >= line_end_ptr) break; c = *cur_ptr++; if (c == delimc){ found_delim = 1; break; } if (c == '\n'){ break; } if (c == '\\'){ if (cur_ptr >= line_end_ptr) break; c = *cur_ptr++; switch (c){ case '0': case '1': case '2': case '3': case '4': case '5': case '6': case '7': { /* handle \013 */ int val; val = OCTVALUE(c); if (cur_ptr < line_end_ptr) { c = *cur_ptr; if (ISOCTAL(c)) { cur_ptr++; val = (val << 3) + OCTVALUE(c); if (cur_ptr < line_end_ptr) { c = *cur_ptr; if (ISOCTAL(c)) { cur_ptr++; val = (val << 3) + OCTVALUE(c); } } } } c = val & 0377; } break; case 'x': /* Handle \x3F */ if (cur_ptr < line_end_ptr) { char hexchar = *cur_ptr; int val = GetDecimalFromHex(hexchar);; if (val >= 0) { cur_ptr++; if (cur_ptr < line_end_ptr) { int val2; hexchar = *cur_ptr; val2 = GetDecimalFromHex(hexchar); if (val2 >= 0) { cur_ptr++; val = (val << 4) + val2; } } c = val & 0xff; } } break; case 'b': c = '\b'; break; case 'f': c = '\f'; break; case 'n': c = '\n'; break; case 'r': c = '\r'; break; case 't': c = '\t'; break; case 'v': c = '\v'; break; /* * in all other cases, take the char after '\' * literally */ } } PG_RB_STR_ENSURE_CAPA( field_str, 1, output_ptr, end_capa_ptr ); /* Add c to output string */ *output_ptr++ = c; } if (!found_delim && cur_ptr < line_end_ptr) rb_raise( rb_eArgError, "trailing data after linefeed at position: %ld", (long)(cur_ptr - input_line) + 1 ); /* Check whether raw input matched null marker */ input_len = end_ptr - start_ptr; if (input_len == RSTRING_LEN(this->null_string) && strncmp(start_ptr, RSTRING_PTR(this->null_string), input_len) == 0) { rb_ary_push(array, Qnil); } else { VALUE field_value; rb_str_set_len( field_str, output_ptr - RSTRING_PTR(field_str) ); field_value = p_typemap->funcs.typecast_copy_get( p_typemap, field_str, fieldno, 0, enc_idx ); rb_ary_push(array, field_value); if( field_value == field_str ){ /* Our output string will be send to the user, so we can not reuse * it for the next 
field. */ PG_RB_STR_NEW( field_str, output_ptr, end_capa_ptr ); } } /* Reset the pointer to the start of the output/buffer string. */ output_ptr = RSTRING_PTR(field_str); fieldno++; /* Done if we hit EOL instead of a delim */ if (!found_delim) break; } return array; } static const char BinarySignature[11] = "PGCOPY\n\377\r\n\0"; /* * Document-class: PG::BinaryDecoder::CopyRow < PG::CopyDecoder * * This class decodes one row of arbitrary columns received as COPY data in binary format. * See the {COPY command}[http://www.postgresql.org/docs/current/static/sql-copy.html] * for description of the format. * * It is intended to be used in conjunction with PG::Connection#get_copy_data . * * The columns are retrieved as Array of values. The single values are decoded as defined * in the assigned #type_map. If no type_map was assigned, all values are converted to * strings by PG::BinaryDecoder::String. * * Example with default type map ( TypeMapAllStrings ): * conn.exec("CREATE TABLE my_table AS VALUES('astring', 7, FALSE), ('string2', 42, TRUE) ") * * deco = PG::BinaryDecoder::CopyRow.new * conn.copy_data "COPY my_table TO STDOUT WITH (FORMAT binary)", deco do * while row=conn.get_copy_data * p row * end * end * This prints all rows of +my_table+ in binary format: * ["astring", "\x00\x00\x00\a", "\x00"] * ["string2", "\x00\x00\x00*", "\x01"] * * Example with column based type map: * tm = PG::TypeMapByColumn.new( [ * PG::BinaryDecoder::String.new, * PG::BinaryDecoder::Integer.new, * PG::BinaryDecoder::Boolean.new] ) * deco = PG::BinaryDecoder::CopyRow.new( type_map: tm ) * conn.copy_data "COPY my_table TO STDOUT WITH (FORMAT binary)", deco do * while row=conn.get_copy_data * p row * end * end * This prints the rows with type casted columns: * ["astring", 7, false] * ["string2", 42, true] * * Instead of manually assigning a type decoder for each column, PG::BasicTypeMapForResults * can be used to assign them based on the table OIDs. 
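On the decoding side, the parser above undoes C-style escapes: up to three octal digits (`\013`), `x` plus up to two hex digits (`\x3F`), the single-letter escapes `\b \f \n \r \t \v`, and any other escaped character taken literally. A reduced Ruby sketch of just that escape-resolution step (hypothetical helper, not the gem's parser):

```ruby
# Resolve one escape sequence following a backslash, as the text COPY
# parser above does: up to 3 octal digits, or 'x' plus up to 2 hex
# digits, or a single-letter C escape; anything else is taken literally.
def resolve_escape(seq)
  case seq
  when /\A([0-7]{1,3})/        then ($1.to_i(8) & 0xff).chr   # e.g. \013
  when /\Ax([0-9a-fA-F]{1,2})/ then $1.to_i(16).chr           # e.g. \x3F
  when /\Ab/ then "\b"
  when /\Af/ then "\f"
  when /\An/ then "\n"
  when /\Ar/ then "\r"
  when /\At/ then "\t"
  when /\Av/ then "\v"
  else seq[0]  # take the char after '\' literally
  end
end
```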
* * See also PG::BinaryEncoder::CopyRow for the encoding direction with * PG::Connection#put_copy_data . */ static VALUE pg_bin_dec_copy_row(t_pg_coder *conv, const char *input_line, int len, int _tuple, int _field, int enc_idx) { t_pg_copycoder *this = (t_pg_copycoder *)conv; /* Return value: array */ VALUE array; /* Current field */ VALUE field_str; int nfields; int expected_fields; int fieldno; char *output_ptr; const char *cur_ptr; const char *line_end_ptr; char *end_capa_ptr; t_typemap *p_typemap; p_typemap = RTYPEDDATA_DATA( this->typemap ); expected_fields = p_typemap->funcs.fit_to_copy_get( this->typemap ); /* Allocate a new string with embedded capacity and realloc later with * exponential growing size when needed. */ PG_RB_STR_NEW( field_str, output_ptr, end_capa_ptr ); /* set pointer variables for loop */ cur_ptr = input_line; line_end_ptr = input_line + len; if (line_end_ptr - cur_ptr >= 11 && memcmp(cur_ptr, BinarySignature, 11) == 0){ /* binary COPY header signature detected -> just drop it */ int ext_bytes; cur_ptr += 11; /* read flags */ if (line_end_ptr - cur_ptr < 4 ) goto length_error; cur_ptr += 4; /* read header extensions */ if (line_end_ptr - cur_ptr < 4 ) goto length_error; ext_bytes = read_nbo32(cur_ptr); if (ext_bytes < 0) goto length_error; cur_ptr += 4; if (line_end_ptr - cur_ptr < ext_bytes ) goto length_error; cur_ptr += ext_bytes; } /* read row header */ if (line_end_ptr - cur_ptr < 2 ) goto length_error; nfields = read_nbo16(cur_ptr); cur_ptr += 2; /* COPY data trailer? 
*/ if (nfields < 0) { if (nfields != -1) goto length_error; array = Qnil; } else { array = rb_ary_new2(expected_fields); for( fieldno = 0; fieldno < nfields; fieldno++){ long input_len; VALUE field_value; /* read field size */ if (line_end_ptr - cur_ptr < 4 ) goto length_error; input_len = read_nbo32(cur_ptr); cur_ptr += 4; if (input_len < 0) { if (input_len != -1) goto length_error; /* NULL indicator */ rb_ary_push(array, Qnil); } else { if (line_end_ptr - cur_ptr < input_len ) goto length_error; /* copy input data to field_str */ PG_RB_STR_ENSURE_CAPA( field_str, input_len, output_ptr, end_capa_ptr ); memcpy(output_ptr, cur_ptr, input_len); cur_ptr += input_len; output_ptr += input_len; /* convert field_str through the type map */ rb_str_set_len( field_str, output_ptr - RSTRING_PTR(field_str) ); field_value = p_typemap->funcs.typecast_copy_get( p_typemap, field_str, fieldno, 1, enc_idx ); rb_ary_push(array, field_value); if( field_value == field_str ){ /* Our output string will be send to the user, so we can not reuse * it for the next field. */ PG_RB_STR_NEW( field_str, output_ptr, end_capa_ptr ); } } /* Reset the pointer to the start of the output/buffer string. 
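The binary COPY row layout handled above is: a big-endian int16 field count (-1 marks the data trailer), then per field a big-endian int32 length (-1 indicates NULL) followed by that many raw bytes. A minimal Ruby sketch of parsing one such row (header signature and flags handling omitted; this is an illustration, not the gem's API):

```ruby
# Parse one binary COPY row: big-endian int16 field count, then for
# each field a big-endian int32 length (-1 means NULL) followed by the
# raw field bytes. A field count of -1 marks the COPY data trailer.
def parse_binary_copy_row(data)
  nfields = data.unpack1('s>')
  return nil if nfields == -1          # trailer
  pos = 2
  Array.new(nfields) do
    len = data[pos, 4].unpack1('l>')
    pos += 4
    if len == -1
      nil                              # NULL indicator
    else
      field = data[pos, len]
      pos += len
      field
    end
  end
end
```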
*/ output_ptr = RSTRING_PTR(field_str); } } if (cur_ptr < line_end_ptr) rb_raise( rb_eArgError, "trailing data after row data at position: %ld", (long)(cur_ptr - input_line) + 1 ); return array; length_error: rb_raise( rb_eArgError, "premature end of COPY data at position: %ld", (long)(cur_ptr - input_line) + 1 ); } void init_pg_copycoder(void) { VALUE coder; /* Document-class: PG::CopyCoder < PG::Coder * * This is the base class for all type cast classes for COPY data, */ rb_cPG_CopyCoder = rb_define_class_under( rb_mPG, "CopyCoder", rb_cPG_Coder ); rb_define_method( rb_cPG_CopyCoder, "type_map=", pg_copycoder_type_map_set, 1 ); rb_define_method( rb_cPG_CopyCoder, "type_map", pg_copycoder_type_map_get, 0 ); rb_define_method( rb_cPG_CopyCoder, "delimiter=", pg_copycoder_delimiter_set, 1 ); rb_define_method( rb_cPG_CopyCoder, "delimiter", pg_copycoder_delimiter_get, 0 ); rb_define_method( rb_cPG_CopyCoder, "null_string=", pg_copycoder_null_string_set, 1 ); rb_define_method( rb_cPG_CopyCoder, "null_string", pg_copycoder_null_string_get, 0 ); /* Document-class: PG::CopyEncoder < PG::CopyCoder */ rb_cPG_CopyEncoder = rb_define_class_under( rb_mPG, "CopyEncoder", rb_cPG_CopyCoder ); rb_define_alloc_func( rb_cPG_CopyEncoder, pg_copycoder_encoder_allocate ); /* Document-class: PG::CopyDecoder < PG::CopyCoder */ rb_cPG_CopyDecoder = rb_define_class_under( rb_mPG, "CopyDecoder", rb_cPG_CopyCoder ); rb_define_alloc_func( rb_cPG_CopyDecoder, pg_copycoder_decoder_allocate ); /* Make RDoc aware of the encoder classes... 
*/ /* rb_mPG_TextEncoder = rb_define_module_under( rb_mPG, "TextEncoder" ); */ /* dummy = rb_define_class_under( rb_mPG_TextEncoder, "CopyRow", rb_cPG_CopyEncoder ); */ coder = pg_define_coder( "CopyRow", pg_text_enc_copy_row, rb_cPG_CopyEncoder, rb_mPG_TextEncoder ); rb_include_module( coder, rb_mPG_BinaryFormatting ); /* rb_mPG_BinaryEncoder = rb_define_module_under( rb_mPG, "BinaryEncoder" ); */ /* dummy = rb_define_class_under( rb_mPG_BinaryEncoder, "CopyRow", rb_cPG_CopyEncoder ); */ pg_define_coder( "CopyRow", pg_bin_enc_copy_row, rb_cPG_CopyEncoder, rb_mPG_BinaryEncoder ); /* rb_mPG_TextDecoder = rb_define_module_under( rb_mPG, "TextDecoder" ); */ /* dummy = rb_define_class_under( rb_mPG_TextDecoder, "CopyRow", rb_cPG_CopyDecoder ); */ coder = pg_define_coder( "CopyRow", pg_text_dec_copy_row, rb_cPG_CopyDecoder, rb_mPG_TextDecoder ); /* Although CopyRow is a text decoder, data can contain zero bytes and are not zero terminated. * They are handled like binaries. So format is set to 1 (binary). */ rb_include_module( coder, rb_mPG_BinaryFormatting ); /* rb_mPG_BinaryDecoder = rb_define_module_under( rb_mPG, "BinaryDecoder" ); */ /* dummy = rb_define_class_under( rb_mPG_BinaryDecoder, "CopyRow", rb_cPG_CopyDecoder ); */ pg_define_coder( "CopyRow", pg_bin_dec_copy_row, rb_cPG_CopyDecoder, rb_mPG_BinaryDecoder ); } pg-1.5.5/ext/pg.c0000644000004100000410000006724014563476204013556 0ustar www-datawww-data/* * pg.c - Toplevel extension * $Id$ * * Author/s: * * - Jeff Davis * - Guy Decoux (ts) * - Michael Granger * - Lars Kanis * - Dave Lee * - Eiji Matsumoto * - Yukihiro Matsumoto * - Noboru Saitou * * See Contributors.rdoc for the many additional fine people that have contributed * to this library over the years. * * Copyright (c) 1997-2019 by the authors. * * You may redistribute this software under the same terms as Ruby itself; see * https://www.ruby-lang.org/en/about/license.txt or the BSDL file in the source * for details. 
* * Portions of the code are from the PostgreSQL project, and are distributed * under the terms of the PostgreSQL license, included in the file "POSTGRES". * * Portions copyright LAIKA, Inc. * * * The following functions are part of libpq, but not available from ruby-pg, * because they are deprecated, obsolete, or generally not useful: * * - PQfreemem -- unnecessary: copied to ruby object, then freed. Ruby object's * memory is freed when it is garbage collected. * - PQprint -- not very useful * - PQsetdb -- not very useful * - PQoidStatus -- deprecated, use PQoidValue * - PQrequestCancel -- deprecated, use PQcancel * - PQfn -- use a prepared statement instead * - PQgetline -- deprecated, use PQgetCopyData * - PQgetlineAsync -- deprecated, use PQgetCopyData * - PQputline -- deprecated, use PQputCopyData * - PQputnbytes -- deprecated, use PQputCopyData * - PQendcopy -- deprecated, use PQputCopyEnd */ #include "pg.h" int pg_skip_deprecation_warning; VALUE rb_mPG; VALUE rb_mPGconstants; /* * Document-class: PG::Error * * This is the exception class raised when an error is returned from * a libpq API call. * * The attributes +connection+ and +result+ are set to the connection * object and result set object, respectively. * * If the connection object or result set object is not available from * the context in which the error was encountered, it is +nil+. */ /* * M17n functions */ /** * The mapping from canonical encoding names in PostgreSQL to ones in Ruby. 
*/ const char * const (pg_enc_pg2ruby_mapping[][2]) = { {"UTF8", "UTF-8" }, {"BIG5", "Big5" }, {"EUC_CN", "GB2312" }, {"EUC_JP", "EUC-JP" }, {"EUC_JIS_2004", "EUC-JP" }, {"EUC_KR", "EUC-KR" }, {"EUC_TW", "EUC-TW" }, {"GB18030", "GB18030" }, {"GBK", "GBK" }, {"ISO_8859_5", "ISO-8859-5" }, {"ISO_8859_6", "ISO-8859-6" }, {"ISO_8859_7", "ISO-8859-7" }, {"ISO_8859_8", "ISO-8859-8" }, /* {"JOHAB", "JOHAB" }, dummy */ {"KOI8", "KOI8-R" }, {"KOI8R", "KOI8-R" }, {"KOI8U", "KOI8-U" }, {"LATIN1", "ISO-8859-1" }, {"LATIN2", "ISO-8859-2" }, {"LATIN3", "ISO-8859-3" }, {"LATIN4", "ISO-8859-4" }, {"LATIN5", "ISO-8859-9" }, {"LATIN6", "ISO-8859-10" }, {"LATIN7", "ISO-8859-13" }, {"LATIN8", "ISO-8859-14" }, {"LATIN9", "ISO-8859-15" }, {"LATIN10", "ISO-8859-16" }, {"MULE_INTERNAL", "Emacs-Mule" }, {"SJIS", "Windows-31J" }, {"SHIFT_JIS_2004","Windows-31J" }, /* {"SQL_ASCII", NULL }, special case*/ {"UHC", "CP949" }, {"WIN866", "IBM866" }, {"WIN874", "Windows-874" }, {"WIN1250", "Windows-1250"}, {"WIN1251", "Windows-1251"}, {"WIN1252", "Windows-1252"}, {"WIN1253", "Windows-1253"}, {"WIN1254", "Windows-1254"}, {"WIN1255", "Windows-1255"}, {"WIN1256", "Windows-1256"}, {"WIN1257", "Windows-1257"}, {"WIN1258", "Windows-1258"} }; /* * Return the given PostgreSQL encoding ID as an rb_encoding. * * - returns NULL if the client encoding is 'SQL_ASCII'. * - returns ASCII-8BIT if the client encoding is unknown. */ static rb_encoding * pg_get_pg_encoding_as_rb_encoding( int enc_id ) { const char *name = pg_encoding_to_char( enc_id ); return pg_get_pg_encname_as_rb_encoding( name ); } /* * Return the given PostgreSQL encoding name as an rb_encoding. 
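The name-mapping table above can be exercised from Ruby via the `Encoding` registry; unknown PostgreSQL encoding names fall back to ASCII-8BIT, mirroring `pg_get_pg_encname_as_rb_encoding()`. A small sketch with a few entries copied from the table (illustrative helper, not the gem's API):

```ruby
# A few entries from the PostgreSQL -> Ruby encoding name table above.
PG2RUBY_ENC = {
  'UTF8'   => 'UTF-8',
  'LATIN1' => 'ISO-8859-1',
  'SJIS'   => 'Windows-31J',
  'KOI8R'  => 'KOI8-R',
}.freeze

# Resolve a PostgreSQL encoding name to a Ruby Encoding object,
# falling back to ASCII-8BIT for names not in the table.
def pg_encname_to_ruby(name)
  Encoding.find(PG2RUBY_ENC.fetch(name, 'ASCII-8BIT'))
end
```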
*/ rb_encoding * pg_get_pg_encname_as_rb_encoding( const char *pg_encname ) { size_t i; /* Try looking it up in the conversion table */ for ( i = 0; i < sizeof(pg_enc_pg2ruby_mapping)/sizeof(pg_enc_pg2ruby_mapping[0]); ++i ) { if ( strcmp(pg_encname, pg_enc_pg2ruby_mapping[i][0]) == 0 ) return rb_enc_find( pg_enc_pg2ruby_mapping[i][1] ); } /* Fallthrough to ASCII-8BIT */ return rb_ascii8bit_encoding(); } /* * Get the client encoding of the specified connection handle and return it as an rb_encoding. */ rb_encoding * pg_conn_enc_get( PGconn *conn ) { int enc_id = PQclientEncoding( conn ); return pg_get_pg_encoding_as_rb_encoding( enc_id ); } /* * Returns the given rb_encoding as the equivalent PostgreSQL encoding string. */ const char * pg_get_rb_encoding_as_pg_encoding( rb_encoding *enc ) { const char *rb_encname = rb_enc_name( enc ); const char *encname = NULL; size_t i; for (i = 0; i < sizeof(pg_enc_pg2ruby_mapping)/sizeof(pg_enc_pg2ruby_mapping[0]); ++i) { if (strcmp(rb_encname, pg_enc_pg2ruby_mapping[i][1]) == 0) { encname = pg_enc_pg2ruby_mapping[i][0]; } } if ( !encname ) encname = "SQL_ASCII"; return encname; } /* * Ensures that the given string has enough capacity to take expand_len * more data bytes. The new data part of the String is not initialized. * * current_out must be a pointer within the data part of the String object. * This pointer is returned and possibly adjusted, because the location of the data * part of the String can change through this function. * * PG_RB_STR_ENSURE_CAPA can be used to do fast inline checks of the remaining capacity. * end_capa is then set to the first byte after the currently reserved memory, * if it is not NULL. * * Before the String can be used with other string functions or returned to Ruby space, * the string length has to be set with rb_str_set_len().
* * Usage example: * * VALUE string; * char *current_out, *end_capa; * PG_RB_STR_NEW( string, current_out, end_capa ); * while( data_is_going_to_be_processed ){ * PG_RB_STR_ENSURE_CAPA( string, 2, current_out, end_capa ); * *current_out++ = databyte1; * *current_out++ = databyte2; * } * rb_str_set_len( string, current_out - RSTRING_PTR(string) ); * */ char * pg_rb_str_ensure_capa( VALUE str, long expand_len, char *curr_ptr, char **end_ptr ) { long curr_len = curr_ptr - RSTRING_PTR(str); long curr_capa = rb_str_capacity( str ); if( curr_capa < curr_len + expand_len ){ rb_str_set_len( str, curr_len ); rb_str_modify_expand( str, (curr_len + expand_len) * 2 - curr_capa ); curr_ptr = RSTRING_PTR(str) + curr_len; } if( end_ptr ) *end_ptr = RSTRING_PTR(str) + rb_str_capacity( str ); return curr_ptr; } /************************************************************************** * Module Methods **************************************************************************/ /* * call-seq: * PG.library_version -> Integer * * Get the version of the libpq library in use. The number is formed by * converting the major, minor, and revision numbers into two-decimal- * digit numbers and appending them together. * For example, version 7.4.2 will be returned as 70402, and version * 8.1 will be returned as 80100 (leading zeroes are not shown). Zero * is returned if the connection is bad. */ static VALUE pg_s_library_version(VALUE self) { UNUSED( self ); return INT2NUM(PQlibVersion()); } /* * call-seq: * PG.isthreadsafe -> Boolean * PG.is_threadsafe? -> Boolean * PG.threadsafe? -> Boolean * * Returns +true+ if libpq is thread-safe, +false+ otherwise. */ static VALUE pg_s_threadsafe_p(VALUE self) { UNUSED( self ); return PQisthreadsafe() ? 
Qtrue : Qfalse; } static int pg_to_bool_int(VALUE value) { switch( TYPE(value) ){ case T_FALSE: return 0; case T_TRUE: return 1; default: return NUM2INT(value); } } /* * call-seq: * PG.init_openssl(do_ssl, do_crypto) -> nil * * Allows applications to select which security libraries to initialize. * * If your application initializes libssl and/or libcrypto libraries and libpq is * built with SSL support, you should call PG.init_openssl() to tell libpq that the * libssl and/or libcrypto libraries have been initialized by your application, * so that libpq will not also initialize those libraries. * * When do_ssl is +true+, libpq will initialize the OpenSSL library before first * opening a database connection. When do_crypto is +true+, the libcrypto library * will be initialized. By default (if PG.init_openssl() is not called), both libraries * are initialized. When SSL support is not compiled in, this function is present but does nothing. * * If your application uses and initializes either OpenSSL or its underlying libcrypto library, * you must call this function with +false+ for the appropriate parameter(s) before first opening * a database connection. Also be sure that you have done that initialization before opening a * database connection. * */ static VALUE pg_s_init_openssl(VALUE self, VALUE do_ssl, VALUE do_crypto) { UNUSED( self ); PQinitOpenSSL(pg_to_bool_int(do_ssl), pg_to_bool_int(do_crypto)); return Qnil; } /* * call-seq: * PG.init_ssl(do_ssl) -> nil * * Allows applications to select which security libraries to initialize. * * This function is equivalent to PG.init_openssl(do_ssl, do_ssl) . It is sufficient for * applications that initialize both or neither of OpenSSL and libcrypto. 
*/ static VALUE pg_s_init_ssl(VALUE self, VALUE do_ssl) { UNUSED( self ); PQinitSSL(pg_to_bool_int(do_ssl)); return Qnil; } /************************************************************************** * Initializer **************************************************************************/ void Init_pg_ext(void) { #ifdef HAVE_RB_EXT_RACTOR_SAFE rb_ext_ractor_safe(PQisthreadsafe()); #endif if( RTEST(rb_eval_string("ENV['PG_SKIP_DEPRECATION_WARNING']")) ){ /* Set all bits to disable all deprecation warnings. */ pg_skip_deprecation_warning = 0xFFFF; } else { pg_skip_deprecation_warning = 0; } rb_mPG = rb_define_module( "PG" ); rb_mPGconstants = rb_define_module_under( rb_mPG, "Constants" ); /************************* * PG module methods *************************/ rb_define_singleton_method( rb_mPG, "library_version", pg_s_library_version, 0 ); rb_define_singleton_method( rb_mPG, "isthreadsafe", pg_s_threadsafe_p, 0 ); SINGLETON_ALIAS( rb_mPG, "is_threadsafe?", "isthreadsafe" ); SINGLETON_ALIAS( rb_mPG, "threadsafe?", "isthreadsafe" ); rb_define_singleton_method( rb_mPG, "init_openssl", pg_s_init_openssl, 2 ); rb_define_singleton_method( rb_mPG, "init_ssl", pg_s_init_ssl, 1 ); /****** PG::Connection CLASS CONSTANTS: Connection Status ******/ /* Connection succeeded */ rb_define_const(rb_mPGconstants, "CONNECTION_OK", INT2FIX(CONNECTION_OK)); /* Connection failed */ rb_define_const(rb_mPGconstants, "CONNECTION_BAD", INT2FIX(CONNECTION_BAD)); /****** PG::Connection CLASS CONSTANTS: Nonblocking connection status ******/ /* Waiting for connection to be made. */ rb_define_const(rb_mPGconstants, "CONNECTION_STARTED", INT2FIX(CONNECTION_STARTED)); /* Connection OK; waiting to send. */ rb_define_const(rb_mPGconstants, "CONNECTION_MADE", INT2FIX(CONNECTION_MADE)); /* Waiting for a response from the server. */ rb_define_const(rb_mPGconstants, "CONNECTION_AWAITING_RESPONSE", INT2FIX(CONNECTION_AWAITING_RESPONSE)); /* Received authentication; waiting for backend startup. 
*/ rb_define_const(rb_mPGconstants, "CONNECTION_AUTH_OK", INT2FIX(CONNECTION_AUTH_OK)); /* This state is no longer used. */ rb_define_const(rb_mPGconstants, "CONNECTION_SETENV", INT2FIX(CONNECTION_SETENV)); /* Negotiating SSL encryption. */ rb_define_const(rb_mPGconstants, "CONNECTION_SSL_STARTUP", INT2FIX(CONNECTION_SSL_STARTUP)); /* Internal state - PG.connect() needed. */ rb_define_const(rb_mPGconstants, "CONNECTION_NEEDED", INT2FIX(CONNECTION_NEEDED)); #if PG_MAJORVERSION_NUM >= 10 /* Checking if session is read-write. Available since PostgreSQL-10. */ rb_define_const(rb_mPGconstants, "CONNECTION_CHECK_WRITABLE", INT2FIX(CONNECTION_CHECK_WRITABLE)); #endif #if PG_MAJORVERSION_NUM >= 10 /* Consuming any extra messages. Available since PostgreSQL-10. */ rb_define_const(rb_mPGconstants, "CONNECTION_CONSUME", INT2FIX(CONNECTION_CONSUME)); #endif #if PG_MAJORVERSION_NUM >= 12 /* Negotiating GSSAPI. Available since PostgreSQL-12. */ rb_define_const(rb_mPGconstants, "CONNECTION_GSS_STARTUP", INT2FIX(CONNECTION_GSS_STARTUP)); #endif #if PG_MAJORVERSION_NUM >= 13 /* Checking target server properties. Available since PostgreSQL-13. */ rb_define_const(rb_mPGconstants, "CONNECTION_CHECK_TARGET", INT2FIX(CONNECTION_CHECK_TARGET)); #endif #if PG_MAJORVERSION_NUM >= 14 /* Checking if server is in standby mode. Available since PostgreSQL-14. 
*/ rb_define_const(rb_mPGconstants, "CONNECTION_CHECK_STANDBY", INT2FIX(CONNECTION_CHECK_STANDBY)); #endif /****** PG::Connection CLASS CONSTANTS: Nonblocking connection polling status ******/ /* Async connection is waiting to read */ rb_define_const(rb_mPGconstants, "PGRES_POLLING_READING", INT2FIX(PGRES_POLLING_READING)); /* Async connection is waiting to write */ rb_define_const(rb_mPGconstants, "PGRES_POLLING_WRITING", INT2FIX(PGRES_POLLING_WRITING)); /* Async connection failed or was reset */ rb_define_const(rb_mPGconstants, "PGRES_POLLING_FAILED", INT2FIX(PGRES_POLLING_FAILED)); /* Async connection succeeded */ rb_define_const(rb_mPGconstants, "PGRES_POLLING_OK", INT2FIX(PGRES_POLLING_OK)); /****** PG::Connection CLASS CONSTANTS: Transaction Status ******/ /* Transaction is currently idle ( Connection#transaction_status ) */ rb_define_const(rb_mPGconstants, "PQTRANS_IDLE", INT2FIX(PQTRANS_IDLE)); /* Transaction is currently active; query has been sent to the server, but not yet completed. ( Connection#transaction_status ) */ rb_define_const(rb_mPGconstants, "PQTRANS_ACTIVE", INT2FIX(PQTRANS_ACTIVE)); /* Transaction is currently idle, in a valid transaction block ( Connection#transaction_status ) */ rb_define_const(rb_mPGconstants, "PQTRANS_INTRANS", INT2FIX(PQTRANS_INTRANS)); /* Transaction is currently idle, in a failed transaction block ( Connection#transaction_status ) */ rb_define_const(rb_mPGconstants, "PQTRANS_INERROR", INT2FIX(PQTRANS_INERROR)); /* Transaction's connection is bad ( Connection#transaction_status ) */ rb_define_const(rb_mPGconstants, "PQTRANS_UNKNOWN", INT2FIX(PQTRANS_UNKNOWN)); /****** PG::Connection CLASS CONSTANTS: Error Verbosity ******/ /* Error verbosity level ( Connection#set_error_verbosity ). * In TERSE mode, returned messages include severity, primary text, and position only; this will normally fit on a single line. 
*/ rb_define_const(rb_mPGconstants, "PQERRORS_TERSE", INT2FIX(PQERRORS_TERSE)); /* Error verbosity level ( Connection#set_error_verbosity ). * The DEFAULT mode produces messages that include the above plus any detail, hint, or context fields (these might span multiple lines). */ rb_define_const(rb_mPGconstants, "PQERRORS_DEFAULT", INT2FIX(PQERRORS_DEFAULT)); /* Error verbosity level ( Connection#set_error_verbosity ). * The VERBOSE mode includes all available fields. */ rb_define_const(rb_mPGconstants, "PQERRORS_VERBOSE", INT2FIX(PQERRORS_VERBOSE)); /* PQERRORS_SQLSTATE was introduced in PG-12 together with PQresultMemorySize() */ #ifdef HAVE_PQRESULTMEMORYSIZE /* Error verbosity level ( Connection#set_error_verbosity ). * The SQLSTATE mode includes only the error severity and the SQLSTATE error code, if one is available (if not, the output is like TERSE mode). * * Available since PostgreSQL-12. */ rb_define_const(rb_mPGconstants, "PQERRORS_SQLSTATE", INT2FIX(PQERRORS_SQLSTATE)); #endif #ifdef HAVE_PQRESULTVERBOSEERRORMESSAGE /* See Connection#set_error_context_visibility */ rb_define_const(rb_mPGconstants, "PQSHOW_CONTEXT_NEVER", INT2FIX(PQSHOW_CONTEXT_NEVER)); /* See Connection#set_error_context_visibility */ rb_define_const(rb_mPGconstants, "PQSHOW_CONTEXT_ERRORS", INT2FIX(PQSHOW_CONTEXT_ERRORS)); /* See Connection#set_error_context_visibility */ rb_define_const(rb_mPGconstants, "PQSHOW_CONTEXT_ALWAYS", INT2FIX(PQSHOW_CONTEXT_ALWAYS)); #endif /****** PG::Connection CLASS CONSTANTS: Check Server Status ******/ /* Server is accepting connections. */ rb_define_const(rb_mPGconstants, "PQPING_OK", INT2FIX(PQPING_OK)); /* Server is alive but rejecting connections. */ rb_define_const(rb_mPGconstants, "PQPING_REJECT", INT2FIX(PQPING_REJECT)); /* Could not establish connection. */ rb_define_const(rb_mPGconstants, "PQPING_NO_RESPONSE", INT2FIX(PQPING_NO_RESPONSE)); /* Connection not attempted (bad params). 
*/ rb_define_const(rb_mPGconstants, "PQPING_NO_ATTEMPT", INT2FIX(PQPING_NO_ATTEMPT)); /****** PG::Connection CLASS CONSTANTS: Large Objects ******/ /* Flag for Connection#lo_creat, Connection#lo_open -- open for writing */ rb_define_const(rb_mPGconstants, "INV_WRITE", INT2FIX(INV_WRITE)); /* Flag for Connection#lo_creat, Connection#lo_open -- open for reading */ rb_define_const(rb_mPGconstants, "INV_READ", INT2FIX(INV_READ)); /* Flag for Connection#lo_lseek -- seek from object start */ rb_define_const(rb_mPGconstants, "SEEK_SET", INT2FIX(SEEK_SET)); /* Flag for Connection#lo_lseek -- seek from current position */ rb_define_const(rb_mPGconstants, "SEEK_CUR", INT2FIX(SEEK_CUR)); /* Flag for Connection#lo_lseek -- seek from object end */ rb_define_const(rb_mPGconstants, "SEEK_END", INT2FIX(SEEK_END)); /****** PG::Result CONSTANTS: result status ******/ /* Result#result_status constant - The string sent to the server was empty. */ rb_define_const(rb_mPGconstants, "PGRES_EMPTY_QUERY", INT2FIX(PGRES_EMPTY_QUERY)); /* Result#result_status constant - Successful completion of a command returning no data. */ rb_define_const(rb_mPGconstants, "PGRES_COMMAND_OK", INT2FIX(PGRES_COMMAND_OK)); /* Result#result_status constant - Successful completion of a command returning data (such as a SELECT or SHOW). */ rb_define_const(rb_mPGconstants, "PGRES_TUPLES_OK", INT2FIX(PGRES_TUPLES_OK)); /* Result#result_status constant - Copy Out (from server) data transfer started. */ rb_define_const(rb_mPGconstants, "PGRES_COPY_OUT", INT2FIX(PGRES_COPY_OUT)); /* Result#result_status constant - Copy In (to server) data transfer started. */ rb_define_const(rb_mPGconstants, "PGRES_COPY_IN", INT2FIX(PGRES_COPY_IN)); /* Result#result_status constant - The server’s response was not understood. */ rb_define_const(rb_mPGconstants, "PGRES_BAD_RESPONSE", INT2FIX(PGRES_BAD_RESPONSE)); /* Result#result_status constant - A nonfatal error (a notice or warning) occurred. 
*/ rb_define_const(rb_mPGconstants, "PGRES_NONFATAL_ERROR",INT2FIX(PGRES_NONFATAL_ERROR)); /* Result#result_status constant - A fatal error occurred. */ rb_define_const(rb_mPGconstants, "PGRES_FATAL_ERROR", INT2FIX(PGRES_FATAL_ERROR)); /* Result#result_status constant - Copy In/Out data transfer in progress. */ rb_define_const(rb_mPGconstants, "PGRES_COPY_BOTH", INT2FIX(PGRES_COPY_BOTH)); /* Result#result_status constant - Single tuple from larger resultset. */ rb_define_const(rb_mPGconstants, "PGRES_SINGLE_TUPLE", INT2FIX(PGRES_SINGLE_TUPLE)); #ifdef HAVE_PQENTERPIPELINEMODE /* Result#result_status constant - The PG::Result represents a synchronization point in pipeline mode, requested by Connection#pipeline_sync. * * This status occurs only when pipeline mode has been selected. */ rb_define_const(rb_mPGconstants, "PGRES_PIPELINE_SYNC", INT2FIX(PGRES_PIPELINE_SYNC)); /* Result#result_status constant - The PG::Result represents a pipeline that has received an error from the server. * * Connection#get_result must be called repeatedly, and each time it will return this status code until the end of the current pipeline, at which point it will return PG::PGRES_PIPELINE_SYNC and normal processing can resume. */ rb_define_const(rb_mPGconstants, "PGRES_PIPELINE_ABORTED", INT2FIX(PGRES_PIPELINE_ABORTED)); #endif /****** Result CONSTANTS: result error field codes ******/ /* Result#result_error_field argument constant * * The severity; the field contents are ERROR, FATAL, or PANIC (in an error message), or WARNING, NOTICE, DEBUG, INFO, or LOG (in a notice message), or a localized translation * of one of these. * Always present. */ rb_define_const(rb_mPGconstants, "PG_DIAG_SEVERITY", INT2FIX(PG_DIAG_SEVERITY)); #ifdef PG_DIAG_SEVERITY_NONLOCALIZED /* Result#result_error_field argument constant * * The severity; the field contents are ERROR, FATAL, or PANIC (in an error message), or WARNING, NOTICE, DEBUG, INFO, or LOG (in a notice message). 
* This is identical to the PG_DIAG_SEVERITY field except that the contents are never localized. * * Available since PostgreSQL-9.6 */ rb_define_const(rb_mPGconstants, "PG_DIAG_SEVERITY_NONLOCALIZED", INT2FIX(PG_DIAG_SEVERITY_NONLOCALIZED)); #endif /* Result#result_error_field argument constant * * The SQLSTATE code for the error. * The SQLSTATE code identifies the type of error that has occurred; it can be used by front-end applications to perform specific operations (such as error handling) in response to a particular database error. * For a list of the possible SQLSTATE codes, see Appendix A of the PostgreSQL documentation. * This field is not localizable, and is always present. */ rb_define_const(rb_mPGconstants, "PG_DIAG_SQLSTATE", INT2FIX(PG_DIAG_SQLSTATE)); /* Result#result_error_field argument constant * * The primary human-readable error message (typically one line). * Always present. */ rb_define_const(rb_mPGconstants, "PG_DIAG_MESSAGE_PRIMARY", INT2FIX(PG_DIAG_MESSAGE_PRIMARY)); /* Result#result_error_field argument constant * * Detail: an optional secondary error message carrying more detail about the problem. * Might run to multiple lines. */ rb_define_const(rb_mPGconstants, "PG_DIAG_MESSAGE_DETAIL", INT2FIX(PG_DIAG_MESSAGE_DETAIL)); /* Result#result_error_field argument constant * * Hint: an optional suggestion about what to do about the problem. * This is intended to differ from detail in that it offers advice (potentially inappropriate) rather than hard facts. * Might run to multiple lines. */ rb_define_const(rb_mPGconstants, "PG_DIAG_MESSAGE_HINT", INT2FIX(PG_DIAG_MESSAGE_HINT)); /* Result#result_error_field argument constant * * A string containing a decimal integer indicating an error cursor position as an index into the original statement string. * * The first character has index 1, and positions are measured in characters, not bytes.
*/ rb_define_const(rb_mPGconstants, "PG_DIAG_STATEMENT_POSITION", INT2FIX(PG_DIAG_STATEMENT_POSITION)); /* Result#result_error_field argument constant * * This is defined the same as the PG_DIAG_STATEMENT_POSITION field, but it is used when the cursor position refers to an internally generated command rather than the one submitted by the client. * The PG_DIAG_INTERNAL_QUERY field will always appear when this field appears. */ rb_define_const(rb_mPGconstants, "PG_DIAG_INTERNAL_POSITION", INT2FIX(PG_DIAG_INTERNAL_POSITION)); /* Result#result_error_field argument constant * * The text of a failed internally-generated command. * This could be, for example, a SQL query issued by a PL/pgSQL function. */ rb_define_const(rb_mPGconstants, "PG_DIAG_INTERNAL_QUERY", INT2FIX(PG_DIAG_INTERNAL_QUERY)); /* Result#result_error_field argument constant * * An indication of the context in which the error occurred. * Presently this includes a call stack traceback of active procedural language functions and internally-generated queries. * The trace is one entry per line, most recent first. */ rb_define_const(rb_mPGconstants, "PG_DIAG_CONTEXT", INT2FIX(PG_DIAG_CONTEXT)); /* Result#result_error_field argument constant * * The file name of the source-code location where the error was reported. */ rb_define_const(rb_mPGconstants, "PG_DIAG_SOURCE_FILE", INT2FIX(PG_DIAG_SOURCE_FILE)); /* Result#result_error_field argument constant * * The line number of the source-code location where the error was reported. */ rb_define_const(rb_mPGconstants, "PG_DIAG_SOURCE_LINE", INT2FIX(PG_DIAG_SOURCE_LINE)); /* Result#result_error_field argument constant * * The name of the source-code function reporting the error. */ rb_define_const(rb_mPGconstants, "PG_DIAG_SOURCE_FUNCTION", INT2FIX(PG_DIAG_SOURCE_FUNCTION)); #ifdef PG_DIAG_TABLE_NAME /* Result#result_error_field argument constant * * If the error was associated with a specific database object, the name of the schema containing that object, if any. 
*/ rb_define_const(rb_mPGconstants, "PG_DIAG_SCHEMA_NAME", INT2FIX(PG_DIAG_SCHEMA_NAME)); /* Result#result_error_field argument constant * * If the error was associated with a specific table, the name of the table. * (When this field is present, the schema name field provides the name of the table's schema.) */ rb_define_const(rb_mPGconstants, "PG_DIAG_TABLE_NAME", INT2FIX(PG_DIAG_TABLE_NAME)); /* Result#result_error_field argument constant * * If the error was associated with a specific table column, the name of the column. * (When this field is present, the schema and table name fields identify the table.) */ rb_define_const(rb_mPGconstants, "PG_DIAG_COLUMN_NAME", INT2FIX(PG_DIAG_COLUMN_NAME)); /* Result#result_error_field argument constant * * If the error was associated with a specific datatype, the name of the datatype. * (When this field is present, the schema name field provides the name of the datatype's schema.) */ rb_define_const(rb_mPGconstants, "PG_DIAG_DATATYPE_NAME", INT2FIX(PG_DIAG_DATATYPE_NAME)); /* Result#result_error_field argument constant * * If the error was associated with a specific constraint, the name of the constraint. * The table or domain that the constraint belongs to is reported using the fields listed above. * (For this purpose, indexes are treated as constraints, even if they weren't created with constraint syntax.) */ rb_define_const(rb_mPGconstants, "PG_DIAG_CONSTRAINT_NAME", INT2FIX(PG_DIAG_CONSTRAINT_NAME)); #endif #ifdef HAVE_PQENTERPIPELINEMODE /* Connection#pipeline_status constant * * The libpq connection is in pipeline mode. */ rb_define_const(rb_mPGconstants, "PQ_PIPELINE_ON", INT2FIX(PQ_PIPELINE_ON)); /* Connection#pipeline_status constant * * The libpq connection is not in pipeline mode. */ rb_define_const(rb_mPGconstants, "PQ_PIPELINE_OFF", INT2FIX(PQ_PIPELINE_OFF)); /* Connection#pipeline_status constant * * The libpq connection is in pipeline mode and an error occurred while processing the current pipeline. 
* The aborted flag is cleared when PQgetResult returns a result of type PGRES_PIPELINE_SYNC. */ rb_define_const(rb_mPGconstants, "PQ_PIPELINE_ABORTED", INT2FIX(PQ_PIPELINE_ABORTED)); #endif /* Invalid OID constant */ rb_define_const(rb_mPGconstants, "INVALID_OID", INT2FIX(InvalidOid)); rb_define_const(rb_mPGconstants, "InvalidOid", INT2FIX(InvalidOid)); /* PostgreSQL compiled-in default port */ rb_define_const(rb_mPGconstants, "DEF_PGPORT", INT2FIX(DEF_PGPORT)); /* Add the constants to the toplevel namespace */ rb_include_module( rb_mPG, rb_mPGconstants ); /* Initialize the main extension classes */ init_pg_connection(); init_pg_result(); init_pg_errors(); init_pg_type_map(); init_pg_type_map_all_strings(); init_pg_type_map_by_class(); init_pg_type_map_by_column(); init_pg_type_map_by_mri_type(); init_pg_type_map_by_oid(); init_pg_type_map_in_ruby(); init_pg_coder(); init_pg_text_encoder(); init_pg_text_decoder(); init_pg_binary_encoder(); init_pg_binary_decoder(); init_pg_copycoder(); init_pg_recordcoder(); init_pg_tuple(); } /* * pg_type_map_by_class.c - PG::TypeMapByClass class extension * $Id$ * * This type map can be used to select value encoders based on the class * of the given value to be sent. * */ #include "pg.h" static VALUE rb_cTypeMapByClass; typedef struct { t_typemap typemap; VALUE klass_to_coder; VALUE self; struct pg_tmbk_coder_cache_entry { VALUE klass; t_pg_coder *p_coder; } cache_row[0x100]; } t_tmbk; /* * We use 8 bits of the klass object id as an index into a 256-entry cache. * This avoids full lookups in most cases. */ #define CACHE_LOOKUP(this, klass) ( &this->cache_row[(((unsigned long)klass) >> 8) & 0xff] ) static t_pg_coder * pg_tmbk_lookup_klass(t_tmbk *this, VALUE klass, VALUE param_value) { t_pg_coder *p_coder; struct pg_tmbk_coder_cache_entry *p_ce; p_ce = CACHE_LOOKUP(this, klass); /* Is the cache entry for the expected klass?
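* For illustration, a hypothetical Ruby analogue of the CACHE_LOOKUP index computation above (the C code uses the VALUE pointer itself; object_id stands in for it here):

```ruby
# Take 8 bits of the class's object id, skipping the low bits (which are
# mostly constant alignment/tag bits), as an index into a 256-entry cache.
def cache_index(klass)
  (klass.object_id >> 8) & 0xff
end
```

* The index is stable for a given class, so repeated lookups of the same class hit the same cache slot.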
*/ if( p_ce->klass == klass ) { p_coder = p_ce->p_coder; } else { /* No, then do a full lookup based on the ancestors. */ VALUE obj = rb_hash_lookup( this->klass_to_coder, klass ); if( NIL_P(obj) ){ int i; VALUE ancestors = rb_mod_ancestors( klass ); Check_Type( ancestors, T_ARRAY ); /* Don't look at the first element, it's expected to equal klass. */ for( i=1; i<RARRAY_LEN(ancestors); i++ ){ obj = rb_hash_lookup( this->klass_to_coder, rb_ary_entry( ancestors, i) ); if( !NIL_P(obj) ) break; } } if(NIL_P(obj)){ p_coder = NULL; }else if(rb_obj_is_kind_of(obj, rb_cPG_Coder)){ TypedData_Get_Struct(obj, t_pg_coder, &pg_coder_type, p_coder); }else{ if( RB_TYPE_P(obj, T_SYMBOL) ){ /* A Symbol: Call the method with this name. */ obj = rb_funcall(this->self, SYM2ID(obj), 1, param_value); }else{ /* A Proc object (or something that responds to #call). */ obj = rb_funcall(obj, rb_intern("call"), 1, param_value); } if( NIL_P(obj) ){ p_coder = NULL; }else{ /* Check retrieved coder type */ TypedData_Get_Struct(obj, t_pg_coder, &pg_coder_type, p_coder); } /* We cannot cache coders retrieved by ruby code, because we cannot anticipate * the returned Coder object. */ return p_coder; } /* Write the retrieved coder to the cache */ p_ce->klass = klass; p_ce->p_coder = p_coder; } return p_coder; } static t_pg_coder * pg_tmbk_typecast_query_param( t_typemap *p_typemap, VALUE param_value, int field ) { t_tmbk *this = (t_tmbk *)p_typemap; t_pg_coder *p_coder; p_coder = pg_tmbk_lookup_klass( this, rb_obj_class(param_value), param_value ); if( !p_coder ){ t_typemap *default_tm = RTYPEDDATA_DATA( this->typemap.default_typemap ); return default_tm->funcs.typecast_query_param( default_tm, param_value, field ); } return p_coder; } static VALUE pg_tmbk_fit_to_query( VALUE self, VALUE params ) { t_tmbk *this = (t_tmbk *)RTYPEDDATA_DATA(self); /* Nothing to check at this typemap, but ensure that the default type map fits.
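* The ancestor walk performed by pg_tmbk_lookup_klass() above can be sketched in Ruby (hypothetical helper name; symbols stand in for coder objects):

```ruby
# Look up the class itself first, then each ancestor in order — skipping
# ancestors[0], which is the class itself — and return the first coder found.
def lookup_coder(klass_to_coder, klass)
  coder = klass_to_coder[klass]
  return coder if coder
  klass.ancestors.drop(1).each do |mod|
    coder = klass_to_coder[mod]
    return coder if coder
  end
  nil
end
```

* Because modules appear in the ancestor chain too, a coder registered for a module (or for Object) acts as a catch-all for every class that includes or inherits it.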
*/ t_typemap *default_tm = RTYPEDDATA_DATA( this->typemap.default_typemap ); default_tm->funcs.fit_to_query( this->typemap.default_typemap, params ); return self; } static void pg_tmbk_mark( void *_this ) { t_tmbk *this = (t_tmbk *)_this; pg_typemap_mark(&this->typemap); rb_gc_mark_movable(this->klass_to_coder); rb_gc_mark_movable(this->self); } static size_t pg_tmbk_memsize( const void *_this ) { const t_tmbk *this = (const t_tmbk *)_this; return sizeof(*this); } static void pg_tmbk_compact(void *ptr) { t_tmbk *this = (t_tmbk *)ptr; pg_typemap_compact(&this->typemap); pg_gc_location(this->klass_to_coder); pg_gc_location(this->self); /* Clear the cache, to be safe from changes of klass VALUE by GC.compact. */ memset(&this->cache_row, 0, sizeof(this->cache_row)); } static const rb_data_type_t pg_tmbk_type = { "PG::TypeMapByClass", { pg_tmbk_mark, RUBY_TYPED_DEFAULT_FREE, pg_tmbk_memsize, pg_compact_callback(pg_tmbk_compact), }, &pg_typemap_type, 0, RUBY_TYPED_FREE_IMMEDIATELY | RUBY_TYPED_WB_PROTECTED | PG_RUBY_TYPED_FROZEN_SHAREABLE, }; static VALUE pg_tmbk_s_allocate( VALUE klass ) { t_tmbk *this; VALUE self; self = TypedData_Make_Struct( klass, t_tmbk, &pg_tmbk_type, this ); this->typemap.funcs.fit_to_result = pg_typemap_fit_to_result; this->typemap.funcs.fit_to_query = pg_tmbk_fit_to_query; this->typemap.funcs.fit_to_copy_get = pg_typemap_fit_to_copy_get; this->typemap.funcs.typecast_result_value = pg_typemap_result_value; this->typemap.funcs.typecast_query_param = pg_tmbk_typecast_query_param; this->typemap.funcs.typecast_copy_get = pg_typemap_typecast_copy_get; RB_OBJ_WRITE(self, &this->typemap.default_typemap, pg_typemap_all_strings); /* We need to store self in the this-struct, because pg_tmbk_typecast_query_param(), * is called with the this-pointer only. */ this->self = self; RB_OBJ_WRITE(self, &this->klass_to_coder, rb_hash_new()); /* The cache is properly initialized by TypedData_Make_Struct(). 
*/ return self; } /* * call-seq: * typemap[class] = coder * * Assigns a new PG::Coder object to the type map. The encoder * is chosen for all values that are a kind of the given +class+ . * * +coder+ can be one of the following: * * +nil+ - Values are forwarded to the #default_type_map . * * a PG::Coder - Values are encoded by the given encoder * * a Symbol - The method of this type map (or a derivation) that is called for each value to be sent. * It must return a PG::Coder or +nil+ . * * a Proc - The Proc object is called for each value. It must return a PG::Coder or +nil+ . * */ static VALUE pg_tmbk_aset( VALUE self, VALUE klass, VALUE coder ) { t_tmbk *this = RTYPEDDATA_DATA( self ); rb_check_frozen(self); if(NIL_P(coder)){ rb_hash_delete( this->klass_to_coder, klass ); }else{ rb_hash_aset( this->klass_to_coder, klass, coder ); } /* The cache lookup key can be a derivation of the klass. * So we cannot expire the cache selectively. */ memset( &this->cache_row, 0, sizeof(this->cache_row) ); return coder; } /* * call-seq: * typemap[class] -> coder * * Returns the encoder object for the given +class+ */ static VALUE pg_tmbk_aref( VALUE self, VALUE klass ) { t_tmbk *this = RTYPEDDATA_DATA( self ); return rb_hash_lookup(this->klass_to_coder, klass); } /* * call-seq: * typemap.coders -> Hash * * Returns all classes and their assigned encoder object. */ static VALUE pg_tmbk_coders( VALUE self ) { t_tmbk *this = RTYPEDDATA_DATA( self ); return rb_obj_freeze(rb_hash_dup(this->klass_to_coder)); } void init_pg_type_map_by_class(void) { /* * Document-class: PG::TypeMapByClass < PG::TypeMap * * This type map casts values based on the class or the ancestors of the given value * to be sent. * * This type map is usable for type casting query bind parameters and COPY data * for PG::Connection#put_copy_data . Therefore only encoders may be assigned by * the #[]= method.
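* The way an assigned entry is resolved per the #[]= documentation above can be sketched in Ruby (hypothetical helper, not the gem's API):

```ruby
# A Symbol names a method on the type map, a Proc is called with the value,
# and anything else is used as the coder directly (nil meaning "no coder").
def resolve_coder(type_map, entry, value)
  case entry
  when Symbol then type_map.send(entry, value)
  when Proc   then entry.call(value)
  else entry
  end
end
```

* As the C code notes, coders returned from Ruby (Symbol or Proc entries) cannot be cached, since the result may differ per value.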
*/ rb_cTypeMapByClass = rb_define_class_under( rb_mPG, "TypeMapByClass", rb_cTypeMap ); rb_define_alloc_func( rb_cTypeMapByClass, pg_tmbk_s_allocate ); rb_define_method( rb_cTypeMapByClass, "[]=", pg_tmbk_aset, 2 ); rb_define_method( rb_cTypeMapByClass, "[]", pg_tmbk_aref, 1 ); rb_define_method( rb_cTypeMapByClass, "coders", pg_tmbk_coders, 0 ); /* rb_mDefaultTypeMappable = rb_define_module_under( rb_cTypeMap, "DefaultTypeMappable"); */ rb_include_module( rb_cTypeMapByClass, rb_mDefaultTypeMappable ); } /* * pg_binary_encoder.c - PG::BinaryEncoder classes extension * $Id$ * */ #include "pg.h" #include "pg_util.h" #ifdef HAVE_INTTYPES_H #include <inttypes.h> #endif VALUE rb_mPG_BinaryEncoder; static ID s_id_year; static ID s_id_month; static ID s_id_day; /* * Document-class: PG::BinaryEncoder::Boolean < PG::SimpleEncoder * * This is the encoder class for the PostgreSQL boolean type. * * It accepts true and false. Other values will raise an exception. * */ static int pg_bin_enc_boolean(t_pg_coder *conv, VALUE value, char *out, VALUE *intermediate, int enc_idx) { char mybool; if (value == Qtrue) { mybool = 1; } else if (value == Qfalse) { mybool = 0; } else { rb_raise( rb_eTypeError, "wrong data for binary boolean converter" ); } if(out) *out = mybool; return 1; } /* * Document-class: PG::BinaryEncoder::Int2 < PG::SimpleEncoder * * This is the encoder class for the PostgreSQL +int2+ (alias +smallint+) type. * * Non-Number values are expected to have method +to_i+ defined. * */ static int pg_bin_enc_int2(t_pg_coder *conv, VALUE value, char *out, VALUE *intermediate, int enc_idx) { if(out){ write_nbo16(NUM2INT(*intermediate), out); }else{ *intermediate = pg_obj_to_i(value); } return 2; } /* * Document-class: PG::BinaryEncoder::Int4 < PG::SimpleEncoder * * This is the encoder class for the PostgreSQL +int4+ (alias +integer+) type.
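* A sketch (not the gem's API) of the wire format the Int2/Int4/Int8 binary encoders produce — fixed-width big-endian (network byte order) integers, as written by write_nbo16/write_nbo32/write_nbo64:

```ruby
# Each helper packs one signed integer in network byte order.
def encode_int2(n)
  [n].pack("s>")  # 2 bytes, signed, big-endian
end

def encode_int4(n)
  [n].pack("l>")  # 4 bytes, signed, big-endian
end

def encode_int8(n)
  [n].pack("q>")  # 8 bytes, signed, big-endian
end
```

* Ruby's pack directives with the ">" modifier give exactly the byte order the C helpers emit.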
* * Non-Number values are expected to have method +to_i+ defined. * */ static int pg_bin_enc_int4(t_pg_coder *conv, VALUE value, char *out, VALUE *intermediate, int enc_idx) { if(out){ write_nbo32(NUM2LONG(*intermediate), out); }else{ *intermediate = pg_obj_to_i(value); } return 4; } /* * Document-class: PG::BinaryEncoder::Int8 < PG::SimpleEncoder * * This is the encoder class for the PostgreSQL +int8+ (alias +bigint+) type. * * Non-Number values are expected to have method +to_i+ defined. * */ static int pg_bin_enc_int8(t_pg_coder *conv, VALUE value, char *out, VALUE *intermediate, int enc_idx) { if(out){ write_nbo64(NUM2LL(*intermediate), out); }else{ *intermediate = pg_obj_to_i(value); } return 8; } /* * Document-class: PG::BinaryEncoder::Float4 < PG::SimpleEncoder * * This is the binary encoder class for the PostgreSQL +float4+ type. * */ static int pg_bin_enc_float4(t_pg_coder *conv, VALUE value, char *out, VALUE *intermediate, int enc_idx) { union { float f; int32_t i; } swap4; if(out){ swap4.f = NUM2DBL(*intermediate); write_nbo32(swap4.i, out); }else{ *intermediate = value; } return 4; } /* * Document-class: PG::BinaryEncoder::Float8 < PG::SimpleEncoder * * This is the binary encoder class for the PostgreSQL +float8+ type. * */ static int pg_bin_enc_float8(t_pg_coder *conv, VALUE value, char *out, VALUE *intermediate, int enc_idx) { union { double f; int64_t i; } swap8; if(out){ swap8.f = NUM2DBL(*intermediate); write_nbo64(swap8.i, out); }else{ *intermediate = value; } return 8; } #define PG_INT32_MIN (-0x7FFFFFFF-1) #define PG_INT32_MAX (0x7FFFFFFF) #define PG_INT64_MIN (-0x7FFFFFFFFFFFFFFFL - 1) #define PG_INT64_MAX 0x7FFFFFFFFFFFFFFFL /* * Document-class: PG::BinaryEncoder::Timestamp < PG::SimpleEncoder * * This is an encoder class for conversion of Ruby Time objects to PostgreSQL binary timestamps.
* * The following flags can be used to specify timezone interpretation: * * +PG::Coder::TIMESTAMP_DB_UTC+ : Send timestamp as UTC time (default) * * +PG::Coder::TIMESTAMP_DB_LOCAL+ : Send timestamp as local time (slower) * * Example: * enco = PG::BinaryEncoder::Timestamp.new(flags: PG::Coder::TIMESTAMP_DB_UTC) * enco.encode(Time.utc(2000, 1, 1)) # => "\x00\x00\x00\x00\x00\x00\x00\x00" * * String values are expected to contain binary data with a length of 8 bytes. * */ static int pg_bin_enc_timestamp(t_pg_coder *this, VALUE value, char *out, VALUE *intermediate, int enc_idx) { if(out){ int64_t timestamp; struct timespec ts; /* second call -> write data to *out */ switch(TYPE(*intermediate)){ case T_STRING: return pg_coder_enc_to_s(this, value, out, intermediate, enc_idx); case T_TRUE: write_nbo64(PG_INT64_MAX, out); return 8; case T_FALSE: write_nbo64(PG_INT64_MIN, out); return 8; } ts = rb_time_timespec(*intermediate); /* PostgreSQL's timestamp is based on year 2000 and Ruby's time is based on 1970. * Adjust for the 30-year (10957-day) difference.
*/ timestamp = ((int64_t)ts.tv_sec - 10957L * 24L * 3600L) * 1000000 + ((int64_t)ts.tv_nsec / 1000); if( this->flags & PG_CODER_TIMESTAMP_DB_LOCAL ) { /* send as local time */ timestamp += NUM2LL(rb_funcall(*intermediate, rb_intern("utc_offset"), 0)) * 1000000; } write_nbo64(timestamp, out); }else{ /* first call -> determine the required length */ if(TYPE(value) == T_STRING){ char *pstr = RSTRING_PTR(value); if(RSTRING_LEN(value) >= 1){ switch(pstr[0]) { case 'I': case 'i': *intermediate = Qtrue; return 8; case '-': if (RSTRING_LEN(value) >= 2 && (pstr[1] == 'I' || pstr[1] == 'i')) { *intermediate = Qfalse; return 8; } } } return pg_coder_enc_to_s(this, value, out, intermediate, enc_idx); } if( this->flags & PG_CODER_TIMESTAMP_DB_LOCAL ) { /* make a local time, so that utc_offset is set */ value = rb_funcall(value, rb_intern("getlocal"), 0); } *intermediate = value; } return 8; } #define POSTGRES_EPOCH_JDATE 2451545 /* == date2j(2000, 1, 1) */ int date2j(int year, int month, int day) { int julian; int century; if (month > 2) { month += 1; year += 4800; } else { month += 13; year += 4799; } century = year / 100; julian = year * 365 - 32167; julian += year / 4 - century + century / 4; julian += 7834 * month / 256 + day; return julian; } /* date2j() */ /* * Document-class: PG::BinaryEncoder::Date < PG::SimpleEncoder * * This is an encoder class for conversion of Ruby Date objects to the PostgreSQL binary date format. * * String values are expected to contain binary data with a length of 4 bytes.
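 *
 * An illustrative sketch (requires the stdlib "date" library; not part of the
 * original docs):
 *   enc = PG::BinaryEncoder::Date.new
 *   enc.encode(Date.new(2000, 1, 1))  # => "\x00\x00\x00\x00", day zero of the PostgreSQL epoch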
* */ static int pg_bin_enc_date(t_pg_coder *this, VALUE value, char *out, VALUE *intermediate, int enc_idx) { if(out){ /* second call -> write data to *out */ switch(TYPE(*intermediate)){ case T_STRING: return pg_coder_enc_to_s(this, value, out, intermediate, enc_idx); case T_TRUE: write_nbo32(PG_INT32_MAX, out); return 4; case T_FALSE: write_nbo32(PG_INT32_MIN, out); return 4; } VALUE year = rb_funcall(value, s_id_year, 0); VALUE month = rb_funcall(value, s_id_month, 0); VALUE day = rb_funcall(value, s_id_day, 0); int jday = date2j(NUM2INT(year), NUM2INT(month), NUM2INT(day)) - POSTGRES_EPOCH_JDATE; write_nbo32(jday, out); }else{ /* first call -> determine the required length */ if(TYPE(value) == T_STRING){ char *pstr = RSTRING_PTR(value); if(RSTRING_LEN(value) >= 1){ switch(pstr[0]) { case 'I': case 'i': *intermediate = Qtrue; return 4; case '-': if (RSTRING_LEN(value) >= 2 && (pstr[1] == 'I' || pstr[1] == 'i')) { *intermediate = Qfalse; return 4; } } } return pg_coder_enc_to_s(this, value, out, intermediate, enc_idx); } *intermediate = value; } return 4; } /* * Document-class: PG::BinaryEncoder::FromBase64 < PG::CompositeEncoder * * This is an encoder class for conversion of base64-encoded data * to its binary representation.
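 *
 * An illustrative sketch (assumes the default element encoder, i.e. plain
 * string pass-through; not part of the original docs):
 *   enc = PG::BinaryEncoder::FromBase64.new
 *   enc.encode("Zm9v")  # => "foo"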
* */ static int pg_bin_enc_from_base64(t_pg_coder *conv, VALUE value, char *out, VALUE *intermediate, int enc_idx) { int strlen; VALUE subint; t_pg_composite_coder *this = (t_pg_composite_coder *)conv; t_pg_coder_enc_func enc_func = pg_coder_enc_func(this->elem); if(out){ /* Second encoder pass, if required */ strlen = enc_func(this->elem, value, out, intermediate, enc_idx); strlen = base64_decode( out, out, strlen ); return strlen; } else { /* First encoder pass */ strlen = enc_func(this->elem, value, NULL, &subint, enc_idx); if( strlen == -1 ){ /* Encoded string is returned in subint */ VALUE out_str; strlen = RSTRING_LENINT(subint); out_str = rb_str_new(NULL, BASE64_DECODED_SIZE(strlen)); strlen = base64_decode( RSTRING_PTR(out_str), RSTRING_PTR(subint), strlen); rb_str_set_len( out_str, strlen ); *intermediate = out_str; return -1; } else { *intermediate = subint; return BASE64_DECODED_SIZE(strlen); } } } void init_pg_binary_encoder(void) { s_id_year = rb_intern("year"); s_id_month = rb_intern("month"); s_id_day = rb_intern("day"); /* This module encapsulates all encoder classes with binary output format */ rb_mPG_BinaryEncoder = rb_define_module_under( rb_mPG, "BinaryEncoder" ); /* Make RDoc aware of the encoder classes... 
*/ /* dummy = rb_define_class_under( rb_mPG_BinaryEncoder, "Boolean", rb_cPG_SimpleEncoder ); */ pg_define_coder( "Boolean", pg_bin_enc_boolean, rb_cPG_SimpleEncoder, rb_mPG_BinaryEncoder ); /* dummy = rb_define_class_under( rb_mPG_BinaryEncoder, "Int2", rb_cPG_SimpleEncoder ); */ pg_define_coder( "Int2", pg_bin_enc_int2, rb_cPG_SimpleEncoder, rb_mPG_BinaryEncoder ); /* dummy = rb_define_class_under( rb_mPG_BinaryEncoder, "Int4", rb_cPG_SimpleEncoder ); */ pg_define_coder( "Int4", pg_bin_enc_int4, rb_cPG_SimpleEncoder, rb_mPG_BinaryEncoder ); /* dummy = rb_define_class_under( rb_mPG_BinaryEncoder, "Int8", rb_cPG_SimpleEncoder ); */ pg_define_coder( "Int8", pg_bin_enc_int8, rb_cPG_SimpleEncoder, rb_mPG_BinaryEncoder ); /* dummy = rb_define_class_under( rb_mPG_BinaryEncoder, "Float4", rb_cPG_SimpleEncoder ); */ pg_define_coder( "Float4", pg_bin_enc_float4, rb_cPG_SimpleEncoder, rb_mPG_BinaryEncoder ); /* dummy = rb_define_class_under( rb_mPG_BinaryEncoder, "Float8", rb_cPG_SimpleEncoder ); */ pg_define_coder( "Float8", pg_bin_enc_float8, rb_cPG_SimpleEncoder, rb_mPG_BinaryEncoder ); /* dummy = rb_define_class_under( rb_mPG_BinaryEncoder, "String", rb_cPG_SimpleEncoder ); */ pg_define_coder( "String", pg_coder_enc_to_s, rb_cPG_SimpleEncoder, rb_mPG_BinaryEncoder ); /* dummy = rb_define_class_under( rb_mPG_BinaryEncoder, "Bytea", rb_cPG_SimpleEncoder ); */ pg_define_coder( "Bytea", pg_coder_enc_to_s, rb_cPG_SimpleEncoder, rb_mPG_BinaryEncoder ); /* dummy = rb_define_class_under( rb_mPG_BinaryEncoder, "Timestamp", rb_cPG_SimpleEncoder ); */ pg_define_coder( "Timestamp", pg_bin_enc_timestamp, rb_cPG_SimpleEncoder, rb_mPG_BinaryEncoder ); /* dummy = rb_define_class_under( rb_mPG_BinaryEncoder, "Date", rb_cPG_SimpleEncoder ); */ pg_define_coder( "Date", pg_bin_enc_date, rb_cPG_SimpleEncoder, rb_mPG_BinaryEncoder ); /* dummy = rb_define_class_under( rb_mPG_BinaryEncoder, "FromBase64", rb_cPG_CompositeEncoder ); */ pg_define_coder( "FromBase64", pg_bin_enc_from_base64, 
rb_cPG_CompositeEncoder, rb_mPG_BinaryEncoder ); } pg-1.5.5/ext/pg_text_decoder.c /* * pg_text_decoder.c - PG::TextDecoder module * $Id$ * */ /* * * Type casts for decoding PostgreSQL string representations to Ruby objects. * * Decoder classes are defined with pg_define_coder(). This creates a new coder class and * assigns a decoder function. * * The signature of all type cast decoders is: * VALUE decoder_function(t_pg_coder *this, const char *val, int len, int tuple, int field, int enc_idx) * * Params: * this - The data part of the coder object that belongs to the decoder function. * val, len - The text or binary data to decode. * The caller ensures that text data (format=0) is zero terminated so that val[len]=0. * The memory should be used read-only by the callee. * tuple - Row of the value within the result set. * field - Column of the value within the result set. * enc_idx - Index of the Encoding that any output String should get assigned. * * Returns: * The type-cast Ruby object. * */ #include "ruby/version.h" #include "pg.h" #include "pg_util.h" #ifdef HAVE_INTTYPES_H #include <inttypes.h> #endif #include <ctype.h> #include <time.h> #if !defined(_WIN32) #include <arpa/inet.h> #include <sys/socket.h> #endif #include <string.h> VALUE rb_mPG_TextDecoder; static ID s_id_Rational; static ID s_id_new; static ID s_id_utc; static ID s_id_getlocal; static ID s_id_BigDecimal; static VALUE s_IPAddr; static VALUE s_vmasks4; static VALUE s_vmasks6; static VALUE s_nan, s_pos_inf, s_neg_inf; static int use_ipaddr_alloc; static ID s_id_lshift; static ID s_id_add; static ID s_id_mask; static ID s_ivar_family; static ID s_ivar_addr; static ID s_ivar_mask_addr; /* * Document-class: PG::TextDecoder::Boolean < PG::SimpleDecoder * * This is a decoder class for conversion of the PostgreSQL boolean type * to Ruby true or false values.
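 *
 * An illustrative sketch, relying on the documented PG::Coder#decode API (not
 * part of the original docs):
 *   deco = PG::TextDecoder::Boolean.new
 *   deco.decode("t")  # => true
 *   deco.decode("f")  # => false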
* */ static VALUE pg_text_dec_boolean(t_pg_coder *conv, const char *val, int len, int tuple, int field, int enc_idx) { if (len < 1) { rb_raise( rb_eTypeError, "wrong data for text boolean converter in tuple %d field %d", tuple, field); } return *val == 't' ? Qtrue : Qfalse; } /* * Document-class: PG::TextDecoder::String < PG::SimpleDecoder * * This is a decoder class for conversion of PostgreSQL text output * to Ruby String objects. The output value will have the character encoding * set with PG::Connection#internal_encoding= . * */ VALUE pg_text_dec_string(t_pg_coder *conv, const char *val, int len, int tuple, int field, int enc_idx) { VALUE ret = rb_str_new( val, len ); PG_ENCODING_SET_NOCHECK( ret, enc_idx ); return ret; } /* * Document-class: PG::TextDecoder::Integer < PG::SimpleDecoder * * This is a decoder class for conversion of PostgreSQL integer types * to Ruby Integer objects. * */ static VALUE pg_text_dec_integer(t_pg_coder *conv, const char *val, int len, int tuple, int field, int enc_idx) { long i; int max_len; if( sizeof(i) >= 8 && FIXNUM_MAX >= 1000000000000000000LL ){ /* 64 bit system can safely handle all numbers up to 18 digits as Fixnum */ max_len = 18; } else if( sizeof(i) >= 4 && FIXNUM_MAX >= 1000000000LL ){ /* 32 bit system can safely handle all numbers up to 9 digits as Fixnum */ max_len = 9; } else { /* unknown -> don't use fast path for int conversion */ max_len = 0; } if( len <= max_len ){ /* rb_cstr2inum() seems to be slow, so we do the int conversion by hand.
* This proved to be 40% faster by the following benchmark: * * conn.type_mapping_for_results = PG::BasicTypeMapForResults.new conn * Benchmark.measure do * conn.exec("select generate_series(1,1000000)").values } * end */ const char *val_pos = val; char digit = *val_pos; int neg; int error = 0; if( digit=='-' ){ neg = 1; i = 0; }else if( digit>='0' && digit<='9' ){ neg = 0; i = digit - '0'; } else { error = 1; } while (!error && (digit=*++val_pos)) { if( digit>='0' && digit<='9' ){ i = i * 10 + (digit - '0'); } else { error = 1; } } if( !error ){ return LONG2FIX(neg ? -i : i); } } /* Fallback to ruby method if number too big or unrecognized. */ return rb_cstr2inum(val, 10); } /* * Document-class: PG::TextDecoder::Numeric < PG::SimpleDecoder * * This is a decoder class for conversion of PostgreSQL numeric types * to Ruby BigDecimal objects. * */ static VALUE pg_text_dec_numeric(t_pg_coder *conv, const char *val, int len, int tuple, int field, int enc_idx) { return rb_funcall(rb_cObject, s_id_BigDecimal, 1, rb_str_new(val, len)); } /* called per autoload when TextDecoder::Numeric is used */ static VALUE init_pg_text_decoder_numeric(VALUE rb_mPG_TextDecoder) { rb_require("bigdecimal"); s_id_BigDecimal = rb_intern("BigDecimal"); /* dummy = rb_define_class_under( rb_mPG_TextDecoder, "Numeric", rb_cPG_SimpleDecoder ); */ pg_define_coder( "Numeric", pg_text_dec_numeric, rb_cPG_SimpleDecoder, rb_mPG_TextDecoder ); return Qnil; } /* * Document-class: PG::TextDecoder::Float < PG::SimpleDecoder * * This is a decoder class for conversion of PostgreSQL float4 and float8 types * to Ruby Float objects. 
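 *
 * An illustrative sketch; "NaN" and "Infinity" map to the corresponding Float
 * special values (not part of the original docs):
 *   deco = PG::TextDecoder::Float.new
 *   deco.decode("2.5")       # => 2.5
 *   deco.decode("Infinity")  # => Float::INFINITY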
* */ static VALUE pg_text_dec_float(t_pg_coder *conv, const char *val, int len, int tuple, int field, int enc_idx) { switch(*val) { case 'N': return s_nan; case 'I': return s_pos_inf; case '-': if (val[1] == 'I') { return s_neg_inf; } else { return rb_float_new(rb_cstr_to_dbl(val, Qfalse)); } default: return rb_float_new(rb_cstr_to_dbl(val, Qfalse)); } } struct pg_blob_initialization { char *blob_string; size_t length; }; static VALUE pg_create_blob(VALUE v) { struct pg_blob_initialization *bi = (struct pg_blob_initialization *)v; return rb_str_new(bi->blob_string, bi->length); } static VALUE pg_pq_freemem(VALUE mem) { PQfreemem((void *)mem); return Qfalse; } /* * Document-class: PG::TextDecoder::Bytea < PG::SimpleDecoder * * This is a decoder class for conversion of PostgreSQL bytea type * to binary String objects. * */ static VALUE pg_text_dec_bytea(t_pg_coder *conv, const char *val, int len, int tuple, int field, int enc_idx) { struct pg_blob_initialization bi; bi.blob_string = (char *)PQunescapeBytea((unsigned char*)val, &bi.length); if (bi.blob_string == NULL) { rb_raise(rb_eNoMemError, "PQunescapeBytea failure: probably not enough memory"); } return rb_ensure(pg_create_blob, (VALUE)&bi, pg_pq_freemem, (VALUE)bi.blob_string); } /* * array_isspace() --- a non-locale-dependent isspace() * * We used to use isspace() for parsing array values, but that has * undesirable results: an array value might be silently interpreted * differently depending on the locale setting. Now we just hard-wire * the traditional ASCII definition of isspace(). 
*/ static int array_isspace(char ch) { if (ch == ' ' || ch == '\t' || ch == '\n' || ch == '\r' || ch == '\v' || ch == '\f') return 1; return 0; } static int array_isdim(char ch) { if ( (ch >= '0' && ch <= '9') || (ch == '-') || (ch == '+') || (ch == ':') ) return 1; return 0; } static void array_parser_error(t_pg_composite_coder *this, const char *text){ if( (this->comp.flags & PG_CODER_FORMAT_ERROR_MASK) == PG_CODER_FORMAT_ERROR_TO_RAISE ){ rb_raise( rb_eTypeError, "%s", text ); } } /* * Array parser functions are thankfully borrowed from here: * https://github.com/dockyard/pg_array_parser */ static VALUE read_array_without_dim(t_pg_composite_coder *this, int *index, const char *c_pg_array_string, int array_string_length, char *word, int enc_idx, int tuple, int field, t_pg_coder_dec_func dec_func) { /* Return value: array */ VALUE array; int word_index = 0; /* The current character in the input string. */ char c; /* 0: Currently outside a quoted string, current word never quoted * 1: Currently inside a quoted string * -1: Currently outside a quoted string, current word previously quoted */ int openQuote = 0; /* Inside quoted input means the next character should be treated literally, * instead of being treated as a metacharacter. * Outside of quoted input, means that the word shouldn't be pushed to the array, * used when the last entry was a subarray (which adds to the array itself). */ int escapeNext = 0; array = rb_ary_new(); /* Special case the empty array, so it doesn't need to be handled manually inside * the loop. 
*/ if(((*index) < array_string_length) && c_pg_array_string[*index] == '}') { return array; } for(;(*index) < array_string_length; ++(*index)) { c = c_pg_array_string[*index]; if(openQuote < 1) { if(c == this->delimiter || c == '}') { if(!escapeNext) { if(openQuote == 0 && word_index == 4 && !strncmp(word, "NULL", word_index)) { rb_ary_push(array, Qnil); } else { VALUE val; word[word_index] = 0; val = dec_func(this->elem, word, word_index, tuple, field, enc_idx); rb_ary_push(array, val); } } if(c == '}') { return array; } escapeNext = 0; openQuote = 0; word_index = 0; } else if(c == '"') { openQuote = 1; } else if(c == '{') { VALUE subarray; (*index)++; subarray = read_array_without_dim(this, index, c_pg_array_string, array_string_length, word, enc_idx, tuple, field, dec_func); rb_ary_push(array, subarray); escapeNext = 1; } else if(c == 0) { array_parser_error( this, "premature end of the array string" ); return array; } else { word[word_index] = c; word_index++; } } else if (escapeNext) { word[word_index] = c; word_index++; escapeNext = 0; } else if (c == '\\') { escapeNext = 1; } else if (c == '"') { openQuote = -1; } else { word[word_index] = c; word_index++; } } array_parser_error( this, "premature end of the array string" ); return array; } /* * Document-class: PG::TextDecoder::Array < PG::CompositeDecoder * * This is a decoder class for PostgreSQL array types. * * It returns an Array with possibly an arbitrary number of sub-Arrays. * All values are decoded according to the #elements_type accessor. * Sub-arrays are decoded recursively. * * This decoder simply ignores any dimension decorations preceding the array values. * It returns all array values as regular ruby Array with a zero based index, regardless of the index given in the dimension decoration. * * An array decoder which respects dimension decorations is waiting to be implemented. 
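 *
 * An illustrative sketch; elements_type is the documented PG::CompositeCoder
 * option (not part of the original docs):
 *   deco = PG::TextDecoder::Array.new(elements_type: PG::TextDecoder::Integer.new)
 *   deco.decode("{1,2,3}")    # => [1, 2, 3]
 *   deco.decode("{{1},{2}}")  # => [[1], [2]]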
* */ static VALUE pg_text_dec_array(t_pg_coder *conv, const char *c_pg_array_string, int array_string_length, int tuple, int field, int enc_idx) { int index = 0; int ndim = 0; VALUE ret; t_pg_composite_coder *this = (t_pg_composite_coder *)conv; /* * If the input string starts with dimension info, read and use that. * Otherwise, we require the input to be in curly-brace style, and we * prescan the input to determine dimensions. * * Dimension info takes the form of one or more [n] or [m:n] items. The * outer loop iterates once per dimension item. */ for (;;) { /* * Note: we currently allow whitespace between, but not within, * dimension items. */ while (array_isspace(c_pg_array_string[index])) index++; if (c_pg_array_string[index] != '[') break; /* no more dimension items */ index++; while (array_isdim(c_pg_array_string[index])) index++; if (c_pg_array_string[index] != ']'){ array_parser_error( this, "missing \"]\" in array dimensions"); break; } index++; ndim++; } if (ndim == 0) { /* No array dimensions */ } else { /* If array dimensions are given, expect '=' operator */ if (c_pg_array_string[index] != '=') { array_parser_error( this, "missing assignment operator"); index-=2; /* jump back to before "]" so that we don't break behavior to pg < 1.1 */ } index++; while (array_isspace(c_pg_array_string[index])) index++; } if (c_pg_array_string[index] != '{') array_parser_error( this, "array value must start with \"{\" or dimension information"); index++; if ( index < array_string_length && c_pg_array_string[index] == '}' ) { /* avoid buffer allocation for empty array */ ret = rb_ary_new(); } else { t_pg_coder_dec_func dec_func = pg_coder_dec_func(this->elem, 0); /* create a buffer of the same length, as that will be the worst case */ VALUE buf = rb_str_new(NULL, array_string_length); char *word = RSTRING_PTR(buf); ret = read_array_without_dim(this, &index, c_pg_array_string, array_string_length, word, enc_idx, tuple, field, dec_func); RB_GC_GUARD(buf); } if 
(c_pg_array_string[index] != '}' ) array_parser_error( this, "array value must end with \"}\""); index++; /* only whitespace is allowed after the closing brace */ for(;index < array_string_length; ++index) { if (!array_isspace(c_pg_array_string[index])) array_parser_error( this, "malformed array literal: Junk after closing right brace."); } return ret; } /* * Document-class: PG::TextDecoder::Identifier < PG::SimpleDecoder * * This is the decoder class for PostgreSQL identifiers. * * Returns an Array of identifiers: * PG::TextDecoder::Identifier.new.decode('schema."table"."column"') * => ["schema", "table", "column"] * */ static VALUE pg_text_dec_identifier(t_pg_coder *conv, const char *val, int len, int tuple, int field, int enc_idx) { /* Return value: array */ VALUE array; VALUE elem; int word_index = 0; int index; /* Use a buffer of the same length, as that will be the worst case */ PG_VARIABLE_LENGTH_ARRAY(char, word, len + 1, NAMEDATALEN) /* The current character in the input string. */ char c; /* 0: Currently outside a quoted string * 1: Currently inside a quoted string, last char was a quote * 2: Currently inside a quoted string, last char was no quote */ int openQuote = 0; array = rb_ary_new(); for(index = 0; index < len; ++index) { c = val[index]; if(c == '.' && openQuote < 2 ) { word[word_index] = 0; elem = pg_text_dec_string(conv, word, word_index, tuple, field, enc_idx); rb_ary_push(array, elem); openQuote = 0; word_index = 0; } else if(c == '"') { if (openQuote == 1) { word[word_index] = c; word_index++; openQuote = 2; } else if (openQuote == 2){ openQuote = 1; } else { openQuote = 2; } } else { word[word_index] = c; word_index++; } } word[word_index] = 0; elem = pg_text_dec_string(conv, word, word_index, tuple, field, enc_idx); rb_ary_push(array, elem); return array; } /* * Document-class: PG::TextDecoder::FromBase64 < PG::CompositeDecoder * * This is a decoder class for conversion of base64-encoded data * to its binary representation.
It outputs a binary Ruby String * or some other Ruby object, if a #elements_type decoder was defined. * */ static VALUE pg_text_dec_from_base64(t_pg_coder *conv, const char *val, int len, int tuple, int field, int enc_idx) { t_pg_composite_coder *this = (t_pg_composite_coder *)conv; t_pg_coder_dec_func dec_func = pg_coder_dec_func(this->elem, this->comp.format); int decoded_len; /* create a buffer of the expected decoded length */ VALUE out_value = rb_str_new(NULL, BASE64_DECODED_SIZE(len)); decoded_len = base64_decode( RSTRING_PTR(out_value), val, len ); rb_str_set_len(out_value, decoded_len); /* Is it a pure String conversion? Then we can directly send out_value to the user. */ if( this->comp.format == 0 && dec_func == pg_text_dec_string ){ PG_ENCODING_SET_NOCHECK( out_value, enc_idx ); return out_value; } if( this->comp.format == 1 && dec_func == pg_bin_dec_bytea ){ PG_ENCODING_SET_NOCHECK( out_value, rb_ascii8bit_encindex() ); return out_value; } out_value = dec_func(this->elem, RSTRING_PTR(out_value), decoded_len, tuple, field, enc_idx); return out_value; } static inline int char_to_digit(char c) { return c - '0'; } static int str2_to_int(const char *str) { return char_to_digit(str[0]) * 10 + char_to_digit(str[1]); } static int parse_year(const char **str) { int year = 0; int i; const char * p = *str; for(i = 0; isdigit(*p) && i < 7; i++, p++) { year = 10 * year + char_to_digit(*p); } *str = p; return year; } #define TZ_NEG 1 #define TZ_POS 2 /* * Document-class: PG::TextDecoder::Timestamp < PG::SimpleDecoder * * This is a decoder class for conversion of PostgreSQL text timestamps * to Ruby Time objects. 
* * The following flags can be used to specify time interpretation when no timezone is given: * * +PG::Coder::TIMESTAMP_DB_UTC+ : Interpret timestamp as UTC time (default) * * +PG::Coder::TIMESTAMP_DB_LOCAL+ : Interpret timestamp as local time * * +PG::Coder::TIMESTAMP_APP_UTC+ : Return timestamp as UTC time (default) * * +PG::Coder::TIMESTAMP_APP_LOCAL+ : Return timestamp as local time * * Example: * deco = PG::TextDecoder::Timestamp.new(flags: PG::Coder::TIMESTAMP_DB_UTC | PG::Coder::TIMESTAMP_APP_LOCAL) * deco.decode("2000-01-01 00:00:00") # => 2000-01-01 01:00:00 +0100 * deco.decode("2000-01-01 00:00:00.123-06") # => 2000-01-01 00:00:00 -0600 */ static VALUE pg_text_dec_timestamp(t_pg_coder *conv, const char *val, int len, int tuple, int field, int enc_idx) { const char *str = val; int year, mon, day; int hour, min, sec; int nsec = 0; int tz_given = 0; int tz_hour = 0; int tz_min = 0; int tz_sec = 0; year = parse_year(&str); if ( year > 0 && str[0] == '-' && isdigit(str[1]) && isdigit(str[2]) && str[3] == '-' && isdigit(str[4]) && isdigit(str[5]) && str[6] == ' ' && isdigit(str[7]) && isdigit(str[8]) && str[9] == ':' && isdigit(str[10]) && isdigit(str[11]) && str[12] == ':' && isdigit(str[13]) && isdigit(str[14]) ) { mon = str2_to_int(str+1); str += 3; day = str2_to_int(str+1); str += 3; hour = str2_to_int(str+1); str += 3; min = str2_to_int(str+1); str += 3; sec = str2_to_int(str+1); str += 3; if (str[0] == '.' && isdigit(str[1])) { /* nano second part, up to 9 digits */ static const int coef[9] = { 100000000, 10000000, 1000000, 100000, 10000, 1000, 100, 10, 1 }; int i; str++; for (i = 0; i < 9 && isdigit(*str); i++) { nsec += coef[i] * char_to_digit(*str++); } /* consume digits smaller than nsec */ while(isdigit(*str)) str++; } if ((str[0] == '+' || str[0] == '-') && isdigit(str[1]) && isdigit(str[2])) { tz_given = str[0] == '-' ? 
TZ_NEG : TZ_POS; tz_hour = str2_to_int(str+1); str += 3; if (str[0] == ':' && isdigit(str[1]) && isdigit(str[2])) { tz_min = str2_to_int(str+1); str += 3; } if (str[0] == ':' && isdigit(str[1]) && isdigit(str[2])) { tz_sec = str2_to_int(str+1); str += 3; } } if (str[0] == ' ' && str[1] == 'B' && str[2] == 'C') { year = -year + 1; str += 3; } if (*str == '\0') { /* must have consumed all the string */ VALUE sec_value; VALUE gmt_offset_value; VALUE res; #if (RUBY_API_VERSION_MAJOR > 2 || (RUBY_API_VERSION_MAJOR == 2 && RUBY_API_VERSION_MINOR >= 3)) && defined(HAVE_TIMEGM) /* Fast path for time conversion */ struct tm tm; struct timespec ts; tm.tm_year = year - 1900; tm.tm_mon = mon - 1; tm.tm_mday = day; tm.tm_hour = hour; tm.tm_min = min; tm.tm_sec = sec; tm.tm_isdst = -1; if (tz_given) { /* with timezone */ time_t time = timegm(&tm); if (time != -1){ int gmt_offset; gmt_offset = tz_hour * 3600 + tz_min * 60 + tz_sec; if (tz_given == TZ_NEG) { gmt_offset = - gmt_offset; } ts.tv_sec = time - gmt_offset; ts.tv_nsec = nsec; return rb_time_timespec_new(&ts, gmt_offset); } } else { /* without timezone */ time_t time; if( conv->flags & PG_CODER_TIMESTAMP_DB_LOCAL ) { time = mktime(&tm); } else { time = timegm(&tm); } if (time != -1){ ts.tv_sec = time; ts.tv_nsec = nsec; return rb_time_timespec_new(&ts, conv->flags & PG_CODER_TIMESTAMP_APP_LOCAL ? INT_MAX : INT_MAX-1); } } /* Some libc implementations fail to convert certain values, * so that we fall through to the slow path. 
*/ #endif if (nsec) { int sec_numerator = sec * 1000000 + nsec / 1000; int sec_denominator = 1000000; sec_value = rb_funcall(Qnil, s_id_Rational, 2, INT2NUM(sec_numerator), INT2NUM(sec_denominator)); } else { sec_value = INT2NUM(sec); } if (tz_given) { /* with timezone */ int gmt_offset; gmt_offset = tz_hour * 3600 + tz_min * 60 + tz_sec; if (tz_given == TZ_NEG) { gmt_offset = - gmt_offset; } gmt_offset_value = INT2NUM(gmt_offset); } else { /* without timezone */ gmt_offset_value = conv->flags & PG_CODER_TIMESTAMP_DB_LOCAL ? Qnil : INT2NUM(0); } res = rb_funcall(rb_cTime, s_id_new, 7, INT2NUM(year), INT2NUM(mon), INT2NUM(day), INT2NUM(hour), INT2NUM(min), sec_value, gmt_offset_value); if (tz_given) { /* with timezone */ return res; } else { /* without timezone */ if( (conv->flags & PG_CODER_TIMESTAMP_DB_LOCAL) && (conv->flags & PG_CODER_TIMESTAMP_APP_LOCAL) ) { return res; } else if( conv->flags & PG_CODER_TIMESTAMP_APP_LOCAL ) { return rb_funcall(res, s_id_getlocal, 0); } else { return rb_funcall(res, s_id_utc, 0); } } } } /* fall through to string conversion */ return pg_text_dec_string(conv, val, len, tuple, field, enc_idx); } /* * Document-class: PG::TextDecoder::Inet < PG::SimpleDecoder * * This is a decoder class for conversion of PostgreSQL inet type * to Ruby IPAddr values. * */ static VALUE pg_text_dec_inet(t_pg_coder *conv, const char *val, int len, int tuple, int field, int enc_idx) { VALUE ip; #if defined(_WIN32) ip = rb_str_new(val, len); ip = rb_class_new_instance(1, &ip, s_IPAddr); #else VALUE ip_int; VALUE vmasks; char dst[16]; char buf[64]; int af = strchr(val, '.') ? 
AF_INET : AF_INET6; int mask = -1; if (len >= 64) { rb_raise(rb_eTypeError, "too long data for text inet converter in tuple %d field %d", tuple, field); } if (len >= 4) { if (val[len-2] == '/') { mask = val[len-1] - '0'; memcpy(buf, val, len-2); buf[len-2] = '\0'; val = buf; } else if (val[len-3] == '/') { mask = (val[len-2]- '0')*10 + val[len-1] - '0'; memcpy(buf, val, len-3); buf[len-3] = '\0'; val = buf; } else if (val[len-4] == '/') { mask = (val[len-3]- '0')*100 + (val[len-2]- '0')*10 + val[len-1] - '0'; memcpy(buf, val, len-4); buf[len-4] = '\0'; val = buf; } } if (1 != inet_pton(af, val, dst)) { rb_raise(rb_eTypeError, "wrong data for text inet converter in tuple %d field %d val", tuple, field); } if (af == AF_INET) { unsigned int ip_int_native; if (mask == -1) { mask = 32; } else if (mask < 0 || mask > 32) { rb_raise(rb_eTypeError, "invalid mask for IPv4: %d", mask); } vmasks = s_vmasks4; ip_int_native = read_nbo32(dst); /* Work around broken IPAddr behavior of converting portion of address after netmask to 0 */ switch (mask) { case 0: ip_int_native = 0; break; case 32: /* nothing to do */ break; default: ip_int_native &= ~((1UL<<(32-mask))-1); break; } ip_int = UINT2NUM(ip_int_native); } else { unsigned long long * dstllp = (unsigned long long *)dst; unsigned long long ip_int_native1; unsigned long long ip_int_native2; if (mask == -1) { mask = 128; } else if (mask < 0 || mask > 128) { rb_raise(rb_eTypeError, "invalid mask for IPv6: %d", mask); } vmasks = s_vmasks6; ip_int_native1 = read_nbo64(dstllp); dstllp++; ip_int_native2 = read_nbo64(dstllp); if (mask == 128) { /* nothing to do */ } else if (mask == 64) { ip_int_native2 = 0; } else if (mask == 0) { ip_int_native1 = 0; ip_int_native2 = 0; } else if (mask < 64) { ip_int_native1 &= ~((1ULL<<(64-mask))-1); ip_int_native2 = 0; } else { ip_int_native2 &= ~((1ULL<<(128-mask))-1); } /* 4 Bignum allocations */ ip_int = ULL2NUM(ip_int_native1); ip_int = rb_funcall(ip_int, s_id_lshift, 1, INT2NUM(64)); ip_int = 
rb_funcall(ip_int, s_id_add, 1, ULL2NUM(ip_int_native2)); } if (use_ipaddr_alloc) { ip = rb_obj_alloc(s_IPAddr); rb_ivar_set(ip, s_ivar_family, INT2NUM(af)); rb_ivar_set(ip, s_ivar_addr, ip_int); rb_ivar_set(ip, s_ivar_mask_addr, RARRAY_AREF(vmasks, mask)); } else { VALUE ip_args[2]; ip_args[0] = ip_int; ip_args[1] = INT2NUM(af); ip = rb_class_new_instance(2, ip_args, s_IPAddr); ip = rb_funcall(ip, s_id_mask, 1, INT2NUM(mask)); } #endif return ip; } /* called per autoload when TextDecoder::Inet is used */ static VALUE init_pg_text_decoder_inet(VALUE rb_mPG_TextDecoder) { rb_require("ipaddr"); s_IPAddr = rb_funcall(rb_cObject, rb_intern("const_get"), 1, rb_str_new2("IPAddr")); rb_global_variable(&s_IPAddr); s_ivar_family = rb_intern("@family"); s_ivar_addr = rb_intern("@addr"); s_ivar_mask_addr = rb_intern("@mask_addr"); s_id_lshift = rb_intern("<<"); s_id_add = rb_intern("+"); s_id_mask = rb_intern("mask"); use_ipaddr_alloc = RTEST(rb_eval_string("IPAddr.new.instance_variables.sort == [:@addr, :@family, :@mask_addr]")); s_vmasks4 = rb_eval_string("a = [0]*33; a[0] = 0; a[32] = 0xffffffff; 31.downto(1){|i| a[i] = a[i+1] - (1 << (31 - i))}; a.freeze"); rb_global_variable(&s_vmasks4); s_vmasks6 = rb_eval_string("a = [0]*129; a[0] = 0; a[128] = 0xffffffffffffffffffffffffffffffff; 127.downto(1){|i| a[i] = a[i+1] - (1 << (127 - i))}; a.freeze"); rb_global_variable(&s_vmasks6); /* dummy = rb_define_class_under( rb_mPG_TextDecoder, "Inet", rb_cPG_SimpleDecoder ); */ pg_define_coder( "Inet", pg_text_dec_inet, rb_cPG_SimpleDecoder, rb_mPG_TextDecoder); return Qnil; } void init_pg_text_decoder(void) { s_id_Rational = rb_intern("Rational"); s_id_new = rb_intern("new"); s_id_utc = rb_intern("utc"); s_id_getlocal = rb_intern("getlocal"); s_nan = rb_eval_string("0.0/0.0"); rb_global_variable(&s_nan); s_pos_inf = rb_eval_string("1.0/0.0"); rb_global_variable(&s_pos_inf); s_neg_inf = rb_eval_string("-1.0/0.0"); rb_global_variable(&s_neg_inf); /* This module encapsulates all decoder 
classes with text input format */ rb_mPG_TextDecoder = rb_define_module_under( rb_mPG, "TextDecoder" ); rb_define_private_method(rb_singleton_class(rb_mPG_TextDecoder), "init_inet", init_pg_text_decoder_inet, 0); rb_define_private_method(rb_singleton_class(rb_mPG_TextDecoder), "init_numeric", init_pg_text_decoder_numeric, 0); /* Make RDoc aware of the decoder classes... */ /* dummy = rb_define_class_under( rb_mPG_TextDecoder, "Boolean", rb_cPG_SimpleDecoder ); */ pg_define_coder( "Boolean", pg_text_dec_boolean, rb_cPG_SimpleDecoder, rb_mPG_TextDecoder ); /* dummy = rb_define_class_under( rb_mPG_TextDecoder, "Integer", rb_cPG_SimpleDecoder ); */ pg_define_coder( "Integer", pg_text_dec_integer, rb_cPG_SimpleDecoder, rb_mPG_TextDecoder ); /* dummy = rb_define_class_under( rb_mPG_TextDecoder, "Float", rb_cPG_SimpleDecoder ); */ pg_define_coder( "Float", pg_text_dec_float, rb_cPG_SimpleDecoder, rb_mPG_TextDecoder ); /* dummy = rb_define_class_under( rb_mPG_TextDecoder, "String", rb_cPG_SimpleDecoder ); */ pg_define_coder( "String", pg_text_dec_string, rb_cPG_SimpleDecoder, rb_mPG_TextDecoder ); /* dummy = rb_define_class_under( rb_mPG_TextDecoder, "Bytea", rb_cPG_SimpleDecoder ); */ pg_define_coder( "Bytea", pg_text_dec_bytea, rb_cPG_SimpleDecoder, rb_mPG_TextDecoder ); /* dummy = rb_define_class_under( rb_mPG_TextDecoder, "Identifier", rb_cPG_SimpleDecoder ); */ pg_define_coder( "Identifier", pg_text_dec_identifier, rb_cPG_SimpleDecoder, rb_mPG_TextDecoder ); /* dummy = rb_define_class_under( rb_mPG_TextDecoder, "Timestamp", rb_cPG_SimpleDecoder ); */ pg_define_coder( "Timestamp", pg_text_dec_timestamp, rb_cPG_SimpleDecoder, rb_mPG_TextDecoder); /* dummy = rb_define_class_under( rb_mPG_TextDecoder, "Array", rb_cPG_CompositeDecoder ); */ pg_define_coder( "Array", pg_text_dec_array, rb_cPG_CompositeDecoder, rb_mPG_TextDecoder ); /* dummy = rb_define_class_under( rb_mPG_TextDecoder, "FromBase64", rb_cPG_CompositeDecoder ); */ pg_define_coder( "FromBase64", 
pg_text_dec_from_base64, rb_cPG_CompositeDecoder, rb_mPG_TextDecoder ); } pg-1.5.5/ext/pg_type_map_by_column.c0000644000004100000410000002362414563476204017521 0ustar www-datawww-data/* * pg_column_map.c - PG::ColumnMap class extension * $Id$ * */ #include "pg.h" static VALUE rb_cTypeMapByColumn; static ID s_id_decode; static ID s_id_encode; static VALUE pg_tmbc_s_allocate( VALUE klass ); static VALUE pg_tmbc_fit_to_result( VALUE self, VALUE result ) { int nfields; t_tmbc *this = RTYPEDDATA_DATA( self ); t_typemap *default_tm; VALUE sub_typemap; nfields = PQnfields( pgresult_get(result) ); if ( this->nfields != nfields ) { rb_raise( rb_eArgError, "number of result fields (%d) does not match number of mapped columns (%d)", nfields, this->nfields ); } /* Ensure that the default type map fits equally. */ default_tm = RTYPEDDATA_DATA( this->typemap.default_typemap ); sub_typemap = default_tm->funcs.fit_to_result( this->typemap.default_typemap, result ); /* Did the default type return the same object ? */ if( sub_typemap == this->typemap.default_typemap ){ return self; } else { /* Our default type map built a new object, so we need to propagate it * and build a copy of this type map and set it as default there.. */ VALUE new_typemap = pg_tmbc_s_allocate( rb_cTypeMapByColumn ); size_t struct_size = sizeof(t_tmbc) + sizeof(struct pg_tmbc_converter) * nfields; t_tmbc *p_new_typemap = (t_tmbc *)xmalloc(struct_size); memcpy( p_new_typemap, this, struct_size ); p_new_typemap->typemap.default_typemap = sub_typemap; RTYPEDDATA_DATA(new_typemap) = p_new_typemap; return new_typemap; } } static VALUE pg_tmbc_fit_to_query( VALUE self, VALUE params ) { int nfields; t_tmbc *this = RTYPEDDATA_DATA( self ); t_typemap *default_tm; nfields = (int)RARRAY_LEN( params ); if ( this->nfields != nfields ) { rb_raise( rb_eArgError, "number of result fields (%d) does not match number of mapped columns (%d)", nfields, this->nfields ); } /* Ensure that the default type map fits equally. 
*/ default_tm = RTYPEDDATA_DATA( this->typemap.default_typemap ); default_tm->funcs.fit_to_query( this->typemap.default_typemap, params ); return self; } static int pg_tmbc_fit_to_copy_get( VALUE self ) { t_tmbc *this = RTYPEDDATA_DATA( self ); /* Ensure that the default type map fits equally. */ t_typemap *default_tm = RTYPEDDATA_DATA( this->typemap.default_typemap ); default_tm->funcs.fit_to_copy_get( this->typemap.default_typemap ); return this->nfields; } VALUE pg_tmbc_result_value( t_typemap *p_typemap, VALUE result, int tuple, int field ) { t_pg_coder *p_coder = NULL; t_pg_result *p_result = pgresult_get_this(result); t_tmbc *this = (t_tmbc *) p_typemap; t_typemap *default_tm; if (PQgetisnull(p_result->pgresult, tuple, field)) { return Qnil; } p_coder = this->convs[field].cconv; if( p_coder ){ char * val = PQgetvalue( p_result->pgresult, tuple, field ); int len = PQgetlength( p_result->pgresult, tuple, field ); if( p_coder->dec_func ){ return p_coder->dec_func(p_coder, val, len, tuple, field, p_result->enc_idx); } else { t_pg_coder_dec_func dec_func; dec_func = pg_coder_dec_func( p_coder, PQfformat(p_result->pgresult, field) ); return dec_func(p_coder, val, len, tuple, field, p_result->enc_idx); } } default_tm = RTYPEDDATA_DATA( this->typemap.default_typemap ); return default_tm->funcs.typecast_result_value( default_tm, result, tuple, field ); } static t_pg_coder * pg_tmbc_typecast_query_param( t_typemap *p_typemap, VALUE param_value, int field ) { t_tmbc *this = (t_tmbc *) p_typemap; /* Number of fields were already checked in pg_tmbc_fit_to_query() */ t_pg_coder *p_coder = this->convs[field].cconv; if( !p_coder ){ t_typemap *default_tm = RTYPEDDATA_DATA( this->typemap.default_typemap ); return default_tm->funcs.typecast_query_param( default_tm, param_value, field ); } return p_coder; } static VALUE pg_tmbc_typecast_copy_get( t_typemap *p_typemap, VALUE field_str, int fieldno, int format, int enc_idx ) { t_tmbc *this = (t_tmbc *) p_typemap; t_pg_coder 
*p_coder; t_pg_coder_dec_func dec_func; if ( fieldno >= this->nfields || fieldno < 0 ) { rb_raise( rb_eArgError, "number of copy fields (%d) exceeds number of mapped columns (%d)", fieldno, this->nfields ); } p_coder = this->convs[fieldno].cconv; if( !p_coder ){ t_typemap *default_tm = RTYPEDDATA_DATA( this->typemap.default_typemap ); return default_tm->funcs.typecast_copy_get( default_tm, field_str, fieldno, format, enc_idx ); } dec_func = pg_coder_dec_func( p_coder, format ); /* Is it a pure String conversion? Then we can directly send field_str to the user. */ if( dec_func == pg_text_dec_string ){ rb_str_modify(field_str); PG_ENCODING_SET_NOCHECK( field_str, enc_idx ); return field_str; } if( dec_func == pg_bin_dec_bytea ){ rb_str_modify(field_str); PG_ENCODING_SET_NOCHECK( field_str, rb_ascii8bit_encindex() ); return field_str; } return dec_func( p_coder, RSTRING_PTR(field_str), RSTRING_LENINT(field_str), 0, fieldno, enc_idx ); } const struct pg_typemap_funcs pg_tmbc_funcs = { pg_tmbc_fit_to_result, pg_tmbc_fit_to_query, pg_tmbc_fit_to_copy_get, pg_tmbc_result_value, pg_tmbc_typecast_query_param, pg_tmbc_typecast_copy_get }; static void pg_tmbc_mark( void *_this ) { t_tmbc *this = (t_tmbc *)_this; int i; /* allocated but not initialized ? */ if( this == (t_tmbc *)&pg_typemap_funcs ) return; pg_typemap_mark(&this->typemap); for( i=0; i<this->nfields; i++){ t_pg_coder *p_coder = this->convs[i].cconv; if( p_coder ) rb_gc_mark_movable(p_coder->coder_obj); } } static size_t pg_tmbc_memsize( const void *_this ) { const t_tmbc *this = (const t_tmbc *)_this; return sizeof(t_tmbc) + sizeof(struct pg_tmbc_converter) * this->nfields; } static void pg_tmbc_compact( void *_this ) { t_tmbc *this = (t_tmbc *)_this; int i; /* allocated but not initialized ?
*/ if( this == (t_tmbc *)&pg_typemap_funcs ) return; pg_typemap_compact(&this->typemap); for( i=0; i<this->nfields; i++){ t_pg_coder *p_coder = this->convs[i].cconv; if( p_coder ) pg_gc_location(p_coder->coder_obj); } } static void pg_tmbc_free( void *_this ) { t_tmbc *this = (t_tmbc *)_this; /* allocated but not initialized ? */ if( this == (t_tmbc *)&pg_typemap_funcs ) return; xfree( this ); } static const rb_data_type_t pg_tmbc_type = { "PG::TypeMapByColumn", { pg_tmbc_mark, pg_tmbc_free, pg_tmbc_memsize, pg_compact_callback(pg_tmbc_compact), }, &pg_typemap_type, 0, RUBY_TYPED_FREE_IMMEDIATELY | RUBY_TYPED_WB_PROTECTED | PG_RUBY_TYPED_FROZEN_SHAREABLE, }; static VALUE pg_tmbc_s_allocate( VALUE klass ) { /* Use pg_typemap_funcs as interim struct until #initialize is called. */ return TypedData_Wrap_Struct( klass, &pg_tmbc_type, (t_tmbc *)&pg_typemap_funcs ); } VALUE pg_tmbc_allocate(void) { return pg_tmbc_s_allocate(rb_cTypeMapByColumn); } /* * call-seq: * PG::TypeMapByColumn.new( coders ) * * Builds a new type map and assigns a list of coders for the given column. * +coders+ must be an Array of PG::Coder objects or +nil+ values. * The length of the Array corresponds to * the number of columns or bind parameters this type map is usable for. * * A +nil+ value will forward the given field to the #default_type_map . */ static VALUE pg_tmbc_init(VALUE self, VALUE conv_ary) { long i; t_tmbc *this; int conv_ary_len; rb_check_frozen(self); Check_Type(conv_ary, T_ARRAY); conv_ary_len = RARRAY_LENINT(conv_ary); this = xmalloc(sizeof(t_tmbc) + sizeof(struct pg_tmbc_converter) * conv_ary_len); /* Set nfields to 0 at first, so that GC mark function doesn't access uninitialized memory.
*/ this->nfields = 0; this->typemap.funcs = pg_tmbc_funcs; RB_OBJ_WRITE(self, &this->typemap.default_typemap, pg_typemap_all_strings); RTYPEDDATA_DATA(self) = this; for(i=0; i<conv_ary_len; i++){ VALUE obj = rb_ary_entry(conv_ary, i); if( obj == Qnil ){ /* no type cast */ this->convs[i].cconv = NULL; } else { t_pg_coder *p_coder; /* Check argument type and store the coder pointer */ TypedData_Get_Struct(obj, t_pg_coder, &pg_coder_type, p_coder); RB_OBJ_WRITTEN(self, Qnil, p_coder->coder_obj); this->convs[i].cconv = p_coder; } } this->nfields = conv_ary_len; return self; } /* * call-seq: * typemap.coders -> Array * * Array of PG::Coder objects. The length of the Array corresponds to * the number of columns or bind parameters this type map is usable for. */ static VALUE pg_tmbc_coders(VALUE self) { int i; t_tmbc *this = RTYPEDDATA_DATA( self ); VALUE ary_coders = rb_ary_new(); for( i=0; i<this->nfields; i++){ t_pg_coder *conv = this->convs[i].cconv; if( conv ) { rb_ary_push( ary_coders, conv->coder_obj ); } else { rb_ary_push( ary_coders, Qnil ); } } return rb_obj_freeze(ary_coders); } void init_pg_type_map_by_column(void) { s_id_decode = rb_intern("decode"); s_id_encode = rb_intern("encode"); /* * Document-class: PG::TypeMapByColumn < PG::TypeMap * * This type map casts values by a coder assigned per field/column. * * Each PG::TypeMapByColumn has a fixed list of either encoders or decoders, * that is defined at TypeMapByColumn.new . A type map with encoders is usable for type casting * query bind parameters and COPY data for PG::Connection#put_copy_data . * A type map with decoders is usable for type casting of result values and * COPY data from PG::Connection#get_copy_data . * * PG::TypeMapByColumn objects are in particular useful in conjunction with prepared statements, * since they can be cached alongside with the statement handle. * * This type map strategy is also used internally by PG::TypeMapByOid, when the * number of rows of a result set exceeds a given limit.
*/ rb_cTypeMapByColumn = rb_define_class_under( rb_mPG, "TypeMapByColumn", rb_cTypeMap ); rb_define_alloc_func( rb_cTypeMapByColumn, pg_tmbc_s_allocate ); rb_define_method( rb_cTypeMapByColumn, "initialize", pg_tmbc_init, 1 ); rb_define_method( rb_cTypeMapByColumn, "coders", pg_tmbc_coders, 0 ); /* rb_mDefaultTypeMappable = rb_define_module_under( rb_cTypeMap, "DefaultTypeMappable"); */ rb_include_module( rb_cTypeMapByColumn, rb_mDefaultTypeMappable ); } pg-1.5.5/ext/pg_coder.c0000644000004100000410000004147014563476204014727 0ustar www-datawww-data/* * pg_coder.c - PG::Coder class extension * */ #include "pg.h" VALUE rb_cPG_Coder; VALUE rb_cPG_SimpleCoder; VALUE rb_cPG_SimpleEncoder; VALUE rb_cPG_SimpleDecoder; VALUE rb_cPG_CompositeCoder; VALUE rb_cPG_CompositeEncoder; VALUE rb_cPG_CompositeDecoder; VALUE rb_mPG_BinaryFormatting; static ID s_id_encode; static ID s_id_decode; static ID s_id_CFUNC; static VALUE pg_coder_allocate( VALUE klass ) { rb_raise( rb_eTypeError, "PG::Coder cannot be instantiated directly"); } void pg_coder_init_encoder( VALUE self ) { t_pg_coder *this = RTYPEDDATA_DATA( self ); VALUE klass = rb_class_of(self); if( rb_const_defined( klass, s_id_CFUNC ) ){ VALUE cfunc = rb_const_get( klass, s_id_CFUNC ); this->enc_func = RTYPEDDATA_DATA(cfunc); } else { this->enc_func = NULL; } this->dec_func = NULL; RB_OBJ_WRITE(self, &this->coder_obj, self); this->oid = 0; this->format = 0; this->flags = 0; rb_iv_set( self, "@name", Qnil ); } void pg_coder_init_decoder( VALUE self ) { t_pg_coder *this = RTYPEDDATA_DATA( self ); VALUE klass = rb_class_of(self); this->enc_func = NULL; if( rb_const_defined( klass, s_id_CFUNC ) ){ VALUE cfunc = rb_const_get( klass, s_id_CFUNC ); this->dec_func = RTYPEDDATA_DATA(cfunc); } else { this->dec_func = NULL; } RB_OBJ_WRITE(self, &this->coder_obj, self); this->oid = 0; this->format = 0; this->flags = 0; rb_iv_set( self, "@name", Qnil ); } static size_t pg_coder_memsize(const void *_this) { const t_pg_coder *this = 
(const t_pg_coder *)_this; return sizeof(*this); } static size_t pg_composite_coder_memsize(const void *_this) { const t_pg_composite_coder *this = (const t_pg_composite_coder *)_this; return sizeof(*this); } void pg_coder_compact(void *_this) { t_pg_coder *this = (t_pg_coder *)_this; pg_gc_location(this->coder_obj); } static void pg_composite_coder_compact(void *_this) { t_pg_composite_coder *this = (t_pg_composite_coder *)_this; pg_coder_compact(&this->comp); } const rb_data_type_t pg_coder_type = { "PG::Coder", { (RUBY_DATA_FUNC) NULL, RUBY_TYPED_DEFAULT_FREE, pg_coder_memsize, pg_compact_callback(pg_coder_compact), }, 0, 0, // IMPORTANT: WB_PROTECTED objects must only use the RB_OBJ_WRITE() // macro to update VALUE references, as to trigger write barriers. RUBY_TYPED_FREE_IMMEDIATELY | RUBY_TYPED_WB_PROTECTED | PG_RUBY_TYPED_FROZEN_SHAREABLE, }; static VALUE pg_simple_encoder_allocate( VALUE klass ) { t_pg_coder *this; VALUE self = TypedData_Make_Struct( klass, t_pg_coder, &pg_coder_type, this ); pg_coder_init_encoder( self ); return self; } static const rb_data_type_t pg_composite_coder_type = { "PG::CompositeCoder", { (RUBY_DATA_FUNC) NULL, RUBY_TYPED_DEFAULT_FREE, pg_composite_coder_memsize, pg_compact_callback(pg_composite_coder_compact), }, &pg_coder_type, 0, RUBY_TYPED_FREE_IMMEDIATELY | RUBY_TYPED_WB_PROTECTED | PG_RUBY_TYPED_FROZEN_SHAREABLE, }; static VALUE pg_composite_encoder_allocate( VALUE klass ) { t_pg_composite_coder *this; VALUE self = TypedData_Make_Struct( klass, t_pg_composite_coder, &pg_composite_coder_type, this ); pg_coder_init_encoder( self ); this->elem = NULL; this->needs_quotation = 1; this->delimiter = ','; rb_iv_set( self, "@elements_type", Qnil ); return self; } static VALUE pg_simple_decoder_allocate( VALUE klass ) { t_pg_coder *this; VALUE self = TypedData_Make_Struct( klass, t_pg_coder, &pg_coder_type, this ); pg_coder_init_decoder( self ); return self; } static VALUE pg_composite_decoder_allocate( VALUE klass ) { 
t_pg_composite_coder *this; VALUE self = TypedData_Make_Struct( klass, t_pg_composite_coder, &pg_composite_coder_type, this ); pg_coder_init_decoder( self ); this->elem = NULL; this->needs_quotation = 1; this->delimiter = ','; rb_iv_set( self, "@elements_type", Qnil ); return self; } /* * call-seq: * coder.encode( value [, encoding] ) * * Encodes the given Ruby object into string representation, without * sending data to/from the database server. * * A nil value is passed through. * */ static VALUE pg_coder_encode(int argc, VALUE *argv, VALUE self) { VALUE res; VALUE intermediate; VALUE value; int len, len2; int enc_idx; t_pg_coder *this = RTYPEDDATA_DATA(self); if(argc < 1 || argc > 2){ rb_raise(rb_eArgError, "wrong number of arguments (%i for 1..2)", argc); }else if(argc == 1){ enc_idx = rb_ascii8bit_encindex(); }else{ enc_idx = rb_to_encoding_index(argv[1]); } value = argv[0]; if( NIL_P(value) ) return Qnil; if( !this->enc_func ){ rb_raise(rb_eRuntimeError, "no encoder function defined"); } len = this->enc_func( this, value, NULL, &intermediate, enc_idx ); if( len == -1 ){ /* The intermediate value is a String that can be used directly. */ return intermediate; } res = rb_str_new(NULL, len); PG_ENCODING_SET_NOCHECK(res, enc_idx); len2 = this->enc_func( this, value, RSTRING_PTR(res), &intermediate, enc_idx ); if( len < len2 ){ rb_bug("%s: result length of first encoder run (%i) is less than second run (%i)", rb_obj_classname( self ), len, len2 ); } rb_str_set_len( res, len2 ); RB_GC_GUARD(intermediate); return res; } /* * call-seq: * coder.decode( string, tuple=nil, field=nil ) * * Decodes the given string representation into a Ruby object, without * sending data to/from the database server. * * A nil value is passed through and non String values are expected to have * #to_str defined. 
* */ static VALUE pg_coder_decode(int argc, VALUE *argv, VALUE self) { char *val; int tuple = -1; int field = -1; VALUE res; t_pg_coder *this = RTYPEDDATA_DATA(self); if(argc < 1 || argc > 3){ rb_raise(rb_eArgError, "wrong number of arguments (%i for 1..3)", argc); }else if(argc >= 3){ tuple = NUM2INT(argv[1]); field = NUM2INT(argv[2]); } if( NIL_P(argv[0]) ) return Qnil; if( this->format == 0 ){ val = StringValueCStr(argv[0]); }else{ val = StringValuePtr(argv[0]); } if( !this->dec_func ){ rb_raise(rb_eRuntimeError, "no decoder function defined"); } res = this->dec_func(this, val, RSTRING_LENINT(argv[0]), tuple, field, ENCODING_GET(argv[0])); return res; } /* * call-seq: * coder.oid = Integer * * Specifies the type OID that is sent alongside with an encoded * query parameter value. * * The default is +0+. */ static VALUE pg_coder_oid_set(VALUE self, VALUE oid) { t_pg_coder *this = RTYPEDDATA_DATA(self); rb_check_frozen(self); this->oid = NUM2UINT(oid); return oid; } /* * call-seq: * coder.oid -> Integer * * The type OID that is sent alongside with an encoded * query parameter value. */ static VALUE pg_coder_oid_get(VALUE self) { t_pg_coder *this = RTYPEDDATA_DATA(self); return UINT2NUM(this->oid); } /* * call-seq: * coder.format = Integer * * Specifies the format code that is sent alongside with an encoded * query parameter value. * * The default is +0+. */ static VALUE pg_coder_format_set(VALUE self, VALUE format) { t_pg_coder *this = RTYPEDDATA_DATA(self); rb_check_frozen(self); this->format = NUM2INT(format); return format; } /* * call-seq: * coder.format -> Integer * * The format code that is sent alongside with an encoded * query parameter value. */ static VALUE pg_coder_format_get(VALUE self) { t_pg_coder *this = RTYPEDDATA_DATA(self); return INT2NUM(this->format); } /* * call-seq: * coder.flags = Integer * * Set coder specific bitwise OR-ed flags. * See the particular en- or decoder description for available flags. * * The default is +0+. 
*/ static VALUE pg_coder_flags_set(VALUE self, VALUE flags) { t_pg_coder *this = RTYPEDDATA_DATA(self); rb_check_frozen(self); this->flags = NUM2INT(flags); return flags; } /* * call-seq: * coder.flags -> Integer * * Get current bitwise OR-ed coder flags. */ static VALUE pg_coder_flags_get(VALUE self) { t_pg_coder *this = RTYPEDDATA_DATA(self); return INT2NUM(this->flags); } /* * call-seq: * coder.needs_quotation = Boolean * * Specifies whether the assigned #elements_type requires quotation marks to * be transferred safely. Encoding with #needs_quotation=false is somewhat * faster. * * The default is +true+. This option is ignored for decoding of values. */ static VALUE pg_coder_needs_quotation_set(VALUE self, VALUE needs_quotation) { t_pg_composite_coder *this = RTYPEDDATA_DATA(self); rb_check_frozen(self); this->needs_quotation = RTEST(needs_quotation); return needs_quotation; } /* * call-seq: * coder.needs_quotation -> Boolean * * Specifies whether the assigned #elements_type requires quotation marks to * be transferred safely. */ static VALUE pg_coder_needs_quotation_get(VALUE self) { t_pg_composite_coder *this = RTYPEDDATA_DATA(self); return this->needs_quotation ? Qtrue : Qfalse; } /* * call-seq: * coder.delimiter = String * * Specifies the character that separates values within the composite type. * The default is a comma. * This must be a single one-byte character. */ static VALUE pg_coder_delimiter_set(VALUE self, VALUE delimiter) { t_pg_composite_coder *this = RTYPEDDATA_DATA(self); rb_check_frozen(self); StringValue(delimiter); if(RSTRING_LEN(delimiter) != 1) rb_raise( rb_eArgError, "delimiter size must be one byte"); this->delimiter = *RSTRING_PTR(delimiter); return delimiter; } /* * call-seq: * coder.delimiter -> String * * The character that separates values within the composite type. 
*/ static VALUE pg_coder_delimiter_get(VALUE self) { t_pg_composite_coder *this = RTYPEDDATA_DATA(self); return rb_str_new(&this->delimiter, 1); } /* * call-seq: * coder.elements_type = coder * * Specifies the PG::Coder object that is used to encode or decode * the single elements of this composite type. * * If set to +nil+ all values are encoded and decoded as String objects. */ static VALUE pg_coder_elements_type_set(VALUE self, VALUE elem_type) { t_pg_composite_coder *this = RTYPEDDATA_DATA( self ); rb_check_frozen(self); if ( NIL_P(elem_type) ){ this->elem = NULL; } else if ( rb_obj_is_kind_of(elem_type, rb_cPG_Coder) ){ this->elem = RTYPEDDATA_DATA( elem_type ); } else { rb_raise( rb_eTypeError, "wrong elements type %s (expected some kind of PG::Coder)", rb_obj_classname( elem_type ) ); } rb_iv_set( self, "@elements_type", elem_type ); return elem_type; } static const rb_data_type_t pg_coder_cfunc_type = { "PG::Coder::CFUNC", { (RUBY_DATA_FUNC)NULL, (RUBY_DATA_FUNC)NULL, (size_t (*)(const void *))NULL, }, 0, 0, RUBY_TYPED_FREE_IMMEDIATELY | RUBY_TYPED_WB_PROTECTED | PG_RUBY_TYPED_FROZEN_SHAREABLE, }; VALUE pg_define_coder( const char *name, void *func, VALUE base_klass, VALUE nsp ) { VALUE cfunc_obj = TypedData_Wrap_Struct( rb_cObject, &pg_coder_cfunc_type, func ); VALUE coder_klass = rb_define_class_under( nsp, name, base_klass ); if( nsp==rb_mPG_BinaryEncoder || nsp==rb_mPG_BinaryDecoder ) rb_include_module( coder_klass, rb_mPG_BinaryFormatting ); if( nsp==rb_mPG_BinaryEncoder || nsp==rb_mPG_TextEncoder ) rb_define_method( coder_klass, "encode", pg_coder_encode, -1 ); if( nsp==rb_mPG_BinaryDecoder || nsp==rb_mPG_TextDecoder ) rb_define_method( coder_klass, "decode", pg_coder_decode, -1 ); rb_define_const( coder_klass, "CFUNC", rb_obj_freeze(cfunc_obj) ); RB_GC_GUARD(cfunc_obj); return coder_klass; } static int pg_text_enc_in_ruby(t_pg_coder *conv, VALUE value, char *out, VALUE *intermediate, int enc_idx) { int arity = rb_obj_method_arity(conv->coder_obj,
s_id_encode); if( arity == 1 ){ VALUE out_str = rb_funcall( conv->coder_obj, s_id_encode, 1, value ); StringValue( out_str ); *intermediate = rb_str_export_to_enc(out_str, rb_enc_from_index(enc_idx)); }else{ VALUE enc = rb_enc_from_encoding(rb_enc_from_index(enc_idx)); VALUE out_str = rb_funcall( conv->coder_obj, s_id_encode, 2, value, enc ); StringValue( out_str ); *intermediate = out_str; } return -1; } t_pg_coder_enc_func pg_coder_enc_func(t_pg_coder *this) { if( this ){ if( this->enc_func ){ return this->enc_func; }else{ return pg_text_enc_in_ruby; } }else{ /* no element encoder defined -> use std to_str conversion */ return pg_coder_enc_to_s; } } static VALUE pg_text_dec_in_ruby(t_pg_coder *this, const char *val, int len, int tuple, int field, int enc_idx) { VALUE string = pg_text_dec_string(this, val, len, tuple, field, enc_idx); return rb_funcall( this->coder_obj, s_id_decode, 3, string, INT2NUM(tuple), INT2NUM(field) ); } static VALUE pg_bin_dec_in_ruby(t_pg_coder *this, const char *val, int len, int tuple, int field, int enc_idx) { VALUE string = pg_bin_dec_bytea(this, val, len, tuple, field, enc_idx); return rb_funcall( this->coder_obj, s_id_decode, 3, string, INT2NUM(tuple), INT2NUM(field) ); } t_pg_coder_dec_func pg_coder_dec_func(t_pg_coder *this, int binary) { if( this ){ if( this->dec_func ){ return this->dec_func; }else{ return binary ? pg_bin_dec_in_ruby : pg_text_dec_in_ruby; } }else{ /* no element decoder defined -> use std String conversion */ return binary ? pg_bin_dec_bytea : pg_text_dec_string; } } void init_pg_coder(void) { s_id_encode = rb_intern("encode"); s_id_decode = rb_intern("decode"); s_id_CFUNC = rb_intern("CFUNC"); /* Document-class: PG::Coder < Object * * This is the base class for all type cast encoder and decoder classes. * * It can be used for implicit type casts by a PG::TypeMap or to * convert single values to/from their string representation by #encode * and #decode. 
* * Ruby +nil+ values are not handled by encoders, but are always transmitted * as SQL +NULL+ value. Vice versa SQL +NULL+ values are not handled by decoders, * but are always returned as a +nil+ value. */ rb_cPG_Coder = rb_define_class_under( rb_mPG, "Coder", rb_cObject ); rb_define_alloc_func( rb_cPG_Coder, pg_coder_allocate ); rb_define_method( rb_cPG_Coder, "oid=", pg_coder_oid_set, 1 ); rb_define_method( rb_cPG_Coder, "oid", pg_coder_oid_get, 0 ); rb_define_method( rb_cPG_Coder, "format=", pg_coder_format_set, 1 ); rb_define_method( rb_cPG_Coder, "format", pg_coder_format_get, 0 ); rb_define_method( rb_cPG_Coder, "flags=", pg_coder_flags_set, 1 ); rb_define_method( rb_cPG_Coder, "flags", pg_coder_flags_get, 0 ); /* define flags to be used with PG::Coder#flags= */ rb_define_const( rb_cPG_Coder, "TIMESTAMP_DB_UTC", INT2NUM(PG_CODER_TIMESTAMP_DB_UTC)); rb_define_const( rb_cPG_Coder, "TIMESTAMP_DB_LOCAL", INT2NUM(PG_CODER_TIMESTAMP_DB_LOCAL)); rb_define_const( rb_cPG_Coder, "TIMESTAMP_APP_UTC", INT2NUM(PG_CODER_TIMESTAMP_APP_UTC)); rb_define_const( rb_cPG_Coder, "TIMESTAMP_APP_LOCAL", INT2NUM(PG_CODER_TIMESTAMP_APP_LOCAL)); rb_define_const( rb_cPG_Coder, "FORMAT_ERROR_MASK", INT2NUM(PG_CODER_FORMAT_ERROR_MASK)); rb_define_const( rb_cPG_Coder, "FORMAT_ERROR_TO_RAISE", INT2NUM(PG_CODER_FORMAT_ERROR_TO_RAISE)); rb_define_const( rb_cPG_Coder, "FORMAT_ERROR_TO_STRING", INT2NUM(PG_CODER_FORMAT_ERROR_TO_STRING)); rb_define_const( rb_cPG_Coder, "FORMAT_ERROR_TO_PARTIAL", INT2NUM(PG_CODER_FORMAT_ERROR_TO_PARTIAL)); /* * Name of the coder or the corresponding data type. * * This accessor is only used in PG::Coder#inspect . 
*/ rb_define_attr( rb_cPG_Coder, "name", 1, 1 ); /* Document-class: PG::SimpleCoder < PG::Coder */ rb_cPG_SimpleCoder = rb_define_class_under( rb_mPG, "SimpleCoder", rb_cPG_Coder ); /* Document-class: PG::SimpleEncoder < PG::SimpleCoder */ rb_cPG_SimpleEncoder = rb_define_class_under( rb_mPG, "SimpleEncoder", rb_cPG_SimpleCoder ); rb_define_alloc_func( rb_cPG_SimpleEncoder, pg_simple_encoder_allocate ); /* Document-class: PG::SimpleDecoder < PG::SimpleCoder */ rb_cPG_SimpleDecoder = rb_define_class_under( rb_mPG, "SimpleDecoder", rb_cPG_SimpleCoder ); rb_define_alloc_func( rb_cPG_SimpleDecoder, pg_simple_decoder_allocate ); /* Document-class: PG::CompositeCoder < PG::Coder * * This is the base class for all type cast classes of PostgreSQL types, * that are made up of some sub type. */ rb_cPG_CompositeCoder = rb_define_class_under( rb_mPG, "CompositeCoder", rb_cPG_Coder ); rb_define_method( rb_cPG_CompositeCoder, "elements_type=", pg_coder_elements_type_set, 1 ); rb_define_attr( rb_cPG_CompositeCoder, "elements_type", 1, 0 ); rb_define_method( rb_cPG_CompositeCoder, "needs_quotation=", pg_coder_needs_quotation_set, 1 ); rb_define_method( rb_cPG_CompositeCoder, "needs_quotation?", pg_coder_needs_quotation_get, 0 ); rb_define_method( rb_cPG_CompositeCoder, "delimiter=", pg_coder_delimiter_set, 1 ); rb_define_method( rb_cPG_CompositeCoder, "delimiter", pg_coder_delimiter_get, 0 ); /* Document-class: PG::CompositeEncoder < PG::CompositeCoder */ rb_cPG_CompositeEncoder = rb_define_class_under( rb_mPG, "CompositeEncoder", rb_cPG_CompositeCoder ); rb_define_alloc_func( rb_cPG_CompositeEncoder, pg_composite_encoder_allocate ); /* Document-class: PG::CompositeDecoder < PG::CompositeCoder */ rb_cPG_CompositeDecoder = rb_define_class_under( rb_mPG, "CompositeDecoder", rb_cPG_CompositeCoder ); rb_define_alloc_func( rb_cPG_CompositeDecoder, pg_composite_decoder_allocate ); rb_mPG_BinaryFormatting = rb_define_module_under( rb_cPG_Coder, "BinaryFormatting"); } 
pg-1.5.5/ext/pg_connection.c0000644000004100000410000041730014563476204015771 0ustar www-datawww-data/* * pg_connection.c - PG::Connection class extension * $Id$ * */ #include "pg.h" /* Number of bytes that are reserved on the stack for query params. */ #define QUERYDATA_BUFFER_SIZE 4000 VALUE rb_cPGconn; static ID s_id_encode; static ID s_id_autoclose_set; static VALUE sym_type, sym_format, sym_value; static VALUE sym_symbol, sym_string, sym_static_symbol; static VALUE pgconn_finish( VALUE ); static VALUE pgconn_set_default_encoding( VALUE self ); static VALUE pgconn_wait_for_flush( VALUE self ); static void pgconn_set_internal_encoding_index( VALUE ); static const rb_data_type_t pg_connection_type; static VALUE pgconn_async_flush(VALUE self); /* * Global functions */ /* * Convenience function to raise connection errors */ #ifdef __GNUC__ __attribute__((format(printf, 3, 4))) #endif static void pg_raise_conn_error( VALUE klass, VALUE self, const char *format, ...) { VALUE msg, error; va_list ap; va_start(ap, format); msg = rb_vsprintf(format, ap); va_end(ap); error = rb_exc_new_str(klass, msg); rb_iv_set(error, "@connection", self); rb_exc_raise(error); } /* * Fetch the PG::Connection object data pointer. */ t_pg_connection * pg_get_connection( VALUE self ) { t_pg_connection *this; TypedData_Get_Struct( self, t_pg_connection, &pg_connection_type, this); return this; } /* * Fetch the PG::Connection object data pointer and check it's * PGconn data pointer for sanity. */ static t_pg_connection * pg_get_connection_safe( VALUE self ) { t_pg_connection *this; TypedData_Get_Struct( self, t_pg_connection, &pg_connection_type, this); if ( !this->pgconn ) pg_raise_conn_error( rb_eConnectionBad, self, "connection is closed"); return this; } /* * Fetch the PGconn data pointer and check it for sanity. * * Note: This function is used externally by the sequel_pg gem, * so do changes carefully. 
* */ PGconn * pg_get_pgconn( VALUE self ) { t_pg_connection *this; TypedData_Get_Struct( self, t_pg_connection, &pg_connection_type, this); if ( !this->pgconn ){ pg_raise_conn_error( rb_eConnectionBad, self, "connection is closed"); } return this->pgconn; } /* * Close the associated socket IO object if there is one. */ static void pgconn_close_socket_io( VALUE self ) { t_pg_connection *this = pg_get_connection( self ); VALUE socket_io = this->socket_io; if ( RTEST(socket_io) ) { #if defined(_WIN32) if( rb_w32_unwrap_io_handle(this->ruby_sd) ) pg_raise_conn_error( rb_eConnectionBad, self, "Could not unwrap win32 socket handle"); #endif rb_funcall( socket_io, rb_intern("close"), 0 ); } RB_OBJ_WRITE(self, &this->socket_io, Qnil); } /* * Create a Ruby Array of Hashes out of a PGconninfoOptions array. */ static VALUE pgconn_make_conninfo_array( const PQconninfoOption *options ) { VALUE ary = rb_ary_new(); VALUE hash; int i = 0; if (!options) return Qnil; for(i = 0; options[i].keyword != NULL; i++) { hash = rb_hash_new(); if(options[i].keyword) rb_hash_aset(hash, ID2SYM(rb_intern("keyword")), rb_str_new2(options[i].keyword)); if(options[i].envvar) rb_hash_aset(hash, ID2SYM(rb_intern("envvar")), rb_str_new2(options[i].envvar)); if(options[i].compiled) rb_hash_aset(hash, ID2SYM(rb_intern("compiled")), rb_str_new2(options[i].compiled)); if(options[i].val) rb_hash_aset(hash, ID2SYM(rb_intern("val")), rb_str_new2(options[i].val)); if(options[i].label) rb_hash_aset(hash, ID2SYM(rb_intern("label")), rb_str_new2(options[i].label)); if(options[i].dispchar) rb_hash_aset(hash, ID2SYM(rb_intern("dispchar")), rb_str_new2(options[i].dispchar)); rb_hash_aset(hash, ID2SYM(rb_intern("dispsize")), INT2NUM(options[i].dispsize)); rb_ary_push(ary, hash); } return ary; } static const char *pg_cstr_enc(VALUE str, int enc_idx){ const char *ptr = StringValueCStr(str); if( ENCODING_GET(str) == enc_idx ){ return ptr; } else { str = rb_str_export_to_enc(str, rb_enc_from_index(enc_idx)); return 
StringValueCStr(str); } } /* * GC Mark function */ static void pgconn_gc_mark( void *_this ) { t_pg_connection *this = (t_pg_connection *)_this; rb_gc_mark_movable( this->socket_io ); rb_gc_mark_movable( this->notice_receiver ); rb_gc_mark_movable( this->notice_processor ); rb_gc_mark_movable( this->type_map_for_queries ); rb_gc_mark_movable( this->type_map_for_results ); rb_gc_mark_movable( this->trace_stream ); rb_gc_mark_movable( this->encoder_for_put_copy_data ); rb_gc_mark_movable( this->decoder_for_get_copy_data ); } static void pgconn_gc_compact( void *_this ) { t_pg_connection *this = (t_pg_connection *)_this; pg_gc_location( this->socket_io ); pg_gc_location( this->notice_receiver ); pg_gc_location( this->notice_processor ); pg_gc_location( this->type_map_for_queries ); pg_gc_location( this->type_map_for_results ); pg_gc_location( this->trace_stream ); pg_gc_location( this->encoder_for_put_copy_data ); pg_gc_location( this->decoder_for_get_copy_data ); } /* * GC Free function */ static void pgconn_gc_free( void *_this ) { t_pg_connection *this = (t_pg_connection *)_this; #if defined(_WIN32) if ( RTEST(this->socket_io) ) { if( rb_w32_unwrap_io_handle(this->ruby_sd) ){ rb_warn("pg: Could not unwrap win32 socket handle by garbage collector"); } } #endif if (this->pgconn != NULL) PQfinish( this->pgconn ); xfree(this); } /* * Object Size function */ static size_t pgconn_memsize( const void *_this ) { const t_pg_connection *this = (const t_pg_connection *)_this; return sizeof(*this); } static const rb_data_type_t pg_connection_type = { "PG::Connection", { pgconn_gc_mark, pgconn_gc_free, pgconn_memsize, pg_compact_callback(pgconn_gc_compact), }, 0, 0, RUBY_TYPED_WB_PROTECTED, }; /************************************************************************** * Class Methods **************************************************************************/ /* * Document-method: allocate * * call-seq: * PG::Connection.allocate -> conn */ static VALUE pgconn_s_allocate( VALUE 
klass ) { t_pg_connection *this; VALUE self = TypedData_Make_Struct( klass, t_pg_connection, &pg_connection_type, this ); this->pgconn = NULL; RB_OBJ_WRITE(self, &this->socket_io, Qnil); RB_OBJ_WRITE(self, &this->notice_receiver, Qnil); RB_OBJ_WRITE(self, &this->notice_processor, Qnil); RB_OBJ_WRITE(self, &this->type_map_for_queries, pg_typemap_all_strings); RB_OBJ_WRITE(self, &this->type_map_for_results, pg_typemap_all_strings); RB_OBJ_WRITE(self, &this->encoder_for_put_copy_data, Qnil); RB_OBJ_WRITE(self, &this->decoder_for_get_copy_data, Qnil); RB_OBJ_WRITE(self, &this->trace_stream, Qnil); rb_ivar_set(self, rb_intern("@calls_to_put_copy_data"), INT2FIX(0)); return self; } static VALUE pgconn_s_sync_connect(int argc, VALUE *argv, VALUE klass) { t_pg_connection *this; VALUE conninfo; VALUE self = pgconn_s_allocate( klass ); this = pg_get_connection( self ); conninfo = rb_funcall2( rb_cPGconn, rb_intern("parse_connect_args"), argc, argv ); this->pgconn = gvl_PQconnectdb(StringValueCStr(conninfo)); if(this->pgconn == NULL) rb_raise(rb_ePGerror, "PQconnectdb() unable to allocate PGconn structure"); if (PQstatus(this->pgconn) == CONNECTION_BAD) pg_raise_conn_error( rb_eConnectionBad, self, "%s", PQerrorMessage(this->pgconn)); pgconn_set_default_encoding( self ); if (rb_block_given_p()) { return rb_ensure(rb_yield, self, pgconn_finish, self); } return self; } /* * call-seq: * PG::Connection.connect_start(connection_hash) -> conn * PG::Connection.connect_start(connection_string) -> conn * PG::Connection.connect_start(host, port, options, tty, dbname, login, password) -> conn * * This is an asynchronous version of PG::Connection.new. * * Use #connect_poll to poll the status of the connection. * * NOTE: this does *not* set the connection's +client_encoding+ for you if * +Encoding.default_internal+ is set. To set it after the connection is established, * call #internal_encoding=. 
You can also set it automatically by setting * ENV['PGCLIENTENCODING'], or include the 'options' connection parameter. * * See also the 'sample' directory of this gem and the corresponding {libpq functions}[https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-PQCONNECTSTARTPARAMS]. * */ static VALUE pgconn_s_connect_start( int argc, VALUE *argv, VALUE klass ) { VALUE rb_conn; VALUE conninfo; t_pg_connection *this; /* * PG::Connection.connect_start must act as both alloc() and initialize() * because it is not invoked by calling new(). */ rb_conn = pgconn_s_allocate( klass ); this = pg_get_connection( rb_conn ); conninfo = rb_funcall2( klass, rb_intern("parse_connect_args"), argc, argv ); this->pgconn = gvl_PQconnectStart( StringValueCStr(conninfo) ); if( this->pgconn == NULL ) rb_raise(rb_ePGerror, "PQconnectStart() unable to allocate PGconn structure"); if ( PQstatus(this->pgconn) == CONNECTION_BAD ) pg_raise_conn_error( rb_eConnectionBad, rb_conn, "%s", PQerrorMessage(this->pgconn)); if ( rb_block_given_p() ) { return rb_ensure( rb_yield, rb_conn, pgconn_finish, rb_conn ); } return rb_conn; } static VALUE pgconn_s_sync_ping( int argc, VALUE *argv, VALUE klass ) { PGPing ping; VALUE conninfo; conninfo = rb_funcall2( klass, rb_intern("parse_connect_args"), argc, argv ); ping = gvl_PQping( StringValueCStr(conninfo) ); return INT2FIX((int)ping); } /* * Document-method: PG::Connection.conndefaults * * call-seq: * PG::Connection.conndefaults() -> Array * * Returns an array of hashes. 
Each hash has the keys: * [+:keyword+] * the name of the option * [+:envvar+] * the environment variable to fall back to * [+:compiled+] * the compiled in option as a secondary fallback * [+:val+] * the option's current value, or +nil+ if not known * [+:label+] * the label for the field * [+:dispchar+] * "" for normal, "D" for debug, and "*" for password * [+:dispsize+] * field size */ static VALUE pgconn_s_conndefaults(VALUE self) { PQconninfoOption *options = PQconndefaults(); VALUE array = pgconn_make_conninfo_array( options ); PQconninfoFree(options); UNUSED( self ); return array; } /* * Document-method: PG::Connection.conninfo_parse * * call-seq: * PG::Connection.conninfo_parse(conninfo_string) -> Array * * Returns parsed connection options from the provided connection string as an array of hashes. * Each hash has the same keys as PG::Connection.conndefaults() . * The values from the +conninfo_string+ are stored in the +:val+ key. */ static VALUE pgconn_s_conninfo_parse(VALUE self, VALUE conninfo) { VALUE array; char *errmsg = NULL; PQconninfoOption *options = PQconninfoParse(StringValueCStr(conninfo), &errmsg); if(errmsg){ VALUE error = rb_str_new_cstr(errmsg); PQfreemem(errmsg); rb_raise(rb_ePGerror, "%"PRIsVALUE, error); } array = pgconn_make_conninfo_array( options ); PQconninfoFree(options); UNUSED( self ); return array; } #ifdef HAVE_PQENCRYPTPASSWORDCONN static VALUE pgconn_sync_encrypt_password(int argc, VALUE *argv, VALUE self) { char *encrypted = NULL; VALUE rval = Qnil; VALUE password, username, algorithm; PGconn *conn = pg_get_pgconn(self); rb_scan_args( argc, argv, "21", &password, &username, &algorithm ); Check_Type(password, T_STRING); Check_Type(username, T_STRING); encrypted = gvl_PQencryptPasswordConn(conn, StringValueCStr(password), StringValueCStr(username), RTEST(algorithm) ? 
StringValueCStr(algorithm) : NULL); if ( encrypted ) { rval = rb_str_new2( encrypted ); PQfreemem( encrypted ); } else { pg_raise_conn_error( rb_ePGerror, self, "%s", PQerrorMessage(conn)); } return rval; } #endif /* * call-seq: * PG::Connection.encrypt_password( password, username ) -> String * * This is an older, deprecated version of #encrypt_password. * The difference is that this function always uses +md5+ as the encryption algorithm. * */ static VALUE pgconn_s_encrypt_password(VALUE self, VALUE password, VALUE username) { char *encrypted = NULL; VALUE rval = Qnil; UNUSED( self ); Check_Type(password, T_STRING); Check_Type(username, T_STRING); encrypted = PQencryptPassword(StringValueCStr(password), StringValueCStr(username)); rval = rb_str_new2( encrypted ); PQfreemem( encrypted ); return rval; } /************************************************************************** * PG::Connection INSTANCE METHODS **************************************************************************/ /* * call-seq: * conn.connect_poll() -> Integer * * Returns one of: * [+PGRES_POLLING_READING+] * wait until the socket is ready to read * [+PGRES_POLLING_WRITING+] * wait until the socket is ready to write * [+PGRES_POLLING_FAILED+] * the asynchronous connection has failed * [+PGRES_POLLING_OK+] * the asynchronous connection is ready * * Example: * require "io/wait" * * conn = PG::Connection.connect_start(dbname: 'mydatabase') * status = conn.connect_poll * while(status != PG::PGRES_POLLING_OK) do * # do some work while waiting for the connection to complete * if(status == PG::PGRES_POLLING_READING) * unless conn.socket_io.wait_readable(10.0) * raise "Asynchronous connection timed out!" * end * elsif(status == PG::PGRES_POLLING_WRITING) * unless conn.socket_io.wait_writable(10.0) * raise "Asynchronous connection timed out!" * end * end * status = conn.connect_poll * end * # now conn.status == CONNECTION_OK, and connection * # is ready. 
*/ static VALUE pgconn_connect_poll(VALUE self) { PostgresPollingStatusType status; status = gvl_PQconnectPoll(pg_get_pgconn(self)); pgconn_close_socket_io(self); return INT2FIX((int)status); } /* * call-seq: * conn.finish * * Closes the backend connection. */ static VALUE pgconn_finish( VALUE self ) { t_pg_connection *this = pg_get_connection_safe( self ); pgconn_close_socket_io( self ); PQfinish( this->pgconn ); this->pgconn = NULL; return Qnil; } /* * call-seq: * conn.finished? -> boolean * * Returns +true+ if the backend connection has been closed. */ static VALUE pgconn_finished_p( VALUE self ) { t_pg_connection *this = pg_get_connection( self ); if ( this->pgconn ) return Qfalse; return Qtrue; } static VALUE pgconn_sync_reset( VALUE self ) { pgconn_close_socket_io( self ); gvl_PQreset( pg_get_pgconn(self) ); return self; } /* * call-seq: * conn.reset_start() -> nil * * Initiate a connection reset in a nonblocking manner. * This will close the current connection and attempt to * reconnect using the same connection parameters. * Use #reset_poll to check the status of the * connection reset. */ static VALUE pgconn_reset_start(VALUE self) { pgconn_close_socket_io( self ); if(gvl_PQresetStart(pg_get_pgconn(self)) == 0) pg_raise_conn_error( rb_eUnableToSend, self, "reset has failed"); return Qnil; } /* * call-seq: * conn.reset_poll -> Integer * * Checks the status of a connection reset operation. * See #connect_start and #connect_poll for * usage information and return values. */ static VALUE pgconn_reset_poll(VALUE self) { PostgresPollingStatusType status; status = gvl_PQresetPoll(pg_get_pgconn(self)); pgconn_close_socket_io(self); return INT2FIX((int)status); } /* * call-seq: * conn.db() * * Returns the connected database name. */ static VALUE pgconn_db(VALUE self) { char *db = PQdb(pg_get_pgconn(self)); if (!db) return Qnil; return rb_str_new2(db); } /* * call-seq: * conn.user() * * Returns the authenticated user name. 
*/ static VALUE pgconn_user(VALUE self) { char *user = PQuser(pg_get_pgconn(self)); if (!user) return Qnil; return rb_str_new2(user); } /* * call-seq: * conn.pass() * * Returns the authenticated password. */ static VALUE pgconn_pass(VALUE self) { char *user = PQpass(pg_get_pgconn(self)); if (!user) return Qnil; return rb_str_new2(user); } /* * call-seq: * conn.host() * * Returns the server host name of the active connection. * This can be a host name, an IP address, or a directory path if the connection is via Unix socket. * (The path case can be distinguished because it will always be an absolute path, beginning with +/+ .) * * If the connection parameters specified both host and hostaddr, then +host+ will return the host information. * If only hostaddr was specified, then that is returned. * If multiple hosts were specified in the connection parameters, +host+ returns the host actually connected to. * * If there is an error producing the host information (perhaps if the connection has not been fully established or there was an error), it returns an empty string. * * If multiple hosts were specified in the connection parameters, it is not possible to rely on the result of +host+ until the connection is established. * The status of the connection can be checked using the function Connection#status . */ static VALUE pgconn_host(VALUE self) { char *host = PQhost(pg_get_pgconn(self)); if (!host) return Qnil; return rb_str_new2(host); } /* PQhostaddr() appeared in PostgreSQL-12 together with PQresultMemorySize() */ #if defined(HAVE_PQRESULTMEMORYSIZE) /* * call-seq: * conn.hostaddr() * * Returns the server IP address of the active connection. * This can be the address that a host name resolved to, or an IP address provided through the hostaddr parameter. * If there is an error producing the host information (perhaps if the connection has not been fully established or there was an error), it returns an empty string. 
 *
 */
static VALUE
pgconn_hostaddr(VALUE self)
{
	char *host = PQhostaddr(pg_get_pgconn(self));
	if (!host) return Qnil;
	return rb_str_new2(host);
}
#endif

/*
 * call-seq:
 *    conn.port()
 *
 * Returns the connected server port number.
 */
static VALUE
pgconn_port(VALUE self)
{
	char* port = PQport(pg_get_pgconn(self));
	if (!port || port[0] == '\0')
		return INT2NUM(DEF_PGPORT);
	else
		return INT2NUM(atoi(port));
}

/*
 * call-seq:
 *    conn.tty()
 *
 * Obsolete function.
 */
static VALUE
pgconn_tty(VALUE self)
{
	return rb_str_new2("");
}

/*
 * call-seq:
 *    conn.options()
 *
 * Returns the backend option string.
 */
static VALUE
pgconn_options(VALUE self)
{
	char *options = PQoptions(pg_get_pgconn(self));
	if (!options) return Qnil;
	return rb_str_new2(options);
}

/*
 * call-seq:
 *    conn.conninfo -> hash
 *
 * Returns the connection options used by a live connection.
 *
 * Available since PostgreSQL-9.3
 */
static VALUE
pgconn_conninfo( VALUE self )
{
	PGconn *conn = pg_get_pgconn(self);
	PQconninfoOption *options = PQconninfo( conn );
	VALUE array = pgconn_make_conninfo_array( options );

	PQconninfoFree(options);

	return array;
}

/*
 * call-seq:
 *    conn.status()
 *
 * Returns the status of the connection, which is one of:
 *   PG::Constants::CONNECTION_OK
 *   PG::Constants::CONNECTION_BAD
 *
 * ... and other constants of kind PG::Constants::CONNECTION_*
 *
 * This method returns the status of the last command from memory.
 * It doesn't do any socket access and is therefore not suitable for testing connectivity.
 * See check_socket for a way to verify the socket state.
* * Example: * PG.constants.grep(/CONNECTION_/).find{|c| PG.const_get(c) == conn.status} # => :CONNECTION_OK */ static VALUE pgconn_status(VALUE self) { return INT2NUM(PQstatus(pg_get_pgconn(self))); } /* * call-seq: * conn.transaction_status() * * returns one of the following statuses: * PQTRANS_IDLE = 0 (connection idle) * PQTRANS_ACTIVE = 1 (command in progress) * PQTRANS_INTRANS = 2 (idle, within transaction block) * PQTRANS_INERROR = 3 (idle, within failed transaction) * PQTRANS_UNKNOWN = 4 (cannot determine status) */ static VALUE pgconn_transaction_status(VALUE self) { return INT2NUM(PQtransactionStatus(pg_get_pgconn(self))); } /* * call-seq: * conn.parameter_status( param_name ) -> String * * Returns the setting of parameter _param_name_, where * _param_name_ is one of * * +server_version+ * * +server_encoding+ * * +client_encoding+ * * +is_superuser+ * * +session_authorization+ * * +DateStyle+ * * +TimeZone+ * * +integer_datetimes+ * * +standard_conforming_strings+ * * Returns nil if the value of the parameter is not known. */ static VALUE pgconn_parameter_status(VALUE self, VALUE param_name) { const char *ret = PQparameterStatus(pg_get_pgconn(self), StringValueCStr(param_name)); if(ret == NULL) return Qnil; else return rb_str_new2(ret); } /* * call-seq: * conn.protocol_version -> Integer * * The 3.0 protocol will normally be used when communicating with PostgreSQL 7.4 * or later servers; pre-7.4 servers support only protocol 2.0. (Protocol 1.0 is * obsolete and not supported by libpq.) */ static VALUE pgconn_protocol_version(VALUE self) { return INT2NUM(PQprotocolVersion(pg_get_pgconn(self))); } /* * call-seq: * conn.server_version -> Integer * * The number is formed by converting the major, minor, and revision * numbers into two-decimal-digit numbers and appending them together. * For example, version 7.4.2 will be returned as 70402, and version * 8.1 will be returned as 80100 (leading zeroes are not shown). Zero * is returned if the connection is bad. 
 *
 */
static VALUE
pgconn_server_version(VALUE self)
{
	return INT2NUM(PQserverVersion(pg_get_pgconn(self)));
}

/*
 * call-seq:
 *    conn.error_message -> String
 *
 * Returns the error message most recently generated by an operation on the connection.
 *
 * Nearly all libpq functions will set a message for conn.error_message if they fail.
 * Note that by libpq convention, a nonempty error_message result can consist of multiple lines, and will include a trailing newline.
 */
static VALUE
pgconn_error_message(VALUE self)
{
	char *error = PQerrorMessage(pg_get_pgconn(self));
	if (!error) return Qnil;
	return rb_str_new2(error);
}

/*
 * call-seq:
 *    conn.socket() -> Integer
 *
 * This method is deprecated. Please use the more portable method #socket_io .
 *
 * Returns the socket's file descriptor for this connection.
 * IO.for_fd() can be used to build a proper IO object to the socket.
 * If you do so, you will likely also want to set autoclose=false
 * on it to prevent Ruby from closing the socket to PostgreSQL if it
 * goes out of scope. Alternatively, you can use #socket_io, which
 * creates an IO that's associated with the connection object itself,
 * and so won't go out of scope until the connection does.
 *
 * *Note:* On Windows the file descriptor is not usable,
 * since it cannot be used to build a Ruby IO object.
 */
static VALUE
pgconn_socket(VALUE self)
{
	int sd;
	pg_deprecated(4, ("conn.socket is deprecated and should be replaced by conn.socket_io"));

	if( (sd = PQsocket(pg_get_pgconn(self))) < 0)
		pg_raise_conn_error( rb_eConnectionBad, self, "PQsocket() can't get socket descriptor");

	return INT2NUM(sd);
}

/*
 * call-seq:
 *    conn.socket_io() -> IO
 *
 * Fetch an IO object created from the Connection's underlying socket.
 * This object can be used with socket_io.wait_readable, socket_io.wait_writable or with IO.select to wait for events while running asynchronous API calls.
 * IO#wait_*able is Fiber.scheduler compatible in contrast to IO.select.
 *
 * The IO object can change while the connection is established, but is cached afterwards.
 * So be sure not to cache the IO object yourself, but call conn.socket_io again instead.
 *
 * Using this method also works on Windows in contrast to using #socket .
 * It also avoids the problem of the underlying connection being closed by Ruby when an IO created using IO.for_fd(conn.socket) goes out of scope.
 */
static VALUE
pgconn_socket_io(VALUE self)
{
	int sd;
	int ruby_sd;
	t_pg_connection *this = pg_get_connection_safe( self );
	VALUE cSocket;
	VALUE socket_io = this->socket_io;

	if ( !RTEST(socket_io) ) {
		if( (sd = PQsocket(this->pgconn)) < 0){
			pg_raise_conn_error( rb_eConnectionBad, self, "PQsocket() can't get socket descriptor");
		}

		#ifdef _WIN32
			ruby_sd = rb_w32_wrap_io_handle((HANDLE)(intptr_t)sd, O_RDWR|O_BINARY|O_NOINHERIT);
			if( ruby_sd == -1 )
				pg_raise_conn_error( rb_eConnectionBad, self, "Could not wrap win32 socket handle");

			this->ruby_sd = ruby_sd;
		#else
			ruby_sd = sd;
		#endif

		cSocket = rb_const_get(rb_cObject, rb_intern("BasicSocket"));
		socket_io = rb_funcall( cSocket, rb_intern("for_fd"), 1, INT2NUM(ruby_sd));

		/* Disable autoclose feature */
		rb_funcall( socket_io, s_id_autoclose_set, 1, Qfalse );

		RB_OBJ_WRITE(self, &this->socket_io, socket_io);
	}

	return socket_io;
}

/*
 * call-seq:
 *    conn.backend_pid() -> Integer
 *
 * Returns the process ID of the backend server
 * process for this connection.
 * Note that this is a PID on the database server host.
*/ static VALUE pgconn_backend_pid(VALUE self) { return INT2NUM(PQbackendPID(pg_get_pgconn(self))); } typedef struct { struct sockaddr_storage addr; socklen_t salen; } SockAddr; /* Copy of struct pg_cancel from libpq-int.h * * See https://github.com/postgres/postgres/blame/master/src/interfaces/libpq/libpq-int.h#L577-L586 */ struct pg_cancel { SockAddr raddr; /* Remote address */ int be_pid; /* PID of backend --- needed for cancels */ int be_key; /* key of backend --- needed for cancels */ }; /* * call-seq: * conn.backend_key() -> Integer * * Returns the key of the backend server process for this connection. * This key can be used to cancel queries on the server. */ static VALUE pgconn_backend_key(VALUE self) { int be_key; struct pg_cancel *cancel; PGconn *conn = pg_get_pgconn(self); cancel = (struct pg_cancel*)PQgetCancel(conn); if(cancel == NULL) pg_raise_conn_error( rb_ePGerror, self, "Invalid connection!"); if( cancel->be_pid != PQbackendPID(conn) ) rb_raise(rb_ePGerror,"Unexpected binary struct layout - please file a bug report at ruby-pg!"); be_key = cancel->be_key; PQfreeCancel(cancel); return INT2NUM(be_key); } /* * call-seq: * conn.connection_needs_password() -> Boolean * * Returns +true+ if the authentication method required a * password, but none was available. +false+ otherwise. */ static VALUE pgconn_connection_needs_password(VALUE self) { return PQconnectionNeedsPassword(pg_get_pgconn(self)) ? Qtrue : Qfalse; } /* * call-seq: * conn.connection_used_password() -> Boolean * * Returns +true+ if the authentication method used * a caller-supplied password, +false+ otherwise. */ static VALUE pgconn_connection_used_password(VALUE self) { return PQconnectionUsedPassword(pg_get_pgconn(self)) ? 
	Qtrue : Qfalse;
}

/* :TODO: get_ssl */

static VALUE pgconn_sync_exec_params( int, VALUE *, VALUE );

/*
 * call-seq:
 *    conn.sync_exec(sql) -> PG::Result
 *    conn.sync_exec(sql) {|pg_result| block }
 *
 * This function has the same behavior as #async_exec, but is implemented using the synchronous command processing API of libpq.
 * It's not recommended to use explicit sync or async variants but #exec instead, unless you have a good reason to do so.
 *
 * Both #sync_exec and #async_exec release the GVL while waiting for server response, so that concurrent threads will get executed.
 * However #async_exec has two advantages:
 *
 * 1. #async_exec can be aborted by signals (like Ctrl-C), while #exec blocks signal processing until the query is answered.
 * 2. Ruby VM gets notified about IO blocked operations and can pass them through Fiber.scheduler.
 *    So only async_* methods are compatible with event-based schedulers like the async gem.
 */
static VALUE
pgconn_sync_exec(int argc, VALUE *argv, VALUE self)
{
	t_pg_connection *this = pg_get_connection_safe( self );
	PGresult *result = NULL;
	VALUE rb_pgresult;

	/* If called with no or nil parameters, use PQexec for compatibility */
	if ( argc == 1 || (argc >= 2 && argc <= 4 && NIL_P(argv[1]) )) {
		VALUE query_str = argv[0];

		result = gvl_PQexec(this->pgconn, pg_cstr_enc(query_str, this->enc_idx));
		rb_pgresult = pg_new_result(result, self);
		pg_result_check(rb_pgresult);
		if (rb_block_given_p()) {
			return rb_ensure(rb_yield, rb_pgresult, pg_result_clear, rb_pgresult);
		}
		return rb_pgresult;
	}
	pg_deprecated(0, ("forwarding exec to exec_params is deprecated"));

	/* Otherwise, just call #exec_params instead for backward-compatibility */
	return pgconn_sync_exec_params( argc, argv, self );
}

struct linked_typecast_data {
	struct linked_typecast_data *next;
	char data[0];
};

/* This struct is allocated on the stack for all query execution functions. */
struct query_params_data {

	/*
	 * Filled by caller
	 */

	/* The character encoding index of the connection.
Any strings
	 * given as query parameters are converted to this encoding.
	 */
	int enc_idx;
	/* Is the query function to execute one with types array? */
	int with_types;
	/* Array of query params from user space */
	VALUE params;
	/* The typemap given from user space */
	VALUE typemap;

	/*
	 * Filled by alloc_query_params()
	 */

	/* Wraps the pointer of allocated memory, if function parameters don't
	 * fit in the memory_pool below.
	 */
	VALUE heap_pool;

	/* Pointer to the value string pointers (either within memory_pool or heap_pool).
	 * The value strings themselves are either directly within RString memory or,
	 * in case of type casted values, within memory_pool or typecast_heap_chain.
	 */
	char **values;
	/* Pointer to the param lengths (either within memory_pool or heap_pool) */
	int *lengths;
	/* Pointer to the format codes (either within memory_pool or heap_pool) */
	int *formats;
	/* Pointer to the OID types (either within memory_pool or heap_pool) */
	Oid *types;

	/* This array holds the string values for the duration of the query,
	 * if param value conversion is required
	 */
	VALUE gc_array;

	/* Wraps a singly linked list of allocated memory chunks for type casted params.
	 * Used when the memory_pool is too small.
	 */
	VALUE typecast_heap_chain;

	/* This memory pool is used to hold the above query function parameters.
*/ char memory_pool[QUERYDATA_BUFFER_SIZE]; }; static void free_typecast_heap_chain(void *_chain_entry) { struct linked_typecast_data *chain_entry = (struct linked_typecast_data *)_chain_entry; while(chain_entry){ struct linked_typecast_data *next = chain_entry->next; xfree(chain_entry); chain_entry = next; } } static const rb_data_type_t pg_typecast_buffer_type = { "PG::Connection typecast buffer chain", { (RUBY_DATA_FUNC) NULL, free_typecast_heap_chain, (size_t (*)(const void *))NULL, }, 0, 0, RUBY_TYPED_FREE_IMMEDIATELY | RUBY_TYPED_WB_PROTECTED, }; static char * alloc_typecast_buf( VALUE *typecast_heap_chain, int len ) { /* Allocate a new memory chunk from heap */ struct linked_typecast_data *allocated = (struct linked_typecast_data *)xmalloc(sizeof(struct linked_typecast_data) + len); /* Did we already wrap a memory chain per T_DATA object? */ if( NIL_P( *typecast_heap_chain ) ){ /* Leave free'ing of the buffer chain to the GC, when paramsData has left the stack */ *typecast_heap_chain = TypedData_Wrap_Struct( rb_cObject, &pg_typecast_buffer_type, allocated ); allocated->next = NULL; } else { /* Append to the chain */ allocated->next = RTYPEDDATA_DATA( *typecast_heap_chain ); RTYPEDDATA_DATA( *typecast_heap_chain ) = allocated; } return &allocated->data[0]; } static const rb_data_type_t pg_query_heap_pool_type = { "PG::Connection query heap pool", { (RUBY_DATA_FUNC) NULL, RUBY_TYPED_DEFAULT_FREE, (size_t (*)(const void *))NULL, }, 0, 0, RUBY_TYPED_FREE_IMMEDIATELY | RUBY_TYPED_WB_PROTECTED, }; static int alloc_query_params(struct query_params_data *paramsData) { VALUE param_value; t_typemap *p_typemap; int nParams; int i=0; t_pg_coder *conv; unsigned int required_pool_size; char *memory_pool; Check_Type(paramsData->params, T_ARRAY); p_typemap = RTYPEDDATA_DATA( paramsData->typemap ); p_typemap->funcs.fit_to_query( paramsData->typemap, paramsData->params ); paramsData->heap_pool = Qnil; paramsData->typecast_heap_chain = Qnil; paramsData->gc_array = Qnil; 
nParams = (int)RARRAY_LEN(paramsData->params);

	required_pool_size = nParams * (
			sizeof(char *) +
			sizeof(int) +
			sizeof(int) +
			(paramsData->with_types ? sizeof(Oid) : 0));

	if( sizeof(paramsData->memory_pool) < required_pool_size ){
		/* Allocate one combined memory pool for all possible function parameters */
		memory_pool = (char*)xmalloc( required_pool_size );
		/* Leave free'ing of the buffer to the GC, when paramsData has left the stack */
		paramsData->heap_pool = TypedData_Wrap_Struct( rb_cObject, &pg_query_heap_pool_type, memory_pool );
		required_pool_size = 0;
	}else{
		/* Use stack memory for function parameters */
		memory_pool = paramsData->memory_pool;
	}

	paramsData->values = (char **)memory_pool;
	paramsData->lengths = (int *)((char*)paramsData->values + sizeof(char *) * nParams);
	paramsData->formats = (int *)((char*)paramsData->lengths + sizeof(int) * nParams);
	paramsData->types = (Oid *)((char*)paramsData->formats + sizeof(int) * nParams);

	{
		char *typecast_buf = paramsData->memory_pool + required_pool_size;

		for ( i = 0; i < nParams; i++ ) {
			param_value = rb_ary_entry(paramsData->params, i);

			paramsData->formats[i] = 0;
			if( paramsData->with_types )
				paramsData->types[i] = 0;

			/* Let the given typemap select a coder for this param */
			conv = p_typemap->funcs.typecast_query_param(p_typemap, param_value, i);

			/* Using a coder object for the param_value? Then set its format code and oid. */
			if( conv ){
				paramsData->formats[i] = conv->format;
				if( paramsData->with_types )
					paramsData->types[i] = conv->oid;
			} else {
				/* No coder, but did we get a hash form for the query param?
				 * Then take format code and oid from there.
				 */
				if (TYPE(param_value) == T_HASH) {
					VALUE format_value = rb_hash_aref(param_value, sym_format);
					if( !NIL_P(format_value) )
						paramsData->formats[i] = NUM2INT(format_value);
					if( paramsData->with_types ){
						VALUE type_value = rb_hash_aref(param_value, sym_type);
						if( !NIL_P(type_value) )
							paramsData->types[i] = NUM2UINT(type_value);
					}
					param_value = rb_hash_aref(param_value, sym_value);
				}
			}

			if( NIL_P(param_value) ){
				paramsData->values[i] = NULL;
				paramsData->lengths[i] = 0;
			} else {
				t_pg_coder_enc_func enc_func = pg_coder_enc_func( conv );
				VALUE intermediate;

				/* 1st pass for retrieving the required memory space */
				int len = enc_func(conv, param_value, NULL, &intermediate, paramsData->enc_idx);

				if( len == -1 ){
					/* The intermediate value is a String that can be used directly. */

					/* Ensure that the String object is zero terminated as expected by libpq. */
					if( paramsData->formats[i] == 0 )
						StringValueCStr(intermediate);
					/* In case a new string object was generated, make sure it doesn't get freed by the GC */
					if( intermediate != param_value ){
						if( NIL_P(paramsData->gc_array) )
							paramsData->gc_array = rb_ary_new();
						rb_ary_push(paramsData->gc_array, intermediate);
					}
					paramsData->values[i] = RSTRING_PTR(intermediate);
					paramsData->lengths[i] = RSTRING_LENINT(intermediate);
				} else {
					/* Is the stack memory pool too small to take the type casted value?
					 */
					if( sizeof(paramsData->memory_pool) < required_pool_size + len + 1){
						typecast_buf = alloc_typecast_buf( &paramsData->typecast_heap_chain, len + 1 );
					}

					/* 2nd pass for writing the data to prepared buffer */
					len = enc_func(conv, param_value, typecast_buf, &intermediate, paramsData->enc_idx);
					paramsData->values[i] = typecast_buf;
					if( paramsData->formats[i] == 0 ){
						/* text format strings must be zero terminated and lengths are ignored */
						typecast_buf[len] = 0;
						typecast_buf += len + 1;
						required_pool_size += len + 1;
					} else {
						paramsData->lengths[i] = len;
						typecast_buf += len;
						required_pool_size += len;
					}
				}

				RB_GC_GUARD(intermediate);
			}
		}
	}

	return nParams;
}

static void
free_query_params(struct query_params_data *paramsData)
{
	/* currently nothing to free */
}

void
pgconn_query_assign_typemap( VALUE self, struct query_params_data *paramsData )
{
	if(NIL_P(paramsData->typemap)){
		/* Use default typemap for queries. Its type is checked when assigned. */
		paramsData->typemap = pg_get_connection(self)->type_map_for_queries;
	}else{
		t_typemap *tm;
		UNUSED(tm);

		/* Check type of method param */
		TypedData_Get_Struct(paramsData->typemap, t_typemap, &pg_typemap_type, tm);
	}
}

/*
 * call-seq:
 *    conn.sync_exec_params(sql, params[, result_format[, type_map]] ) -> PG::Result
 *    conn.sync_exec_params(sql, params[, result_format[, type_map]] ) {|pg_result| block }
 *
 * This function has the same behavior as #async_exec_params, but is implemented using the synchronous command processing API of libpq.
 * See #async_exec for the differences between the two API variants.
 * It's not recommended to use explicit sync or async variants but #exec_params instead, unless you have a good reason to do so.
 */
static VALUE
pgconn_sync_exec_params( int argc, VALUE *argv, VALUE self )
{
	t_pg_connection *this = pg_get_connection_safe( self );
	PGresult *result = NULL;
	VALUE rb_pgresult;
	VALUE command, in_res_fmt;
	int nParams;
	int resultFormat;
	struct query_params_data paramsData = { this->enc_idx };

	/* For compatibility we accept 1 to 4 parameters */
	rb_scan_args(argc, argv, "13", &command, &paramsData.params, &in_res_fmt, &paramsData.typemap);
	paramsData.with_types = 1;

	/*
	 * For backward compatibility a call without a parameter array (or with +nil+
	 * as the second parameter) is forwarded to #exec
	 */
	if ( NIL_P(paramsData.params) ) {
		pg_deprecated(1, ("forwarding exec_params to exec is deprecated"));
		return pgconn_sync_exec( 1, argv, self );
	}
	pgconn_query_assign_typemap( self, &paramsData );

	resultFormat = NIL_P(in_res_fmt) ? 0 : NUM2INT(in_res_fmt);
	nParams = alloc_query_params( &paramsData );

	result = gvl_PQexecParams(this->pgconn, pg_cstr_enc(command, paramsData.enc_idx), nParams, paramsData.types,
		(const char * const *)paramsData.values, paramsData.lengths, paramsData.formats, resultFormat);

	free_query_params( &paramsData );

	rb_pgresult = pg_new_result(result, self);
	pg_result_check(rb_pgresult);

	if (rb_block_given_p()) {
		return rb_ensure(rb_yield, rb_pgresult,
			pg_result_clear, rb_pgresult);
	}

	return rb_pgresult;
}

/*
 * call-seq:
 *    conn.sync_prepare(stmt_name, sql [, param_types ] ) -> PG::Result
 *
 * This function has the same behavior as #async_prepare, but is implemented using the synchronous command processing API of libpq.
 * See #async_exec for the differences between the two API variants.
 * It's not recommended to use explicit sync or async variants but #prepare instead, unless you have a good reason to do so.
*/ static VALUE pgconn_sync_prepare(int argc, VALUE *argv, VALUE self) { t_pg_connection *this = pg_get_connection_safe( self ); PGresult *result = NULL; VALUE rb_pgresult; VALUE name, command, in_paramtypes; VALUE param; int i = 0; int nParams = 0; Oid *paramTypes = NULL; const char *name_cstr; const char *command_cstr; int enc_idx = this->enc_idx; rb_scan_args(argc, argv, "21", &name, &command, &in_paramtypes); name_cstr = pg_cstr_enc(name, enc_idx); command_cstr = pg_cstr_enc(command, enc_idx); if(! NIL_P(in_paramtypes)) { Check_Type(in_paramtypes, T_ARRAY); nParams = (int)RARRAY_LEN(in_paramtypes); paramTypes = ALLOC_N(Oid, nParams); for(i = 0; i < nParams; i++) { param = rb_ary_entry(in_paramtypes, i); if(param == Qnil) paramTypes[i] = 0; else paramTypes[i] = NUM2UINT(param); } } result = gvl_PQprepare(this->pgconn, name_cstr, command_cstr, nParams, paramTypes); xfree(paramTypes); rb_pgresult = pg_new_result(result, self); pg_result_check(rb_pgresult); return rb_pgresult; } /* * call-seq: * conn.sync_exec_prepared(statement_name [, params, result_format[, type_map]] ) -> PG::Result * conn.sync_exec_prepared(statement_name [, params, result_format[, type_map]] ) {|pg_result| block } * * This function has the same behavior as #async_exec_prepared, but is implemented using the synchronous command processing API of libpq. * See #async_exec for the differences between the two API variants. * It's not recommended to use explicit sync or async variants but #exec_prepared instead, unless you have a good reason to do so. 
*/ static VALUE pgconn_sync_exec_prepared(int argc, VALUE *argv, VALUE self) { t_pg_connection *this = pg_get_connection_safe( self ); PGresult *result = NULL; VALUE rb_pgresult; VALUE name, in_res_fmt; int nParams; int resultFormat; struct query_params_data paramsData = { this->enc_idx }; rb_scan_args(argc, argv, "13", &name, &paramsData.params, &in_res_fmt, &paramsData.typemap); paramsData.with_types = 0; if(NIL_P(paramsData.params)) { paramsData.params = rb_ary_new2(0); } pgconn_query_assign_typemap( self, &paramsData ); resultFormat = NIL_P(in_res_fmt) ? 0 : NUM2INT(in_res_fmt); nParams = alloc_query_params( &paramsData ); result = gvl_PQexecPrepared(this->pgconn, pg_cstr_enc(name, paramsData.enc_idx), nParams, (const char * const *)paramsData.values, paramsData.lengths, paramsData.formats, resultFormat); free_query_params( &paramsData ); rb_pgresult = pg_new_result(result, self); pg_result_check(rb_pgresult); if (rb_block_given_p()) { return rb_ensure(rb_yield, rb_pgresult, pg_result_clear, rb_pgresult); } return rb_pgresult; } /* * call-seq: * conn.sync_describe_prepared( statement_name ) -> PG::Result * * This function has the same behavior as #async_describe_prepared, but is implemented using the synchronous command processing API of libpq. * See #async_exec for the differences between the two API variants. * It's not recommended to use explicit sync or async variants but #describe_prepared instead, unless you have a good reason to do so.
*/ static VALUE pgconn_sync_describe_prepared(VALUE self, VALUE stmt_name) { PGresult *result; VALUE rb_pgresult; t_pg_connection *this = pg_get_connection_safe( self ); const char *stmt; if(NIL_P(stmt_name)) { stmt = NULL; } else { stmt = pg_cstr_enc(stmt_name, this->enc_idx); } result = gvl_PQdescribePrepared(this->pgconn, stmt); rb_pgresult = pg_new_result(result, self); pg_result_check(rb_pgresult); return rb_pgresult; } /* * call-seq: * conn.sync_describe_portal( portal_name ) -> PG::Result * * This function has the same behavior as #async_describe_portal, but is implemented using the synchronous command processing API of libpq. * See #async_exec for the differences between the two API variants. * It's not recommended to use explicit sync or async variants but #describe_portal instead, unless you have a good reason to do so. */ static VALUE pgconn_sync_describe_portal(VALUE self, VALUE stmt_name) { PGresult *result; VALUE rb_pgresult; t_pg_connection *this = pg_get_connection_safe( self ); const char *stmt; if(NIL_P(stmt_name)) { stmt = NULL; } else { stmt = pg_cstr_enc(stmt_name, this->enc_idx); } result = gvl_PQdescribePortal(this->pgconn, stmt); rb_pgresult = pg_new_result(result, self); pg_result_check(rb_pgresult); return rb_pgresult; } /* * call-seq: * conn.make_empty_pgresult( status ) -> PG::Result * * Constructs an empty PG::Result with status _status_.
* _status_ may be one of: * * +PGRES_EMPTY_QUERY+ * * +PGRES_COMMAND_OK+ * * +PGRES_TUPLES_OK+ * * +PGRES_COPY_OUT+ * * +PGRES_COPY_IN+ * * +PGRES_BAD_RESPONSE+ * * +PGRES_NONFATAL_ERROR+ * * +PGRES_FATAL_ERROR+ * * +PGRES_COPY_BOTH+ * * +PGRES_SINGLE_TUPLE+ * * +PGRES_PIPELINE_SYNC+ * * +PGRES_PIPELINE_ABORTED+ */ static VALUE pgconn_make_empty_pgresult(VALUE self, VALUE status) { PGresult *result; VALUE rb_pgresult; PGconn *conn = pg_get_pgconn(self); result = PQmakeEmptyPGresult(conn, NUM2INT(status)); rb_pgresult = pg_new_result(result, self); pg_result_check(rb_pgresult); return rb_pgresult; } /* * call-seq: * conn.escape_string( str ) -> String * * Returns a SQL-safe version of the String _str_. * This is the preferred way to make strings safe for inclusion in * SQL queries. * * Consider using exec_params, which avoids the need for passing values * inside of SQL commands. * * Character encoding of escaped string will be equal to client encoding of connection. * * NOTE: This class version of this method can only be used safely in client * programs that use a single PostgreSQL connection at a time (in this case it can * find out what it needs to know "behind the scenes"). It might give the wrong * results if used in programs that use multiple database connections; use the * same method on the connection object in such cases. * * See also convenience functions #escape_literal and #escape_identifier which also add proper quotes around the string. */ static VALUE pgconn_s_escape(VALUE self, VALUE string) { size_t size; int error; VALUE result; int enc_idx; int singleton = !rb_obj_is_kind_of(self, rb_cPGconn); StringValueCStr(string); enc_idx = singleton ? 
ENCODING_GET(string) : pg_get_connection(self)->enc_idx; if( ENCODING_GET(string) != enc_idx ){ string = rb_str_export_to_enc(string, rb_enc_from_index(enc_idx)); } result = rb_str_new(NULL, RSTRING_LEN(string) * 2 + 1); PG_ENCODING_SET_NOCHECK(result, enc_idx); if( !singleton ) { size = PQescapeStringConn(pg_get_pgconn(self), RSTRING_PTR(result), RSTRING_PTR(string), RSTRING_LEN(string), &error); if(error) pg_raise_conn_error( rb_ePGerror, self, "%s", PQerrorMessage(pg_get_pgconn(self))); } else { size = PQescapeString(RSTRING_PTR(result), RSTRING_PTR(string), RSTRING_LEN(string)); } rb_str_set_len(result, size); return result; } /* * call-seq: * conn.escape_bytea( string ) -> String * * Escapes binary data for use within an SQL command with the type +bytea+. * * Certain byte values must be escaped (but all byte values may be escaped) * when used as part of a +bytea+ literal in an SQL statement. In general, to * escape a byte, it is converted into the three digit octal number equal to * the octet value, and preceded by two backslashes. The single quote (') and * backslash (\) characters have special alternative escape sequences. * #escape_bytea performs this operation, escaping only the minimally required * bytes. * * Consider using exec_params, which avoids the need for passing values inside of * SQL commands. * * NOTE: This class version of this method can only be used safely in client * programs that use a single PostgreSQL connection at a time (in this case it can * find out what it needs to know "behind the scenes"). It might give the wrong * results if used in programs that use multiple database connections; use the * same method on the connection object in such cases. 
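As a rough, simplified illustration of the octal escaping described above (this is not libpq's implementation, which also handles quote characters specially and respects server settings; the helper name is made up):

```ruby
# Simplified sketch of text-format bytea escaping: bytes outside the
# printable ASCII range are emitted as two backslashes followed by a
# three digit octal number; the backslash byte itself is doubled twice.
def escape_bytea_sketch(data)
  data.each_byte.map do |b|
    if b == 0x5c                             # backslash
      "\\\\\\\\"
    elsif b < 0x20 || b > 0x7e || b == 0x27  # non-printable or single quote
      format("\\\\%03o", b)
    else
      b.chr
    end
  end.join
end

puts escape_bytea_sketch("\x00abc".b)  # prints: \\000abc
```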
*/ static VALUE pgconn_s_escape_bytea(VALUE self, VALUE str) { unsigned char *from, *to; size_t from_len, to_len; VALUE ret; Check_Type(str, T_STRING); from = (unsigned char*)RSTRING_PTR(str); from_len = RSTRING_LEN(str); if ( rb_obj_is_kind_of(self, rb_cPGconn) ) { to = PQescapeByteaConn(pg_get_pgconn(self), from, from_len, &to_len); } else { to = PQescapeBytea( from, from_len, &to_len); } ret = rb_str_new((char*)to, to_len - 1); PQfreemem(to); return ret; } /* * call-seq: * PG::Connection.unescape_bytea( string ) * * Converts an escaped string representation of binary data into binary data --- the * reverse of #escape_bytea. This is needed when retrieving +bytea+ data in text format, * but not when retrieving it in binary format. * */ static VALUE pgconn_s_unescape_bytea(VALUE self, VALUE str) { unsigned char *from, *to; size_t to_len; VALUE ret; UNUSED( self ); Check_Type(str, T_STRING); from = (unsigned char*)StringValueCStr(str); to = PQunescapeBytea(from, &to_len); ret = rb_str_new((char*)to, to_len); PQfreemem(to); return ret; } /* * call-seq: * conn.escape_literal( str ) -> String * * Escape an arbitrary String +str+ as a literal. * * See also PG::TextEncoder::QuotedLiteral for a type cast integrated version of this function. */ static VALUE pgconn_escape_literal(VALUE self, VALUE string) { t_pg_connection *this = pg_get_connection_safe( self ); char *escaped = NULL; VALUE result = Qnil; int enc_idx = this->enc_idx; StringValueCStr(string); if( ENCODING_GET(string) != enc_idx ){ string = rb_str_export_to_enc(string, rb_enc_from_index(enc_idx)); } escaped = PQescapeLiteral(this->pgconn, RSTRING_PTR(string), RSTRING_LEN(string)); if (escaped == NULL) pg_raise_conn_error( rb_ePGerror, self, "%s", PQerrorMessage(this->pgconn)); result = rb_str_new2(escaped); PQfreemem(escaped); PG_ENCODING_SET_NOCHECK(result, enc_idx); return result; } /* * call-seq: * conn.escape_identifier( str ) -> String * * Escape an arbitrary String +str+ as an identifier. 
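A simplified illustration of what identifier quoting does (+quote_ident_sketch+ is a made-up helper; the real work is done by libpq's PQescapeIdentifier, which also handles character encoding):

```ruby
# Sketch of identifier quoting: wrap the name in double quotes and
# double any embedded double-quote characters.
def quote_ident_sketch(name)
  '"' + name.gsub('"', '""') + '"'
end

puts quote_ident_sketch('weird "name"')  # prints: "weird ""name"""
```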
* * This method does the same as #quote_ident with a String argument, * but it doesn't support an Array argument and it makes use of libpq * to process the string. */ static VALUE pgconn_escape_identifier(VALUE self, VALUE string) { t_pg_connection *this = pg_get_connection_safe( self ); char *escaped = NULL; VALUE result = Qnil; int enc_idx = this->enc_idx; StringValueCStr(string); if( ENCODING_GET(string) != enc_idx ){ string = rb_str_export_to_enc(string, rb_enc_from_index(enc_idx)); } escaped = PQescapeIdentifier(this->pgconn, RSTRING_PTR(string), RSTRING_LEN(string)); if (escaped == NULL) pg_raise_conn_error( rb_ePGerror, self, "%s", PQerrorMessage(this->pgconn)); result = rb_str_new2(escaped); PQfreemem(escaped); PG_ENCODING_SET_NOCHECK(result, enc_idx); return result; } /* * call-seq: * conn.set_single_row_mode -> self * * To enter single-row mode, call this method immediately after a successful * call of send_query (or a sibling function). This mode selection is effective * only for the currently executing query. * Then call Connection#get_result repeatedly, until it returns nil. * * Each (but the last) received Result has exactly one row and a * Result#result_status of PGRES_SINGLE_TUPLE. The last Result has * zero rows and is used to indicate a successful execution of the query. * All of these Result objects will contain the same row description data * (column names, types, etc) that an ordinary Result object for the query * would have. * * *Caution:* While processing a query, the server may return some rows and * then encounter an error, causing the query to be aborted. Ordinarily, pg * discards any such rows and reports only the error. But in single-row mode, * those rows will have already been returned to the application. Hence, the * application will see some Result objects followed by an Error raised in get_result. 
* For proper transactional behavior, the application must be designed to discard * or undo whatever has been done with the previously-processed rows, if the query * ultimately fails. * * Example: * conn.send_query( "your SQL command" ) * conn.set_single_row_mode * loop do * res = conn.get_result or break * res.check * res.each do |row| * # do something with the received row * end * end */ static VALUE pgconn_set_single_row_mode(VALUE self) { PGconn *conn = pg_get_pgconn(self); rb_check_frozen(self); if( PQsetSingleRowMode(conn) == 0 ) pg_raise_conn_error( rb_ePGerror, self, "%s", PQerrorMessage(conn)); return self; } static VALUE pgconn_send_query_params(int argc, VALUE *argv, VALUE self); /* * call-seq: * conn.send_query(sql) -> nil * * Sends SQL query request specified by _sql_ to PostgreSQL for * asynchronous processing, and immediately returns. * On failure, it raises a PG::Error. * * For backward compatibility, if you pass more than one parameter to this method, * it will call #send_query_params for you. New code should explicitly use #send_query_params if * argument placeholders are used. 
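For illustration, the String-or-Hash parameter forms accepted by #send_query_params can be thought of as normalizing every element to the hash form; +normalize_param+ below is a hypothetical sketch, not part of pg:

```ruby
# Hypothetical sketch: normalize each bind parameter into the
# { :value, :type, :format } hash form that send_query_params accepts.
# A nil stays nil in :value, which the library sends as SQL NULL.
def normalize_param(p)
  return p if p.is_a?(Hash)
  { value: p.nil? ? nil : p.to_s, type: 0, format: 0 }
end

normalize_param("42")[:value]  # => "42"
```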
* */ static VALUE pgconn_send_query(int argc, VALUE *argv, VALUE self) { t_pg_connection *this = pg_get_connection_safe( self ); /* If called with no or nil parameters, use PQexec for compatibility */ if ( argc == 1 || (argc >= 2 && argc <= 4 && NIL_P(argv[1]) )) { if(gvl_PQsendQuery(this->pgconn, pg_cstr_enc(argv[0], this->enc_idx)) == 0) pg_raise_conn_error( rb_eUnableToSend, self, "%s", PQerrorMessage(this->pgconn)); pgconn_wait_for_flush( self ); return Qnil; } pg_deprecated(2, ("forwarding async_exec to async_exec_params and send_query to send_query_params is deprecated")); /* If called with parameters, and optionally result_format, * use PQsendQueryParams */ return pgconn_send_query_params( argc, argv, self); } /* * call-seq: * conn.send_query_params(sql, params [, result_format [, type_map ]] ) -> nil * * Sends SQL query request specified by _sql_ to PostgreSQL for * asynchronous processing, and immediately returns. * On failure, it raises a PG::Error. * * +params+ is an array of the bind parameters for the SQL query. * Each element of the +params+ array may be either: * a hash of the form: * {:value => String (value of bind parameter) * :type => Integer (oid of type of bind parameter) * :format => Integer (0 for text, 1 for binary) * } * or, it may be a String. If it is a string, that is equivalent to the hash: * { :value => , :type => 0, :format => 0 } * * PostgreSQL bind parameters are represented as $1, $2, $3, etc., * inside the SQL query. The 0th element of the +params+ array is bound * to $1, the 1st element is bound to $2, etc. +nil+ is treated as +NULL+. * * If the types are not specified, they will be inferred by PostgreSQL. * Instead of specifying type oids, it's recommended to simply add * explicit casts in the query to ensure that the right type is used. * * For example: "SELECT $1::int" * * The optional +result_format+ should be 0 for text results, 1 * for binary. 
* * +type_map+ can be a PG::TypeMap derivation (such as PG::BasicTypeMapForQueries). * This will type cast the params from various Ruby types before transmission * based on the encoders defined by the type map. When a type encoder is used * the format and oid of a given bind parameter are retrieved from the encoder * instead of the hash form described above. * */ static VALUE pgconn_send_query_params(int argc, VALUE *argv, VALUE self) { t_pg_connection *this = pg_get_connection_safe( self ); int result; VALUE command, in_res_fmt; int nParams; int resultFormat; struct query_params_data paramsData = { this->enc_idx }; rb_scan_args(argc, argv, "22", &command, &paramsData.params, &in_res_fmt, &paramsData.typemap); paramsData.with_types = 1; pgconn_query_assign_typemap( self, &paramsData ); resultFormat = NIL_P(in_res_fmt) ? 0 : NUM2INT(in_res_fmt); nParams = alloc_query_params( &paramsData ); result = gvl_PQsendQueryParams(this->pgconn, pg_cstr_enc(command, paramsData.enc_idx), nParams, paramsData.types, (const char * const *)paramsData.values, paramsData.lengths, paramsData.formats, resultFormat); free_query_params( &paramsData ); if(result == 0) pg_raise_conn_error( rb_eUnableToSend, self, "%s", PQerrorMessage(this->pgconn)); pgconn_wait_for_flush( self ); return Qnil; } /* * call-seq: * conn.send_prepare( stmt_name, sql [, param_types ] ) -> nil * * Prepares statement _sql_ with name _name_ to be executed later. * Sends prepare command asynchronously, and returns immediately. * On failure, it raises a PG::Error. * * +param_types+ is an optional parameter to specify the Oids of the * types of the parameters. * * If the types are not specified, they will be inferred by PostgreSQL. * Instead of specifying type oids, it's recommended to simply add * explicit casts in the query to ensure that the right type is used. * * For example: "SELECT $1::int" * * PostgreSQL bind parameters are represented as $1, $2, $3, etc., * inside the SQL query.
*/ static VALUE pgconn_send_prepare(int argc, VALUE *argv, VALUE self) { t_pg_connection *this = pg_get_connection_safe( self ); int result; VALUE name, command, in_paramtypes; VALUE param; int i = 0; int nParams = 0; Oid *paramTypes = NULL; const char *name_cstr; const char *command_cstr; int enc_idx = this->enc_idx; rb_scan_args(argc, argv, "21", &name, &command, &in_paramtypes); name_cstr = pg_cstr_enc(name, enc_idx); command_cstr = pg_cstr_enc(command, enc_idx); if(! NIL_P(in_paramtypes)) { Check_Type(in_paramtypes, T_ARRAY); nParams = (int)RARRAY_LEN(in_paramtypes); paramTypes = ALLOC_N(Oid, nParams); for(i = 0; i < nParams; i++) { param = rb_ary_entry(in_paramtypes, i); if(param == Qnil) paramTypes[i] = 0; else paramTypes[i] = NUM2UINT(param); } } result = gvl_PQsendPrepare(this->pgconn, name_cstr, command_cstr, nParams, paramTypes); xfree(paramTypes); if(result == 0) { pg_raise_conn_error( rb_eUnableToSend, self, "%s", PQerrorMessage(this->pgconn)); } pgconn_wait_for_flush( self ); return Qnil; } /* * call-seq: * conn.send_query_prepared( statement_name [, params, result_format[, type_map ]] ) * -> nil * * Execute prepared named statement specified by _statement_name_ * asynchronously, and returns immediately. * On failure, it raises a PG::Error. * * +params+ is an array of the optional bind parameters for the * SQL query. Each element of the +params+ array may be either: * a hash of the form: * {:value => String (value of bind parameter) * :format => Integer (0 for text, 1 for binary) * } * or, it may be a String. If it is a string, that is equivalent to the hash: * { :value => , :format => 0 } * * PostgreSQL bind parameters are represented as $1, $2, $3, etc., * inside the SQL query. The 0th element of the +params+ array is bound * to $1, the 1st element is bound to $2, etc. +nil+ is treated as +NULL+. * * The optional +result_format+ should be 0 for text results, 1 * for binary. 
* * +type_map+ can be a PG::TypeMap derivation (such as PG::BasicTypeMapForQueries). * This will type cast the params from various Ruby types before transmission * based on the encoders defined by the type map. When a type encoder is used * the format and oid of a given bind parameter are retrieved from the encoder * instead of the hash form described above. * */ static VALUE pgconn_send_query_prepared(int argc, VALUE *argv, VALUE self) { t_pg_connection *this = pg_get_connection_safe( self ); int result; VALUE name, in_res_fmt; int nParams; int resultFormat; struct query_params_data paramsData = { this->enc_idx }; rb_scan_args(argc, argv, "13", &name, &paramsData.params, &in_res_fmt, &paramsData.typemap); paramsData.with_types = 0; if(NIL_P(paramsData.params)) { paramsData.params = rb_ary_new2(0); } pgconn_query_assign_typemap( self, &paramsData ); resultFormat = NIL_P(in_res_fmt) ? 0 : NUM2INT(in_res_fmt); nParams = alloc_query_params( &paramsData ); result = gvl_PQsendQueryPrepared(this->pgconn, pg_cstr_enc(name, paramsData.enc_idx), nParams, (const char * const *)paramsData.values, paramsData.lengths, paramsData.formats, resultFormat); free_query_params( &paramsData ); if(result == 0) pg_raise_conn_error( rb_eUnableToSend, self, "%s", PQerrorMessage(this->pgconn)); pgconn_wait_for_flush( self ); return Qnil; } /* * call-seq: * conn.send_describe_prepared( statement_name ) -> nil * * Asynchronously send a describe command for prepared statement _statement_name_ to the server. Does not block. * Use in combination with +conn.get_result+. */ static VALUE pgconn_send_describe_prepared(VALUE self, VALUE stmt_name) { t_pg_connection *this = pg_get_connection_safe( self ); /* returns 0 on failure */ if(gvl_PQsendDescribePrepared(this->pgconn, pg_cstr_enc(stmt_name, this->enc_idx)) == 0) pg_raise_conn_error( rb_eUnableToSend, self, "%s", PQerrorMessage(this->pgconn)); pgconn_wait_for_flush( self ); return Qnil; } /* * call-seq: * conn.send_describe_portal( portal_name ) -> nil * * Asynchronously send a describe command for portal _portal_name_ to the server.
Does not block. * Use in combination with +conn.get_result+. */ static VALUE pgconn_send_describe_portal(VALUE self, VALUE portal) { t_pg_connection *this = pg_get_connection_safe( self ); /* returns 0 on failure */ if(gvl_PQsendDescribePortal(this->pgconn, pg_cstr_enc(portal, this->enc_idx)) == 0) pg_raise_conn_error( rb_eUnableToSend, self, "%s", PQerrorMessage(this->pgconn)); pgconn_wait_for_flush( self ); return Qnil; } static VALUE pgconn_sync_get_result(VALUE self) { PGconn *conn = pg_get_pgconn(self); PGresult *result; VALUE rb_pgresult; result = gvl_PQgetResult(conn); if(result == NULL) return Qnil; rb_pgresult = pg_new_result(result, self); if (rb_block_given_p()) { return rb_ensure(rb_yield, rb_pgresult, pg_result_clear, rb_pgresult); } return rb_pgresult; } /* * call-seq: * conn.consume_input() * * If input is available from the server, consume it. * After calling +consume_input+, you can check +is_busy+ * or *notifies* to see if the state has changed. */ static VALUE pgconn_consume_input(VALUE self) { PGconn *conn = pg_get_pgconn(self); /* returns 0 on error */ if(PQconsumeInput(conn) == 0) { pgconn_close_socket_io(self); pg_raise_conn_error( rb_eConnectionBad, self, "%s", PQerrorMessage(conn)); } return Qnil; } /* * call-seq: * conn.is_busy() -> Boolean * * Returns +true+ if a command is busy, that is, if * #get_result would block. Otherwise returns +false+. */ static VALUE pgconn_is_busy(VALUE self) { return gvl_PQisBusy(pg_get_pgconn(self)) ? Qtrue : Qfalse; } static VALUE pgconn_sync_setnonblocking(VALUE self, VALUE state) { int arg; PGconn *conn = pg_get_pgconn(self); rb_check_frozen(self); if(state == Qtrue) arg = 1; else if (state == Qfalse) arg = 0; else rb_raise(rb_eArgError, "Boolean value expected"); if(PQsetnonblocking(conn, arg) == -1) pg_raise_conn_error( rb_ePGerror, self, "%s", PQerrorMessage(conn)); return Qnil; } static VALUE pgconn_sync_isnonblocking(VALUE self) { return PQisnonblocking(pg_get_pgconn(self)) ? 
Qtrue : Qfalse; } static VALUE pgconn_sync_flush(VALUE self) { PGconn *conn = pg_get_pgconn(self); int ret = PQflush(conn); if(ret == -1) pg_raise_conn_error( rb_ePGerror, self, "%s", PQerrorMessage(conn)); return (ret) ? Qfalse : Qtrue; } static VALUE pgconn_sync_cancel(VALUE self) { char errbuf[256]; PGcancel *cancel; VALUE retval; int ret; cancel = PQgetCancel(pg_get_pgconn(self)); if(cancel == NULL) pg_raise_conn_error( rb_ePGerror, self, "Invalid connection!"); ret = gvl_PQcancel(cancel, errbuf, sizeof(errbuf)); if(ret == 1) retval = Qnil; else retval = rb_str_new2(errbuf); PQfreeCancel(cancel); return retval; } /* * call-seq: * conn.notifies() * * Returns a hash of the unprocessed notifications. * If there is no unprocessed notification, it returns +nil+. */ static VALUE pgconn_notifies(VALUE self) { t_pg_connection *this = pg_get_connection_safe( self ); PGnotify *notification; VALUE hash; VALUE sym_relname, sym_be_pid, sym_extra; VALUE relname, be_pid, extra; sym_relname = ID2SYM(rb_intern("relname")); sym_be_pid = ID2SYM(rb_intern("be_pid")); sym_extra = ID2SYM(rb_intern("extra")); notification = gvl_PQnotifies(this->pgconn); if (notification == NULL) { return Qnil; } hash = rb_hash_new(); relname = rb_str_new2(notification->relname); be_pid = INT2NUM(notification->be_pid); extra = rb_str_new2(notification->extra); PG_ENCODING_SET_NOCHECK( relname, this->enc_idx ); PG_ENCODING_SET_NOCHECK( extra, this->enc_idx ); rb_hash_aset(hash, sym_relname, relname); rb_hash_aset(hash, sym_be_pid, be_pid); rb_hash_aset(hash, sym_extra, extra); PQfreemem(notification); return hash; } #if defined(_WIN32) /* We use a specialized implementation of rb_io_wait() on Windows. * This is because rb_io_wait() and rb_wait_for_single_fd() are very slow on Windows.
*/ #if defined(HAVE_RUBY_FIBER_SCHEDULER_H) #include #endif typedef enum { PG_RUBY_IO_READABLE = RB_WAITFD_IN, PG_RUBY_IO_WRITABLE = RB_WAITFD_OUT, PG_RUBY_IO_PRIORITY = RB_WAITFD_PRI, } pg_rb_io_event_t; int rb_w32_wait_events( HANDLE *events, int num, DWORD timeout ); static VALUE pg_rb_thread_io_wait(VALUE io, VALUE events, VALUE timeout) { rb_io_t *fptr; struct timeval ptimeout; struct timeval aborttime={0,0}, currtime, waittime; DWORD timeout_milisec = INFINITE; HANDLE hEvent = WSACreateEvent(); long rb_events = NUM2UINT(events); long w32_events = 0; DWORD wait_ret; GetOpenFile((io), fptr); if( !NIL_P(timeout) ){ ptimeout.tv_sec = (time_t)(NUM2DBL(timeout)); ptimeout.tv_usec = (time_t)((NUM2DBL(timeout) - (double)ptimeout.tv_sec) * 1e6); gettimeofday(&currtime, NULL); timeradd(&currtime, &ptimeout, &aborttime); } if(rb_events & PG_RUBY_IO_READABLE) w32_events |= FD_READ | FD_ACCEPT | FD_CLOSE; if(rb_events & PG_RUBY_IO_WRITABLE) w32_events |= FD_WRITE | FD_CONNECT; if(rb_events & PG_RUBY_IO_PRIORITY) w32_events |= FD_OOB; for(;;) { if ( WSAEventSelect(_get_osfhandle(fptr->fd), hEvent, w32_events) == SOCKET_ERROR ) { WSACloseEvent( hEvent ); rb_raise( rb_eConnectionBad, "WSAEventSelect socket error: %d", WSAGetLastError() ); } if ( !NIL_P(timeout) ) { gettimeofday(&currtime, NULL); timersub(&aborttime, &currtime, &waittime); timeout_milisec = (DWORD)( waittime.tv_sec * 1e3 + waittime.tv_usec / 1e3 ); } if( NIL_P(timeout) || (waittime.tv_sec >= 0 && waittime.tv_usec >= 0) ){ /* Wait for the socket to become readable before checking again */ wait_ret = rb_w32_wait_events( &hEvent, 1, timeout_milisec ); } else { wait_ret = WAIT_TIMEOUT; } if ( wait_ret == WAIT_TIMEOUT ) { WSACloseEvent( hEvent ); return UINT2NUM(0); } else if ( wait_ret == WAIT_OBJECT_0 ) { WSACloseEvent( hEvent ); /* The event we were waiting for. 
*/ return UINT2NUM(rb_events); } else if ( wait_ret == WAIT_OBJECT_0 + 1) { /* This indicates interruption from timer thread, GC, exception * from other threads etc... */ rb_thread_check_ints(); } else if ( wait_ret == WAIT_FAILED ) { WSACloseEvent( hEvent ); rb_raise( rb_eConnectionBad, "Wait on socket error (WaitForMultipleObjects): %lu", GetLastError() ); } else { WSACloseEvent( hEvent ); rb_raise( rb_eConnectionBad, "Wait on socket abandoned (WaitForMultipleObjects)" ); } } } static VALUE pg_rb_io_wait(VALUE io, VALUE events, VALUE timeout) { #if defined(HAVE_RUBY_FIBER_SCHEDULER_H) /* We don't support Fiber.scheduler on Windows ruby-3.0 because there is no fast way to check whether a scheduler is active. * Fortunately ruby-3.1 offers a C-API for it. */ VALUE scheduler = rb_fiber_scheduler_current(); if (!NIL_P(scheduler)) { return rb_io_wait(io, events, timeout); } #endif return pg_rb_thread_io_wait(io, events, timeout); } #elif defined(HAVE_RB_IO_WAIT) /* Use our own function and constant names, to avoid conflicts with truffleruby-head on its road to ruby-3.0 compatibility. */ #define pg_rb_io_wait rb_io_wait #define PG_RUBY_IO_READABLE RUBY_IO_READABLE #define PG_RUBY_IO_WRITABLE RUBY_IO_WRITABLE #define PG_RUBY_IO_PRIORITY RUBY_IO_PRIORITY #else /* For compat with ruby < 3.0 */ typedef enum { PG_RUBY_IO_READABLE = RB_WAITFD_IN, PG_RUBY_IO_WRITABLE = RB_WAITFD_OUT, PG_RUBY_IO_PRIORITY = RB_WAITFD_PRI, } pg_rb_io_event_t; static VALUE pg_rb_io_wait(VALUE io, VALUE events, VALUE timeout) { rb_io_t *fptr; struct timeval waittime; int res; GetOpenFile((io), fptr); if( !NIL_P(timeout) ){ waittime.tv_sec = (time_t)(NUM2DBL(timeout)); waittime.tv_usec = (time_t)((NUM2DBL(timeout) - (double)waittime.tv_sec) * 1e6); } res = rb_wait_for_single_fd(fptr->fd, NUM2UINT(events), NIL_P(timeout) ?
NULL : &waittime); return UINT2NUM(res); } #endif static void * wait_socket_readable( VALUE self, struct timeval *ptimeout, void *(*is_readable)(PGconn *)) { VALUE ret; void *retval; struct timeval aborttime={0,0}, currtime, waittime; VALUE wait_timeout = Qnil; PGconn *conn = pg_get_pgconn(self); if ( ptimeout ) { gettimeofday(&currtime, NULL); timeradd(&currtime, ptimeout, &aborttime); } while ( !(retval=is_readable(conn)) ) { if ( ptimeout ) { gettimeofday(&currtime, NULL); timersub(&aborttime, &currtime, &waittime); wait_timeout = DBL2NUM((double)(waittime.tv_sec) + (double)(waittime.tv_usec) / 1000000.0); } /* Is the given timeout valid? */ if( !ptimeout || (waittime.tv_sec >= 0 && waittime.tv_usec >= 0) ){ VALUE socket_io; /* before we wait for data, make sure everything has been sent */ pgconn_async_flush(self); if ((retval=is_readable(conn))) return retval; socket_io = pgconn_socket_io(self); /* Wait for the socket to become readable before checking again */ ret = pg_rb_io_wait(socket_io, RB_INT2NUM(PG_RUBY_IO_READABLE), wait_timeout); } else { ret = Qfalse; } /* Return false if the select() timed out */ if ( ret == Qfalse ){ return NULL; } /* Check for connection errors (PQisBusy is true on connection errors) */ if ( PQconsumeInput(conn) == 0 ){ pgconn_close_socket_io(self); pg_raise_conn_error(rb_eConnectionBad, self, "PQconsumeInput() %s", PQerrorMessage(conn)); } } return retval; } /* * call-seq: * conn.flush() -> Boolean * * Attempts to flush any queued output data to the server. * Returns +true+ if data is successfully flushed, +false+ * if not. It can only return +false+ if connection is * in nonblocking mode. * Raises PG::Error if some other failure occurred. 
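The asynchronous implementation of this method is essentially a retry loop around the non-blocking flush; a minimal pure-Ruby sketch of that control flow, with lambdas standing in for the C-level PQflush and socket-wait calls:

```ruby
# Minimal sketch of the async flush loop: call the non-blocking flush
# until it reports that all data was sent, waiting on the socket (and
# consuming input when it becomes readable) in between. The two lambdas
# are stand-ins for the C-level calls, not real pg API.
def flush_loop(sync_flush, wait_and_consume)
  wait_and_consume.call until sync_flush.call
  true
end

results = [false, false, true]
flush_loop(-> { results.shift }, -> { })  # => true
```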
*/ static VALUE pgconn_async_flush(VALUE self) { while( pgconn_sync_flush(self) == Qfalse ){ /* wait for the socket to become read- or write-ready */ int events; VALUE socket_io = pgconn_socket_io(self); events = RB_NUM2INT(pg_rb_io_wait(socket_io, RB_INT2NUM(PG_RUBY_IO_READABLE | PG_RUBY_IO_WRITABLE), Qnil)); if (events & PG_RUBY_IO_READABLE){ pgconn_consume_input(self); } } return Qtrue; } static VALUE pgconn_wait_for_flush( VALUE self ){ if( !pg_get_connection_safe(self)->flush_data ) return Qnil; return pgconn_async_flush(self); } static VALUE pgconn_flush_data_set( VALUE self, VALUE enabled ){ t_pg_connection *conn = pg_get_connection(self); rb_check_frozen(self); conn->flush_data = RTEST(enabled); return enabled; } static void * notify_readable(PGconn *conn) { return (void*)gvl_PQnotifies(conn); } /* * call-seq: * conn.wait_for_notify( [ timeout ] ) { |event, pid, payload| block } -> String * * Blocks while waiting for notification(s), or until the optional * _timeout_ is reached, whichever comes first. _timeout_ is * measured in seconds and can be fractional. * * Returns +nil+ if _timeout_ is reached, the name of the NOTIFY event otherwise. * If used in block form, passes the name of the NOTIFY +event+, the generating * +pid+ and the optional +payload+ string into the block. 
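The fractional _timeout_ is converted into the seconds/microseconds pair of a struct timeval; a small sketch of that conversion (+split_timeout+ is a made-up name, and the C code truncates rather than rounds):

```ruby
# Sketch of the timeout conversion done in the C code: a Float of
# seconds becomes [whole seconds, microseconds], the two fields of a
# struct timeval.
def split_timeout(seconds)
  sec = seconds.to_i
  usec = ((seconds - sec) * 1_000_000).round
  [sec, usec]
end

split_timeout(1.5)   # => [1, 500000]
split_timeout(0.25)  # => [0, 250000]
```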
*/ static VALUE pgconn_wait_for_notify(int argc, VALUE *argv, VALUE self) { t_pg_connection *this = pg_get_connection_safe( self ); PGnotify *pnotification; struct timeval timeout; struct timeval *ptimeout = NULL; VALUE timeout_in = Qnil, relname = Qnil, be_pid = Qnil, extra = Qnil; double timeout_sec; rb_scan_args( argc, argv, "01", &timeout_in ); if ( RTEST(timeout_in) ) { timeout_sec = NUM2DBL( timeout_in ); timeout.tv_sec = (time_t)timeout_sec; timeout.tv_usec = (suseconds_t)( (timeout_sec - (long)timeout_sec) * 1e6 ); ptimeout = &timeout; } pnotification = (PGnotify*) wait_socket_readable( self, ptimeout, notify_readable); /* Return nil if the select timed out */ if ( !pnotification ) return Qnil; relname = rb_str_new2( pnotification->relname ); PG_ENCODING_SET_NOCHECK( relname, this->enc_idx ); be_pid = INT2NUM( pnotification->be_pid ); if ( *pnotification->extra ) { extra = rb_str_new2( pnotification->extra ); PG_ENCODING_SET_NOCHECK( extra, this->enc_idx ); } PQfreemem( pnotification ); if ( rb_block_given_p() ) rb_yield_values( 3, relname, be_pid, extra ); return relname; } static VALUE pgconn_sync_put_copy_data(int argc, VALUE *argv, VALUE self) { int ret; int len; t_pg_connection *this = pg_get_connection_safe( self ); VALUE value; VALUE buffer = Qnil; VALUE encoder; VALUE intermediate; t_pg_coder *p_coder = NULL; rb_scan_args( argc, argv, "11", &value, &encoder ); if( NIL_P(encoder) ){ if( NIL_P(this->encoder_for_put_copy_data) ){ buffer = value; } else { p_coder = RTYPEDDATA_DATA( this->encoder_for_put_copy_data ); } } else { /* Check argument type and use argument encoder */ TypedData_Get_Struct(encoder, t_pg_coder, &pg_coder_type, p_coder); } if( p_coder ){ t_pg_coder_enc_func enc_func; int enc_idx = this->enc_idx; enc_func = pg_coder_enc_func( p_coder ); len = enc_func( p_coder, value, NULL, &intermediate, enc_idx); if( len == -1 ){ /* The intermediate value is a String that can be used directly. 
*/ buffer = intermediate; } else { buffer = rb_str_new(NULL, len); len = enc_func( p_coder, value, RSTRING_PTR(buffer), &intermediate, enc_idx); rb_str_set_len( buffer, len ); } } Check_Type(buffer, T_STRING); ret = gvl_PQputCopyData(this->pgconn, RSTRING_PTR(buffer), RSTRING_LENINT(buffer)); if(ret == -1) pg_raise_conn_error( rb_ePGerror, self, "%s", PQerrorMessage(this->pgconn)); RB_GC_GUARD(intermediate); RB_GC_GUARD(buffer); return (ret) ? Qtrue : Qfalse; } static VALUE pgconn_sync_put_copy_end(int argc, VALUE *argv, VALUE self) { VALUE str; int ret; const char *error_message = NULL; t_pg_connection *this = pg_get_connection_safe( self ); if (rb_scan_args(argc, argv, "01", &str) == 0) error_message = NULL; else error_message = pg_cstr_enc(str, this->enc_idx); ret = gvl_PQputCopyEnd(this->pgconn, error_message); if(ret == -1) pg_raise_conn_error( rb_ePGerror, self, "%s", PQerrorMessage(this->pgconn)); return (ret) ? Qtrue : Qfalse; } static VALUE pgconn_sync_get_copy_data(int argc, VALUE *argv, VALUE self ) { VALUE async_in; VALUE result; int ret; char *buffer; VALUE decoder; t_pg_coder *p_coder = NULL; t_pg_connection *this = pg_get_connection_safe( self ); rb_scan_args(argc, argv, "02", &async_in, &decoder); if( NIL_P(decoder) ){ if( !NIL_P(this->decoder_for_get_copy_data) ){ p_coder = RTYPEDDATA_DATA( this->decoder_for_get_copy_data ); } } else { /* Check argument type and use argument decoder */ TypedData_Get_Struct(decoder, t_pg_coder, &pg_coder_type, p_coder); } ret = gvl_PQgetCopyData(this->pgconn, &buffer, RTEST(async_in)); if(ret == -2){ /* error */ pg_raise_conn_error( rb_ePGerror, self, "%s", PQerrorMessage(this->pgconn)); } if(ret == -1) { /* No data left */ return Qnil; } if(ret == 0) { /* would block */ return Qfalse; } if( p_coder ){ t_pg_coder_dec_func dec_func = pg_coder_dec_func( p_coder, p_coder->format ); result = dec_func( p_coder, buffer, ret, 0, 0, this->enc_idx ); } else { result = rb_str_new(buffer, ret); } PQfreemem(buffer); return 
result; } /* * call-seq: * conn.set_error_verbosity( verbosity ) -> Integer * * Sets connection's verbosity to _verbosity_ and returns * the previous setting. Available settings are: * * * PQERRORS_TERSE * * PQERRORS_DEFAULT * * PQERRORS_VERBOSE * * PQERRORS_SQLSTATE * * Changing the verbosity does not affect the messages available from already-existing PG::Result objects, only subsequently-created ones. * (But see PG::Result#verbose_error_message if you want to print a previous error with a different verbosity.) * * See also corresponding {libpq function}[https://www.postgresql.org/docs/current/libpq-control.html#LIBPQ-PQSETERRORVERBOSITY]. */ static VALUE pgconn_set_error_verbosity(VALUE self, VALUE in_verbosity) { PGconn *conn = pg_get_pgconn(self); PGVerbosity verbosity = NUM2INT(in_verbosity); return INT2FIX(PQsetErrorVerbosity(conn, verbosity)); } #ifdef HAVE_PQRESULTVERBOSEERRORMESSAGE /* * call-seq: * conn.set_error_context_visibility( context_visibility ) -> Integer * * Sets connection's context display mode to _context_visibility_ and returns * the previous setting. Available settings are: * * PQSHOW_CONTEXT_NEVER * * PQSHOW_CONTEXT_ERRORS * * PQSHOW_CONTEXT_ALWAYS * * This mode controls whether the CONTEXT field is included in messages (unless the verbosity setting is TERSE, in which case CONTEXT is never shown). * The NEVER mode never includes CONTEXT, while ALWAYS always includes it if available. * In ERRORS mode (the default), CONTEXT fields are included only for error messages, not for notices and warnings. * * Changing this mode does not affect the messages available from already-existing PG::Result objects, only subsequently-created ones. * (But see PG::Result#verbose_error_message if you want to print a previous error with a different display mode.) * * See also corresponding {libpq function}[https://www.postgresql.org/docs/current/libpq-control.html#LIBPQ-PQSETERRORCONTEXTVISIBILITY]. 
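 *
 * For example, to suppress CONTEXT lines in subsequent error messages
 * (a minimal sketch; +conn+ is assumed to be an established PG::Connection):
 *
 *   conn.set_error_context_visibility( PG::PQSHOW_CONTEXT_NEVER )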
* * Available since PostgreSQL-9.6 */ static VALUE pgconn_set_error_context_visibility(VALUE self, VALUE in_context_visibility) { PGconn *conn = pg_get_pgconn(self); PGContextVisibility context_visibility = NUM2INT(in_context_visibility); return INT2FIX(PQsetErrorContextVisibility(conn, context_visibility)); } #endif /* * call-seq: * conn.trace( stream ) -> nil * * Enables tracing of the message traffic between frontend and backend. The * trace messages will be written to the stream _stream_, * which must implement a method +fileno+ that returns * a writable file descriptor. */ static VALUE pgconn_trace(VALUE self, VALUE stream) { VALUE fileno; FILE *new_fp; int old_fd, new_fd; VALUE new_file; t_pg_connection *this = pg_get_connection_safe( self ); rb_check_frozen(self); if(!rb_respond_to(stream,rb_intern("fileno"))) rb_raise(rb_eArgError, "stream does not respond to method: fileno"); fileno = rb_funcall(stream, rb_intern("fileno"), 0); if(fileno == Qnil) rb_raise(rb_eArgError, "can't get file descriptor from stream"); /* Duplicate the file descriptor and re-open * it. Then, make it into a ruby File object * and assign it to an instance variable. * This prevents a problem when the File * object passed to this function is closed * before the connection object is. */ old_fd = NUM2INT(fileno); new_fd = dup(old_fd); new_fp = fdopen(new_fd, "w"); if(new_fp == NULL) rb_raise(rb_eArgError, "stream is not writable"); new_file = rb_funcall(rb_cIO, rb_intern("new"), 1, INT2NUM(new_fd)); RB_OBJ_WRITE(self, &this->trace_stream, new_file); PQtrace(this->pgconn, new_fp); return Qnil; } /* * call-seq: * conn.untrace() -> nil * * Disables the message tracing.
*/ static VALUE pgconn_untrace(VALUE self) { t_pg_connection *this = pg_get_connection_safe( self ); PQuntrace(this->pgconn); rb_funcall(this->trace_stream, rb_intern("close"), 0); RB_OBJ_WRITE(self, &this->trace_stream, Qnil); return Qnil; } /* * Notice callback proxy function -- delegate the callback to the * currently-registered Ruby notice_receiver object. */ void notice_receiver_proxy(void *arg, const PGresult *pgresult) { VALUE self = (VALUE)arg; t_pg_connection *this = pg_get_connection( self ); if (this->notice_receiver != Qnil) { VALUE result = pg_new_result_autoclear( (PGresult *)pgresult, self ); rb_funcall(this->notice_receiver, rb_intern("call"), 1, result); pg_result_clear( result ); } return; } /* * call-seq: * conn.set_notice_receiver {|result| ... } -> Proc * * Notice and warning messages generated by the server are not returned * by the query execution functions, since they do not imply failure of * the query. Instead they are passed to a notice handling function, and * execution continues normally after the handler returns. The default * notice handling function prints the message on stderr, but the * application can override this behavior by supplying its own handling * function. * * For historical reasons, there are two levels of notice handling, called the * notice receiver and notice processor. The default behavior is for the notice * receiver to format the notice and pass a string to the notice processor for * printing. However, an application that chooses to provide its own notice * receiver will typically ignore the notice processor layer and just do all * the work in the notice receiver. * * This function takes a new block to act as the handler, which should * accept a single parameter that will be a PG::Result object, and returns * the Proc object previously set, or +nil+ if it was previously the default. * * If you pass no arguments, it will reset the handler to the default. 
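 *
 * A minimal usage sketch (+conn+ is assumed to be an established PG::Connection):
 *
 *   old_proc = conn.set_notice_receiver do |result|
 *     $stderr.puts "sqlstate: #{result.error_field(PG::PG_DIAG_SQLSTATE)}"
 *   end
 *   conn.set_notice_receiver   # reset to the default receiver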
* * *Note:* The +result+ passed to the block should not be used outside * of the block, since the corresponding C object could be freed after the * block finishes. */ static VALUE pgconn_set_notice_receiver(VALUE self) { VALUE proc, old_proc; t_pg_connection *this = pg_get_connection_safe( self ); rb_check_frozen(self); /* If default_notice_receiver is unset, assume that the current * notice receiver is the default, and save it to a global variable. * This should not be a problem because the default receiver is * always the same, so won't vary among connections. */ if(this->default_notice_receiver == NULL) this->default_notice_receiver = PQsetNoticeReceiver(this->pgconn, NULL, NULL); old_proc = this->notice_receiver; if( rb_block_given_p() ) { proc = rb_block_proc(); PQsetNoticeReceiver(this->pgconn, gvl_notice_receiver_proxy, (void *)self); } else { /* if no block is given, set back to default */ proc = Qnil; PQsetNoticeReceiver(this->pgconn, this->default_notice_receiver, NULL); } RB_OBJ_WRITE(self, &this->notice_receiver, proc); return old_proc; } /* * Notice callback proxy function -- delegate the callback to the * currently-registered Ruby notice_processor object. */ void notice_processor_proxy(void *arg, const char *message) { VALUE self = (VALUE)arg; t_pg_connection *this = pg_get_connection( self ); if (this->notice_processor != Qnil) { VALUE message_str = rb_str_new2(message); PG_ENCODING_SET_NOCHECK( message_str, this->enc_idx ); rb_funcall(this->notice_processor, rb_intern("call"), 1, message_str); } return; } /* * call-seq: * conn.set_notice_processor {|message| ... } -> Proc * * See #set_notice_receiver for the description of what this and the * notice_processor methods do. * * This function takes a new block to act as the notice processor and returns * the Proc object previously set, or +nil+ if it was previously the default. * The block should accept a single String object. * * If you pass no arguments, it will reset the handler to the default. 
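 *
 * A minimal usage sketch (+conn+ is assumed to be an established PG::Connection):
 *
 *   conn.set_notice_processor do |message|
 *     $stderr.puts "server notice: #{message}"
 *   end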
*/ static VALUE pgconn_set_notice_processor(VALUE self) { VALUE proc, old_proc; t_pg_connection *this = pg_get_connection_safe( self ); rb_check_frozen(self); /* If default_notice_processor is unset, assume that the current * notice processor is the default, and save it to a global variable. * This should not be a problem because the default processor is * always the same, so won't vary among connections. */ if(this->default_notice_processor == NULL) this->default_notice_processor = PQsetNoticeProcessor(this->pgconn, NULL, NULL); old_proc = this->notice_processor; if( rb_block_given_p() ) { proc = rb_block_proc(); PQsetNoticeProcessor(this->pgconn, gvl_notice_processor_proxy, (void *)self); } else { /* if no block is given, set back to default */ proc = Qnil; PQsetNoticeProcessor(this->pgconn, this->default_notice_processor, NULL); } RB_OBJ_WRITE(self, &this->notice_processor, proc); return old_proc; } /* * call-seq: * conn.get_client_encoding() -> String * * Returns the client encoding as a String. */ static VALUE pgconn_get_client_encoding(VALUE self) { char *encoding = (char *)pg_encoding_to_char(PQclientEncoding(pg_get_pgconn(self))); return rb_str_new2(encoding); } /* * call-seq: * conn.sync_set_client_encoding( encoding ) * * This function has the same behavior as #async_set_client_encoding, but is implemented using the synchronous command processing API of libpq. * See #async_exec for the differences between the two API variants. * It's not recommended to use explicit sync or async variants but #set_client_encoding instead, unless you have a good reason to do so. 
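 *
 * A minimal sketch using the recommended #set_client_encoding variant:
 *
 *   conn.set_client_encoding( 'UTF8' )
 *   conn.get_client_encoding     # => "UTF8"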
*/ static VALUE pgconn_sync_set_client_encoding(VALUE self, VALUE str) { PGconn *conn = pg_get_pgconn( self ); rb_check_frozen(self); Check_Type(str, T_STRING); if ( (gvl_PQsetClientEncoding(conn, StringValueCStr(str))) == -1 ) pg_raise_conn_error( rb_ePGerror, self, "%s", PQerrorMessage(conn)); pgconn_set_internal_encoding_index( self ); return Qnil; } /* * call-seq: * conn.quote_ident( str ) -> String * conn.quote_ident( array ) -> String * PG::Connection.quote_ident( str ) -> String * PG::Connection.quote_ident( array ) -> String * * Returns a string that is safe for inclusion in a SQL query as an * identifier. Note: this is not a quote function for values, but for * identifiers. * * For example, in a typical SQL query: SELECT FOO FROM MYTABLE * The identifier FOO is folded to lower case, so it actually * means foo. If you really want to access the case-sensitive * field name FOO, use this function like * conn.quote_ident('FOO'), which will return "FOO" * (with double-quotes). PostgreSQL will see the double-quotes, and * it will not fold to lower case. * * Similarly, this function also protects against special characters, * and other things that might allow SQL injection if the identifier * comes from an untrusted source. * * If the parameter is an Array, then all its values are separately quoted * and then joined by a "." character. This can be used for identifiers in * the form "schema"."table"."column" . * * This method is functionally identical to the encoder PG::TextEncoder::Identifier . * * If the instance method form is used and the input string character encoding * differs from the connection encoding, then the string is converted to this * encoding, so that the returned string is always encoded as PG::Connection#internal_encoding . * * In the singleton form (PG::Connection.quote_ident) the character encoding * of the result string is set to the character encoding of the input string.
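 *
 * For example (a minimal sketch, following the FOO example above):
 *
 *   conn.quote_ident( 'FOO' )                     # => "\"FOO\""
 *   conn.quote_ident( ['MY_SCHEMA', 'MY_TABLE'] ) # => "\"MY_SCHEMA\".\"MY_TABLE\""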
*/ static VALUE pgconn_s_quote_ident(VALUE self, VALUE str_or_array) { VALUE ret; int enc_idx; if( rb_obj_is_kind_of(self, rb_cPGconn) ){ enc_idx = pg_get_connection(self)->enc_idx; }else{ enc_idx = RB_TYPE_P(str_or_array, T_STRING) ? ENCODING_GET( str_or_array ) : rb_ascii8bit_encindex(); } pg_text_enc_identifier(NULL, str_or_array, NULL, &ret, enc_idx); return ret; } static void * get_result_readable(PGconn *conn) { return gvl_PQisBusy(conn) ? NULL : (void*)1; } /* * call-seq: * conn.block( [ timeout ] ) -> Boolean * * Blocks until the server is no longer busy, or until the * optional _timeout_ is reached, whichever comes first. * _timeout_ is measured in seconds and can be fractional. * * Returns +false+ if _timeout_ is reached, +true+ otherwise. * * If +true+ is returned, +conn.is_busy+ will return +false+ * and +conn.get_result+ will not block. */ VALUE pgconn_block( int argc, VALUE *argv, VALUE self ) { struct timeval timeout; struct timeval *ptimeout = NULL; VALUE timeout_in; double timeout_sec; void *ret; if ( rb_scan_args(argc, argv, "01", &timeout_in) == 1 ) { timeout_sec = NUM2DBL( timeout_in ); timeout.tv_sec = (time_t)timeout_sec; timeout.tv_usec = (suseconds_t)((timeout_sec - (long)timeout_sec) * 1e6); ptimeout = &timeout; } ret = wait_socket_readable( self, ptimeout, get_result_readable); if( !ret ) return Qfalse; return Qtrue; } /* * call-seq: * conn.sync_get_last_result( ) -> PG::Result * * This function has the same behavior as #async_get_last_result, but is implemented using the synchronous command processing API of libpq. * See #async_exec for the differences between the two API variants. * It's not recommended to use explicit sync or async variants but #get_last_result instead, unless you have a good reason to do so. 
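 *
 * A minimal sketch using the recommended #get_last_result variant:
 *
 *   conn.send_query( 'SELECT 1' )
 *   res = conn.get_last_result
 *   res.getvalue( 0, 0 )     # => "1"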
*/ static VALUE pgconn_sync_get_last_result(VALUE self) { PGconn *conn = pg_get_pgconn(self); VALUE rb_pgresult = Qnil; PGresult *cur, *prev; cur = prev = NULL; while ((cur = gvl_PQgetResult(conn)) != NULL) { int status; if (prev) PQclear(prev); prev = cur; status = PQresultStatus(cur); if (status == PGRES_COPY_OUT || status == PGRES_COPY_IN || status == PGRES_COPY_BOTH) break; } if (prev) { rb_pgresult = pg_new_result( prev, self ); pg_result_check(rb_pgresult); } return rb_pgresult; } /* * call-seq: * conn.get_last_result( ) -> PG::Result * * This function retrieves all available results * on the current connection (from previously issued * asynchronous commands like +send_query()+) and * returns the last non-NULL result, or +nil+ if no * results are available. * * If the last result contains a bad result_status, an * appropriate exception is raised. * * This function is similar to #get_result * except that it is designed to get one and only * one result and that it checks the result state. */ static VALUE pgconn_async_get_last_result(VALUE self) { PGconn *conn = pg_get_pgconn(self); VALUE rb_pgresult = Qnil; PGresult *cur, *prev; cur = prev = NULL; for(;;) { int status; /* wait for input (without blocking) before reading each result */ wait_socket_readable(self, NULL, get_result_readable); cur = gvl_PQgetResult(conn); if (cur == NULL) break; if (prev) PQclear(prev); prev = cur; status = PQresultStatus(cur); if (status == PGRES_COPY_OUT || status == PGRES_COPY_IN || status == PGRES_COPY_BOTH) break; } if (prev) { rb_pgresult = pg_new_result( prev, self ); pg_result_check(rb_pgresult); } return rb_pgresult; } /* * call-seq: * conn.discard_results() * * Silently discards any prior query results that the application didn't fetch. * It is used internally prior to Connection#exec and sibling methods. * It doesn't raise an exception on connection errors, but returns +false+ instead.
* * Returns: * * +nil+ when the connection is already idle * * +true+ when some results have been discarded * * +false+ when a failure occurred and the connection was closed * */ static VALUE pgconn_discard_results(VALUE self) { PGconn *conn = pg_get_pgconn(self); VALUE socket_io; switch( PQtransactionStatus(conn) ) { case PQTRANS_IDLE: case PQTRANS_INTRANS: case PQTRANS_INERROR: return Qnil; default:; } socket_io = pgconn_socket_io(self); for(;;) { PGresult *cur; int status; /* pgconn_block() raises an exception in case of errors. * To avoid this, call pg_rb_io_wait() and PQconsumeInput() without rb_raise(). */ while( gvl_PQisBusy(conn) ){ int events; switch( PQflush(conn) ) { case 1: events = RB_NUM2INT(pg_rb_io_wait(socket_io, RB_INT2NUM(PG_RUBY_IO_READABLE | PG_RUBY_IO_WRITABLE), Qnil)); if (events & PG_RUBY_IO_READABLE){ if ( PQconsumeInput(conn) == 0 ) goto error; } break; case 0: pg_rb_io_wait(socket_io, RB_INT2NUM(PG_RUBY_IO_READABLE), Qnil); if ( PQconsumeInput(conn) == 0 ) goto error; break; default: goto error; } } cur = gvl_PQgetResult(conn); if( cur == NULL) break; status = PQresultStatus(cur); PQclear(cur); if (status == PGRES_COPY_IN){ while( gvl_PQputCopyEnd(conn, "COPY terminated by new query or discard_results") == 0 ){ pgconn_async_flush(self); } } if (status == PGRES_COPY_OUT){ for(;;) { char *buffer = NULL; int st = gvl_PQgetCopyData(conn, &buffer, 1); if( st == 0 ) { /* would block -> wait for readable data */ pg_rb_io_wait(socket_io, RB_INT2NUM(PG_RUBY_IO_READABLE), Qnil); if ( PQconsumeInput(conn) == 0 ) goto error; } else if( st > 0 ) { /* some data retrieved -> discard it */ PQfreemem(buffer); } else { /* no more data */ break; } } } } return Qtrue; error: pgconn_close_socket_io(self); return Qfalse; } /* * call-seq: * conn.exec(sql) -> PG::Result * conn.exec(sql) {|pg_result| block } * * Sends SQL query request specified by _sql_ to PostgreSQL. * On success, it returns a PG::Result instance with all result rows and columns.
* On failure, it raises a PG::Error. * * For backward compatibility, if you pass more than one parameter to this method, * it will call #exec_params for you. New code should explicitly use #exec_params if * argument placeholders are used. * * If the optional code block is given, it will be passed the result as an argument, * and the PG::Result object will automatically be cleared when the block terminates. * In this instance, conn.exec returns the value of the block. * * #exec is an alias for #async_exec which is almost identical to #sync_exec . * #sync_exec is implemented on the simpler synchronous command processing API of libpq, whereas * #async_exec is implemented on the asynchronous API and on ruby's IO mechanisms. * Only #async_exec is compatible with Fiber.scheduler based asynchronous IO processing introduced in ruby-3.0. * Both methods ensure that other threads can process while waiting for the server to * complete the request, but #sync_exec blocks all signal processing until the query is finished. * This is most notably visible by a delayed reaction to Control+C. * It's not recommended to use explicit sync or async variants but #exec instead, unless you have a good reason to do so. * * See also corresponding {libpq function}[https://www.postgresql.org/docs/current/libpq-exec.html#LIBPQ-PQEXEC]. */ static VALUE pgconn_async_exec(int argc, VALUE *argv, VALUE self) { VALUE rb_pgresult = Qnil; pgconn_discard_results( self ); pgconn_send_query( argc, argv, self ); rb_pgresult = pgconn_async_get_last_result( self ); if ( rb_block_given_p() ) { return rb_ensure( rb_yield, rb_pgresult, pg_result_clear, rb_pgresult ); } return rb_pgresult; } /* * call-seq: * conn.exec_params(sql, params [, result_format [, type_map ]] ) -> PG::Result * conn.exec_params(sql, params [, result_format [, type_map ]] ) {|pg_result| block } * * Sends SQL query request specified by +sql+ to PostgreSQL using placeholders * for parameters. * * Returns a PG::Result instance on success.
On failure, it raises a PG::Error. * * +params+ is an array of the bind parameters for the SQL query. * Each element of the +params+ array may be either: * a hash of the form: * {:value => String (value of bind parameter) * :type => Integer (oid of type of bind parameter) * :format => Integer (0 for text, 1 for binary) * } * or, it may be a String. If it is a string, that is equivalent to the hash: * { :value => , :type => 0, :format => 0 } * * PostgreSQL bind parameters are represented as $1, $2, $3, etc., * inside the SQL query. The 0th element of the +params+ array is bound * to $1, the 1st element is bound to $2, etc. +nil+ is treated as +NULL+. * * If the types are not specified, they will be inferred by PostgreSQL. * Instead of specifying type oids, it's recommended to simply add * explicit casts in the query to ensure that the right type is used. * * For example: "SELECT $1::int" * * The optional +result_format+ should be 0 for text results, 1 * for binary. * * +type_map+ can be a PG::TypeMap derivation (such as PG::BasicTypeMapForQueries). * This will type cast the params from various Ruby types before transmission * based on the encoders defined by the type map. When a type encoder is used * the format and oid of a given bind parameter are retrieved from the encoder * instead of from the hash form described above. * * If the optional code block is given, it will be passed the result as an argument, * and the PG::Result object will automatically be cleared when the block terminates. * In this instance, conn.exec_params returns the value of the block. * * The primary advantage of #exec_params over #exec is that parameter values can be separated from the command string, thus avoiding the need for tedious and error-prone quoting and escaping. * Unlike #exec, #exec_params allows at most one SQL command in the given string. * (There can be semicolons in it, but not more than one nonempty command.)
* This is a limitation of the underlying protocol, but has some usefulness as an extra defense against SQL-injection attacks. * * See also corresponding {libpq function}[https://www.postgresql.org/docs/current/libpq-exec.html#LIBPQ-PQEXECPARAMS]. */ static VALUE pgconn_async_exec_params(int argc, VALUE *argv, VALUE self) { VALUE rb_pgresult = Qnil; pgconn_discard_results( self ); /* If called with no or nil parameters, use PQsendQuery for compatibility */ if ( argc == 1 || (argc >= 2 && argc <= 4 && NIL_P(argv[1]) )) { pg_deprecated(3, ("forwarding async_exec_params to async_exec is deprecated")); pgconn_send_query( argc, argv, self ); } else { pgconn_send_query_params( argc, argv, self ); } rb_pgresult = pgconn_async_get_last_result( self ); if ( rb_block_given_p() ) { return rb_ensure( rb_yield, rb_pgresult, pg_result_clear, rb_pgresult ); } return rb_pgresult; } /* * call-seq: * conn.prepare(stmt_name, sql [, param_types ] ) -> PG::Result * * Prepares statement _sql_ with name _name_ to be executed later. * Returns a PG::Result instance on success. * On failure, it raises a PG::Error. * * +param_types+ is an optional parameter to specify the Oids of the * types of the parameters. * * If the types are not specified, they will be inferred by PostgreSQL. * Instead of specifying type oids, it's recommended to simply add * explicit casts in the query to ensure that the right type is used. * * For example: "SELECT $1::int" * * PostgreSQL bind parameters are represented as $1, $2, $3, etc., * inside the SQL query. * * See also corresponding {libpq function}[https://www.postgresql.org/docs/current/libpq-exec.html#LIBPQ-PQPREPARE]. 
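 *
 * A minimal sketch (the statement name +stmt1+ is chosen for illustration):
 *
 *   conn.prepare( 'stmt1', 'SELECT $1::int + $2::int' )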
*/ static VALUE pgconn_async_prepare(int argc, VALUE *argv, VALUE self) { VALUE rb_pgresult = Qnil; pgconn_discard_results( self ); pgconn_send_prepare( argc, argv, self ); rb_pgresult = pgconn_async_get_last_result( self ); if ( rb_block_given_p() ) { return rb_ensure( rb_yield, rb_pgresult, pg_result_clear, rb_pgresult ); } return rb_pgresult; } /* * call-seq: * conn.exec_prepared(statement_name [, params, result_format[, type_map]] ) -> PG::Result * conn.exec_prepared(statement_name [, params, result_format[, type_map]] ) {|pg_result| block } * * Executes the prepared named statement specified by _statement_name_. * Returns a PG::Result instance on success. * On failure, it raises a PG::Error. * * +params+ is an array of the optional bind parameters for the * SQL query. Each element of the +params+ array may be either: * a hash of the form: * {:value => String (value of bind parameter) * :format => Integer (0 for text, 1 for binary) * } * or, it may be a String. If it is a string, that is equivalent to the hash: * { :value => , :format => 0 } * * PostgreSQL bind parameters are represented as $1, $2, $3, etc., * inside the SQL query. The 0th element of the +params+ array is bound * to $1, the 1st element is bound to $2, etc. +nil+ is treated as +NULL+. * * The optional +result_format+ should be 0 for text results, 1 * for binary. * * +type_map+ can be a PG::TypeMap derivation (such as PG::BasicTypeMapForQueries). * This will type cast the params from various Ruby types before transmission * based on the encoders defined by the type map. When a type encoder is used * the format and oid of a given bind parameter are retrieved from the encoder * instead of from the hash form described above. * * If the optional code block is given, it will be passed the result as an argument, * and the PG::Result object will automatically be cleared when the block terminates. * In this instance, conn.exec_prepared returns the value of the block.
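 *
 * A minimal sketch, assuming a statement +stmt1+ was prepared beforehand
 * with conn.prepare( 'stmt1', 'SELECT $1::int + $2::int' ):
 *
 *   conn.exec_prepared( 'stmt1', [1, 2] ) do |res|
 *     res.getvalue( 0, 0 )   # => "3"
 *   end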
* * See also corresponding {libpq function}[https://www.postgresql.org/docs/current/libpq-exec.html#LIBPQ-PQEXECPREPARED]. */ static VALUE pgconn_async_exec_prepared(int argc, VALUE *argv, VALUE self) { VALUE rb_pgresult = Qnil; pgconn_discard_results( self ); pgconn_send_query_prepared( argc, argv, self ); rb_pgresult = pgconn_async_get_last_result( self ); if ( rb_block_given_p() ) { return rb_ensure( rb_yield, rb_pgresult, pg_result_clear, rb_pgresult ); } return rb_pgresult; } /* * call-seq: * conn.describe_portal( portal_name ) -> PG::Result * * Retrieve information about the portal _portal_name_. * * See also corresponding {libpq function}[https://www.postgresql.org/docs/current/libpq-exec.html#LIBPQ-PQDESCRIBEPORTAL]. */ static VALUE pgconn_async_describe_portal(VALUE self, VALUE portal) { VALUE rb_pgresult = Qnil; pgconn_discard_results( self ); pgconn_send_describe_portal( self, portal ); rb_pgresult = pgconn_async_get_last_result( self ); if ( rb_block_given_p() ) { return rb_ensure( rb_yield, rb_pgresult, pg_result_clear, rb_pgresult ); } return rb_pgresult; } /* * call-seq: * conn.describe_prepared( statement_name ) -> PG::Result * * Retrieve information about the prepared statement _statement_name_. * * See also corresponding {libpq function}[https://www.postgresql.org/docs/current/libpq-exec.html#LIBPQ-PQDESCRIBEPREPARED]. */ static VALUE pgconn_async_describe_prepared(VALUE self, VALUE stmt_name) { VALUE rb_pgresult = Qnil; pgconn_discard_results( self ); pgconn_send_describe_prepared( self, stmt_name ); rb_pgresult = pgconn_async_get_last_result( self ); if ( rb_block_given_p() ) { return rb_ensure( rb_yield, rb_pgresult, pg_result_clear, rb_pgresult ); } return rb_pgresult; } #ifdef HAVE_PQSSLATTRIBUTE /* * call-seq: * conn.ssl_in_use? -> Boolean * * Returns +true+ if the connection uses SSL/TLS, +false+ if not. * * Available since PostgreSQL-9.5 */ static VALUE pgconn_ssl_in_use(VALUE self) { return PQsslInUse(pg_get_pgconn(self)) ? 
Qtrue : Qfalse; } /* * call-seq: * conn.ssl_attribute(attribute_name) -> String * * Returns SSL-related information about the connection. * * The list of available attributes varies depending on the SSL library being used, * and the type of connection. If an attribute is not available, returns nil. * * The following attributes are commonly available: * * [+library+] * Name of the SSL implementation in use. (Currently, only "OpenSSL" is implemented) * [+protocol+] * SSL/TLS version in use. Common values are "SSLv2", "SSLv3", "TLSv1", "TLSv1.1" and "TLSv1.2", but an implementation may return other strings if some other protocol is used. * [+key_bits+] * Number of key bits used by the encryption algorithm. * [+cipher+] * A short name of the ciphersuite used, e.g. "DHE-RSA-DES-CBC3-SHA". The names are specific to each SSL implementation. * [+compression+] * If SSL compression is in use, returns the name of the compression algorithm, or "on" if compression is used but the algorithm is not known. If compression is not in use, returns "off". * * * See also #ssl_attribute_names and the {corresponding libpq function}[https://www.postgresql.org/docs/current/libpq-status.html#LIBPQ-PQSSLATTRIBUTE]. * * Available since PostgreSQL-9.5 */ static VALUE pgconn_ssl_attribute(VALUE self, VALUE attribute_name) { const char *p_attr; p_attr = PQsslAttribute(pg_get_pgconn(self), StringValueCStr(attribute_name)); return p_attr ? rb_str_new_cstr(p_attr) : Qnil; } /* * call-seq: * conn.ssl_attribute_names -> Array * * Return an array of SSL attribute names available. 
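 *
 * For example (the available names and values depend on the SSL library in use):
 *
 *   conn.ssl_attribute_names.each do |name|
 *     puts "#{name}: #{conn.ssl_attribute(name)}"
 *   end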
* * See also #ssl_attribute * * Available since PostgreSQL-9.5 */ static VALUE pgconn_ssl_attribute_names(VALUE self) { int i; const char * const * p_list = PQsslAttributeNames(pg_get_pgconn(self)); VALUE ary = rb_ary_new(); for ( i = 0; p_list[i]; i++ ) { rb_ary_push( ary, rb_str_new_cstr( p_list[i] )); } return ary; } #endif #ifdef HAVE_PQENTERPIPELINEMODE /* * call-seq: * conn.pipeline_status -> Integer * * Returns the current pipeline mode status of the libpq connection. * * PQpipelineStatus can return one of the following values: * * * PQ_PIPELINE_ON - The libpq connection is in pipeline mode. * * PQ_PIPELINE_OFF - The libpq connection is not in pipeline mode. * * PQ_PIPELINE_ABORTED - The libpq connection is in pipeline mode and an error occurred while processing the current pipeline. * The aborted flag is cleared when PQgetResult returns a result of type PGRES_PIPELINE_SYNC. * * Available since PostgreSQL-14 */ static VALUE pgconn_pipeline_status(VALUE self) { int res = PQpipelineStatus(pg_get_pgconn(self)); return INT2FIX(res); } /* * call-seq: * conn.enter_pipeline_mode -> nil * * Causes a connection to enter pipeline mode if it is currently idle or already in pipeline mode. * * Raises PG::Error and has no effect if the connection is not currently idle, i.e., it has a result ready, or it is waiting for more input from the server, etc. * This function does not actually send anything to the server, it just changes the libpq connection state. * * Available since PostgreSQL-14 */ static VALUE pgconn_enter_pipeline_mode(VALUE self) { PGconn *conn = pg_get_pgconn(self); int res = PQenterPipelineMode(conn); if( res != 1 ) pg_raise_conn_error( rb_ePGerror, self, "%s", PQerrorMessage(conn)); return Qnil; } /* * call-seq: * conn.exit_pipeline_mode -> nil * * Causes a connection to exit pipeline mode if it is currently in pipeline mode with an empty queue and no pending results. * * Takes no action if not in pipeline mode. 
* Raises PG::Error if the current statement isn't finished processing, or PQgetResult has not been called to collect results from all previously sent queries. * * Available since PostgreSQL-14 */ static VALUE pgconn_exit_pipeline_mode(VALUE self) { PGconn *conn = pg_get_pgconn(self); int res = PQexitPipelineMode(conn); if( res != 1 ) pg_raise_conn_error( rb_ePGerror, self, "%s", PQerrorMessage(conn)); return Qnil; } /* * call-seq: * conn.pipeline_sync -> nil * * Marks a synchronization point in a pipeline by sending a sync message and flushing the send buffer. * This serves as the delimiter of an implicit transaction and an error recovery point; see Section 34.5.1.3 of the PostgreSQL documentation. * * Raises PG::Error if the connection is not in pipeline mode or sending a sync message failed. * * Available since PostgreSQL-14 */ static VALUE pgconn_pipeline_sync(VALUE self) { PGconn *conn = pg_get_pgconn(self); int res = PQpipelineSync(conn); if( res != 1 ) pg_raise_conn_error( rb_ePGerror, self, "%s", PQerrorMessage(conn)); return Qnil; } /* * call-seq: * conn.send_flush_request -> nil * * Sends a request for the server to flush its output buffer. * * The server flushes its output buffer automatically as a result of Connection#pipeline_sync being called, or on any request when not in pipeline mode. * This function is useful to cause the server to flush its output buffer in pipeline mode without establishing a synchronization point. * Note that the request is not itself flushed to the server automatically; use Connection#flush if necessary.
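 *
 * A minimal pipeline sketch (requires PostgreSQL-14 or newer; error handling omitted):
 *
 *   conn.enter_pipeline_mode
 *   conn.send_query_params( 'SELECT $1::int', [1] )
 *   conn.send_flush_request
 *   conn.flush                 # push the flush request to the server
 *   conn.pipeline_sync
 *   res = conn.get_result      # result of the query
 *   conn.get_result            # nil terminates this query's results
 *   conn.get_result            # the PGRES_PIPELINE_SYNC result
 *   conn.exit_pipeline_mode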
* * Available since PostgreSQL-14 */ static VALUE pgconn_send_flush_request(VALUE self) { PGconn *conn = pg_get_pgconn(self); int res = PQsendFlushRequest(conn); if( res != 1 ) pg_raise_conn_error( rb_ePGerror, self, "%s", PQerrorMessage(conn)); return Qnil; } #endif /************************************************************************** * LARGE OBJECT SUPPORT **************************************************************************/ #define BLOCKING_BEGIN(conn) do { \ int old_nonblocking = PQisnonblocking(conn); \ PQsetnonblocking(conn, 0); #define BLOCKING_END(th) \ PQsetnonblocking(conn, old_nonblocking); \ } while(0); /* * call-seq: * conn.lo_creat( [mode] ) -> Integer * * Creates a large object with mode _mode_. Returns a large object Oid. * On failure, it raises PG::Error. */ static VALUE pgconn_locreat(int argc, VALUE *argv, VALUE self) { Oid lo_oid; int mode; VALUE nmode; PGconn *conn = pg_get_pgconn(self); if (rb_scan_args(argc, argv, "01", &nmode) == 0) mode = INV_READ; else mode = NUM2INT(nmode); BLOCKING_BEGIN(conn) lo_oid = lo_creat(conn, mode); BLOCKING_END(conn) if (lo_oid == 0) pg_raise_conn_error( rb_ePGerror, self, "lo_creat failed"); return UINT2NUM(lo_oid); } /* * call-seq: * conn.lo_create( oid ) -> Integer * * Creates a large object with oid _oid_. Returns the large object Oid. * On failure, it raises PG::Error. */ static VALUE pgconn_locreate(VALUE self, VALUE in_lo_oid) { Oid ret, lo_oid; PGconn *conn = pg_get_pgconn(self); lo_oid = NUM2UINT(in_lo_oid); ret = lo_create(conn, lo_oid); if (ret == InvalidOid) pg_raise_conn_error( rb_ePGerror, self, "lo_create failed"); return UINT2NUM(ret); } /* * call-seq: * conn.lo_import(file) -> Integer * * Import a file to a large object. Returns a large object Oid. * * On failure, it raises a PG::Error. 
*/ static VALUE pgconn_loimport(VALUE self, VALUE filename) { Oid lo_oid; PGconn *conn = pg_get_pgconn(self); Check_Type(filename, T_STRING); BLOCKING_BEGIN(conn) lo_oid = lo_import(conn, StringValueCStr(filename)); BLOCKING_END(conn) if (lo_oid == 0) { pg_raise_conn_error( rb_ePGerror, self, "%s", PQerrorMessage(conn)); } return UINT2NUM(lo_oid); } /* * call-seq: * conn.lo_export( oid, file ) -> nil * * Saves a large object of _oid_ to a _file_. */ static VALUE pgconn_loexport(VALUE self, VALUE lo_oid, VALUE filename) { PGconn *conn = pg_get_pgconn(self); Oid oid; int ret; Check_Type(filename, T_STRING); oid = NUM2UINT(lo_oid); BLOCKING_BEGIN(conn) ret = lo_export(conn, oid, StringValueCStr(filename)); BLOCKING_END(conn) if (ret < 0) { pg_raise_conn_error( rb_ePGerror, self, "%s", PQerrorMessage(conn)); } return Qnil; } /* * call-seq: * conn.lo_open( oid, [mode] ) -> Integer * * Open a large object of _oid_. Returns a large object descriptor * instance on success. The _mode_ argument specifies the mode for * the opened large object, which is either +INV_READ+ or +INV_WRITE+. * * If _mode_ is omitted, the default is +INV_READ+. */ static VALUE pgconn_loopen(int argc, VALUE *argv, VALUE self) { Oid lo_oid; int fd, mode; VALUE nmode, selfid; PGconn *conn = pg_get_pgconn(self); rb_scan_args(argc, argv, "11", &selfid, &nmode); lo_oid = NUM2UINT(selfid); if(NIL_P(nmode)) mode = INV_READ; else mode = NUM2INT(nmode); BLOCKING_BEGIN(conn) fd = lo_open(conn, lo_oid, mode); BLOCKING_END(conn) if(fd < 0) { pg_raise_conn_error( rb_ePGerror, self, "can't open large object: %s", PQerrorMessage(conn)); } return INT2FIX(fd); } /* * call-seq: * conn.lo_write( lo_desc, buffer ) -> Integer * * Writes the string _buffer_ to the large object _lo_desc_. * Returns the number of bytes written.
*/ static VALUE pgconn_lowrite(VALUE self, VALUE in_lo_desc, VALUE buffer) { int n; PGconn *conn = pg_get_pgconn(self); int fd = NUM2INT(in_lo_desc); Check_Type(buffer, T_STRING); if( RSTRING_LEN(buffer) < 0) { pg_raise_conn_error( rb_ePGerror, self, "write buffer zero string"); } BLOCKING_BEGIN(conn) n = lo_write(conn, fd, StringValuePtr(buffer), RSTRING_LEN(buffer)); BLOCKING_END(conn) if(n < 0) { pg_raise_conn_error( rb_ePGerror, self, "lo_write failed: %s", PQerrorMessage(conn)); } return INT2FIX(n); } /* * call-seq: * conn.lo_read( lo_desc, len ) -> String * * Attempts to read _len_ bytes from large object _lo_desc_, * returns resulting data. */ static VALUE pgconn_loread(VALUE self, VALUE in_lo_desc, VALUE in_len) { int ret; PGconn *conn = pg_get_pgconn(self); int len = NUM2INT(in_len); int lo_desc = NUM2INT(in_lo_desc); VALUE str; char *buffer; if (len < 0) pg_raise_conn_error( rb_ePGerror, self, "negative length %d given", len); buffer = ALLOC_N(char, len); BLOCKING_BEGIN(conn) ret = lo_read(conn, lo_desc, buffer, len); BLOCKING_END(conn) if(ret < 0) pg_raise_conn_error( rb_ePGerror, self, "lo_read failed"); if(ret == 0) { xfree(buffer); return Qnil; } str = rb_str_new(buffer, ret); xfree(buffer); return str; } /* * call-seq: * conn.lo_lseek( lo_desc, offset, whence ) -> Integer * * Move the large object pointer _lo_desc_ to offset _offset_. * Valid values for _whence_ are +SEEK_SET+, +SEEK_CUR+, and +SEEK_END+. * (Or 0, 1, or 2.) */ static VALUE pgconn_lolseek(VALUE self, VALUE in_lo_desc, VALUE offset, VALUE whence) { PGconn *conn = pg_get_pgconn(self); int lo_desc = NUM2INT(in_lo_desc); int ret; BLOCKING_BEGIN(conn) ret = lo_lseek(conn, lo_desc, NUM2INT(offset), NUM2INT(whence)); BLOCKING_END(conn) if(ret < 0) { pg_raise_conn_error( rb_ePGerror, self, "lo_lseek failed"); } return INT2FIX(ret); } /* * call-seq: * conn.lo_tell( lo_desc ) -> Integer * * Returns the current position of the large object _lo_desc_. 
*/ static VALUE pgconn_lotell(VALUE self, VALUE in_lo_desc) { int position; PGconn *conn = pg_get_pgconn(self); int lo_desc = NUM2INT(in_lo_desc); BLOCKING_BEGIN(conn) position = lo_tell(conn, lo_desc); BLOCKING_END(conn) if(position < 0) pg_raise_conn_error( rb_ePGerror, self, "lo_tell failed"); return INT2FIX(position); } /* * call-seq: * conn.lo_truncate( lo_desc, len ) -> nil * * Truncates the large object _lo_desc_ to size _len_. */ static VALUE pgconn_lotruncate(VALUE self, VALUE in_lo_desc, VALUE in_len) { PGconn *conn = pg_get_pgconn(self); int lo_desc = NUM2INT(in_lo_desc); size_t len = NUM2INT(in_len); int ret; BLOCKING_BEGIN(conn) ret = lo_truncate(conn,lo_desc,len); BLOCKING_END(conn) if(ret < 0) pg_raise_conn_error( rb_ePGerror, self, "lo_truncate failed"); return Qnil; } /* * call-seq: * conn.lo_close( lo_desc ) -> nil * * Closes the postgres large object of _lo_desc_. */ static VALUE pgconn_loclose(VALUE self, VALUE in_lo_desc) { PGconn *conn = pg_get_pgconn(self); int lo_desc = NUM2INT(in_lo_desc); int ret; BLOCKING_BEGIN(conn) ret = lo_close(conn,lo_desc); BLOCKING_END(conn) if(ret < 0) pg_raise_conn_error( rb_ePGerror, self, "lo_close failed"); return Qnil; } /* * call-seq: * conn.lo_unlink( oid ) -> nil * * Unlinks (deletes) the postgres large object of _oid_. 
*/ static VALUE pgconn_lounlink(VALUE self, VALUE in_oid) { PGconn *conn = pg_get_pgconn(self); Oid oid = NUM2UINT(in_oid); int ret; BLOCKING_BEGIN(conn) ret = lo_unlink(conn,oid); BLOCKING_END(conn) if(ret < 0) pg_raise_conn_error( rb_ePGerror, self, "lo_unlink failed"); return Qnil; } static void pgconn_set_internal_encoding_index( VALUE self ) { int enc_idx; t_pg_connection *this = pg_get_connection_safe( self ); rb_encoding *enc = pg_conn_enc_get( this->pgconn ); enc_idx = rb_enc_to_index(enc); if( enc_idx >= (1<<(PG_ENC_IDX_BITS-1)) ) rb_raise(rb_eArgError, "unsupported encoding index %d", enc_idx); this->enc_idx = enc_idx; } /* * call-seq: * conn.internal_encoding -> Encoding * * defined in Ruby 1.9 or later. * * Returns: * * an Encoding - client_encoding of the connection as a Ruby Encoding object. * * nil - the client_encoding is 'SQL_ASCII' */ static VALUE pgconn_internal_encoding(VALUE self) { PGconn *conn = pg_get_pgconn( self ); rb_encoding *enc = pg_conn_enc_get( conn ); if ( enc ) { return rb_enc_from_encoding( enc ); } else { return Qnil; } } static VALUE pgconn_external_encoding(VALUE self); /* * call-seq: * conn.internal_encoding = value * * A wrapper of #set_client_encoding. * defined in Ruby 1.9 or later. * * +value+ can be one of: * * an Encoding * * a String - a name of Encoding * * +nil+ - sets the client_encoding to SQL_ASCII. 
*/ static VALUE pgconn_internal_encoding_set(VALUE self, VALUE enc) { rb_check_frozen(self); if (NIL_P(enc)) { pgconn_sync_set_client_encoding( self, rb_usascii_str_new_cstr("SQL_ASCII") ); return enc; } else if ( TYPE(enc) == T_STRING && strcasecmp("JOHAB", StringValueCStr(enc)) == 0 ) { pgconn_sync_set_client_encoding(self, rb_usascii_str_new_cstr("JOHAB")); return enc; } else { rb_encoding *rbenc = rb_to_encoding( enc ); const char *name = pg_get_rb_encoding_as_pg_encoding( rbenc ); if ( gvl_PQsetClientEncoding(pg_get_pgconn( self ), name) == -1 ) { VALUE server_encoding = pgconn_external_encoding( self ); rb_raise( rb_eEncCompatError, "incompatible character encodings: %s and %s", rb_enc_name(rb_to_encoding(server_encoding)), name ); } pgconn_set_internal_encoding_index( self ); return enc; } } /* * call-seq: * conn.external_encoding() -> Encoding * * Return the +server_encoding+ of the connected database as a Ruby Encoding object. * The SQL_ASCII encoding is mapped to ASCII_8BIT. */ static VALUE pgconn_external_encoding(VALUE self) { t_pg_connection *this = pg_get_connection_safe( self ); rb_encoding *enc = NULL; const char *pg_encname = NULL; pg_encname = PQparameterStatus( this->pgconn, "server_encoding" ); enc = pg_get_pg_encname_as_rb_encoding( pg_encname ); return rb_enc_from_encoding( enc ); } /* * call-seq: * conn.set_client_encoding( encoding ) * * Sets the client encoding to the _encoding_ String.
*/ static VALUE pgconn_async_set_client_encoding(VALUE self, VALUE encname) { VALUE query_format, query; rb_check_frozen(self); Check_Type(encname, T_STRING); query_format = rb_str_new_cstr("set client_encoding to '%s'"); query = rb_funcall(query_format, rb_intern("%"), 1, encname); pgconn_async_exec(1, &query, self); pgconn_set_internal_encoding_index( self ); return Qnil; } static VALUE pgconn_set_client_encoding_async1( VALUE args ) { VALUE self = ((VALUE*)args)[0]; VALUE encname = ((VALUE*)args)[1]; pgconn_async_set_client_encoding(self, encname); return 0; } static VALUE pgconn_set_client_encoding_async2( VALUE arg, VALUE ex ) { UNUSED(arg); UNUSED(ex); return 1; } static VALUE pgconn_set_client_encoding_async( VALUE self, VALUE encname ) { VALUE args[] = { self, encname }; return rb_rescue(pgconn_set_client_encoding_async1, (VALUE)&args, pgconn_set_client_encoding_async2, Qnil); } /* * call-seq: * conn.set_default_encoding() -> Encoding * * If Ruby has its Encoding.default_internal set, set PostgreSQL's client_encoding * to match. Returns the new Encoding, or +nil+ if the default internal encoding * wasn't set. */ static VALUE pgconn_set_default_encoding( VALUE self ) { PGconn *conn = pg_get_pgconn( self ); rb_encoding *rb_enc; rb_check_frozen(self); if (( rb_enc = rb_default_internal_encoding() )) { rb_encoding * conn_encoding = pg_conn_enc_get( conn ); /* Don't set the server encoding, if it's unnecessary. * This is important for connection proxies, which disallow configuration settings.
*/ if ( conn_encoding != rb_enc ) { const char *encname = pg_get_rb_encoding_as_pg_encoding( rb_enc ); if ( pgconn_set_client_encoding_async(self, rb_str_new_cstr(encname)) != 0 ) rb_warning( "Failed to set the default_internal encoding to %s: '%s'", encname, PQerrorMessage(conn) ); } pgconn_set_internal_encoding_index( self ); return rb_enc_from_encoding( rb_enc ); } else { pgconn_set_internal_encoding_index( self ); return Qnil; } } /* * call-seq: * conn.type_map_for_queries = typemap * * Set the default TypeMap that is used for type casts of query bind parameters. * * +typemap+ must be a kind of PG::TypeMap . * */ static VALUE pgconn_type_map_for_queries_set(VALUE self, VALUE typemap) { t_pg_connection *this = pg_get_connection( self ); t_typemap *tm; UNUSED(tm); rb_check_frozen(self); /* Check type of method param */ TypedData_Get_Struct(typemap, t_typemap, &pg_typemap_type, tm); RB_OBJ_WRITE(self, &this->type_map_for_queries, typemap); return typemap; } /* * call-seq: * conn.type_map_for_queries -> TypeMap * * Returns the default TypeMap that is currently set for type casts of query * bind parameters. * */ static VALUE pgconn_type_map_for_queries_get(VALUE self) { t_pg_connection *this = pg_get_connection( self ); return this->type_map_for_queries; } /* * call-seq: * conn.type_map_for_results = typemap * * Set the default TypeMap that is used for type casts of result values. * * +typemap+ must be a kind of PG::TypeMap . * */ static VALUE pgconn_type_map_for_results_set(VALUE self, VALUE typemap) { t_pg_connection *this = pg_get_connection( self ); t_typemap *tm; UNUSED(tm); rb_check_frozen(self); TypedData_Get_Struct(typemap, t_typemap, &pg_typemap_type, tm); RB_OBJ_WRITE(self, &this->type_map_for_results, typemap); return typemap; } /* * call-seq: * conn.type_map_for_results -> TypeMap * * Returns the default TypeMap that is currently set for type casts of result values.
* */ static VALUE pgconn_type_map_for_results_get(VALUE self) { t_pg_connection *this = pg_get_connection( self ); return this->type_map_for_results; } /* * call-seq: * conn.encoder_for_put_copy_data = encoder * * Set the default coder that is used for type casting of parameters * to #put_copy_data . * * +encoder+ can be: * * a kind of PG::Coder * * +nil+ - disable type encoding, data must be a String. * */ static VALUE pgconn_encoder_for_put_copy_data_set(VALUE self, VALUE encoder) { t_pg_connection *this = pg_get_connection( self ); rb_check_frozen(self); if( encoder != Qnil ){ t_pg_coder *co; UNUSED(co); /* Check argument type */ TypedData_Get_Struct(encoder, t_pg_coder, &pg_coder_type, co); } RB_OBJ_WRITE(self, &this->encoder_for_put_copy_data, encoder); return encoder; } /* * call-seq: * conn.encoder_for_put_copy_data -> PG::Coder * * Returns the default coder object that is currently set for type casting of parameters * to #put_copy_data . * * Returns either: * * a kind of PG::Coder * * +nil+ - type encoding is disabled, data must be a String. * */ static VALUE pgconn_encoder_for_put_copy_data_get(VALUE self) { t_pg_connection *this = pg_get_connection( self ); return this->encoder_for_put_copy_data; } /* * call-seq: * conn.decoder_for_get_copy_data = decoder * * Set the default coder that is used for type casting of received data * by #get_copy_data . * * +decoder+ can be: * * a kind of PG::Coder * * +nil+ - disable type decoding, returned data will be a String.
* */ static VALUE pgconn_decoder_for_get_copy_data_set(VALUE self, VALUE decoder) { t_pg_connection *this = pg_get_connection( self ); rb_check_frozen(self); if( decoder != Qnil ){ t_pg_coder *co; UNUSED(co); /* Check argument type */ TypedData_Get_Struct(decoder, t_pg_coder, &pg_coder_type, co); } RB_OBJ_WRITE(self, &this->decoder_for_get_copy_data, decoder); return decoder; } /* * call-seq: * conn.decoder_for_get_copy_data -> PG::Coder * * Returns the default coder object that is currently set for type casting of received * data by #get_copy_data . * * Returns either: * * a kind of PG::Coder * * +nil+ - type decoding is disabled, returned data will be a String. * */ static VALUE pgconn_decoder_for_get_copy_data_get(VALUE self) { t_pg_connection *this = pg_get_connection( self ); return this->decoder_for_get_copy_data; } /* * call-seq: * conn.field_name_type = Symbol * * Set default type of field names of results retrieved by this connection. * It can be set to one of: * * +:string+ to use String based field names * * +:symbol+ to use Symbol based field names * * The default is +:string+ . * * Setting the type of field names affects only future results. * * See further description at PG::Result#field_name_type= * */ static VALUE pgconn_field_name_type_set(VALUE self, VALUE sym) { t_pg_connection *this = pg_get_connection( self ); rb_check_frozen(self); this->flags &= ~PG_RESULT_FIELD_NAMES_MASK; if( sym == sym_symbol ) this->flags |= PG_RESULT_FIELD_NAMES_SYMBOL; else if ( sym == sym_static_symbol ) this->flags |= PG_RESULT_FIELD_NAMES_STATIC_SYMBOL; else if ( sym == sym_string ); else rb_raise(rb_eArgError, "invalid argument %+"PRIsVALUE, sym); return sym; } /* * call-seq: * conn.field_name_type -> Symbol * * Get type of field names.
* * See description at #field_name_type= */ static VALUE pgconn_field_name_type_get(VALUE self) { t_pg_connection *this = pg_get_connection( self ); if( this->flags & PG_RESULT_FIELD_NAMES_SYMBOL ){ return sym_symbol; } else if( this->flags & PG_RESULT_FIELD_NAMES_STATIC_SYMBOL ){ return sym_static_symbol; } else { return sym_string; } } /* * Document-class: PG::Connection */ void init_pg_connection(void) { s_id_encode = rb_intern("encode"); s_id_autoclose_set = rb_intern("autoclose="); sym_type = ID2SYM(rb_intern("type")); sym_format = ID2SYM(rb_intern("format")); sym_value = ID2SYM(rb_intern("value")); sym_string = ID2SYM(rb_intern("string")); sym_symbol = ID2SYM(rb_intern("symbol")); sym_static_symbol = ID2SYM(rb_intern("static_symbol")); rb_cPGconn = rb_define_class_under( rb_mPG, "Connection", rb_cObject ); /* Help rdoc to know the Constants module */ /* rb_mPGconstants = rb_define_module_under( rb_mPG, "Constants" ); */ rb_include_module(rb_cPGconn, rb_mPGconstants); /****** PG::Connection CLASS METHODS ******/ rb_define_alloc_func( rb_cPGconn, pgconn_s_allocate ); rb_define_singleton_method(rb_cPGconn, "escape_string", pgconn_s_escape, 1); SINGLETON_ALIAS(rb_cPGconn, "escape", "escape_string"); rb_define_singleton_method(rb_cPGconn, "escape_bytea", pgconn_s_escape_bytea, 1); rb_define_singleton_method(rb_cPGconn, "unescape_bytea", pgconn_s_unescape_bytea, 1); rb_define_singleton_method(rb_cPGconn, "encrypt_password", pgconn_s_encrypt_password, 2); rb_define_singleton_method(rb_cPGconn, "quote_ident", pgconn_s_quote_ident, 1); rb_define_singleton_method(rb_cPGconn, "connect_start", pgconn_s_connect_start, -1); rb_define_singleton_method(rb_cPGconn, "conndefaults", pgconn_s_conndefaults, 0); rb_define_singleton_method(rb_cPGconn, "conninfo_parse", pgconn_s_conninfo_parse, 1); rb_define_singleton_method(rb_cPGconn, "sync_ping", pgconn_s_sync_ping, -1); rb_define_singleton_method(rb_cPGconn, "sync_connect", pgconn_s_sync_connect, -1); /****** PG::Connection
INSTANCE METHODS: Connection Control ******/ rb_define_method(rb_cPGconn, "connect_poll", pgconn_connect_poll, 0); rb_define_method(rb_cPGconn, "finish", pgconn_finish, 0); rb_define_method(rb_cPGconn, "finished?", pgconn_finished_p, 0); rb_define_method(rb_cPGconn, "sync_reset", pgconn_sync_reset, 0); rb_define_method(rb_cPGconn, "reset_start", pgconn_reset_start, 0); rb_define_method(rb_cPGconn, "reset_poll", pgconn_reset_poll, 0); rb_define_alias(rb_cPGconn, "close", "finish"); /****** PG::Connection INSTANCE METHODS: Connection Status ******/ rb_define_method(rb_cPGconn, "db", pgconn_db, 0); rb_define_method(rb_cPGconn, "user", pgconn_user, 0); rb_define_method(rb_cPGconn, "pass", pgconn_pass, 0); rb_define_method(rb_cPGconn, "host", pgconn_host, 0); #if defined(HAVE_PQRESULTMEMORYSIZE) rb_define_method(rb_cPGconn, "hostaddr", pgconn_hostaddr, 0); #endif rb_define_method(rb_cPGconn, "port", pgconn_port, 0); rb_define_method(rb_cPGconn, "tty", pgconn_tty, 0); rb_define_method(rb_cPGconn, "conninfo", pgconn_conninfo, 0); rb_define_method(rb_cPGconn, "options", pgconn_options, 0); rb_define_method(rb_cPGconn, "status", pgconn_status, 0); rb_define_method(rb_cPGconn, "transaction_status", pgconn_transaction_status, 0); rb_define_method(rb_cPGconn, "parameter_status", pgconn_parameter_status, 1); rb_define_method(rb_cPGconn, "protocol_version", pgconn_protocol_version, 0); rb_define_method(rb_cPGconn, "server_version", pgconn_server_version, 0); rb_define_method(rb_cPGconn, "error_message", pgconn_error_message, 0); rb_define_method(rb_cPGconn, "socket", pgconn_socket, 0); rb_define_method(rb_cPGconn, "socket_io", pgconn_socket_io, 0); rb_define_method(rb_cPGconn, "backend_pid", pgconn_backend_pid, 0); rb_define_method(rb_cPGconn, "backend_key", pgconn_backend_key, 0); rb_define_method(rb_cPGconn, "connection_needs_password", pgconn_connection_needs_password, 0); rb_define_method(rb_cPGconn, "connection_used_password", pgconn_connection_used_password, 0); /* 
rb_define_method(rb_cPGconn, "getssl", pgconn_getssl, 0); */ /****** PG::Connection INSTANCE METHODS: Command Execution ******/ rb_define_method(rb_cPGconn, "sync_exec", pgconn_sync_exec, -1); rb_define_method(rb_cPGconn, "sync_exec_params", pgconn_sync_exec_params, -1); rb_define_method(rb_cPGconn, "sync_prepare", pgconn_sync_prepare, -1); rb_define_method(rb_cPGconn, "sync_exec_prepared", pgconn_sync_exec_prepared, -1); rb_define_method(rb_cPGconn, "sync_describe_prepared", pgconn_sync_describe_prepared, 1); rb_define_method(rb_cPGconn, "sync_describe_portal", pgconn_sync_describe_portal, 1); rb_define_method(rb_cPGconn, "exec", pgconn_async_exec, -1); rb_define_method(rb_cPGconn, "exec_params", pgconn_async_exec_params, -1); rb_define_method(rb_cPGconn, "prepare", pgconn_async_prepare, -1); rb_define_method(rb_cPGconn, "exec_prepared", pgconn_async_exec_prepared, -1); rb_define_method(rb_cPGconn, "describe_prepared", pgconn_async_describe_prepared, 1); rb_define_method(rb_cPGconn, "describe_portal", pgconn_async_describe_portal, 1); rb_define_alias(rb_cPGconn, "async_exec", "exec"); rb_define_alias(rb_cPGconn, "async_query", "async_exec"); rb_define_alias(rb_cPGconn, "async_exec_params", "exec_params"); rb_define_alias(rb_cPGconn, "async_prepare", "prepare"); rb_define_alias(rb_cPGconn, "async_exec_prepared", "exec_prepared"); rb_define_alias(rb_cPGconn, "async_describe_prepared", "describe_prepared"); rb_define_alias(rb_cPGconn, "async_describe_portal", "describe_portal"); rb_define_method(rb_cPGconn, "make_empty_pgresult", pgconn_make_empty_pgresult, 1); rb_define_method(rb_cPGconn, "escape_string", pgconn_s_escape, 1); rb_define_alias(rb_cPGconn, "escape", "escape_string"); rb_define_method(rb_cPGconn, "escape_literal", pgconn_escape_literal, 1); rb_define_method(rb_cPGconn, "escape_identifier", pgconn_escape_identifier, 1); rb_define_method(rb_cPGconn, "escape_bytea", pgconn_s_escape_bytea, 1); rb_define_method(rb_cPGconn, "unescape_bytea", 
pgconn_s_unescape_bytea, 1); rb_define_method(rb_cPGconn, "set_single_row_mode", pgconn_set_single_row_mode, 0); /****** PG::Connection INSTANCE METHODS: Asynchronous Command Processing ******/ rb_define_method(rb_cPGconn, "send_query", pgconn_send_query, -1); rb_define_method(rb_cPGconn, "send_query_params", pgconn_send_query_params, -1); rb_define_method(rb_cPGconn, "send_prepare", pgconn_send_prepare, -1); rb_define_method(rb_cPGconn, "send_query_prepared", pgconn_send_query_prepared, -1); rb_define_method(rb_cPGconn, "send_describe_prepared", pgconn_send_describe_prepared, 1); rb_define_method(rb_cPGconn, "send_describe_portal", pgconn_send_describe_portal, 1); rb_define_method(rb_cPGconn, "sync_get_result", pgconn_sync_get_result, 0); rb_define_method(rb_cPGconn, "consume_input", pgconn_consume_input, 0); rb_define_method(rb_cPGconn, "is_busy", pgconn_is_busy, 0); rb_define_method(rb_cPGconn, "sync_setnonblocking", pgconn_sync_setnonblocking, 1); rb_define_method(rb_cPGconn, "sync_isnonblocking", pgconn_sync_isnonblocking, 0); rb_define_method(rb_cPGconn, "sync_flush", pgconn_sync_flush, 0); rb_define_method(rb_cPGconn, "flush", pgconn_async_flush, 0); rb_define_alias(rb_cPGconn, "async_flush", "flush"); rb_define_method(rb_cPGconn, "discard_results", pgconn_discard_results, 0); /****** PG::Connection INSTANCE METHODS: Cancelling Queries in Progress ******/ rb_define_method(rb_cPGconn, "sync_cancel", pgconn_sync_cancel, 0); /****** PG::Connection INSTANCE METHODS: NOTIFY ******/ rb_define_method(rb_cPGconn, "notifies", pgconn_notifies, 0); /****** PG::Connection INSTANCE METHODS: COPY ******/ rb_define_method(rb_cPGconn, "sync_put_copy_data", pgconn_sync_put_copy_data, -1); rb_define_method(rb_cPGconn, "sync_put_copy_end", pgconn_sync_put_copy_end, -1); rb_define_method(rb_cPGconn, "sync_get_copy_data", pgconn_sync_get_copy_data, -1); /****** PG::Connection INSTANCE METHODS: Control Functions ******/ rb_define_method(rb_cPGconn, "set_error_verbosity", 
pgconn_set_error_verbosity, 1); #ifdef HAVE_PQRESULTVERBOSEERRORMESSAGE rb_define_method(rb_cPGconn, "set_error_context_visibility", pgconn_set_error_context_visibility, 1 ); #endif rb_define_method(rb_cPGconn, "trace", pgconn_trace, 1); rb_define_method(rb_cPGconn, "untrace", pgconn_untrace, 0); /****** PG::Connection INSTANCE METHODS: Notice Processing ******/ rb_define_method(rb_cPGconn, "set_notice_receiver", pgconn_set_notice_receiver, 0); rb_define_method(rb_cPGconn, "set_notice_processor", pgconn_set_notice_processor, 0); /****** PG::Connection INSTANCE METHODS: Other ******/ rb_define_method(rb_cPGconn, "get_client_encoding", pgconn_get_client_encoding, 0); rb_define_method(rb_cPGconn, "sync_set_client_encoding", pgconn_sync_set_client_encoding, 1); rb_define_method(rb_cPGconn, "set_client_encoding", pgconn_async_set_client_encoding, 1); rb_define_alias(rb_cPGconn, "async_set_client_encoding", "set_client_encoding"); rb_define_alias(rb_cPGconn, "client_encoding=", "set_client_encoding"); rb_define_method(rb_cPGconn, "block", pgconn_block, -1); rb_define_private_method(rb_cPGconn, "flush_data=", pgconn_flush_data_set, 1); rb_define_method(rb_cPGconn, "wait_for_notify", pgconn_wait_for_notify, -1); rb_define_alias(rb_cPGconn, "notifies_wait", "wait_for_notify"); rb_define_method(rb_cPGconn, "quote_ident", pgconn_s_quote_ident, 1); rb_define_method(rb_cPGconn, "sync_get_last_result", pgconn_sync_get_last_result, 0); rb_define_method(rb_cPGconn, "get_last_result", pgconn_async_get_last_result, 0); rb_define_alias(rb_cPGconn, "async_get_last_result", "get_last_result"); #ifdef HAVE_PQENCRYPTPASSWORDCONN rb_define_method(rb_cPGconn, "sync_encrypt_password", pgconn_sync_encrypt_password, -1); #endif #ifdef HAVE_PQSSLATTRIBUTE rb_define_method(rb_cPGconn, "ssl_in_use?", pgconn_ssl_in_use, 0); rb_define_method(rb_cPGconn, "ssl_attribute", pgconn_ssl_attribute, 1); rb_define_method(rb_cPGconn, "ssl_attribute_names", pgconn_ssl_attribute_names, 0); #endif #ifdef 
HAVE_PQENTERPIPELINEMODE rb_define_method(rb_cPGconn, "pipeline_status", pgconn_pipeline_status, 0); rb_define_method(rb_cPGconn, "enter_pipeline_mode", pgconn_enter_pipeline_mode, 0); rb_define_method(rb_cPGconn, "exit_pipeline_mode", pgconn_exit_pipeline_mode, 0); rb_define_method(rb_cPGconn, "pipeline_sync", pgconn_pipeline_sync, 0); rb_define_method(rb_cPGconn, "send_flush_request", pgconn_send_flush_request, 0); #endif /****** PG::Connection INSTANCE METHODS: Large Object Support ******/ rb_define_method(rb_cPGconn, "lo_creat", pgconn_locreat, -1); rb_define_alias(rb_cPGconn, "locreat", "lo_creat"); rb_define_method(rb_cPGconn, "lo_create", pgconn_locreate, 1); rb_define_alias(rb_cPGconn, "locreate", "lo_create"); rb_define_method(rb_cPGconn, "lo_import", pgconn_loimport, 1); rb_define_alias(rb_cPGconn, "loimport", "lo_import"); rb_define_method(rb_cPGconn, "lo_export", pgconn_loexport, 2); rb_define_alias(rb_cPGconn, "loexport", "lo_export"); rb_define_method(rb_cPGconn, "lo_open", pgconn_loopen, -1); rb_define_alias(rb_cPGconn, "loopen", "lo_open"); rb_define_method(rb_cPGconn, "lo_write",pgconn_lowrite, 2); rb_define_alias(rb_cPGconn, "lowrite", "lo_write"); rb_define_method(rb_cPGconn, "lo_read",pgconn_loread, 2); rb_define_alias(rb_cPGconn, "loread", "lo_read"); rb_define_method(rb_cPGconn, "lo_lseek",pgconn_lolseek, 3); rb_define_alias(rb_cPGconn, "lolseek", "lo_lseek"); rb_define_alias(rb_cPGconn, "lo_seek", "lo_lseek"); rb_define_alias(rb_cPGconn, "loseek", "lo_lseek"); rb_define_method(rb_cPGconn, "lo_tell",pgconn_lotell, 1); rb_define_alias(rb_cPGconn, "lotell", "lo_tell"); rb_define_method(rb_cPGconn, "lo_truncate", pgconn_lotruncate, 2); rb_define_alias(rb_cPGconn, "lotruncate", "lo_truncate"); rb_define_method(rb_cPGconn, "lo_close",pgconn_loclose, 1); rb_define_alias(rb_cPGconn, "loclose", "lo_close"); rb_define_method(rb_cPGconn, "lo_unlink", pgconn_lounlink, 1); rb_define_alias(rb_cPGconn, "lounlink", "lo_unlink"); rb_define_method(rb_cPGconn, 
"internal_encoding", pgconn_internal_encoding, 0); rb_define_method(rb_cPGconn, "internal_encoding=", pgconn_internal_encoding_set, 1); rb_define_method(rb_cPGconn, "external_encoding", pgconn_external_encoding, 0); rb_define_method(rb_cPGconn, "set_default_encoding", pgconn_set_default_encoding, 0); rb_define_method(rb_cPGconn, "type_map_for_queries=", pgconn_type_map_for_queries_set, 1); rb_define_method(rb_cPGconn, "type_map_for_queries", pgconn_type_map_for_queries_get, 0); rb_define_method(rb_cPGconn, "type_map_for_results=", pgconn_type_map_for_results_set, 1); rb_define_method(rb_cPGconn, "type_map_for_results", pgconn_type_map_for_results_get, 0); rb_define_method(rb_cPGconn, "encoder_for_put_copy_data=", pgconn_encoder_for_put_copy_data_set, 1); rb_define_method(rb_cPGconn, "encoder_for_put_copy_data", pgconn_encoder_for_put_copy_data_get, 0); rb_define_method(rb_cPGconn, "decoder_for_get_copy_data=", pgconn_decoder_for_get_copy_data_set, 1); rb_define_method(rb_cPGconn, "decoder_for_get_copy_data", pgconn_decoder_for_get_copy_data_get, 0); rb_define_method(rb_cPGconn, "field_name_type=", pgconn_field_name_type_set, 1 ); rb_define_method(rb_cPGconn, "field_name_type", pgconn_field_name_type_get, 0 ); } pg-1.5.5/ext/pg_type_map_by_oid.c0000644000004100000410000002533414563476204016777 0ustar www-datawww-data/* * pg_type_map_by_oid.c - PG::TypeMapByOid class extension * $Id$ * */ #include "pg.h" static VALUE rb_cTypeMapByOid; static ID s_id_decode; typedef struct { t_typemap typemap; int max_rows_for_online_lookup; struct pg_tmbo_converter { VALUE oid_to_coder; struct pg_tmbo_oid_cache_entry { Oid oid; t_pg_coder *p_coder; } cache_row[0x100]; } format[2]; } t_tmbo; static VALUE pg_tmbo_s_allocate( VALUE klass ); /* * We use the OID's minor 8 Bits as index to a 256 entry cache. This avoids full ruby hash lookups * for each value in most cases. 
*/ #define CACHE_LOOKUP(this, form, oid) ( &this->format[(form)].cache_row[(oid) & 0xff] ) static t_pg_coder * pg_tmbo_lookup_oid(t_tmbo *this, int format, Oid oid) { t_pg_coder *conv; struct pg_tmbo_oid_cache_entry *p_ce; p_ce = CACHE_LOOKUP(this, format, oid); /* Has the entry the expected OID and is it a non empty entry? */ if( p_ce->oid == oid && (oid || p_ce->p_coder) ) { conv = p_ce->p_coder; } else { VALUE obj = rb_hash_lookup( this->format[format].oid_to_coder, UINT2NUM( oid )); /* obj must be nil or some kind of PG::Coder, this is checked at insertion */ conv = NIL_P(obj) ? NULL : RTYPEDDATA_DATA(obj); /* Write the retrieved coder to the cache */ p_ce->oid = oid; p_ce->p_coder = conv; } return conv; } /* Build a TypeMapByColumn that fits to the given result */ static VALUE pg_tmbo_build_type_map_for_result2( t_tmbo *this, PGresult *pgresult ) { t_tmbc *p_colmap; int i; VALUE colmap; int nfields = PQnfields( pgresult ); p_colmap = xmalloc(sizeof(t_tmbc) + sizeof(struct pg_tmbc_converter) * nfields); /* Set nfields to 0 at first, so that GC mark function doesn't access uninitialized memory. 
*/ p_colmap->nfields = 0; p_colmap->typemap.funcs = pg_tmbc_funcs; p_colmap->typemap.default_typemap = pg_typemap_all_strings; colmap = pg_tmbc_allocate(); RTYPEDDATA_DATA(colmap) = p_colmap; for(i=0; i<nfields; i++) { int format = PQfformat(pgresult, i); if( format < 0 || format > 1 ) rb_raise(rb_eArgError, "result field %d has unsupported format code %d", i+1, format); p_colmap->convs[i].cconv = pg_tmbo_lookup_oid( this, format, PQftype(pgresult, i) ); } p_colmap->nfields = nfields; return colmap; } static VALUE pg_tmbo_result_value(t_typemap *p_typemap, VALUE result, int tuple, int field) { int format; t_pg_coder *p_coder; t_pg_result *p_result = pgresult_get_this(result); t_tmbo *this = (t_tmbo*) p_typemap; t_typemap *default_tm; if (PQgetisnull(p_result->pgresult, tuple, field)) { return Qnil; } format = PQfformat( p_result->pgresult, field ); if( format < 0 || format > 1 ) rb_raise(rb_eArgError, "result field %d has unsupported format code %d", field+1, format); p_coder = pg_tmbo_lookup_oid( this, format, PQftype(p_result->pgresult, field) ); if( p_coder ){ char * val = PQgetvalue( p_result->pgresult, tuple, field ); int len = PQgetlength( p_result->pgresult, tuple, field ); t_pg_coder_dec_func dec_func = pg_coder_dec_func( p_coder, format ); return dec_func( p_coder, val, len, tuple, field, p_result->enc_idx ); } default_tm = RTYPEDDATA_DATA( this->typemap.default_typemap ); return default_tm->funcs.typecast_result_value( default_tm, result, tuple, field ); } static VALUE pg_tmbo_fit_to_result( VALUE self, VALUE result ) { t_tmbo *this = RTYPEDDATA_DATA( self ); PGresult *pgresult = pgresult_get( result ); /* Ensure that the default type map fits equally. */ t_typemap *default_tm = RTYPEDDATA_DATA( this->typemap.default_typemap ); VALUE sub_typemap = default_tm->funcs.fit_to_result( this->typemap.default_typemap, result ); if( PQntuples( pgresult ) <= this->max_rows_for_online_lookup ){ /* Do a hash lookup for each result value in pg_tmbc_result_value() */ /* Did the default type map return the same object?
*/ if( sub_typemap == this->typemap.default_typemap ){ return self; } else { /* The default type map built a new object, so we need to propagate it * and build a copy of this type map. */ VALUE new_typemap = pg_tmbo_s_allocate( rb_cTypeMapByOid ); t_tmbo *p_new_typemap = RTYPEDDATA_DATA(new_typemap); *p_new_typemap = *this; p_new_typemap->typemap.default_typemap = sub_typemap; return new_typemap; } }else{ /* Build a new TypeMapByColumn that fits to the given result and * uses a fast array lookup. */ VALUE new_typemap = pg_tmbo_build_type_map_for_result2( this, pgresult ); t_tmbo *p_new_typemap = RTYPEDDATA_DATA(new_typemap); p_new_typemap->typemap.default_typemap = sub_typemap; return new_typemap; } } static void pg_tmbo_mark( void *_this ) { t_tmbo *this = (t_tmbo *)_this; int i; pg_typemap_mark(&this->typemap); for( i=0; i<2; i++){ rb_gc_mark_movable(this->format[i].oid_to_coder); } } static size_t pg_tmbo_memsize( const void *_this ) { const t_tmbo *this = (const t_tmbo *)_this; return sizeof(*this); } static void pg_tmbo_compact( void *_this ) { t_tmbo *this = (t_tmbo *)_this; int i; pg_typemap_compact(&this->typemap); for( i=0; i<2; i++){ pg_gc_location(this->format[i].oid_to_coder); } } static const rb_data_type_t pg_tmbo_type = { "PG::TypeMapByOid", { pg_tmbo_mark, RUBY_TYPED_DEFAULT_FREE, pg_tmbo_memsize, pg_compact_callback(pg_tmbo_compact), }, &pg_typemap_type, 0, RUBY_TYPED_FREE_IMMEDIATELY | RUBY_TYPED_WB_PROTECTED | PG_RUBY_TYPED_FROZEN_SHAREABLE, }; static VALUE pg_tmbo_s_allocate( VALUE klass ) { t_tmbo *this; VALUE self; int i; self = TypedData_Make_Struct( klass, t_tmbo, &pg_tmbo_type, this ); this->typemap.funcs.fit_to_result = pg_tmbo_fit_to_result; this->typemap.funcs.fit_to_query = pg_typemap_fit_to_query; this->typemap.funcs.fit_to_copy_get = pg_typemap_fit_to_copy_get; this->typemap.funcs.typecast_result_value = pg_tmbo_result_value; this->typemap.funcs.typecast_query_param = pg_typemap_typecast_query_param; 
this->typemap.funcs.typecast_copy_get = pg_typemap_typecast_copy_get; RB_OBJ_WRITE(self, &this->typemap.default_typemap, pg_typemap_all_strings); this->max_rows_for_online_lookup = 10; for( i=0; i<2; i++){ RB_OBJ_WRITE(self, &this->format[i].oid_to_coder, rb_hash_new()); } return self; } /* * call-seq: * typemap.add_coder( coder ) * * Assigns a new PG::Coder object to the type map. The decoder * is registered for type casts based on it's PG::Coder#oid and * PG::Coder#format attributes. * * Later changes of the oid or format code within the coder object * will have no effect to the type map. * */ static VALUE pg_tmbo_add_coder( VALUE self, VALUE coder ) { VALUE hash; t_tmbo *this = RTYPEDDATA_DATA( self ); t_pg_coder *p_coder; struct pg_tmbo_oid_cache_entry *p_ce; rb_check_frozen(self); TypedData_Get_Struct(coder, t_pg_coder, &pg_coder_type, p_coder); if( p_coder->format < 0 || p_coder->format > 1 ) rb_raise(rb_eArgError, "invalid format code %d", p_coder->format); /* Update cache entry */ p_ce = CACHE_LOOKUP(this, p_coder->format, p_coder->oid); p_ce->oid = p_coder->oid; p_ce->p_coder = p_coder; /* Write coder into the hash of the given format */ hash = this->format[p_coder->format].oid_to_coder; rb_hash_aset( hash, UINT2NUM(p_coder->oid), coder); return self; } /* * call-seq: * typemap.rm_coder( format, oid ) * * Removes a PG::Coder object from the type map based on the given * oid and format codes. * * Returns the removed coder object. 
*/ static VALUE pg_tmbo_rm_coder( VALUE self, VALUE format, VALUE oid ) { VALUE hash; VALUE coder; t_tmbo *this = RTYPEDDATA_DATA( self ); int i_format = NUM2INT(format); struct pg_tmbo_oid_cache_entry *p_ce; rb_check_frozen(self); if( i_format < 0 || i_format > 1 ) rb_raise(rb_eArgError, "invalid format code %d", i_format); /* Mark the cache entry as empty */ p_ce = CACHE_LOOKUP(this, i_format, NUM2UINT(oid)); p_ce->oid = 0; p_ce->p_coder = NULL; hash = this->format[i_format].oid_to_coder; coder = rb_hash_delete( hash, oid ); return coder; } /* * call-seq: * typemap.coders -> Array * * Array of all assigned PG::Coder objects. */ static VALUE pg_tmbo_coders( VALUE self ) { t_tmbo *this = RTYPEDDATA_DATA( self ); return rb_ary_concat( rb_funcall(this->format[0].oid_to_coder, rb_intern("values"), 0), rb_funcall(this->format[1].oid_to_coder, rb_intern("values"), 0)); } /* * call-seq: * typemap.max_rows_for_online_lookup = number * * Threshold for doing Hash lookups versus creation of a dedicated PG::TypeMapByColumn. * The type map will do Hash lookups for each result value, if the number of rows * is below or equal +number+. * */ static VALUE pg_tmbo_max_rows_for_online_lookup_set( VALUE self, VALUE value ) { t_tmbo *this = RTYPEDDATA_DATA( self ); rb_check_frozen(self); this->max_rows_for_online_lookup = NUM2INT(value); return value; } /* * call-seq: * typemap.max_rows_for_online_lookup -> Integer */ static VALUE pg_tmbo_max_rows_for_online_lookup_get( VALUE self ) { t_tmbo *this = RTYPEDDATA_DATA( self ); return INT2NUM(this->max_rows_for_online_lookup); } /* * call-seq: * typemap.build_column_map( result ) * * This builds a PG::TypeMapByColumn that fits to the given PG::Result object * based on it's type OIDs and binary/text format. 
* */ static VALUE pg_tmbo_build_column_map( VALUE self, VALUE result ) { t_tmbo *this = RTYPEDDATA_DATA( self ); if ( !rb_obj_is_kind_of(result, rb_cPGresult) ) { rb_raise( rb_eTypeError, "wrong argument type %s (expected kind of PG::Result)", rb_obj_classname( result ) ); } return pg_tmbo_build_type_map_for_result2( this, pgresult_get(result) ); } void init_pg_type_map_by_oid(void) { s_id_decode = rb_intern("decode"); /* * Document-class: PG::TypeMapByOid < PG::TypeMap * * This type map casts values based on the type OID of the given column * in the result set. * * This type map is only suitable to cast values from PG::Result objects. * Therefore only decoders might be assigned by the #add_coder method. * * Fields with no match to any of the registered type OID / format combination * are forwarded to the #default_type_map . */ rb_cTypeMapByOid = rb_define_class_under( rb_mPG, "TypeMapByOid", rb_cTypeMap ); rb_define_alloc_func( rb_cTypeMapByOid, pg_tmbo_s_allocate ); rb_define_method( rb_cTypeMapByOid, "add_coder", pg_tmbo_add_coder, 1 ); rb_define_method( rb_cTypeMapByOid, "rm_coder", pg_tmbo_rm_coder, 2 ); rb_define_method( rb_cTypeMapByOid, "coders", pg_tmbo_coders, 0 ); rb_define_method( rb_cTypeMapByOid, "max_rows_for_online_lookup=", pg_tmbo_max_rows_for_online_lookup_set, 1 ); rb_define_method( rb_cTypeMapByOid, "max_rows_for_online_lookup", pg_tmbo_max_rows_for_online_lookup_get, 0 ); rb_define_method( rb_cTypeMapByOid, "build_column_map", pg_tmbo_build_column_map, 1 ); /* rb_mDefaultTypeMappable = rb_define_module_under( rb_cTypeMap, "DefaultTypeMappable"); */ rb_include_module( rb_cTypeMapByOid, rb_mDefaultTypeMappable ); } pg-1.5.5/ext/pg_tuple.c0000644000004100000410000003360414563476204014764 0ustar www-datawww-data#include "pg.h" /******************************************************************** * * Document-class: PG::Tuple * * The class to represent one query result tuple (row). * An instance of this class can be created by PG::Result#tuple . 
* * All field values of the tuple are retrieved on demand from the underlying PGresult object and converted to a Ruby object. * Subsequent access to the same field returns the same object, since they are cached when materialized. * Each PG::Tuple holds a reference to the related PG::Result object, but gets detached when all fields are materialized. * * Example: * require 'pg' * conn = PG.connect(:dbname => 'test') * res = conn.exec('VALUES(1,2), (3,4)') * t0 = res.tuple(0) # => #<PG::Tuple column1: "1", column2: "2"> * t1 = res.tuple(1) # => #<PG::Tuple column1: "3", column2: "4"> * t1[0] # => "3" * t1["column2"] # => "4" */ static VALUE rb_cPG_Tuple; typedef struct { /* PG::Result object this tuple was retrieved from. * Qnil when all fields are materialized. */ VALUE result; /* Store the typemap of the result. * It's not enough to reference the PG::TypeMap object through the result, * since it could be exchanged after the tuple has been created. */ VALUE typemap; /* Hash that maps field names to index into values[] * Shared between all instances retrieved from one PG::Result. */ VALUE field_map; /* Row number within the result set. */ int row_num; /* Number of fields in the result set. */ int num_fields; /* Materialized values. * And in case of dup column names, a field_names Array subsequently.
*/ VALUE values[0]; } t_pg_tuple; static inline VALUE * pg_tuple_get_field_names_ptr( t_pg_tuple *this ) { if( this->num_fields != (int)RHASH_SIZE(this->field_map) ){ return &this->values[this->num_fields]; } else { static VALUE f = Qfalse; return &f; } } static inline VALUE pg_tuple_get_field_names( t_pg_tuple *this ) { return *pg_tuple_get_field_names_ptr(this); } static void pg_tuple_gc_mark( void *_this ) { t_pg_tuple *this = (t_pg_tuple *)_this; int i; if( !this ) return; rb_gc_mark_movable( this->result ); rb_gc_mark_movable( this->typemap ); rb_gc_mark_movable( this->field_map ); for( i = 0; i < this->num_fields; i++ ){ rb_gc_mark_movable( this->values[i] ); } rb_gc_mark_movable( pg_tuple_get_field_names(this) ); } static void pg_tuple_gc_compact( void *_this ) { t_pg_tuple *this = (t_pg_tuple *)_this; int i; if( !this ) return; pg_gc_location( this->result ); pg_gc_location( this->typemap ); pg_gc_location( this->field_map ); for( i = 0; i < this->num_fields; i++ ){ pg_gc_location( this->values[i] ); } pg_gc_location( *pg_tuple_get_field_names_ptr(this) ); } static void pg_tuple_gc_free( void *_this ) { t_pg_tuple *this = (t_pg_tuple *)_this; if( !this ) return; xfree(this); } static size_t pg_tuple_memsize( const void *_this ) { const t_pg_tuple *this = (const t_pg_tuple *)_this; if( this==NULL ) return 0; return sizeof(*this) + sizeof(*this->values) * this->num_fields; } static const rb_data_type_t pg_tuple_type = { "PG::Tuple", { pg_tuple_gc_mark, pg_tuple_gc_free, pg_tuple_memsize, pg_compact_callback(pg_tuple_gc_compact), }, 0, 0, RUBY_TYPED_FREE_IMMEDIATELY | RUBY_TYPED_WB_PROTECTED | PG_RUBY_TYPED_FROZEN_SHAREABLE, }; /* * Document-method: allocate * * call-seq: * PG::VeryTuple.allocate -> obj */ static VALUE pg_tuple_s_allocate( VALUE klass ) { return TypedData_Wrap_Struct( klass, &pg_tuple_type, NULL ); } VALUE pg_tuple_new(VALUE result, int row_num) { t_pg_tuple *this; VALUE self = pg_tuple_s_allocate( rb_cPG_Tuple ); t_pg_result *p_result = 
pgresult_get_this(result); int num_fields = p_result->nfields; int i; VALUE field_map = p_result->field_map; int dup_names = num_fields != (int)RHASH_SIZE(field_map); this = (t_pg_tuple *)xmalloc( sizeof(*this) + sizeof(*this->values) * num_fields + sizeof(*this->values) * (dup_names ? 1 : 0)); RB_OBJ_WRITE(self, &this->result, result); RB_OBJ_WRITE(self, &this->typemap, p_result->typemap); RB_OBJ_WRITE(self, &this->field_map, field_map); this->row_num = row_num; this->num_fields = num_fields; for( i = 0; i < num_fields; i++ ){ this->values[i] = Qundef; } if( dup_names ){ /* Some of the column names are duplicated -> we need the keys as Array in addition. * Store it behind the values to save the space in the common case of no dups. */ VALUE keys_array = rb_obj_freeze(rb_ary_new4(num_fields, p_result->fnames)); RB_OBJ_WRITE(self, &this->values[num_fields], keys_array); } RTYPEDDATA_DATA(self) = this; return self; } static inline t_pg_tuple * pg_tuple_get_this( VALUE self ) { t_pg_tuple *this; TypedData_Get_Struct(self, t_pg_tuple, &pg_tuple_type, this); if (this == NULL) rb_raise(rb_eTypeError, "tuple is empty"); return this; } static VALUE pg_tuple_materialize_field(VALUE self, int col) { t_pg_tuple *this = RTYPEDDATA_DATA( self ); VALUE value = this->values[col]; if( value == Qundef ){ t_typemap *p_typemap = RTYPEDDATA_DATA( this->typemap ); pgresult_get(this->result); /* make sure we have a valid PGresult object */ value = p_typemap->funcs.typecast_result_value(p_typemap, this->result, this->row_num, col); RB_OBJ_WRITE(self, &this->values[col], value); } return value; } static void pg_tuple_detach(VALUE self) { t_pg_tuple *this = RTYPEDDATA_DATA( self ); RB_OBJ_WRITE(self, &this->result, Qnil); RB_OBJ_WRITE(self, &this->typemap, Qnil); this->row_num = -1; } static void pg_tuple_materialize(VALUE self) { t_pg_tuple *this = RTYPEDDATA_DATA( self ); int field_num; for(field_num = 0; field_num < this->num_fields; field_num++) { pg_tuple_materialize_field(self, 
field_num); } pg_tuple_detach(self); } /* * call-seq: * tup.fetch(key) → value * tup.fetch(key, default) → value * tup.fetch(key) { |key| block } → value * * Returns a field value by either column index or column name. * * An integer +key+ is interpreted as column index. * Negative values of index count from the end of the array. * * Depending on Result#field_name_type= a string or symbol +key+ is interpreted as column name. * * If the key can't be found, there are several options: * With no other arguments, it will raise a IndexError exception; * if default is given, then that will be returned; * if the optional code block is specified, then that will be run and its result returned. */ static VALUE pg_tuple_fetch(int argc, VALUE *argv, VALUE self) { VALUE key; long block_given; VALUE index; int field_num; t_pg_tuple *this = pg_tuple_get_this(self); rb_check_arity(argc, 1, 2); key = argv[0]; block_given = rb_block_given_p(); if (block_given && argc == 2) { rb_warn("block supersedes default value argument"); } switch(rb_type(key)){ case T_FIXNUM: case T_BIGNUM: field_num = NUM2INT(key); if ( field_num < 0 ) field_num = this->num_fields + field_num; if ( field_num < 0 || field_num >= this->num_fields ){ if (block_given) return rb_yield(key); if (argc == 1) rb_raise( rb_eIndexError, "Index %d is out of range", field_num ); return argv[1]; } break; default: index = rb_hash_aref(this->field_map, key); if (index == Qnil) { if (block_given) return rb_yield(key); if (argc == 1) rb_raise( rb_eKeyError, "column not found" ); return argv[1]; } field_num = NUM2INT(index); } return pg_tuple_materialize_field(self, field_num); } /* * call-seq: * tup[ key ] -> value * * Returns a field value by either column index or column name. * * An integer +key+ is interpreted as column index. * Negative values of index count from the end of the array. * * Depending on Result#field_name_type= a string or symbol +key+ is interpreted as column name. 
* * If the key can't be found, it returns +nil+ . */ static VALUE pg_tuple_aref(VALUE self, VALUE key) { VALUE index; int field_num; t_pg_tuple *this = pg_tuple_get_this(self); switch(rb_type(key)){ case T_FIXNUM: case T_BIGNUM: field_num = NUM2INT(key); if ( field_num < 0 ) field_num = this->num_fields + field_num; if ( field_num < 0 || field_num >= this->num_fields ) return Qnil; break; default: index = rb_hash_aref(this->field_map, key); if( index == Qnil ) return Qnil; field_num = NUM2INT(index); } return pg_tuple_materialize_field(self, field_num); } static VALUE pg_tuple_num_fields_for_enum(VALUE self, VALUE args, VALUE eobj) { t_pg_tuple *this = pg_tuple_get_this(self); return INT2NUM(this->num_fields); } static int pg_tuple_yield_key_value(VALUE key, VALUE index, VALUE self) { VALUE value = pg_tuple_materialize_field(self, NUM2INT(index)); rb_yield_values(2, key, value); return ST_CONTINUE; } /* * call-seq: * tup.each{ |key, value| ... } * * Invokes block for each field name and value in the tuple. */ static VALUE pg_tuple_each(VALUE self) { t_pg_tuple *this = pg_tuple_get_this(self); VALUE field_names; RETURN_SIZED_ENUMERATOR(self, 0, NULL, pg_tuple_num_fields_for_enum); field_names = pg_tuple_get_field_names(this); if( field_names == Qfalse ){ rb_hash_foreach(this->field_map, pg_tuple_yield_key_value, self); } else { int i; for( i = 0; i < this->num_fields; i++ ){ VALUE value = pg_tuple_materialize_field(self, i); rb_yield_values(2, RARRAY_AREF(field_names, i), value); } } pg_tuple_detach(self); return self; } /* * call-seq: * tup.each_value{ |value| ... } * * Invokes block for each field value in the tuple. 
*/ static VALUE pg_tuple_each_value(VALUE self) { t_pg_tuple *this = pg_tuple_get_this(self); int field_num; RETURN_SIZED_ENUMERATOR(self, 0, NULL, pg_tuple_num_fields_for_enum); for(field_num = 0; field_num < this->num_fields; field_num++) { VALUE value = pg_tuple_materialize_field(self, field_num); rb_yield(value); } pg_tuple_detach(self); return self; } /* * call-seq: * tup.values -> Array * * Returns the values of this tuple as Array. * +res.tuple(i).values+ is equal to +res.tuple_values(i)+ . */ static VALUE pg_tuple_values(VALUE self) { t_pg_tuple *this = pg_tuple_get_this(self); pg_tuple_materialize(self); return rb_ary_new4(this->num_fields, &this->values[0]); } static VALUE pg_tuple_field_map(VALUE self) { t_pg_tuple *this = pg_tuple_get_this(self); return this->field_map; } static VALUE pg_tuple_field_names(VALUE self) { t_pg_tuple *this = pg_tuple_get_this(self); return pg_tuple_get_field_names(this); } /* * call-seq: * tup.length → integer * * Returns number of fields of this tuple. */ static VALUE pg_tuple_length(VALUE self) { t_pg_tuple *this = pg_tuple_get_this(self); return INT2NUM(this->num_fields); } /* * call-seq: * tup.index(key) → integer * * Returns the field number which matches the given column name. 
*/ static VALUE pg_tuple_index(VALUE self, VALUE key) { t_pg_tuple *this = pg_tuple_get_this(self); return rb_hash_aref(this->field_map, key); } static VALUE pg_tuple_dump(VALUE self) { VALUE field_names; VALUE values; VALUE a; t_pg_tuple *this = pg_tuple_get_this(self); pg_tuple_materialize(self); field_names = pg_tuple_get_field_names(this); if( field_names == Qfalse ) field_names = rb_funcall(this->field_map, rb_intern("keys"), 0); values = rb_ary_new4(this->num_fields, &this->values[0]); a = rb_ary_new3(2, field_names, values); rb_copy_generic_ivar(a, self); return a; } static VALUE pg_tuple_load(VALUE self, VALUE a) { int num_fields; int i; t_pg_tuple *this; VALUE values; VALUE field_names; VALUE field_map; int dup_names; rb_check_frozen(self); TypedData_Get_Struct(self, t_pg_tuple, &pg_tuple_type, this); if (this) rb_raise(rb_eTypeError, "tuple is not empty"); Check_Type(a, T_ARRAY); if (RARRAY_LEN(a) != 2) rb_raise(rb_eTypeError, "expected an array of 2 elements"); field_names = RARRAY_AREF(a, 0); Check_Type(field_names, T_ARRAY); rb_obj_freeze(field_names); values = RARRAY_AREF(a, 1); Check_Type(values, T_ARRAY); num_fields = RARRAY_LENINT(values); if (RARRAY_LENINT(field_names) != num_fields) rb_raise(rb_eTypeError, "different number of fields and values"); field_map = rb_hash_new(); for( i = 0; i < num_fields; i++ ){ rb_hash_aset(field_map, RARRAY_AREF(field_names, i), INT2FIX(i)); } rb_obj_freeze(field_map); dup_names = num_fields != (int)RHASH_SIZE(field_map); this = (t_pg_tuple *)xmalloc( sizeof(*this) + sizeof(*this->values) * num_fields + sizeof(*this->values) * (dup_names ? 
1 : 0)); RB_OBJ_WRITE(self, &this->result, Qnil); RB_OBJ_WRITE(self, &this->typemap, Qnil); this->row_num = -1; this->num_fields = num_fields; RB_OBJ_WRITE(self, &this->field_map, field_map); for( i = 0; i < num_fields; i++ ){ VALUE v = RARRAY_AREF(values, i); if( v == Qundef ) rb_raise(rb_eTypeError, "field %d is not materialized", i); RB_OBJ_WRITE(self, &this->values[i], v); } if( dup_names ){ RB_OBJ_WRITE(self, &this->values[num_fields], field_names); } RTYPEDDATA_DATA(self) = this; rb_copy_generic_ivar(self, a); return self; } void init_pg_tuple(void) { rb_cPG_Tuple = rb_define_class_under( rb_mPG, "Tuple", rb_cObject ); rb_define_alloc_func( rb_cPG_Tuple, pg_tuple_s_allocate ); rb_include_module(rb_cPG_Tuple, rb_mEnumerable); rb_define_method(rb_cPG_Tuple, "fetch", pg_tuple_fetch, -1); rb_define_method(rb_cPG_Tuple, "[]", pg_tuple_aref, 1); rb_define_method(rb_cPG_Tuple, "each", pg_tuple_each, 0); rb_define_method(rb_cPG_Tuple, "each_value", pg_tuple_each_value, 0); rb_define_method(rb_cPG_Tuple, "values", pg_tuple_values, 0); rb_define_method(rb_cPG_Tuple, "length", pg_tuple_length, 0); rb_define_alias(rb_cPG_Tuple, "size", "length"); rb_define_method(rb_cPG_Tuple, "index", pg_tuple_index, 1); rb_define_private_method(rb_cPG_Tuple, "field_map", pg_tuple_field_map, 0); rb_define_private_method(rb_cPG_Tuple, "field_names", pg_tuple_field_names, 0); /* methods for marshaling */ rb_define_private_method(rb_cPG_Tuple, "marshal_dump", pg_tuple_dump, 0); rb_define_private_method(rb_cPG_Tuple, "marshal_load", pg_tuple_load, 1); } pg-1.5.5/ext/pg_type_map_in_ruby.c0000644000004100000410000002437614563476204017206 0ustar www-datawww-data/* * pg_type_map_in_ruby.c - PG::TypeMapInRuby class extension * $Id$ * */ #include "pg.h" VALUE rb_cTypeMapInRuby; static VALUE s_id_fit_to_result; static VALUE s_id_fit_to_query; static VALUE s_id_fit_to_copy_get; static VALUE s_id_typecast_result_value; static VALUE s_id_typecast_query_param; static VALUE s_id_typecast_copy_get; 
typedef struct { t_typemap typemap; VALUE self; } t_tmir; static size_t pg_tmir_memsize( const void *_this ) { const t_tmir *this = (const t_tmir *)_this; return sizeof(*this); } static void pg_tmir_compact( void *_this ) { t_tmir *this = (t_tmir *)_this; pg_typemap_compact(&this->typemap); pg_gc_location(this->self); } static const rb_data_type_t pg_tmir_type = { "PG::TypeMapInRuby", { pg_typemap_mark, RUBY_TYPED_DEFAULT_FREE, pg_tmir_memsize, pg_compact_callback(pg_tmir_compact), }, &pg_typemap_type, 0, RUBY_TYPED_FREE_IMMEDIATELY | RUBY_TYPED_WB_PROTECTED | PG_RUBY_TYPED_FROZEN_SHAREABLE, }; /* * call-seq: * typemap.fit_to_result( result ) * * Check that the type map fits to the result. * * This method is called, when a type map is assigned to a result. * It must return a PG::TypeMap object or raise an Exception. * This can be +self+ or some other type map that fits to the result. * */ static VALUE pg_tmir_fit_to_result( VALUE self, VALUE result ) { t_tmir *this = RTYPEDDATA_DATA( self ); t_typemap *default_tm; t_typemap *p_new_typemap; VALUE sub_typemap; VALUE new_typemap; if( rb_respond_to(self, s_id_fit_to_result) ){ t_typemap *tm; UNUSED(tm); new_typemap = rb_funcall( self, s_id_fit_to_result, 1, result ); if ( !rb_obj_is_kind_of(new_typemap, rb_cTypeMap) ) { /* TypedData_Get_Struct() raises "wrong argument type", which is misleading, * so we better raise our own message */ rb_raise( rb_eTypeError, "wrong return type from fit_to_result: %s expected kind of PG::TypeMap", rb_obj_classname( new_typemap ) ); } TypedData_Get_Struct(new_typemap, t_typemap, &pg_typemap_type, tm); } else { new_typemap = self; } /* Ensure that the default type map fits equally. 
*/ default_tm = RTYPEDDATA_DATA( this->typemap.default_typemap ); sub_typemap = default_tm->funcs.fit_to_result( this->typemap.default_typemap, result ); if( sub_typemap != this->typemap.default_typemap ){ new_typemap = rb_obj_dup( new_typemap ); } p_new_typemap = RTYPEDDATA_DATA(new_typemap); p_new_typemap->default_typemap = sub_typemap; return new_typemap; } static VALUE pg_tmir_result_value( t_typemap *p_typemap, VALUE result, int tuple, int field ) { t_tmir *this = (t_tmir *) p_typemap; return rb_funcall( this->self, s_id_typecast_result_value, 3, result, INT2NUM(tuple), INT2NUM(field) ); } /* * call-seq: * typemap.typecast_result_value( result, tuple, field ) * * Retrieve and cast a field of the given result. * * This method implementation uses the #default_type_map to get the * field value. It can be derived to change this behaviour. * * Parameters: * * +result+ : The PG::Result received from the database. * * +tuple+ : The row number to retrieve. * * +field+ : The column number to retrieve. * * Note: Calling any value retrieving methods of +result+ will result * in an (endless) recursion. Instead super() can be used to retrieve * the value using the default_typemap. * */ static VALUE pg_tmir_typecast_result_value( VALUE self, VALUE result, VALUE tuple, VALUE field ) { t_tmir *this = RTYPEDDATA_DATA( self ); t_typemap *default_tm = RTYPEDDATA_DATA( this->typemap.default_typemap ); return default_tm->funcs.typecast_result_value( default_tm, result, NUM2INT(tuple), NUM2INT(field) ); } /* * call-seq: * typemap.fit_to_query( params ) * * Check that the type map fits to the given user values. * * This method is called, when a type map is used for sending a query * and for encoding of copy data, before the value is casted. 
* */ static VALUE pg_tmir_fit_to_query( VALUE self, VALUE params ) { t_tmir *this = RTYPEDDATA_DATA( self ); t_typemap *default_tm; if( rb_respond_to(self, s_id_fit_to_query) ){ rb_funcall( self, s_id_fit_to_query, 1, params ); } /* Ensure that the default type map fits equally. */ default_tm = RTYPEDDATA_DATA( this->typemap.default_typemap ); default_tm->funcs.fit_to_query( this->typemap.default_typemap, params ); return self; } static t_pg_coder * pg_tmir_query_param( t_typemap *p_typemap, VALUE param_value, int field ) { t_tmir *this = (t_tmir *) p_typemap; VALUE coder = rb_funcall( this->self, s_id_typecast_query_param, 2, param_value, INT2NUM(field) ); if ( NIL_P(coder) ){ return NULL; } else if( rb_obj_is_kind_of(coder, rb_cPG_Coder) ) { return RTYPEDDATA_DATA(coder); } else { rb_raise( rb_eTypeError, "wrong return type from typecast_query_param: %s expected nil or kind of PG::Coder", rb_obj_classname( coder ) ); } } /* * call-seq: * typemap.typecast_query_param( param_value, field ) * * Cast a field string for transmission to the server. * * This method implementation uses the #default_type_map to cast param_value. * It can be derived to change this behaviour. * * Parameters: * * +param_value+ : The value from the user. * * +field+ : The field number from left to right. * */ static VALUE pg_tmir_typecast_query_param( VALUE self, VALUE param_value, VALUE field ) { t_tmir *this = RTYPEDDATA_DATA( self ); t_typemap *default_tm = RTYPEDDATA_DATA( this->typemap.default_typemap ); t_pg_coder *p_coder = default_tm->funcs.typecast_query_param( default_tm, param_value, NUM2INT(field) ); return p_coder ? p_coder->coder_obj : Qnil; } /* This is to fool rdoc's C parser */ #if 0 /* * call-seq: * typemap.fit_to_copy_get() * * Check that the type map can be used for PG::Connection#get_copy_data. * * This method is called, when a type map is used for decoding copy data, * before the value is casted. 
* * Should return the expected number of columns or 0 if the number of columns is unknown. * This number is only used for memory pre-allocation. * */ static VALUE pg_tmir_fit_to_copy_get_dummy( VALUE self ){} #endif static int pg_tmir_fit_to_copy_get( VALUE self ) { t_tmir *this = RTYPEDDATA_DATA( self ); t_typemap *default_tm; VALUE num_columns = INT2NUM(0); if( rb_respond_to(self, s_id_fit_to_copy_get) ){ num_columns = rb_funcall( self, s_id_fit_to_copy_get, 0 ); } if ( !rb_obj_is_kind_of(num_columns, rb_cInteger) ) { rb_raise( rb_eTypeError, "wrong return type from fit_to_copy_get: %s expected kind of Integer", rb_obj_classname( num_columns ) ); } /* Ensure that the default type map fits equally. */ default_tm = RTYPEDDATA_DATA( this->typemap.default_typemap ); default_tm->funcs.fit_to_copy_get( this->typemap.default_typemap ); return NUM2INT(num_columns);; } static VALUE pg_tmir_copy_get( t_typemap *p_typemap, VALUE field_str, int fieldno, int format, int enc_idx ) { t_tmir *this = (t_tmir *) p_typemap; rb_encoding *p_encoding = rb_enc_from_index(enc_idx); VALUE enc = rb_enc_from_encoding(p_encoding); /* field_str is reused in-place by pg_text_dec_copy_row(), so we need to make * a copy of the string buffer for use in ruby space. */ VALUE field_str_copy = rb_str_dup(field_str); rb_str_modify(field_str_copy); return rb_funcall( this->self, s_id_typecast_copy_get, 4, field_str_copy, INT2NUM(fieldno), INT2NUM(format), enc ); } /* * call-seq: * typemap.typecast_copy_get( field_str, fieldno, format, encoding ) * * Cast a field string received by PG::Connection#get_copy_data. * * This method implementation uses the #default_type_map to cast field_str. * It can be derived to change this behaviour. * * Parameters: * * +field_str+ : The String received from the server. * * +fieldno+ : The field number from left to right. * * +format+ : The format code (0 = text, 1 = binary) * * +encoding+ : The encoding of the connection and encoding the returned * value should get. 
* */ static VALUE pg_tmir_typecast_copy_get( VALUE self, VALUE field_str, VALUE fieldno, VALUE format, VALUE enc ) { t_tmir *this = RTYPEDDATA_DATA( self ); t_typemap *default_tm = RTYPEDDATA_DATA( this->typemap.default_typemap ); int enc_idx = rb_to_encoding_index( enc ); return default_tm->funcs.typecast_copy_get( default_tm, field_str, NUM2INT(fieldno), NUM2INT(format), enc_idx ); } static VALUE pg_tmir_s_allocate( VALUE klass ) { t_tmir *this; VALUE self; self = TypedData_Make_Struct( klass, t_tmir, &pg_tmir_type, this ); this->typemap.funcs.fit_to_result = pg_tmir_fit_to_result; this->typemap.funcs.fit_to_query = pg_tmir_fit_to_query; this->typemap.funcs.fit_to_copy_get = pg_tmir_fit_to_copy_get; this->typemap.funcs.typecast_result_value = pg_tmir_result_value; this->typemap.funcs.typecast_query_param = pg_tmir_query_param; this->typemap.funcs.typecast_copy_get = pg_tmir_copy_get; RB_OBJ_WRITE(self, &this->typemap.default_typemap, pg_typemap_all_strings); this->self = self; return self; } void init_pg_type_map_in_ruby(void) { s_id_fit_to_result = rb_intern("fit_to_result"); s_id_fit_to_query = rb_intern("fit_to_query"); s_id_fit_to_copy_get = rb_intern("fit_to_copy_get"); s_id_typecast_result_value = rb_intern("typecast_result_value"); s_id_typecast_query_param = rb_intern("typecast_query_param"); s_id_typecast_copy_get = rb_intern("typecast_copy_get"); /* * Document-class: PG::TypeMapInRuby < PG::TypeMap * * This class can be used to implement a type map in ruby, typically as a * #default_type_map in a type map chain. * * This API is EXPERIMENTAL and could change in the future. 
* */ rb_cTypeMapInRuby = rb_define_class_under( rb_mPG, "TypeMapInRuby", rb_cTypeMap ); rb_define_alloc_func( rb_cTypeMapInRuby, pg_tmir_s_allocate ); /* rb_define_method( rb_cTypeMapInRuby, "fit_to_result", pg_tmir_fit_to_result, 1 ); */ /* rb_define_method( rb_cTypeMapInRuby, "fit_to_query", pg_tmir_fit_to_query, 1 ); */ /* rb_define_method( rb_cTypeMapInRuby, "fit_to_copy_get", pg_tmir_fit_to_copy_get_dummy, 0 ); */ rb_define_method( rb_cTypeMapInRuby, "typecast_result_value", pg_tmir_typecast_result_value, 3 ); rb_define_method( rb_cTypeMapInRuby, "typecast_query_param", pg_tmir_typecast_query_param, 2 ); rb_define_method( rb_cTypeMapInRuby, "typecast_copy_get", pg_tmir_typecast_copy_get, 4 ); /* rb_mDefaultTypeMappable = rb_define_module_under( rb_cTypeMap, "DefaultTypeMappable"); */ rb_include_module( rb_cTypeMapInRuby, rb_mDefaultTypeMappable ); } pg-1.5.5/ext/vc/0000755000004100000410000000000014563476204013403 5ustar www-datawww-datapg-1.5.5/ext/vc/pg.sln0000644000004100000410000000246014563476204014531 0ustar www-datawww-data Microsoft Visual Studio Solution File, Format Version 10.00 # Visual Studio 2008 Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "pg", "pg_18\pg.vcproj", "{9A8BF0C8-1D75-4DC0-8D84-BAEFD693795E}" EndProject Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "pg_19", "pg_19\pg_19.vcproj", "{2EE30C74-074F-4611-B39B-38D5F3C9B071}" EndProject Global GlobalSection(SolutionConfigurationPlatforms) = preSolution Debug|Win32 = Debug|Win32 Release|Win32 = Release|Win32 EndGlobalSection GlobalSection(ProjectConfigurationPlatforms) = postSolution {9A8BF0C8-1D75-4DC0-8D84-BAEFD693795E}.Debug|Win32.ActiveCfg = Debug|Win32 {9A8BF0C8-1D75-4DC0-8D84-BAEFD693795E}.Debug|Win32.Build.0 = Debug|Win32 {9A8BF0C8-1D75-4DC0-8D84-BAEFD693795E}.Release|Win32.ActiveCfg = Release|Win32 {9A8BF0C8-1D75-4DC0-8D84-BAEFD693795E}.Release|Win32.Build.0 = Release|Win32 {2EE30C74-074F-4611-B39B-38D5F3C9B071}.Debug|Win32.ActiveCfg = Debug|Win32 
{2EE30C74-074F-4611-B39B-38D5F3C9B071}.Debug|Win32.Build.0 = Debug|Win32 {2EE30C74-074F-4611-B39B-38D5F3C9B071}.Release|Win32.ActiveCfg = Release|Win32 {2EE30C74-074F-4611-B39B-38D5F3C9B071}.Release|Win32.Build.0 = Release|Win32 EndGlobalSection GlobalSection(SolutionProperties) = preSolution HideSolutionNode = FALSE EndGlobalSection EndGlobal pg-1.5.5/ext/vc/pg_18/0000755000004100000410000000000014563476204014321 5ustar www-datawww-datapg-1.5.5/ext/vc/pg_18/pg.vcproj0000644000004100000410000001167314563476204016164 0ustar www-datawww-data pg-1.5.5/ext/vc/pg_19/0000755000004100000410000000000014563476204014322 5ustar www-datawww-datapg-1.5.5/ext/vc/pg_19/pg_19.vcproj0000644000004100000410000001020214563476204016461 0ustar www-datawww-data pg-1.5.5/ext/pg_util.c0000644000004100000410000001114114563476204014600 0ustar www-datawww-data/* * pg_util.c - Utils for ruby-pg * $Id$ * */ #include "pg.h" #include "pg_util.h" static const char base64_encode_table[] = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"; /* Encode _len_ bytes at _in_ as base64 and write output to _out_. * * This encoder runs backwards, so that it is possible to encode a string * in-place (with _out_ == _in_). */ void base64_encode( char *out, const char *in, int len) { const unsigned char *in_ptr = (const unsigned char *)in + len; char *out_ptr = out + BASE64_ENCODED_SIZE(len); int part_len = len % 3; if( part_len > 0 ){ long byte2 = 0; long byte1 = part_len > 1 ? *--in_ptr : 0; long byte0 = *--in_ptr; long triple = (byte0 << 16) + (byte1 << 8) + byte2; *--out_ptr = '='; *--out_ptr = part_len > 1 ? 
base64_encode_table[(triple >> 1 * 6) & 0x3F] : '='; *--out_ptr = base64_encode_table[(triple >> 2 * 6) & 0x3F]; *--out_ptr = base64_encode_table[(triple >> 3 * 6) & 0x3F]; } while( out_ptr > out ){ long byte2 = *--in_ptr; long byte1 = *--in_ptr; long byte0 = *--in_ptr; long triple = (byte0 << 16) + (byte1 << 8) + byte2; *--out_ptr = base64_encode_table[(triple >> 0 * 6) & 0x3F]; *--out_ptr = base64_encode_table[(triple >> 1 * 6) & 0x3F]; *--out_ptr = base64_encode_table[(triple >> 2 * 6) & 0x3F]; *--out_ptr = base64_encode_table[(triple >> 3 * 6) & 0x3F]; } } /* * 0.upto(255).map{|a| "\\x#{ (base64_encode_table.index([a].pack("C")) || 0xff).to_s(16) }" }.join */ static const unsigned char base64_decode_table[] = "\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff" "\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff" "\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\x3e\xff\xff\xff\x3f" "\x34\x35\x36\x37\x38\x39\x3a\x3b\x3c\x3d\xff\xff\xff\xff\xff\xff" "\xff\x00\x01\x02\x03\x04\x05\x06\x07\x08\x09\x0a\x0b\x0c\x0d\x0e" "\x0f\x10\x11\x12\x13\x14\x15\x16\x17\x18\x19\xff\xff\xff\xff\xff" "\xff\x1a\x1b\x1c\x1d\x1e\x1f\x20\x21\x22\x23\x24\x25\x26\x27\x28" "\x29\x2a\x2b\x2c\x2d\x2e\x2f\x30\x31\x32\x33\xff\xff\xff\xff\xff" "\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff" "\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff" "\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff" "\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff" "\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff" "\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff" "\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff" "\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff"; /* Decode _len_ bytes of base64 characters at _in_ and write output to _out_. * * It is possible to decode a string in-place (with _out_ == _in_). 
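As a hedged cross-check of the encoding behaviour implemented here (RFC 4648 base64, including '=' padding for partial trailing groups), the same transformation is available from Ruby's standard Base64 module; this sketch is illustrative and not part of the extension:

```ruby
require 'base64'

# RFC 4648 test vectors: a conforming encoder, like the C routines here,
# must produce exactly these strings, padding short trailing groups with '='.
samples = { "" => "", "f" => "Zg==", "fo" => "Zm8=", "foo" => "Zm9v" }
samples.each do |raw, expected|
  encoded = Base64.strict_encode64(raw)
  raise "encode mismatch for #{raw.inspect}" unless encoded == expected
  # Decoding must round-trip back to the original bytes.
  raise "round-trip failed for #{raw.inspect}" unless Base64.strict_decode64(encoded) == raw
end
```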
*/ int base64_decode( char *out, const char *in, unsigned int len) { unsigned char a, b, c, d; const unsigned char *in_ptr = (const unsigned char *)in; unsigned char *out_ptr = (unsigned char *)out; const unsigned char *iend_ptr = (unsigned char *)in + len; for(;;){ if( in_ptr+3 < iend_ptr && (a=base64_decode_table[in_ptr[0]]) != 0xff && (b=base64_decode_table[in_ptr[1]]) != 0xff && (c=base64_decode_table[in_ptr[2]]) != 0xff && (d=base64_decode_table[in_ptr[3]]) != 0xff ) { in_ptr += 4; *out_ptr++ = (a << 2) | (b >> 4); *out_ptr++ = (b << 4) | (c >> 2); *out_ptr++ = (c << 6) | d; } else if (in_ptr < iend_ptr){ b = c = d = 0xff; while ((a = base64_decode_table[*in_ptr++]) == 0xff && in_ptr < iend_ptr) {} if (in_ptr < iend_ptr){ while ((b = base64_decode_table[*in_ptr++]) == 0xff && in_ptr < iend_ptr) {} if (in_ptr < iend_ptr){ while ((c = base64_decode_table[*in_ptr++]) == 0xff && in_ptr < iend_ptr) {} if (in_ptr < iend_ptr){ while ((d = base64_decode_table[*in_ptr++]) == 0xff && in_ptr < iend_ptr) {} } } } if (a != 0xff && b != 0xff) { *out_ptr++ = (a << 2) | (b >> 4); if (c != 0xff) { *out_ptr++ = (b << 4) | (c >> 2); if (d != 0xff) *out_ptr++ = (c << 6) | d; } } } else { break; } } return (int)((char*)out_ptr - out); } /* * Case-independent comparison of two not-necessarily-null-terminated strings. * At most n bytes will be examined from each string. 
*/ int rbpg_strncasecmp(const char *s1, const char *s2, size_t n) { while (n-- > 0) { unsigned char ch1 = (unsigned char) *s1++; unsigned char ch2 = (unsigned char) *s2++; if (ch1 != ch2){ if (ch1 >= 'A' && ch1 <= 'Z') ch1 += 'a' - 'A'; if (ch2 >= 'A' && ch2 <= 'Z') ch2 += 'a' - 'A'; if (ch1 != ch2) return (int) ch1 - (int) ch2; } if (ch1 == 0) break; } return 0; } pg-1.5.5/ext/errorcodes.txt0000644000004100000410000010126214563476204015705 0ustar www-datawww-data# # errcodes.txt # PostgreSQL error codes # # Copyright (c) 2003-2023, PostgreSQL Global Development Group # # This list serves as the basis for generating source files containing error # codes. It is kept in a common format to make sure all these source files have # the same contents. # The files generated from this one are: # # src/include/utils/errcodes.h # macros defining errcode constants to be used in the rest of the source # # src/pl/plpgsql/src/plerrcodes.h # a list of PL/pgSQL condition names and their SQLSTATE codes # # src/pl/tcl/pltclerrcodes.h # the same, for PL/Tcl # # doc/src/sgml/errcodes-table.sgml # a SGML table of error codes for inclusion in the documentation # # The format of this file is one error code per line, with the following # whitespace-separated fields: # # sqlstate E/W/S errcode_macro_name spec_name # # where sqlstate is a five-character string following the SQLSTATE conventions, # the second field indicates if the code means an error, a warning or success, # errcode_macro_name is the C macro name starting with ERRCODE that will be put # in errcodes.h, and spec_name is a lowercase, underscore-separated name that # will be used as the PL/pgSQL condition name and will also be included in the # SGML list. The last field is optional, if not present the PL/pgSQL condition # and the SGML entry will not be generated. # # Empty lines and lines starting with a hash are comments. 
# # There are also special lines in the format of: # # Section: section description # # that is, lines starting with the string "Section:". They are used to delimit # error classes as defined in the SQL spec, and are necessary for SGML output. # # # SQLSTATE codes for errors. # # The SQL99 code set is rather impoverished, especially in the area of # syntactical and semantic errors. We have borrowed codes from IBM's DB2 # and invented our own codes to develop a useful code set. # # When adding a new code, make sure it is placed in the most appropriate # class (the first two characters of the code value identify the class). # The listing is organized by class to make this prominent. # # Each class should have a generic '000' subclass. However, # the generic '000' subclass code should be used for an error only # when there is not a more-specific subclass code defined. # # The SQL spec requires that all the elements of a SQLSTATE code be # either digits or upper-case ASCII characters. # # Classes that begin with 0-4 or A-H are defined by the # standard. Within such a class, subclass values defined by the # standard must begin with 0-4 or A-H. To define a new error code, # ensure that it is either in an "implementation-defined class" (it # begins with 5-9 or I-Z), or its subclass falls outside the range of # error codes that could be present in future versions of the # standard (i.e. the subclass value begins with 5-9 or I-Z). # # The convention is that new error codes defined by PostgreSQL in a # class defined by the standard have a subclass value that begins # with 'P'. In addition, error codes defined by PostgreSQL clients # (such as ecpg) have a class value that begins with 'Y'. 
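The preamble above defines a simple whitespace-separated record format: `sqlstate E/W/S errcode_macro_name spec_name`, with an optional last field, plus comment and `Section:` delimiter lines. As a rough illustration of how such lines can be consumed (a hypothetical sketch, not the gem's actual ext/errorcodes.rb generator), one could parse them in Ruby like this:

```ruby
# Hypothetical parser for the errcodes.txt record format described above.
# Returns a Hash for data lines, nil for blanks, comments and Section lines.
def parse_errcode_line(line)
  stripped = line.strip
  return nil if stripped.empty?                 # empty lines are comments
  return nil if stripped.start_with?("#")       # hash lines are comments
  return nil if stripped.start_with?("Section:") # class delimiters for SGML output

  sqlstate, severity, macro, spec_name = stripped.split(/\s+/, 4)
  {
    sqlstate: sqlstate,   # five-character SQLSTATE code
    severity: severity,   # "E" (error), "W" (warning) or "S" (success)
    macro: macro,         # C macro name emitted into errcodes.h
    spec_name: spec_name  # optional PL/pgSQL condition name; nil if absent
  }
end

p parse_errcode_line("22012 E ERRCODE_DIVISION_BY_ZERO division_by_zero")
```

Entries without the optional fourth field (such as the duplicate-SQLSTATE aliases like `34000 E ERRCODE_UNDEFINED_CURSOR`) simply yield a nil `spec_name`, which is why the preamble notes that no PL/pgSQL condition or SGML entry is generated for them.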
Section: Class 00 - Successful Completion 00000 S ERRCODE_SUCCESSFUL_COMPLETION successful_completion Section: Class 01 - Warning # do not use this class for failure conditions 01000 W ERRCODE_WARNING warning 0100C W ERRCODE_WARNING_DYNAMIC_RESULT_SETS_RETURNED dynamic_result_sets_returned 01008 W ERRCODE_WARNING_IMPLICIT_ZERO_BIT_PADDING implicit_zero_bit_padding 01003 W ERRCODE_WARNING_NULL_VALUE_ELIMINATED_IN_SET_FUNCTION null_value_eliminated_in_set_function 01007 W ERRCODE_WARNING_PRIVILEGE_NOT_GRANTED privilege_not_granted 01006 W ERRCODE_WARNING_PRIVILEGE_NOT_REVOKED privilege_not_revoked 01004 W ERRCODE_WARNING_STRING_DATA_RIGHT_TRUNCATION string_data_right_truncation 01P01 W ERRCODE_WARNING_DEPRECATED_FEATURE deprecated_feature Section: Class 02 - No Data (this is also a warning class per the SQL standard) # do not use this class for failure conditions 02000 W ERRCODE_NO_DATA no_data 02001 W ERRCODE_NO_ADDITIONAL_DYNAMIC_RESULT_SETS_RETURNED no_additional_dynamic_result_sets_returned Section: Class 03 - SQL Statement Not Yet Complete 03000 E ERRCODE_SQL_STATEMENT_NOT_YET_COMPLETE sql_statement_not_yet_complete Section: Class 08 - Connection Exception 08000 E ERRCODE_CONNECTION_EXCEPTION connection_exception 08003 E ERRCODE_CONNECTION_DOES_NOT_EXIST connection_does_not_exist 08006 E ERRCODE_CONNECTION_FAILURE connection_failure 08001 E ERRCODE_SQLCLIENT_UNABLE_TO_ESTABLISH_SQLCONNECTION sqlclient_unable_to_establish_sqlconnection 08004 E ERRCODE_SQLSERVER_REJECTED_ESTABLISHMENT_OF_SQLCONNECTION sqlserver_rejected_establishment_of_sqlconnection 08007 E ERRCODE_TRANSACTION_RESOLUTION_UNKNOWN transaction_resolution_unknown 08P01 E ERRCODE_PROTOCOL_VIOLATION protocol_violation Section: Class 09 - Triggered Action Exception 09000 E ERRCODE_TRIGGERED_ACTION_EXCEPTION triggered_action_exception Section: Class 0A - Feature Not Supported 0A000 E ERRCODE_FEATURE_NOT_SUPPORTED feature_not_supported Section: Class 0B - Invalid Transaction Initiation 0B000 E 
ERRCODE_INVALID_TRANSACTION_INITIATION invalid_transaction_initiation Section: Class 0F - Locator Exception 0F000 E ERRCODE_LOCATOR_EXCEPTION locator_exception 0F001 E ERRCODE_L_E_INVALID_SPECIFICATION invalid_locator_specification Section: Class 0L - Invalid Grantor 0L000 E ERRCODE_INVALID_GRANTOR invalid_grantor 0LP01 E ERRCODE_INVALID_GRANT_OPERATION invalid_grant_operation Section: Class 0P - Invalid Role Specification 0P000 E ERRCODE_INVALID_ROLE_SPECIFICATION invalid_role_specification Section: Class 0Z - Diagnostics Exception 0Z000 E ERRCODE_DIAGNOSTICS_EXCEPTION diagnostics_exception 0Z002 E ERRCODE_STACKED_DIAGNOSTICS_ACCESSED_WITHOUT_ACTIVE_HANDLER stacked_diagnostics_accessed_without_active_handler Section: Class 20 - Case Not Found 20000 E ERRCODE_CASE_NOT_FOUND case_not_found Section: Class 21 - Cardinality Violation # this means something returned the wrong number of rows 21000 E ERRCODE_CARDINALITY_VIOLATION cardinality_violation Section: Class 22 - Data Exception 22000 E ERRCODE_DATA_EXCEPTION data_exception 2202E E ERRCODE_ARRAY_ELEMENT_ERROR # SQL99's actual definition of "array element error" is subscript error 2202E E ERRCODE_ARRAY_SUBSCRIPT_ERROR array_subscript_error 22021 E ERRCODE_CHARACTER_NOT_IN_REPERTOIRE character_not_in_repertoire 22008 E ERRCODE_DATETIME_FIELD_OVERFLOW datetime_field_overflow 22008 E ERRCODE_DATETIME_VALUE_OUT_OF_RANGE 22012 E ERRCODE_DIVISION_BY_ZERO division_by_zero 22005 E ERRCODE_ERROR_IN_ASSIGNMENT error_in_assignment 2200B E ERRCODE_ESCAPE_CHARACTER_CONFLICT escape_character_conflict 22022 E ERRCODE_INDICATOR_OVERFLOW indicator_overflow 22015 E ERRCODE_INTERVAL_FIELD_OVERFLOW interval_field_overflow 2201E E ERRCODE_INVALID_ARGUMENT_FOR_LOG invalid_argument_for_logarithm 22014 E ERRCODE_INVALID_ARGUMENT_FOR_NTILE invalid_argument_for_ntile_function 22016 E ERRCODE_INVALID_ARGUMENT_FOR_NTH_VALUE invalid_argument_for_nth_value_function 2201F E ERRCODE_INVALID_ARGUMENT_FOR_POWER_FUNCTION 
invalid_argument_for_power_function 2201G E ERRCODE_INVALID_ARGUMENT_FOR_WIDTH_BUCKET_FUNCTION invalid_argument_for_width_bucket_function 22018 E ERRCODE_INVALID_CHARACTER_VALUE_FOR_CAST invalid_character_value_for_cast 22007 E ERRCODE_INVALID_DATETIME_FORMAT invalid_datetime_format 22019 E ERRCODE_INVALID_ESCAPE_CHARACTER invalid_escape_character 2200D E ERRCODE_INVALID_ESCAPE_OCTET invalid_escape_octet 22025 E ERRCODE_INVALID_ESCAPE_SEQUENCE invalid_escape_sequence 22P06 E ERRCODE_NONSTANDARD_USE_OF_ESCAPE_CHARACTER nonstandard_use_of_escape_character 22010 E ERRCODE_INVALID_INDICATOR_PARAMETER_VALUE invalid_indicator_parameter_value 22023 E ERRCODE_INVALID_PARAMETER_VALUE invalid_parameter_value 22013 E ERRCODE_INVALID_PRECEDING_OR_FOLLOWING_SIZE invalid_preceding_or_following_size 2201B E ERRCODE_INVALID_REGULAR_EXPRESSION invalid_regular_expression 2201W E ERRCODE_INVALID_ROW_COUNT_IN_LIMIT_CLAUSE invalid_row_count_in_limit_clause 2201X E ERRCODE_INVALID_ROW_COUNT_IN_RESULT_OFFSET_CLAUSE invalid_row_count_in_result_offset_clause 2202H E ERRCODE_INVALID_TABLESAMPLE_ARGUMENT invalid_tablesample_argument 2202G E ERRCODE_INVALID_TABLESAMPLE_REPEAT invalid_tablesample_repeat 22009 E ERRCODE_INVALID_TIME_ZONE_DISPLACEMENT_VALUE invalid_time_zone_displacement_value 2200C E ERRCODE_INVALID_USE_OF_ESCAPE_CHARACTER invalid_use_of_escape_character 2200G E ERRCODE_MOST_SPECIFIC_TYPE_MISMATCH most_specific_type_mismatch 22004 E ERRCODE_NULL_VALUE_NOT_ALLOWED null_value_not_allowed 22002 E ERRCODE_NULL_VALUE_NO_INDICATOR_PARAMETER null_value_no_indicator_parameter 22003 E ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE numeric_value_out_of_range 2200H E ERRCODE_SEQUENCE_GENERATOR_LIMIT_EXCEEDED sequence_generator_limit_exceeded 22026 E ERRCODE_STRING_DATA_LENGTH_MISMATCH string_data_length_mismatch 22001 E ERRCODE_STRING_DATA_RIGHT_TRUNCATION string_data_right_truncation 22011 E ERRCODE_SUBSTRING_ERROR substring_error 22027 E ERRCODE_TRIM_ERROR trim_error 22024 E 
ERRCODE_UNTERMINATED_C_STRING unterminated_c_string 2200F E ERRCODE_ZERO_LENGTH_CHARACTER_STRING zero_length_character_string 22P01 E ERRCODE_FLOATING_POINT_EXCEPTION floating_point_exception 22P02 E ERRCODE_INVALID_TEXT_REPRESENTATION invalid_text_representation 22P03 E ERRCODE_INVALID_BINARY_REPRESENTATION invalid_binary_representation 22P04 E ERRCODE_BAD_COPY_FILE_FORMAT bad_copy_file_format 22P05 E ERRCODE_UNTRANSLATABLE_CHARACTER untranslatable_character 2200L E ERRCODE_NOT_AN_XML_DOCUMENT not_an_xml_document 2200M E ERRCODE_INVALID_XML_DOCUMENT invalid_xml_document 2200N E ERRCODE_INVALID_XML_CONTENT invalid_xml_content 2200S E ERRCODE_INVALID_XML_COMMENT invalid_xml_comment 2200T E ERRCODE_INVALID_XML_PROCESSING_INSTRUCTION invalid_xml_processing_instruction 22030 E ERRCODE_DUPLICATE_JSON_OBJECT_KEY_VALUE duplicate_json_object_key_value 22031 E ERRCODE_INVALID_ARGUMENT_FOR_SQL_JSON_DATETIME_FUNCTION invalid_argument_for_sql_json_datetime_function 22032 E ERRCODE_INVALID_JSON_TEXT invalid_json_text 22033 E ERRCODE_INVALID_SQL_JSON_SUBSCRIPT invalid_sql_json_subscript 22034 E ERRCODE_MORE_THAN_ONE_SQL_JSON_ITEM more_than_one_sql_json_item 22035 E ERRCODE_NO_SQL_JSON_ITEM no_sql_json_item 22036 E ERRCODE_NON_NUMERIC_SQL_JSON_ITEM non_numeric_sql_json_item 22037 E ERRCODE_NON_UNIQUE_KEYS_IN_A_JSON_OBJECT non_unique_keys_in_a_json_object 22038 E ERRCODE_SINGLETON_SQL_JSON_ITEM_REQUIRED singleton_sql_json_item_required 22039 E ERRCODE_SQL_JSON_ARRAY_NOT_FOUND sql_json_array_not_found 2203A E ERRCODE_SQL_JSON_MEMBER_NOT_FOUND sql_json_member_not_found 2203B E ERRCODE_SQL_JSON_NUMBER_NOT_FOUND sql_json_number_not_found 2203C E ERRCODE_SQL_JSON_OBJECT_NOT_FOUND sql_json_object_not_found 2203D E ERRCODE_TOO_MANY_JSON_ARRAY_ELEMENTS too_many_json_array_elements 2203E E ERRCODE_TOO_MANY_JSON_OBJECT_MEMBERS too_many_json_object_members 2203F E ERRCODE_SQL_JSON_SCALAR_REQUIRED sql_json_scalar_required 2203G E ERRCODE_SQL_JSON_ITEM_CANNOT_BE_CAST_TO_TARGET_TYPE 
sql_json_item_cannot_be_cast_to_target_type Section: Class 23 - Integrity Constraint Violation 23000 E ERRCODE_INTEGRITY_CONSTRAINT_VIOLATION integrity_constraint_violation 23001 E ERRCODE_RESTRICT_VIOLATION restrict_violation 23502 E ERRCODE_NOT_NULL_VIOLATION not_null_violation 23503 E ERRCODE_FOREIGN_KEY_VIOLATION foreign_key_violation 23505 E ERRCODE_UNIQUE_VIOLATION unique_violation 23514 E ERRCODE_CHECK_VIOLATION check_violation 23P01 E ERRCODE_EXCLUSION_VIOLATION exclusion_violation Section: Class 24 - Invalid Cursor State 24000 E ERRCODE_INVALID_CURSOR_STATE invalid_cursor_state Section: Class 25 - Invalid Transaction State 25000 E ERRCODE_INVALID_TRANSACTION_STATE invalid_transaction_state 25001 E ERRCODE_ACTIVE_SQL_TRANSACTION active_sql_transaction 25002 E ERRCODE_BRANCH_TRANSACTION_ALREADY_ACTIVE branch_transaction_already_active 25008 E ERRCODE_HELD_CURSOR_REQUIRES_SAME_ISOLATION_LEVEL held_cursor_requires_same_isolation_level 25003 E ERRCODE_INAPPROPRIATE_ACCESS_MODE_FOR_BRANCH_TRANSACTION inappropriate_access_mode_for_branch_transaction 25004 E ERRCODE_INAPPROPRIATE_ISOLATION_LEVEL_FOR_BRANCH_TRANSACTION inappropriate_isolation_level_for_branch_transaction 25005 E ERRCODE_NO_ACTIVE_SQL_TRANSACTION_FOR_BRANCH_TRANSACTION no_active_sql_transaction_for_branch_transaction 25006 E ERRCODE_READ_ONLY_SQL_TRANSACTION read_only_sql_transaction 25007 E ERRCODE_SCHEMA_AND_DATA_STATEMENT_MIXING_NOT_SUPPORTED schema_and_data_statement_mixing_not_supported 25P01 E ERRCODE_NO_ACTIVE_SQL_TRANSACTION no_active_sql_transaction 25P02 E ERRCODE_IN_FAILED_SQL_TRANSACTION in_failed_sql_transaction 25P03 E ERRCODE_IDLE_IN_TRANSACTION_SESSION_TIMEOUT idle_in_transaction_session_timeout Section: Class 26 - Invalid SQL Statement Name # (we take this to mean prepared statements) 26000 E ERRCODE_INVALID_SQL_STATEMENT_NAME invalid_sql_statement_name Section: Class 27 - Triggered Data Change Violation 27000 E ERRCODE_TRIGGERED_DATA_CHANGE_VIOLATION triggered_data_change_violation
Section: Class 28 - Invalid Authorization Specification 28000 E ERRCODE_INVALID_AUTHORIZATION_SPECIFICATION invalid_authorization_specification 28P01 E ERRCODE_INVALID_PASSWORD invalid_password Section: Class 2B - Dependent Privilege Descriptors Still Exist 2B000 E ERRCODE_DEPENDENT_PRIVILEGE_DESCRIPTORS_STILL_EXIST dependent_privilege_descriptors_still_exist 2BP01 E ERRCODE_DEPENDENT_OBJECTS_STILL_EXIST dependent_objects_still_exist Section: Class 2D - Invalid Transaction Termination 2D000 E ERRCODE_INVALID_TRANSACTION_TERMINATION invalid_transaction_termination Section: Class 2F - SQL Routine Exception 2F000 E ERRCODE_SQL_ROUTINE_EXCEPTION sql_routine_exception 2F005 E ERRCODE_S_R_E_FUNCTION_EXECUTED_NO_RETURN_STATEMENT function_executed_no_return_statement 2F002 E ERRCODE_S_R_E_MODIFYING_SQL_DATA_NOT_PERMITTED modifying_sql_data_not_permitted 2F003 E ERRCODE_S_R_E_PROHIBITED_SQL_STATEMENT_ATTEMPTED prohibited_sql_statement_attempted 2F004 E ERRCODE_S_R_E_READING_SQL_DATA_NOT_PERMITTED reading_sql_data_not_permitted Section: Class 34 - Invalid Cursor Name 34000 E ERRCODE_INVALID_CURSOR_NAME invalid_cursor_name Section: Class 38 - External Routine Exception 38000 E ERRCODE_EXTERNAL_ROUTINE_EXCEPTION external_routine_exception 38001 E ERRCODE_E_R_E_CONTAINING_SQL_NOT_PERMITTED containing_sql_not_permitted 38002 E ERRCODE_E_R_E_MODIFYING_SQL_DATA_NOT_PERMITTED modifying_sql_data_not_permitted 38003 E ERRCODE_E_R_E_PROHIBITED_SQL_STATEMENT_ATTEMPTED prohibited_sql_statement_attempted 38004 E ERRCODE_E_R_E_READING_SQL_DATA_NOT_PERMITTED reading_sql_data_not_permitted Section: Class 39 - External Routine Invocation Exception 39000 E ERRCODE_EXTERNAL_ROUTINE_INVOCATION_EXCEPTION external_routine_invocation_exception 39001 E ERRCODE_E_R_I_E_INVALID_SQLSTATE_RETURNED invalid_sqlstate_returned 39004 E ERRCODE_E_R_I_E_NULL_VALUE_NOT_ALLOWED null_value_not_allowed 39P01 E ERRCODE_E_R_I_E_TRIGGER_PROTOCOL_VIOLATED trigger_protocol_violated 39P02 E 
ERRCODE_E_R_I_E_SRF_PROTOCOL_VIOLATED srf_protocol_violated 39P03 E ERRCODE_E_R_I_E_EVENT_TRIGGER_PROTOCOL_VIOLATED event_trigger_protocol_violated Section: Class 3B - Savepoint Exception 3B000 E ERRCODE_SAVEPOINT_EXCEPTION savepoint_exception 3B001 E ERRCODE_S_E_INVALID_SPECIFICATION invalid_savepoint_specification Section: Class 3D - Invalid Catalog Name 3D000 E ERRCODE_INVALID_CATALOG_NAME invalid_catalog_name Section: Class 3F - Invalid Schema Name 3F000 E ERRCODE_INVALID_SCHEMA_NAME invalid_schema_name Section: Class 40 - Transaction Rollback 40000 E ERRCODE_TRANSACTION_ROLLBACK transaction_rollback 40002 E ERRCODE_T_R_INTEGRITY_CONSTRAINT_VIOLATION transaction_integrity_constraint_violation 40001 E ERRCODE_T_R_SERIALIZATION_FAILURE serialization_failure 40003 E ERRCODE_T_R_STATEMENT_COMPLETION_UNKNOWN statement_completion_unknown 40P01 E ERRCODE_T_R_DEADLOCK_DETECTED deadlock_detected Section: Class 42 - Syntax Error or Access Rule Violation 42000 E ERRCODE_SYNTAX_ERROR_OR_ACCESS_RULE_VIOLATION syntax_error_or_access_rule_violation # never use the above; use one of these two if no specific code exists: 42601 E ERRCODE_SYNTAX_ERROR syntax_error 42501 E ERRCODE_INSUFFICIENT_PRIVILEGE insufficient_privilege 42846 E ERRCODE_CANNOT_COERCE cannot_coerce 42803 E ERRCODE_GROUPING_ERROR grouping_error 42P20 E ERRCODE_WINDOWING_ERROR windowing_error 42P19 E ERRCODE_INVALID_RECURSION invalid_recursion 42830 E ERRCODE_INVALID_FOREIGN_KEY invalid_foreign_key 42602 E ERRCODE_INVALID_NAME invalid_name 42622 E ERRCODE_NAME_TOO_LONG name_too_long 42939 E ERRCODE_RESERVED_NAME reserved_name 42804 E ERRCODE_DATATYPE_MISMATCH datatype_mismatch 42P18 E ERRCODE_INDETERMINATE_DATATYPE indeterminate_datatype 42P21 E ERRCODE_COLLATION_MISMATCH collation_mismatch 42P22 E ERRCODE_INDETERMINATE_COLLATION indeterminate_collation 42809 E ERRCODE_WRONG_OBJECT_TYPE wrong_object_type 428C9 E ERRCODE_GENERATED_ALWAYS generated_always # Note: for ERRCODE purposes, we divide namable objects 
into these categories: # databases, schemas, prepared statements, cursors, tables, columns, # functions (including operators), and all else (lumped as "objects"). # (The first four categories are mandated by the existence of separate # SQLSTATE classes for them in the spec; in this file, however, we group # the ERRCODE names with all the rest under class 42.) Parameters are # sort-of-named objects and get their own ERRCODE. # # The same breakdown is used for "duplicate" and "ambiguous" complaints, # as well as complaints associated with incorrect declarations. 42703 E ERRCODE_UNDEFINED_COLUMN undefined_column 34000 E ERRCODE_UNDEFINED_CURSOR 3D000 E ERRCODE_UNDEFINED_DATABASE 42883 E ERRCODE_UNDEFINED_FUNCTION undefined_function 26000 E ERRCODE_UNDEFINED_PSTATEMENT 3F000 E ERRCODE_UNDEFINED_SCHEMA 42P01 E ERRCODE_UNDEFINED_TABLE undefined_table 42P02 E ERRCODE_UNDEFINED_PARAMETER undefined_parameter 42704 E ERRCODE_UNDEFINED_OBJECT undefined_object 42701 E ERRCODE_DUPLICATE_COLUMN duplicate_column 42P03 E ERRCODE_DUPLICATE_CURSOR duplicate_cursor 42P04 E ERRCODE_DUPLICATE_DATABASE duplicate_database 42723 E ERRCODE_DUPLICATE_FUNCTION duplicate_function 42P05 E ERRCODE_DUPLICATE_PSTATEMENT duplicate_prepared_statement 42P06 E ERRCODE_DUPLICATE_SCHEMA duplicate_schema 42P07 E ERRCODE_DUPLICATE_TABLE duplicate_table 42712 E ERRCODE_DUPLICATE_ALIAS duplicate_alias 42710 E ERRCODE_DUPLICATE_OBJECT duplicate_object 42702 E ERRCODE_AMBIGUOUS_COLUMN ambiguous_column 42725 E ERRCODE_AMBIGUOUS_FUNCTION ambiguous_function 42P08 E ERRCODE_AMBIGUOUS_PARAMETER ambiguous_parameter 42P09 E ERRCODE_AMBIGUOUS_ALIAS ambiguous_alias 42P10 E ERRCODE_INVALID_COLUMN_REFERENCE invalid_column_reference 42611 E ERRCODE_INVALID_COLUMN_DEFINITION invalid_column_definition 42P11 E ERRCODE_INVALID_CURSOR_DEFINITION invalid_cursor_definition 42P12 E ERRCODE_INVALID_DATABASE_DEFINITION invalid_database_definition 42P13 E ERRCODE_INVALID_FUNCTION_DEFINITION invalid_function_definition 42P14 E 
ERRCODE_INVALID_PSTATEMENT_DEFINITION invalid_prepared_statement_definition 42P15 E ERRCODE_INVALID_SCHEMA_DEFINITION invalid_schema_definition 42P16 E ERRCODE_INVALID_TABLE_DEFINITION invalid_table_definition 42P17 E ERRCODE_INVALID_OBJECT_DEFINITION invalid_object_definition Section: Class 44 - WITH CHECK OPTION Violation 44000 E ERRCODE_WITH_CHECK_OPTION_VIOLATION with_check_option_violation Section: Class 53 - Insufficient Resources # (PostgreSQL-specific error class) 53000 E ERRCODE_INSUFFICIENT_RESOURCES insufficient_resources 53100 E ERRCODE_DISK_FULL disk_full 53200 E ERRCODE_OUT_OF_MEMORY out_of_memory 53300 E ERRCODE_TOO_MANY_CONNECTIONS too_many_connections 53400 E ERRCODE_CONFIGURATION_LIMIT_EXCEEDED configuration_limit_exceeded Section: Class 54 - Program Limit Exceeded # this is for wired-in limits, not resource exhaustion problems (class borrowed from DB2) 54000 E ERRCODE_PROGRAM_LIMIT_EXCEEDED program_limit_exceeded 54001 E ERRCODE_STATEMENT_TOO_COMPLEX statement_too_complex 54011 E ERRCODE_TOO_MANY_COLUMNS too_many_columns 54023 E ERRCODE_TOO_MANY_ARGUMENTS too_many_arguments Section: Class 55 - Object Not In Prerequisite State # (class borrowed from DB2) 55000 E ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE object_not_in_prerequisite_state 55006 E ERRCODE_OBJECT_IN_USE object_in_use 55P02 E ERRCODE_CANT_CHANGE_RUNTIME_PARAM cant_change_runtime_param 55P03 E ERRCODE_LOCK_NOT_AVAILABLE lock_not_available 55P04 E ERRCODE_UNSAFE_NEW_ENUM_VALUE_USAGE unsafe_new_enum_value_usage Section: Class 57 - Operator Intervention # (class borrowed from DB2) 57000 E ERRCODE_OPERATOR_INTERVENTION operator_intervention 57014 E ERRCODE_QUERY_CANCELED query_canceled 57P01 E ERRCODE_ADMIN_SHUTDOWN admin_shutdown 57P02 E ERRCODE_CRASH_SHUTDOWN crash_shutdown 57P03 E ERRCODE_CANNOT_CONNECT_NOW cannot_connect_now 57P04 E ERRCODE_DATABASE_DROPPED database_dropped 57P05 E ERRCODE_IDLE_SESSION_TIMEOUT idle_session_timeout Section: Class 58 - System Error (errors external to 
PostgreSQL itself) # (class borrowed from DB2) 58000 E ERRCODE_SYSTEM_ERROR system_error 58030 E ERRCODE_IO_ERROR io_error 58P01 E ERRCODE_UNDEFINED_FILE undefined_file 58P02 E ERRCODE_DUPLICATE_FILE duplicate_file Section: Class 72 - Snapshot Failure # (class borrowed from Oracle) 72000 E ERRCODE_SNAPSHOT_TOO_OLD snapshot_too_old Section: Class F0 - Configuration File Error # (PostgreSQL-specific error class) F0000 E ERRCODE_CONFIG_FILE_ERROR config_file_error F0001 E ERRCODE_LOCK_FILE_EXISTS lock_file_exists Section: Class HV - Foreign Data Wrapper Error (SQL/MED) # (SQL/MED-specific error class) HV000 E ERRCODE_FDW_ERROR fdw_error HV005 E ERRCODE_FDW_COLUMN_NAME_NOT_FOUND fdw_column_name_not_found HV002 E ERRCODE_FDW_DYNAMIC_PARAMETER_VALUE_NEEDED fdw_dynamic_parameter_value_needed HV010 E ERRCODE_FDW_FUNCTION_SEQUENCE_ERROR fdw_function_sequence_error HV021 E ERRCODE_FDW_INCONSISTENT_DESCRIPTOR_INFORMATION fdw_inconsistent_descriptor_information HV024 E ERRCODE_FDW_INVALID_ATTRIBUTE_VALUE fdw_invalid_attribute_value HV007 E ERRCODE_FDW_INVALID_COLUMN_NAME fdw_invalid_column_name HV008 E ERRCODE_FDW_INVALID_COLUMN_NUMBER fdw_invalid_column_number HV004 E ERRCODE_FDW_INVALID_DATA_TYPE fdw_invalid_data_type HV006 E ERRCODE_FDW_INVALID_DATA_TYPE_DESCRIPTORS fdw_invalid_data_type_descriptors HV091 E ERRCODE_FDW_INVALID_DESCRIPTOR_FIELD_IDENTIFIER fdw_invalid_descriptor_field_identifier HV00B E ERRCODE_FDW_INVALID_HANDLE fdw_invalid_handle HV00C E ERRCODE_FDW_INVALID_OPTION_INDEX fdw_invalid_option_index HV00D E ERRCODE_FDW_INVALID_OPTION_NAME fdw_invalid_option_name HV090 E ERRCODE_FDW_INVALID_STRING_LENGTH_OR_BUFFER_LENGTH fdw_invalid_string_length_or_buffer_length HV00A E ERRCODE_FDW_INVALID_STRING_FORMAT fdw_invalid_string_format HV009 E ERRCODE_FDW_INVALID_USE_OF_NULL_POINTER fdw_invalid_use_of_null_pointer HV014 E ERRCODE_FDW_TOO_MANY_HANDLES fdw_too_many_handles HV001 E ERRCODE_FDW_OUT_OF_MEMORY fdw_out_of_memory HV00P E ERRCODE_FDW_NO_SCHEMAS fdw_no_schemas 
HV00J E ERRCODE_FDW_OPTION_NAME_NOT_FOUND fdw_option_name_not_found HV00K E ERRCODE_FDW_REPLY_HANDLE fdw_reply_handle HV00Q E ERRCODE_FDW_SCHEMA_NOT_FOUND fdw_schema_not_found HV00R E ERRCODE_FDW_TABLE_NOT_FOUND fdw_table_not_found HV00L E ERRCODE_FDW_UNABLE_TO_CREATE_EXECUTION fdw_unable_to_create_execution HV00M E ERRCODE_FDW_UNABLE_TO_CREATE_REPLY fdw_unable_to_create_reply HV00N E ERRCODE_FDW_UNABLE_TO_ESTABLISH_CONNECTION fdw_unable_to_establish_connection Section: Class P0 - PL/pgSQL Error # (PostgreSQL-specific error class) P0000 E ERRCODE_PLPGSQL_ERROR plpgsql_error P0001 E ERRCODE_RAISE_EXCEPTION raise_exception P0002 E ERRCODE_NO_DATA_FOUND no_data_found P0003 E ERRCODE_TOO_MANY_ROWS too_many_rows P0004 E ERRCODE_ASSERT_FAILURE assert_failure Section: Class XX - Internal Error # this is for "can't-happen" conditions and software bugs (PostgreSQL-specific error class) XX000 E ERRCODE_INTERNAL_ERROR internal_error XX001 E ERRCODE_DATA_CORRUPTED data_corrupted XX002 E ERRCODE_INDEX_CORRUPTED index_corrupted pg-1.5.5/ext/gvl_wrappers.c0000644000004100000410000000117714563476204015660 0ustar www-datawww-data/* * gvl_wrappers.c - Wrapper functions for locking/unlocking the Ruby GVL * */ #include "pg.h" #ifndef HAVE_PQENCRYPTPASSWORDCONN char *PQencryptPasswordConn(PGconn *conn, const char *passwd, const char *user, const char *algorithm){return NULL;} #endif #ifdef ENABLE_GVL_UNLOCK FOR_EACH_BLOCKING_FUNCTION( DEFINE_GVL_WRAPPER_STRUCT ); FOR_EACH_BLOCKING_FUNCTION( DEFINE_GVL_SKELETON ); #endif FOR_EACH_BLOCKING_FUNCTION( DEFINE_GVL_STUB ); #ifdef ENABLE_GVL_UNLOCK FOR_EACH_CALLBACK_FUNCTION( DEFINE_GVL_WRAPPER_STRUCT ); FOR_EACH_CALLBACK_FUNCTION( DEFINE_GVLCB_SKELETON ); #endif FOR_EACH_CALLBACK_FUNCTION( DEFINE_GVLCB_STUB ); pg-1.5.5/ext/pg_type_map_by_mri_type.c0000644000004100000410000001760014563476204020051 0ustar www-datawww-data/* * pg_type_map_by_mri_type.c - PG::TypeMapByMriType class extension * $Id$ * * This type map can be used to select value 
encoders based on the MRI-internal * value type code. * */ #include "pg.h" static VALUE rb_cTypeMapByMriType; #define FOR_EACH_MRI_TYPE(func) \ func(T_FIXNUM) \ func(T_TRUE) \ func(T_FALSE) \ func(T_FLOAT) \ func(T_BIGNUM) \ func(T_COMPLEX) \ func(T_RATIONAL) \ func(T_ARRAY) \ func(T_STRING) \ func(T_SYMBOL) \ func(T_OBJECT) \ func(T_CLASS) \ func(T_MODULE) \ func(T_REGEXP) \ func(T_HASH) \ func(T_STRUCT) \ func(T_FILE) \ func(T_DATA) #define DECLARE_CODER(type) \ t_pg_coder *coder_##type; \ VALUE ask_##type; \ VALUE coder_obj_##type; typedef struct { t_typemap typemap; struct pg_tmbmt_converter { FOR_EACH_MRI_TYPE( DECLARE_CODER ) } coders; } t_tmbmt; #define CASE_AND_GET(type) \ case type: \ p_coder = this->coders.coder_##type; \ ask_for_coder = this->coders.ask_##type; \ break; static t_pg_coder * pg_tmbmt_typecast_query_param( t_typemap *p_typemap, VALUE param_value, int field ) { t_tmbmt *this = (t_tmbmt *)p_typemap; t_pg_coder *p_coder; VALUE ask_for_coder; switch(TYPE(param_value)){ FOR_EACH_MRI_TYPE( CASE_AND_GET ) default: /* unknown MRI type */ p_coder = NULL; ask_for_coder = Qnil; } if( !NIL_P(ask_for_coder) ){ /* No static Coder object, but proc/method given to ask for the Coder to use. */ VALUE obj; obj = rb_funcall(ask_for_coder, rb_intern("call"), 1, param_value); /* Check argument type and store the coder pointer */ TypedData_Get_Struct(obj, t_pg_coder, &pg_coder_type, p_coder); } if( !p_coder ){ t_typemap *default_tm = RTYPEDDATA_DATA( this->typemap.default_typemap ); return default_tm->funcs.typecast_query_param( default_tm, param_value, field ); } return p_coder; } static VALUE pg_tmbmt_fit_to_query( VALUE self, VALUE params ) { t_tmbmt *this = (t_tmbmt *)RTYPEDDATA_DATA(self); /* Nothing to check at this typemap, but ensure that the default type map fits. 
*/ t_typemap *default_tm = RTYPEDDATA_DATA( this->typemap.default_typemap ); default_tm->funcs.fit_to_query( this->typemap.default_typemap, params ); return self; } #define GC_MARK_AS_USED(type) \ rb_gc_mark_movable( this->coders.ask_##type ); \ rb_gc_mark_movable( this->coders.coder_obj_##type ); static void pg_tmbmt_mark( void *_this ) { t_tmbmt *this = (t_tmbmt *)_this; pg_typemap_mark(&this->typemap); FOR_EACH_MRI_TYPE( GC_MARK_AS_USED ); } static size_t pg_tmbmt_memsize( const void *_this ) { const t_tmbmt *this = (const t_tmbmt *)_this; return sizeof(*this); } #define GC_COMPACT(type) \ pg_gc_location( this->coders.ask_##type ); \ pg_gc_location( this->coders.coder_obj_##type ); static void pg_tmbmt_compact( void *_this ) { t_tmbmt *this = (t_tmbmt *)_this; pg_typemap_compact(&this->typemap); FOR_EACH_MRI_TYPE( GC_COMPACT ); } static const rb_data_type_t pg_tmbmt_type = { "PG::TypeMapByMriType", { pg_tmbmt_mark, RUBY_TYPED_DEFAULT_FREE, pg_tmbmt_memsize, pg_compact_callback(pg_tmbmt_compact), }, &pg_typemap_type, 0, RUBY_TYPED_FREE_IMMEDIATELY, }; #define INIT_VARIABLES(type) \ this->coders.coder_##type = NULL; \ this->coders.ask_##type = Qnil; \ this->coders.coder_obj_##type = Qnil; static VALUE pg_tmbmt_s_allocate( VALUE klass ) { t_tmbmt *this; VALUE self; self = TypedData_Make_Struct( klass, t_tmbmt, &pg_tmbmt_type, this ); this->typemap.funcs.fit_to_result = pg_typemap_fit_to_result; this->typemap.funcs.fit_to_query = pg_tmbmt_fit_to_query; this->typemap.funcs.fit_to_copy_get = pg_typemap_fit_to_copy_get; this->typemap.funcs.typecast_result_value = pg_typemap_result_value; this->typemap.funcs.typecast_query_param = pg_tmbmt_typecast_query_param; this->typemap.funcs.typecast_copy_get = pg_typemap_typecast_copy_get; this->typemap.default_typemap = pg_typemap_all_strings; FOR_EACH_MRI_TYPE( INIT_VARIABLES ); return self; } #define COMPARE_AND_ASSIGN(type) \ else if(!strcmp(p_mri_type, #type)){ \ this->coders.coder_obj_##type = coder; \ if(NIL_P(coder)){ \ 
this->coders.coder_##type = NULL; \ this->coders.ask_##type = Qnil; \ }else if(rb_obj_is_kind_of(coder, rb_cPG_Coder)){ \ TypedData_Get_Struct(coder, t_pg_coder, &pg_coder_type, this->coders.coder_##type); \ this->coders.ask_##type = Qnil; \ }else if(RB_TYPE_P(coder, T_SYMBOL)){ \ this->coders.coder_##type = NULL; \ this->coders.ask_##type = rb_obj_method( self, coder ); \ }else{ \ this->coders.coder_##type = NULL; \ this->coders.ask_##type = coder; \ } \ } /* * call-seq: * typemap.[mri_type] = coder * * Assigns a new PG::Coder object to the type map. The encoder * is registered for type casts of the given +mri_type+ . * * +coder+ can be one of the following: * * +nil+ - Values are forwarded to the #default_type_map . * * a PG::Coder - Values are encoded by the given encoder * * a Symbol - The method of this type map (or a derivation) that is called for each value to be sent. * It must return a PG::Coder. * * a Proc - The Proc object is called for each value. It must return a PG::Coder. * * +mri_type+ must be one of the following strings: * * +T_FIXNUM+ * * +T_TRUE+ * * +T_FALSE+ * * +T_FLOAT+ * * +T_BIGNUM+ * * +T_COMPLEX+ * * +T_RATIONAL+ * * +T_ARRAY+ * * +T_STRING+ * * +T_SYMBOL+ * * +T_OBJECT+ * * +T_CLASS+ * * +T_MODULE+ * * +T_REGEXP+ * * +T_HASH+ * * +T_STRUCT+ * * +T_FILE+ * * +T_DATA+ */ static VALUE pg_tmbmt_aset( VALUE self, VALUE mri_type, VALUE coder ) { t_tmbmt *this = RTYPEDDATA_DATA( self ); char *p_mri_type; p_mri_type = StringValueCStr(mri_type); if(0){} FOR_EACH_MRI_TYPE( COMPARE_AND_ASSIGN ) else{ VALUE mri_type_inspect = rb_inspect( mri_type ); rb_raise(rb_eArgError, "unknown mri_type %s", StringValueCStr(mri_type_inspect)); } return self; } #define COMPARE_AND_GET(type) \ else if(!strcmp(p_mri_type, #type)){ \ coder = this->coders.coder_obj_##type; \ } /* * call-seq: * typemap.[mri_type] -> coder * * Returns the encoder object for the given +mri_type+ * * See #[]= for allowed +mri_type+ .
*/ static VALUE pg_tmbmt_aref( VALUE self, VALUE mri_type ) { VALUE coder; t_tmbmt *this = RTYPEDDATA_DATA( self ); char *p_mri_type; p_mri_type = StringValueCStr(mri_type); if(0){} FOR_EACH_MRI_TYPE( COMPARE_AND_GET ) else{ VALUE mri_type_inspect = rb_inspect( mri_type ); rb_raise(rb_eArgError, "unknown mri_type %s", StringValueCStr(mri_type_inspect)); } return coder; } #define ADD_TO_HASH(type) \ rb_hash_aset( hash_coders, rb_obj_freeze(rb_str_new2(#type)), this->coders.coder_obj_##type ); /* * call-seq: * typemap.coders -> Hash * * Returns all mri types and their assigned encoder object. */ static VALUE pg_tmbmt_coders( VALUE self ) { t_tmbmt *this = RTYPEDDATA_DATA( self ); VALUE hash_coders = rb_hash_new(); FOR_EACH_MRI_TYPE( ADD_TO_HASH ); return rb_obj_freeze(hash_coders); } void init_pg_type_map_by_mri_type(void) { /* * Document-class: PG::TypeMapByMriType < PG::TypeMap * * This type map casts values based on the Ruby object type code of the given value * to be sent. * * This type map is usable for type casting query bind parameters and COPY data * for PG::Connection#put_copy_data . Therefore only encoders might be assigned by * the #[]= method. * * _Note_ : This type map is not portable across Ruby implementations and is less flexible * than PG::TypeMapByClass. * It is kept only for performance comparisons, but PG::TypeMapByClass proved to be equally * fast in almost all cases. 
* */ rb_cTypeMapByMriType = rb_define_class_under( rb_mPG, "TypeMapByMriType", rb_cTypeMap ); rb_define_alloc_func( rb_cTypeMapByMriType, pg_tmbmt_s_allocate ); rb_define_method( rb_cTypeMapByMriType, "[]=", pg_tmbmt_aset, 2 ); rb_define_method( rb_cTypeMapByMriType, "[]", pg_tmbmt_aref, 1 ); rb_define_method( rb_cTypeMapByMriType, "coders", pg_tmbmt_coders, 0 ); rb_include_module( rb_cTypeMapByMriType, rb_mDefaultTypeMappable ); } pg-1.5.5/ext/pg.h0000644000004100000410000003236714563476204013565 0ustar www-datawww-data#ifndef __pg_h #define __pg_h #ifdef RUBY_EXTCONF_H # include RUBY_EXTCONF_H #endif /* System headers */ #include #include #include #if !defined(_WIN32) # include # include #endif #if defined(HAVE_UNISTD_H) && !defined(_WIN32) # include #endif /* HAVE_UNISTD_H */ /* Ruby headers */ #include "ruby.h" #include "ruby/st.h" #include "ruby/encoding.h" #define PG_ENCODING_SET_NOCHECK(obj,i) \ do { \ if ((i) < ENCODING_INLINE_MAX) \ ENCODING_SET_INLINED((obj), (i)); \ else \ rb_enc_set_index((obj), (i)); \ } while(0) #include "ruby/io.h" #ifndef timeradd #define timeradd(a, b, result) \ do { \ (result)->tv_sec = (a)->tv_sec + (b)->tv_sec; \ (result)->tv_usec = (a)->tv_usec + (b)->tv_usec; \ if ((result)->tv_usec >= 1000000L) { \ ++(result)->tv_sec; \ (result)->tv_usec -= 1000000L; \ } \ } while (0) #endif #ifndef timersub #define timersub(a, b, result) \ do { \ (result)->tv_sec = (a)->tv_sec - (b)->tv_sec; \ (result)->tv_usec = (a)->tv_usec - (b)->tv_usec; \ if ((result)->tv_usec < 0) { \ --(result)->tv_sec; \ (result)->tv_usec += 1000000L; \ } \ } while (0) #endif /* PostgreSQL headers */ #include "pg_config.h" #include "libpq-fe.h" #include "libpq/libpq-fs.h" /* large-object interface */ #include "pg_config_manual.h" #if defined(_WIN32) # include typedef long suseconds_t; #endif #if defined(HAVE_VARIABLE_LENGTH_ARRAYS) #define PG_VARIABLE_LENGTH_ARRAY(type, name, len, maxlen) type name[(len)]; #else #define PG_VARIABLE_LENGTH_ARRAY(type, name, len, 
maxlen) \ type name[(maxlen)] = {(len)>(maxlen) ? (rb_raise(rb_eArgError, "Number of " #name " (%d) exceeds allowed maximum of " #maxlen, (len) ), (type)1) : (type)0}; #define PG_MAX_COLUMNS 4000 #endif #ifdef HAVE_RB_GC_MARK_MOVABLE #define pg_compact_callback(x) (x) #define pg_gc_location(x) x = rb_gc_location(x) #else #define rb_gc_mark_movable(x) rb_gc_mark(x) #define pg_compact_callback(x) {(x)} #define pg_gc_location(x) UNUSED(x) #endif /* For compatibility with ruby < 3.0 */ #ifndef RUBY_TYPED_FROZEN_SHAREABLE #define PG_RUBY_TYPED_FROZEN_SHAREABLE 0 #else #define PG_RUBY_TYPED_FROZEN_SHAREABLE RUBY_TYPED_FROZEN_SHAREABLE #endif #define PG_ENC_IDX_BITS 28 /* The data behind each PG::Connection object */ typedef struct { PGconn *pgconn; /* Cached IO object for the socket descriptor */ VALUE socket_io; /* function pointers of the original libpq notice receivers */ PQnoticeReceiver default_notice_receiver; PQnoticeProcessor default_notice_processor; /* Proc object that receives notices as PG::Result objects */ VALUE notice_receiver; /* Proc object that receives notices as String objects */ VALUE notice_processor; /* Kind of PG::TypeMap object for casting query params */ VALUE type_map_for_queries; /* Kind of PG::TypeMap object for casting result values */ VALUE type_map_for_results; /* IO object internally used for the trace stream */ VALUE trace_stream; /* Kind of PG::Coder object for casting ruby values to COPY rows */ VALUE encoder_for_put_copy_data; /* Kind of PG::Coder object for casting COPY rows to ruby values */ VALUE decoder_for_get_copy_data; /* Ruby encoding index of the client/internal encoding */ int enc_idx : PG_ENC_IDX_BITS; /* flags controlling Symbol/String field names */ unsigned int flags : 2; /* enable automatic flushing of send data at the end of send_query calls */ unsigned int flush_data : 1; #if defined(_WIN32) /* File descriptor to be used for rb_w32_unwrap_io_handle() */ int ruby_sd; #endif } t_pg_connection; typedef struct pg_coder 
t_pg_coder; typedef struct pg_typemap t_typemap; /* The data behind each PG::Result object */ typedef struct { PGresult *pgresult; /* The connection object used to build this result */ VALUE connection; /* The TypeMap used to type cast result values */ VALUE typemap; /* Pointer to the typemap object data. This is assumed to be * always valid. */ t_typemap *p_typemap; /* Ruby encoding index of the client/internal encoding */ int enc_idx : PG_ENC_IDX_BITS; /* 0 = PGresult is cleared by PG::Result#clear or by the GC * 1 = PGresult is cleared internally by libpq */ unsigned int autoclear : 1; /* flags controlling Symbol/String field names */ unsigned int flags : 2; /* Number of fields in fnames[] . * Set to -1 if fnames[] is not yet initialized. */ int nfields; /* Size of PGresult as published to ruby memory management. */ ssize_t result_size; /* Prefilled tuple Hash with fnames[] as keys. */ VALUE tuple_hash; /* Hash with fnames[] to field number mapping. */ VALUE field_map; /* List of field names as frozen String or Symbol objects. 
* Only valid if nfields != -1 */ VALUE fnames[0]; } t_pg_result; typedef int (* t_pg_coder_enc_func)(t_pg_coder *, VALUE, char *, VALUE *, int); typedef VALUE (* t_pg_coder_dec_func)(t_pg_coder *, const char *, int, int, int, int); typedef VALUE (* t_pg_fit_to_result)(VALUE, VALUE); typedef VALUE (* t_pg_fit_to_query)(VALUE, VALUE); typedef int (* t_pg_fit_to_copy_get)(VALUE); typedef VALUE (* t_pg_typecast_result)(t_typemap *, VALUE, int, int); typedef t_pg_coder *(* t_pg_typecast_query_param)(t_typemap *, VALUE, int); typedef VALUE (* t_pg_typecast_copy_get)( t_typemap *, VALUE, int, int, int ); #define PG_RESULT_FIELD_NAMES_MASK 0x03 #define PG_RESULT_FIELD_NAMES_SYMBOL 0x01 #define PG_RESULT_FIELD_NAMES_STATIC_SYMBOL 0x02 #define PG_CODER_TIMESTAMP_DB_UTC 0x0 #define PG_CODER_TIMESTAMP_DB_LOCAL 0x1 #define PG_CODER_TIMESTAMP_APP_UTC 0x0 #define PG_CODER_TIMESTAMP_APP_LOCAL 0x2 #define PG_CODER_FORMAT_ERROR_MASK 0xc #define PG_CODER_FORMAT_ERROR_TO_RAISE 0x4 #define PG_CODER_FORMAT_ERROR_TO_STRING 0x8 #define PG_CODER_FORMAT_ERROR_TO_PARTIAL 0xc struct pg_coder { t_pg_coder_enc_func enc_func; t_pg_coder_dec_func dec_func; VALUE coder_obj; Oid oid; int format; /* OR-ed values out of PG_CODER_* */ int flags; }; typedef struct { t_pg_coder comp; t_pg_coder *elem; int needs_quotation; char delimiter; } t_pg_composite_coder; struct pg_typemap { struct pg_typemap_funcs { t_pg_fit_to_result fit_to_result; t_pg_fit_to_query fit_to_query; t_pg_fit_to_copy_get fit_to_copy_get; t_pg_typecast_result typecast_result_value; t_pg_typecast_query_param typecast_query_param; t_pg_typecast_copy_get typecast_copy_get; } funcs; VALUE default_typemap; }; typedef struct { t_typemap typemap; int nfields; struct pg_tmbc_converter { t_pg_coder *cconv; } convs[0]; } t_tmbc; extern const rb_data_type_t pg_typemap_type; extern const rb_data_type_t pg_coder_type; #include "gvl_wrappers.h" /*************************************************************************** * Globals 
**************************************************************************/ extern int pg_skip_deprecation_warning; extern VALUE rb_mPG; extern VALUE rb_ePGerror; extern VALUE rb_eServerError; extern VALUE rb_eUnableToSend; extern VALUE rb_eConnectionBad; extern VALUE rb_eInvalidResultStatus; extern VALUE rb_eNoResultError; extern VALUE rb_eInvalidChangeOfResultFields; extern VALUE rb_mPGconstants; extern VALUE rb_cPGconn; extern VALUE rb_cPGresult; extern VALUE rb_hErrors; extern VALUE rb_cTypeMap; extern VALUE rb_cTypeMapAllStrings; extern VALUE rb_mDefaultTypeMappable; extern VALUE rb_cPG_Coder; extern VALUE rb_cPG_SimpleEncoder; extern VALUE rb_cPG_SimpleDecoder; extern VALUE rb_cPG_CompositeEncoder; extern VALUE rb_cPG_CompositeDecoder; extern VALUE rb_cPG_CopyCoder; extern VALUE rb_cPG_CopyEncoder; extern VALUE rb_cPG_CopyDecoder; extern VALUE rb_mPG_TextEncoder; extern VALUE rb_mPG_TextDecoder; extern VALUE rb_mPG_BinaryEncoder; extern VALUE rb_mPG_BinaryDecoder; extern VALUE rb_mPG_BinaryFormatting; extern const struct pg_typemap_funcs pg_tmbc_funcs; extern const struct pg_typemap_funcs pg_typemap_funcs; extern VALUE pg_typemap_all_strings; /*************************************************************************** * MACROS **************************************************************************/ #define UNUSED(x) ((void)(x)) #define SINGLETON_ALIAS(klass,new,old) rb_define_alias(rb_singleton_class((klass)),(new),(old)) /*************************************************************************** * PROTOTYPES **************************************************************************/ void Init_pg_ext _(( void )); void init_pg_connection _(( void )); void init_pg_result _(( void )); void init_pg_errors _(( void )); void init_pg_type_map _(( void )); void init_pg_type_map_all_strings _(( void )); void init_pg_type_map_by_class _(( void )); void init_pg_type_map_by_column _(( void )); void init_pg_type_map_by_mri_type _(( void )); void 
init_pg_type_map_by_oid _(( void )); void init_pg_type_map_in_ruby _(( void )); void init_pg_coder _(( void )); void init_pg_copycoder _(( void )); void init_pg_recordcoder _(( void )); void init_pg_text_encoder _(( void )); void init_pg_text_decoder _(( void )); void init_pg_binary_encoder _(( void )); void init_pg_binary_decoder _(( void )); void init_pg_tuple _(( void )); VALUE lookup_error_class _(( const char * )); VALUE pg_bin_dec_bytea _(( t_pg_coder*, const char *, int, int, int, int )); VALUE pg_text_dec_string _(( t_pg_coder*, const char *, int, int, int, int )); int pg_coder_enc_to_s _(( t_pg_coder*, VALUE, char *, VALUE *, int)); int pg_text_enc_identifier _(( t_pg_coder*, VALUE, char *, VALUE *, int)); t_pg_coder_enc_func pg_coder_enc_func _(( t_pg_coder* )); t_pg_coder_dec_func pg_coder_dec_func _(( t_pg_coder*, int )); VALUE pg_define_coder _(( const char *, void *, VALUE, VALUE )); VALUE pg_obj_to_i _(( VALUE )); VALUE pg_tmbc_allocate _(( void )); void pg_coder_init_encoder _(( VALUE )); void pg_coder_init_decoder _(( VALUE )); void pg_coder_compact _(( void * )); char *pg_rb_str_ensure_capa _(( VALUE, long, char *, char ** )); #define PG_RB_STR_ENSURE_CAPA( str, expand_len, curr_ptr, end_ptr ) \ do { \ if( (curr_ptr) + (expand_len) >= (end_ptr) ) \ (curr_ptr) = pg_rb_str_ensure_capa( (str), (expand_len), (curr_ptr), &(end_ptr) ); \ } while(0); #define PG_RB_STR_NEW( str, curr_ptr, end_ptr ) ( \ (str) = rb_str_new( NULL, 0 ), \ (curr_ptr) = (end_ptr) = RSTRING_PTR(str) \ ) VALUE pg_typemap_fit_to_result _(( VALUE, VALUE )); VALUE pg_typemap_fit_to_query _(( VALUE, VALUE )); int pg_typemap_fit_to_copy_get _(( VALUE )); VALUE pg_typemap_result_value _(( t_typemap *, VALUE, int, int )); t_pg_coder *pg_typemap_typecast_query_param _(( t_typemap *, VALUE, int )); VALUE pg_typemap_typecast_copy_get _(( t_typemap *, VALUE, int, int, int )); void pg_typemap_mark _(( void * )); size_t pg_typemap_memsize _(( const void * )); void pg_typemap_compact _(( void 
* )); PGconn *pg_get_pgconn _(( VALUE )); t_pg_connection *pg_get_connection _(( VALUE )); VALUE pgconn_block _(( int, VALUE *, VALUE )); VALUE pg_new_result _(( PGresult *, VALUE )); VALUE pg_new_result_autoclear _(( PGresult *, VALUE )); PGresult* pgresult_get _(( VALUE )); VALUE pg_result_check _(( VALUE )); VALUE pg_result_clear _(( VALUE )); VALUE pg_tuple_new _(( VALUE, int )); /* * Fetch the data pointer for the result object */ static inline t_pg_result * pgresult_get_this( VALUE self ) { return RTYPEDDATA_DATA(self); } rb_encoding * pg_get_pg_encname_as_rb_encoding _(( const char * )); const char * pg_get_rb_encoding_as_pg_encoding _(( rb_encoding * )); rb_encoding *pg_conn_enc_get _(( PGconn * )); void notice_receiver_proxy(void *arg, const PGresult *result); void notice_processor_proxy(void *arg, const char *message); /* reports if `-W' specified and PG_SKIP_DEPRECATION_WARNING environment variable isn't set * * message_id identifies the warning, so that it's reported only once. */ #define pg_deprecated(message_id, format_args) \ do { \ if( !(pg_skip_deprecation_warning & (1 << message_id)) ){ \ pg_skip_deprecation_warning |= 1 << message_id; \ rb_warning format_args; \ } \ } while(0); #endif /* end __pg_h */ pg-1.5.5/ext/errorcodes.def0000644000004100000410000007344014563476204015632 0ustar www-datawww-data/* * ext/errorcodes.def - Definition of error classes. * * WARNING: This file is autogenerated. Please edit ext/errorcodes.rb ! 
* */ { VALUE klass = define_error_class( "SqlStatementNotYetComplete", NULL ); register_error_class( "03000", klass ); register_error_class( "03", klass ); } { VALUE klass = define_error_class( "ConnectionException", NULL ); register_error_class( "08000", klass ); register_error_class( "08", klass ); } { VALUE klass = define_error_class( "ConnectionDoesNotExist", "08" ); register_error_class( "08003", klass ); } { VALUE klass = define_error_class( "ConnectionFailure", "08" ); register_error_class( "08006", klass ); } { VALUE klass = define_error_class( "SqlclientUnableToEstablishSqlconnection", "08" ); register_error_class( "08001", klass ); } { VALUE klass = define_error_class( "SqlserverRejectedEstablishmentOfSqlconnection", "08" ); register_error_class( "08004", klass ); } { VALUE klass = define_error_class( "TransactionResolutionUnknown", "08" ); register_error_class( "08007", klass ); } { VALUE klass = define_error_class( "ProtocolViolation", "08" ); register_error_class( "08P01", klass ); } { VALUE klass = define_error_class( "TriggeredActionException", NULL ); register_error_class( "09000", klass ); register_error_class( "09", klass ); } { VALUE klass = define_error_class( "FeatureNotSupported", NULL ); register_error_class( "0A000", klass ); register_error_class( "0A", klass ); } { VALUE klass = define_error_class( "InvalidTransactionInitiation", NULL ); register_error_class( "0B000", klass ); register_error_class( "0B", klass ); } { VALUE klass = define_error_class( "LocatorException", NULL ); register_error_class( "0F000", klass ); register_error_class( "0F", klass ); } { VALUE klass = define_error_class( "LEInvalidSpecification", "0F" ); register_error_class( "0F001", klass ); } { VALUE klass = define_error_class( "InvalidGrantor", NULL ); register_error_class( "0L000", klass ); register_error_class( "0L", klass ); } { VALUE klass = define_error_class( "InvalidGrantOperation", "0L" ); register_error_class( "0LP01", klass ); } { VALUE klass = 
define_error_class( "InvalidRoleSpecification", NULL ); register_error_class( "0P000", klass ); register_error_class( "0P", klass ); } { VALUE klass = define_error_class( "DiagnosticsException", NULL ); register_error_class( "0Z000", klass ); register_error_class( "0Z", klass ); } { VALUE klass = define_error_class( "StackedDiagnosticsAccessedWithoutActiveHandler", "0Z" ); register_error_class( "0Z002", klass ); } { VALUE klass = define_error_class( "CaseNotFound", NULL ); register_error_class( "20000", klass ); register_error_class( "20", klass ); } { VALUE klass = define_error_class( "CardinalityViolation", NULL ); register_error_class( "21000", klass ); register_error_class( "21", klass ); } { VALUE klass = define_error_class( "DataException", NULL ); register_error_class( "22000", klass ); register_error_class( "22", klass ); } { VALUE klass = define_error_class( "ArraySubscriptError", "22" ); register_error_class( "2202E", klass ); } { VALUE klass = define_error_class( "CharacterNotInRepertoire", "22" ); register_error_class( "22021", klass ); } { VALUE klass = define_error_class( "DatetimeFieldOverflow", "22" ); register_error_class( "22008", klass ); } { VALUE klass = define_error_class( "DivisionByZero", "22" ); register_error_class( "22012", klass ); } { VALUE klass = define_error_class( "ErrorInAssignment", "22" ); register_error_class( "22005", klass ); } { VALUE klass = define_error_class( "EscapeCharacterConflict", "22" ); register_error_class( "2200B", klass ); } { VALUE klass = define_error_class( "IndicatorOverflow", "22" ); register_error_class( "22022", klass ); } { VALUE klass = define_error_class( "IntervalFieldOverflow", "22" ); register_error_class( "22015", klass ); } { VALUE klass = define_error_class( "InvalidArgumentForLog", "22" ); register_error_class( "2201E", klass ); } { VALUE klass = define_error_class( "InvalidArgumentForNtile", "22" ); register_error_class( "22014", klass ); } { VALUE klass = define_error_class( 
"InvalidArgumentForNthValue", "22" ); register_error_class( "22016", klass ); } { VALUE klass = define_error_class( "InvalidArgumentForPowerFunction", "22" ); register_error_class( "2201F", klass ); } { VALUE klass = define_error_class( "InvalidArgumentForWidthBucketFunction", "22" ); register_error_class( "2201G", klass ); } { VALUE klass = define_error_class( "InvalidCharacterValueForCast", "22" ); register_error_class( "22018", klass ); } { VALUE klass = define_error_class( "InvalidDatetimeFormat", "22" ); register_error_class( "22007", klass ); } { VALUE klass = define_error_class( "InvalidEscapeCharacter", "22" ); register_error_class( "22019", klass ); } { VALUE klass = define_error_class( "InvalidEscapeOctet", "22" ); register_error_class( "2200D", klass ); } { VALUE klass = define_error_class( "InvalidEscapeSequence", "22" ); register_error_class( "22025", klass ); } { VALUE klass = define_error_class( "NonstandardUseOfEscapeCharacter", "22" ); register_error_class( "22P06", klass ); } { VALUE klass = define_error_class( "InvalidIndicatorParameterValue", "22" ); register_error_class( "22010", klass ); } { VALUE klass = define_error_class( "InvalidParameterValue", "22" ); register_error_class( "22023", klass ); } { VALUE klass = define_error_class( "InvalidPrecedingOrFollowingSize", "22" ); register_error_class( "22013", klass ); } { VALUE klass = define_error_class( "InvalidRegularExpression", "22" ); register_error_class( "2201B", klass ); } { VALUE klass = define_error_class( "InvalidRowCountInLimitClause", "22" ); register_error_class( "2201W", klass ); } { VALUE klass = define_error_class( "InvalidRowCountInResultOffsetClause", "22" ); register_error_class( "2201X", klass ); } { VALUE klass = define_error_class( "InvalidTablesampleArgument", "22" ); register_error_class( "2202H", klass ); } { VALUE klass = define_error_class( "InvalidTablesampleRepeat", "22" ); register_error_class( "2202G", klass ); } { VALUE klass = define_error_class( 
"InvalidTimeZoneDisplacementValue", "22" ); register_error_class( "22009", klass ); } { VALUE klass = define_error_class( "InvalidUseOfEscapeCharacter", "22" ); register_error_class( "2200C", klass ); } { VALUE klass = define_error_class( "MostSpecificTypeMismatch", "22" ); register_error_class( "2200G", klass ); } { VALUE klass = define_error_class( "NullValueNotAllowed", "22" ); register_error_class( "22004", klass ); } { VALUE klass = define_error_class( "NullValueNoIndicatorParameter", "22" ); register_error_class( "22002", klass ); } { VALUE klass = define_error_class( "NumericValueOutOfRange", "22" ); register_error_class( "22003", klass ); } { VALUE klass = define_error_class( "SequenceGeneratorLimitExceeded", "22" ); register_error_class( "2200H", klass ); } { VALUE klass = define_error_class( "StringDataLengthMismatch", "22" ); register_error_class( "22026", klass ); } { VALUE klass = define_error_class( "StringDataRightTruncation", "22" ); register_error_class( "22001", klass ); } { VALUE klass = define_error_class( "SubstringError", "22" ); register_error_class( "22011", klass ); } { VALUE klass = define_error_class( "TrimError", "22" ); register_error_class( "22027", klass ); } { VALUE klass = define_error_class( "UnterminatedCString", "22" ); register_error_class( "22024", klass ); } { VALUE klass = define_error_class( "ZeroLengthCharacterString", "22" ); register_error_class( "2200F", klass ); } { VALUE klass = define_error_class( "FloatingPointException", "22" ); register_error_class( "22P01", klass ); } { VALUE klass = define_error_class( "InvalidTextRepresentation", "22" ); register_error_class( "22P02", klass ); } { VALUE klass = define_error_class( "InvalidBinaryRepresentation", "22" ); register_error_class( "22P03", klass ); } { VALUE klass = define_error_class( "BadCopyFileFormat", "22" ); register_error_class( "22P04", klass ); } { VALUE klass = define_error_class( "UntranslatableCharacter", "22" ); register_error_class( "22P05", klass ); } { 
VALUE klass = define_error_class( "NotAnXmlDocument", "22" ); register_error_class( "2200L", klass ); } { VALUE klass = define_error_class( "InvalidXmlDocument", "22" ); register_error_class( "2200M", klass ); } { VALUE klass = define_error_class( "InvalidXmlContent", "22" ); register_error_class( "2200N", klass ); } { VALUE klass = define_error_class( "InvalidXmlComment", "22" ); register_error_class( "2200S", klass ); } { VALUE klass = define_error_class( "InvalidXmlProcessingInstruction", "22" ); register_error_class( "2200T", klass ); } { VALUE klass = define_error_class( "DuplicateJsonObjectKeyValue", "22" ); register_error_class( "22030", klass ); } { VALUE klass = define_error_class( "InvalidArgumentForSqlJsonDatetimeFunction", "22" ); register_error_class( "22031", klass ); } { VALUE klass = define_error_class( "InvalidJsonText", "22" ); register_error_class( "22032", klass ); } { VALUE klass = define_error_class( "InvalidSqlJsonSubscript", "22" ); register_error_class( "22033", klass ); } { VALUE klass = define_error_class( "MoreThanOneSqlJsonItem", "22" ); register_error_class( "22034", klass ); } { VALUE klass = define_error_class( "NoSqlJsonItem", "22" ); register_error_class( "22035", klass ); } { VALUE klass = define_error_class( "NonNumericSqlJsonItem", "22" ); register_error_class( "22036", klass ); } { VALUE klass = define_error_class( "NonUniqueKeysInAJsonObject", "22" ); register_error_class( "22037", klass ); } { VALUE klass = define_error_class( "SingletonSqlJsonItemRequired", "22" ); register_error_class( "22038", klass ); } { VALUE klass = define_error_class( "SqlJsonArrayNotFound", "22" ); register_error_class( "22039", klass ); } { VALUE klass = define_error_class( "SqlJsonMemberNotFound", "22" ); register_error_class( "2203A", klass ); } { VALUE klass = define_error_class( "SqlJsonNumberNotFound", "22" ); register_error_class( "2203B", klass ); } { VALUE klass = define_error_class( "SqlJsonObjectNotFound", "22" ); register_error_class( 
"2203C", klass ); } { VALUE klass = define_error_class( "TooManyJsonArrayElements", "22" ); register_error_class( "2203D", klass ); } { VALUE klass = define_error_class( "TooManyJsonObjectMembers", "22" ); register_error_class( "2203E", klass ); } { VALUE klass = define_error_class( "SqlJsonScalarRequired", "22" ); register_error_class( "2203F", klass ); } { VALUE klass = define_error_class( "SqlJsonItemCannotBeCastToTargetType", "22" ); register_error_class( "2203G", klass ); } { VALUE klass = define_error_class( "IntegrityConstraintViolation", NULL ); register_error_class( "23000", klass ); register_error_class( "23", klass ); } { VALUE klass = define_error_class( "RestrictViolation", "23" ); register_error_class( "23001", klass ); } { VALUE klass = define_error_class( "NotNullViolation", "23" ); register_error_class( "23502", klass ); } { VALUE klass = define_error_class( "ForeignKeyViolation", "23" ); register_error_class( "23503", klass ); } { VALUE klass = define_error_class( "UniqueViolation", "23" ); register_error_class( "23505", klass ); } { VALUE klass = define_error_class( "CheckViolation", "23" ); register_error_class( "23514", klass ); } { VALUE klass = define_error_class( "ExclusionViolation", "23" ); register_error_class( "23P01", klass ); } { VALUE klass = define_error_class( "InvalidCursorState", NULL ); register_error_class( "24000", klass ); register_error_class( "24", klass ); } { VALUE klass = define_error_class( "InvalidTransactionState", NULL ); register_error_class( "25000", klass ); register_error_class( "25", klass ); } { VALUE klass = define_error_class( "ActiveSqlTransaction", "25" ); register_error_class( "25001", klass ); } { VALUE klass = define_error_class( "BranchTransactionAlreadyActive", "25" ); register_error_class( "25002", klass ); } { VALUE klass = define_error_class( "HeldCursorRequiresSameIsolationLevel", "25" ); register_error_class( "25008", klass ); } { VALUE klass = define_error_class( 
"InappropriateAccessModeForBranchTransaction", "25" ); register_error_class( "25003", klass ); } { VALUE klass = define_error_class( "InappropriateIsolationLevelForBranchTransaction", "25" ); register_error_class( "25004", klass ); } { VALUE klass = define_error_class( "NoActiveSqlTransactionForBranchTransaction", "25" ); register_error_class( "25005", klass ); } { VALUE klass = define_error_class( "ReadOnlySqlTransaction", "25" ); register_error_class( "25006", klass ); } { VALUE klass = define_error_class( "SchemaAndDataStatementMixingNotSupported", "25" ); register_error_class( "25007", klass ); } { VALUE klass = define_error_class( "NoActiveSqlTransaction", "25" ); register_error_class( "25P01", klass ); } { VALUE klass = define_error_class( "InFailedSqlTransaction", "25" ); register_error_class( "25P02", klass ); } { VALUE klass = define_error_class( "IdleInTransactionSessionTimeout", "25" ); register_error_class( "25P03", klass ); } { VALUE klass = define_error_class( "InvalidSqlStatementName", NULL ); register_error_class( "26000", klass ); register_error_class( "26", klass ); } { VALUE klass = define_error_class( "TriggeredDataChangeViolation", NULL ); register_error_class( "27000", klass ); register_error_class( "27", klass ); } { VALUE klass = define_error_class( "InvalidAuthorizationSpecification", NULL ); register_error_class( "28000", klass ); register_error_class( "28", klass ); } { VALUE klass = define_error_class( "InvalidPassword", "28" ); register_error_class( "28P01", klass ); } { VALUE klass = define_error_class( "DependentPrivilegeDescriptorsStillExist", NULL ); register_error_class( "2B000", klass ); register_error_class( "2B", klass ); } { VALUE klass = define_error_class( "DependentObjectsStillExist", "2B" ); register_error_class( "2BP01", klass ); } { VALUE klass = define_error_class( "InvalidTransactionTermination", NULL ); register_error_class( "2D000", klass ); register_error_class( "2D", klass ); } { VALUE klass = define_error_class( 
"SqlRoutineException", NULL ); register_error_class( "2F000", klass ); register_error_class( "2F", klass ); } { VALUE klass = define_error_class( "SREFunctionExecutedNoReturnStatement", "2F" ); register_error_class( "2F005", klass ); } { VALUE klass = define_error_class( "SREModifyingSqlDataNotPermitted", "2F" ); register_error_class( "2F002", klass ); } { VALUE klass = define_error_class( "SREProhibitedSqlStatementAttempted", "2F" ); register_error_class( "2F003", klass ); } { VALUE klass = define_error_class( "SREReadingSqlDataNotPermitted", "2F" ); register_error_class( "2F004", klass ); } { VALUE klass = define_error_class( "InvalidCursorName", NULL ); register_error_class( "34000", klass ); register_error_class( "34", klass ); } { VALUE klass = define_error_class( "ExternalRoutineException", NULL ); register_error_class( "38000", klass ); register_error_class( "38", klass ); } { VALUE klass = define_error_class( "EREContainingSqlNotPermitted", "38" ); register_error_class( "38001", klass ); } { VALUE klass = define_error_class( "EREModifyingSqlDataNotPermitted", "38" ); register_error_class( "38002", klass ); } { VALUE klass = define_error_class( "EREProhibitedSqlStatementAttempted", "38" ); register_error_class( "38003", klass ); } { VALUE klass = define_error_class( "EREReadingSqlDataNotPermitted", "38" ); register_error_class( "38004", klass ); } { VALUE klass = define_error_class( "ExternalRoutineInvocationException", NULL ); register_error_class( "39000", klass ); register_error_class( "39", klass ); } { VALUE klass = define_error_class( "ERIEInvalidSqlstateReturned", "39" ); register_error_class( "39001", klass ); } { VALUE klass = define_error_class( "ERIENullValueNotAllowed", "39" ); register_error_class( "39004", klass ); } { VALUE klass = define_error_class( "ERIETriggerProtocolViolated", "39" ); register_error_class( "39P01", klass ); } { VALUE klass = define_error_class( "ERIESrfProtocolViolated", "39" ); register_error_class( "39P02", klass ); } { 
VALUE klass = define_error_class( "ERIEEventTriggerProtocolViolated", "39" ); register_error_class( "39P03", klass ); } { VALUE klass = define_error_class( "SavepointException", NULL ); register_error_class( "3B000", klass ); register_error_class( "3B", klass ); } { VALUE klass = define_error_class( "SEInvalidSpecification", "3B" ); register_error_class( "3B001", klass ); } { VALUE klass = define_error_class( "InvalidCatalogName", NULL ); register_error_class( "3D000", klass ); register_error_class( "3D", klass ); } { VALUE klass = define_error_class( "InvalidSchemaName", NULL ); register_error_class( "3F000", klass ); register_error_class( "3F", klass ); } { VALUE klass = define_error_class( "TransactionRollback", NULL ); register_error_class( "40000", klass ); register_error_class( "40", klass ); } { VALUE klass = define_error_class( "TRIntegrityConstraintViolation", "40" ); register_error_class( "40002", klass ); } { VALUE klass = define_error_class( "TRSerializationFailure", "40" ); register_error_class( "40001", klass ); } { VALUE klass = define_error_class( "TRStatementCompletionUnknown", "40" ); register_error_class( "40003", klass ); } { VALUE klass = define_error_class( "TRDeadlockDetected", "40" ); register_error_class( "40P01", klass ); } { VALUE klass = define_error_class( "SyntaxErrorOrAccessRuleViolation", NULL ); register_error_class( "42000", klass ); register_error_class( "42", klass ); } { VALUE klass = define_error_class( "SyntaxError", "42" ); register_error_class( "42601", klass ); } { VALUE klass = define_error_class( "InsufficientPrivilege", "42" ); register_error_class( "42501", klass ); } { VALUE klass = define_error_class( "CannotCoerce", "42" ); register_error_class( "42846", klass ); } { VALUE klass = define_error_class( "GroupingError", "42" ); register_error_class( "42803", klass ); } { VALUE klass = define_error_class( "WindowingError", "42" ); register_error_class( "42P20", klass ); } { VALUE klass = define_error_class( 
"InvalidRecursion", "42" ); register_error_class( "42P19", klass ); } { VALUE klass = define_error_class( "InvalidForeignKey", "42" ); register_error_class( "42830", klass ); } { VALUE klass = define_error_class( "InvalidName", "42" ); register_error_class( "42602", klass ); } { VALUE klass = define_error_class( "NameTooLong", "42" ); register_error_class( "42622", klass ); } { VALUE klass = define_error_class( "ReservedName", "42" ); register_error_class( "42939", klass ); } { VALUE klass = define_error_class( "DatatypeMismatch", "42" ); register_error_class( "42804", klass ); } { VALUE klass = define_error_class( "IndeterminateDatatype", "42" ); register_error_class( "42P18", klass ); } { VALUE klass = define_error_class( "CollationMismatch", "42" ); register_error_class( "42P21", klass ); } { VALUE klass = define_error_class( "IndeterminateCollation", "42" ); register_error_class( "42P22", klass ); } { VALUE klass = define_error_class( "WrongObjectType", "42" ); register_error_class( "42809", klass ); } { VALUE klass = define_error_class( "GeneratedAlways", "42" ); register_error_class( "428C9", klass ); } { VALUE klass = define_error_class( "UndefinedColumn", "42" ); register_error_class( "42703", klass ); } { VALUE klass = define_error_class( "UndefinedFunction", "42" ); register_error_class( "42883", klass ); } { VALUE klass = define_error_class( "UndefinedTable", "42" ); register_error_class( "42P01", klass ); } { VALUE klass = define_error_class( "UndefinedParameter", "42" ); register_error_class( "42P02", klass ); } { VALUE klass = define_error_class( "UndefinedObject", "42" ); register_error_class( "42704", klass ); } { VALUE klass = define_error_class( "DuplicateColumn", "42" ); register_error_class( "42701", klass ); } { VALUE klass = define_error_class( "DuplicateCursor", "42" ); register_error_class( "42P03", klass ); } { VALUE klass = define_error_class( "DuplicateDatabase", "42" ); register_error_class( "42P04", klass ); } { VALUE klass = 
define_error_class( "DuplicateFunction", "42" ); register_error_class( "42723", klass ); } { VALUE klass = define_error_class( "DuplicatePstatement", "42" ); register_error_class( "42P05", klass ); } { VALUE klass = define_error_class( "DuplicateSchema", "42" ); register_error_class( "42P06", klass ); } { VALUE klass = define_error_class( "DuplicateTable", "42" ); register_error_class( "42P07", klass ); } { VALUE klass = define_error_class( "DuplicateAlias", "42" ); register_error_class( "42712", klass ); } { VALUE klass = define_error_class( "DuplicateObject", "42" ); register_error_class( "42710", klass ); } { VALUE klass = define_error_class( "AmbiguousColumn", "42" ); register_error_class( "42702", klass ); } { VALUE klass = define_error_class( "AmbiguousFunction", "42" ); register_error_class( "42725", klass ); } { VALUE klass = define_error_class( "AmbiguousParameter", "42" ); register_error_class( "42P08", klass ); } { VALUE klass = define_error_class( "AmbiguousAlias", "42" ); register_error_class( "42P09", klass ); } { VALUE klass = define_error_class( "InvalidColumnReference", "42" ); register_error_class( "42P10", klass ); } { VALUE klass = define_error_class( "InvalidColumnDefinition", "42" ); register_error_class( "42611", klass ); } { VALUE klass = define_error_class( "InvalidCursorDefinition", "42" ); register_error_class( "42P11", klass ); } { VALUE klass = define_error_class( "InvalidDatabaseDefinition", "42" ); register_error_class( "42P12", klass ); } { VALUE klass = define_error_class( "InvalidFunctionDefinition", "42" ); register_error_class( "42P13", klass ); } { VALUE klass = define_error_class( "InvalidPstatementDefinition", "42" ); register_error_class( "42P14", klass ); } { VALUE klass = define_error_class( "InvalidSchemaDefinition", "42" ); register_error_class( "42P15", klass ); } { VALUE klass = define_error_class( "InvalidTableDefinition", "42" ); register_error_class( "42P16", klass ); } { VALUE klass = define_error_class( 
"InvalidObjectDefinition", "42" ); register_error_class( "42P17", klass ); } { VALUE klass = define_error_class( "WithCheckOptionViolation", NULL ); register_error_class( "44000", klass ); register_error_class( "44", klass ); } { VALUE klass = define_error_class( "InsufficientResources", NULL ); register_error_class( "53000", klass ); register_error_class( "53", klass ); } { VALUE klass = define_error_class( "DiskFull", "53" ); register_error_class( "53100", klass ); } { VALUE klass = define_error_class( "OutOfMemory", "53" ); register_error_class( "53200", klass ); } { VALUE klass = define_error_class( "TooManyConnections", "53" ); register_error_class( "53300", klass ); } { VALUE klass = define_error_class( "ConfigurationLimitExceeded", "53" ); register_error_class( "53400", klass ); } { VALUE klass = define_error_class( "ProgramLimitExceeded", NULL ); register_error_class( "54000", klass ); register_error_class( "54", klass ); } { VALUE klass = define_error_class( "StatementTooComplex", "54" ); register_error_class( "54001", klass ); } { VALUE klass = define_error_class( "TooManyColumns", "54" ); register_error_class( "54011", klass ); } { VALUE klass = define_error_class( "TooManyArguments", "54" ); register_error_class( "54023", klass ); } { VALUE klass = define_error_class( "ObjectNotInPrerequisiteState", NULL ); register_error_class( "55000", klass ); register_error_class( "55", klass ); } { VALUE klass = define_error_class( "ObjectInUse", "55" ); register_error_class( "55006", klass ); } { VALUE klass = define_error_class( "CantChangeRuntimeParam", "55" ); register_error_class( "55P02", klass ); } { VALUE klass = define_error_class( "LockNotAvailable", "55" ); register_error_class( "55P03", klass ); } { VALUE klass = define_error_class( "UnsafeNewEnumValueUsage", "55" ); register_error_class( "55P04", klass ); } { VALUE klass = define_error_class( "OperatorIntervention", NULL ); register_error_class( "57000", klass ); register_error_class( "57", klass ); } 
{ VALUE klass = define_error_class( "QueryCanceled", "57" ); register_error_class( "57014", klass ); } { VALUE klass = define_error_class( "AdminShutdown", "57" ); register_error_class( "57P01", klass ); } { VALUE klass = define_error_class( "CrashShutdown", "57" ); register_error_class( "57P02", klass ); } { VALUE klass = define_error_class( "CannotConnectNow", "57" ); register_error_class( "57P03", klass ); } { VALUE klass = define_error_class( "DatabaseDropped", "57" ); register_error_class( "57P04", klass ); } { VALUE klass = define_error_class( "IdleSessionTimeout", "57" ); register_error_class( "57P05", klass ); } { VALUE klass = define_error_class( "SystemError", NULL ); register_error_class( "58000", klass ); register_error_class( "58", klass ); } { VALUE klass = define_error_class( "IoError", "58" ); register_error_class( "58030", klass ); } { VALUE klass = define_error_class( "UndefinedFile", "58" ); register_error_class( "58P01", klass ); } { VALUE klass = define_error_class( "DuplicateFile", "58" ); register_error_class( "58P02", klass ); } { VALUE klass = define_error_class( "SnapshotTooOld", NULL ); register_error_class( "72000", klass ); register_error_class( "72", klass ); } { VALUE klass = define_error_class( "ConfigFileError", NULL ); register_error_class( "F0000", klass ); register_error_class( "F0", klass ); } { VALUE klass = define_error_class( "LockFileExists", "F0" ); register_error_class( "F0001", klass ); } { VALUE klass = define_error_class( "FdwError", NULL ); register_error_class( "HV000", klass ); register_error_class( "HV", klass ); } { VALUE klass = define_error_class( "FdwColumnNameNotFound", "HV" ); register_error_class( "HV005", klass ); } { VALUE klass = define_error_class( "FdwDynamicParameterValueNeeded", "HV" ); register_error_class( "HV002", klass ); } { VALUE klass = define_error_class( "FdwFunctionSequenceError", "HV" ); register_error_class( "HV010", klass ); } { VALUE klass = define_error_class( 
"FdwInconsistentDescriptorInformation", "HV" ); register_error_class( "HV021", klass ); } { VALUE klass = define_error_class( "FdwInvalidAttributeValue", "HV" ); register_error_class( "HV024", klass ); } { VALUE klass = define_error_class( "FdwInvalidColumnName", "HV" ); register_error_class( "HV007", klass ); } { VALUE klass = define_error_class( "FdwInvalidColumnNumber", "HV" ); register_error_class( "HV008", klass ); } { VALUE klass = define_error_class( "FdwInvalidDataType", "HV" ); register_error_class( "HV004", klass ); } { VALUE klass = define_error_class( "FdwInvalidDataTypeDescriptors", "HV" ); register_error_class( "HV006", klass ); } { VALUE klass = define_error_class( "FdwInvalidDescriptorFieldIdentifier", "HV" ); register_error_class( "HV091", klass ); } { VALUE klass = define_error_class( "FdwInvalidHandle", "HV" ); register_error_class( "HV00B", klass ); } { VALUE klass = define_error_class( "FdwInvalidOptionIndex", "HV" ); register_error_class( "HV00C", klass ); } { VALUE klass = define_error_class( "FdwInvalidOptionName", "HV" ); register_error_class( "HV00D", klass ); } { VALUE klass = define_error_class( "FdwInvalidStringLengthOrBufferLength", "HV" ); register_error_class( "HV090", klass ); } { VALUE klass = define_error_class( "FdwInvalidStringFormat", "HV" ); register_error_class( "HV00A", klass ); } { VALUE klass = define_error_class( "FdwInvalidUseOfNullPointer", "HV" ); register_error_class( "HV009", klass ); } { VALUE klass = define_error_class( "FdwTooManyHandles", "HV" ); register_error_class( "HV014", klass ); } { VALUE klass = define_error_class( "FdwOutOfMemory", "HV" ); register_error_class( "HV001", klass ); } { VALUE klass = define_error_class( "FdwNoSchemas", "HV" ); register_error_class( "HV00P", klass ); } { VALUE klass = define_error_class( "FdwOptionNameNotFound", "HV" ); register_error_class( "HV00J", klass ); } { VALUE klass = define_error_class( "FdwReplyHandle", "HV" ); register_error_class( "HV00K", klass ); } { VALUE 
klass = define_error_class( "FdwSchemaNotFound", "HV" ); register_error_class( "HV00Q", klass ); } { VALUE klass = define_error_class( "FdwTableNotFound", "HV" ); register_error_class( "HV00R", klass ); } { VALUE klass = define_error_class( "FdwUnableToCreateExecution", "HV" ); register_error_class( "HV00L", klass ); } { VALUE klass = define_error_class( "FdwUnableToCreateReply", "HV" ); register_error_class( "HV00M", klass ); } { VALUE klass = define_error_class( "FdwUnableToEstablishConnection", "HV" ); register_error_class( "HV00N", klass ); } { VALUE klass = define_error_class( "PlpgsqlError", NULL ); register_error_class( "P0000", klass ); register_error_class( "P0", klass ); } { VALUE klass = define_error_class( "RaiseException", "P0" ); register_error_class( "P0001", klass ); } { VALUE klass = define_error_class( "NoDataFound", "P0" ); register_error_class( "P0002", klass ); } { VALUE klass = define_error_class( "TooManyRows", "P0" ); register_error_class( "P0003", klass ); } { VALUE klass = define_error_class( "AssertFailure", "P0" ); register_error_class( "P0004", klass ); } { VALUE klass = define_error_class( "InternalError", NULL ); register_error_class( "XX000", klass ); register_error_class( "XX", klass ); } { VALUE klass = define_error_class( "DataCorrupted", "XX" ); register_error_class( "XX001", klass ); } { VALUE klass = define_error_class( "IndexCorrupted", "XX" ); register_error_class( "XX002", klass ); } pg-1.5.5/ext/extconf.rb0000644000004100000410000001214714563476204014773 0ustar www-datawww-datarequire 'pp' require 'mkmf' if ENV['MAINTAINER_MODE'] $stderr.puts "Maintainer mode enabled." 
$CFLAGS << ' -Wall' << ' -ggdb' << ' -DDEBUG' << ' -pedantic' end if pgdir = with_config( 'pg' ) ENV['PATH'] = "#{pgdir}/bin" + File::PATH_SEPARATOR + ENV['PATH'] end if enable_config("gvl-unlock", true) $defs.push( "-DENABLE_GVL_UNLOCK" ) $stderr.puts "Calling libpq with GVL unlocked" else $stderr.puts "Calling libpq with GVL locked" end if enable_config("windows-cross") # Avoid dependency on external libgcc.dll on x86-mingw32 $LDFLAGS << " -static-libgcc" # Don't use pg_config for cross build, but --with-pg-* path options dir_config 'pg' else # Native build pgconfig = with_config('pg-config') || with_config('pg_config') || find_executable('pg_config') if pgconfig && pgconfig != 'ignore' $stderr.puts "Using config values from %s" % [ pgconfig ] incdir = IO.popen([pgconfig, "--includedir"], &:read).chomp libdir = IO.popen([pgconfig, "--libdir"], &:read).chomp dir_config 'pg', incdir, libdir # Windows traditionally stores DLLs beside executables, not in libdir dlldir = RUBY_PLATFORM=~/mingw|mswin/ ? IO.popen([pgconfig, "--bindir"], &:read).chomp : libdir elsif checking_for "libpq per pkg-config" do _cflags, ldflags, _libs = pkg_config("libpq") dlldir = ldflags && ldflags[/-L([^ ]+)/] && $1 end else incdir, libdir = dir_config 'pg' dlldir = libdir end # Try to use runtime path linker option, even if RbConfig doesn't know about it. # The rpath option is usually set implicitly by dir_config(), but so far not # on MacOS-X. if dlldir && RbConfig::CONFIG["RPATHFLAG"].to_s.empty?
append_ldflags "-Wl,-rpath,#{dlldir.quote}" end if /mswin/ =~ RUBY_PLATFORM $libs = append_library($libs, 'ws2_32') end end $stderr.puts "Using libpq from #{dlldir}" File.write("postgresql_lib_path.rb", <<-EOT) module PG POSTGRESQL_LIB_PATH = #{dlldir.inspect} end EOT $INSTALLFILES = { "./postgresql_lib_path.rb" => "$(RUBYLIBDIR)/pg/" } if RUBY_VERSION >= '2.3.0' && /solaris/ =~ RUBY_PLATFORM append_cppflags( '-D__EXTENSIONS__' ) end begin find_header( 'libpq-fe.h' ) or abort "Can't find the 'libpq-fe.h' header" find_header( 'libpq/libpq-fs.h' ) or abort "Can't find the 'libpq/libpq-fs.h' header" find_header( 'pg_config_manual.h' ) or abort "Can't find the 'pg_config_manual.h' header" abort "Can't find the PostgreSQL client library (libpq)" unless have_library( 'pq', 'PQconnectdb', ['libpq-fe.h'] ) || have_library( 'libpq', 'PQconnectdb', ['libpq-fe.h'] ) || have_library( 'ms/libpq', 'PQconnectdb', ['libpq-fe.h'] ) rescue SystemExit install_text = case RUBY_PLATFORM when /linux/ <<-EOT Please install libpq or postgresql client package like so: sudo apt install libpq-dev sudo yum install postgresql-devel sudo zypper in postgresql-devel sudo pacman -S postgresql-libs EOT when /darwin/ <<-EOT Please install libpq or postgresql client package like so: brew install libpq EOT when /mingw/ <<-EOT Please install libpq or postgresql client package like so: ridk exec sh -c "pacman -S ${MINGW_PACKAGE_PREFIX}-postgresql" EOT else <<-EOT Please install libpq or postgresql client package. EOT end $stderr.puts <<-EOT ***************************************************************************** Unable to find PostgreSQL client library.
#{install_text} or try again with: gem install pg -- --with-pg-config=/path/to/pg_config or set library paths manually with: gem install pg -- --with-pg-include=/path/to/libpq-fe.h/ --with-pg-lib=/path/to/libpq.so/ EOT raise end if /mingw/ =~ RUBY_PLATFORM && RbConfig::MAKEFILE_CONFIG['CC'] =~ /gcc/ # Work around: https://sourceware.org/bugzilla/show_bug.cgi?id=22504 checking_for "workaround gcc version with link issue" do `#{RbConfig::MAKEFILE_CONFIG['CC']} --version`.chomp =~ /\s(\d+)\.\d+\.\d+(\s|$)/ && $1.to_i >= 6 && have_library(':libpq.lib') # Prefer linking to libpq.lib over libpq.dll if available end end have_func 'PQconninfo', 'libpq-fe.h' or abort "Your PostgreSQL is too old. Either install an older version " + "of this gem or upgrade your database to at least PostgreSQL-9.3." # optional headers/functions have_func 'PQsslAttribute', 'libpq-fe.h' # since PostgreSQL-9.5 have_func 'PQresultVerboseErrorMessage', 'libpq-fe.h' # since PostgreSQL-9.6 have_func 'PQencryptPasswordConn', 'libpq-fe.h' # since PostgreSQL-10 have_func 'PQresultMemorySize', 'libpq-fe.h' # since PostgreSQL-12 have_func 'PQenterPipelineMode', 'libpq-fe.h' do |src| # since PostgreSQL-14 # Ensure header files fit as well src + " int con(){ return PGRES_PIPELINE_SYNC; }" end have_func 'timegm' have_func 'rb_gc_adjust_memory_usage' # since ruby-2.4 have_func 'rb_gc_mark_movable' # since ruby-2.7 have_func 'rb_io_wait' # since ruby-3.0 # unistd.h conflicts with ruby/win32.h when cross compiling for win32 and ruby 1.9.1 have_header 'unistd.h' have_header 'inttypes.h' have_header('ruby/fiber/scheduler.h') if RUBY_PLATFORM=~/mingw|mswin/ checking_for "C99 variable length arrays" do $defs.push( "-DHAVE_VARIABLE_LENGTH_ARRAYS" ) if try_compile('void test_vla(int l){ int vla[l]; }') end create_header() create_makefile( "pg_ext" ) pg-1.5.5/.hgsigs0000644000004100000410000003153414563476204013466 0ustar www-datawww-data872063e42b129af10539f73b3c083ad8a031f961 0
iEYEABECAAYFAkuKoCoACgkQ+zlz4UKpE6QzewCgrFcSsAwju/KpZ8myuWexlcSbe04AmwWCbf4HM95tDXdFvsvzeegPg8AS 3993015a841e43c9cd9d1321819cbf5e74264f1d 0 iEYEABECAAYFAkz2ycMACgkQ+zlz4UKpE6SYjQCgi/1Ik2rntK2dU93Hb91wYh0Yv4sAoKxEXVuXaEIAiwB4vSQ/7JQGIBzM 230ea3e68db2360548097542c4856dec4c3cd97a 0 iEYEABECAAYFAk03CpAACgkQ+zlz4UKpE6SPAgCfbRwKmAgHTmrudSoC09c37Tuyff0AnRHrSaqKhiCO7KlX5UJq6x0ttoKH 24aa7899c6966ce349c8e4f2a87b17c3e943ff56 0 iEYEABECAAYFAk2s1wQACgkQ+zlz4UKpE6SkLQCdHOS5yxoUFguEo885HkDyOZg4Y7wAoMVofhwOUHVQ6djXr0hgAmahI1lW 19b551f972e27dcfa281b92914e2a98661243206 0 iEYEABECAAYFAk7f51sACgkQ+zlz4UKpE6RkYACg0WZjt1crbi72DQYs3kYKSYRflNYAnA80+VVwmMUQiWuFuQ+7gbiUPCyY f72b14d349bf385c769aacfddbea7a0e60ff5e9e 0 iEYEABECAAYFAk8CFCIACgkQ+zlz4UKpE6QbYACgyLQwHPQH50sGVgzTD3y13XKwi38AoIrF5zSOiMXAeL+sk++iwDYV4ddW f3dfdb6929b70ddd3bb952757bdfb199e6916245 0 iEYEABECAAYFAk8Di+MACgkQ+zlz4UKpE6TVvwCg+ibuW22lRdnOIrRF2V4am7b4YxYAn0bDEnP93JX6qKAaU8kcoCrTKDXp b67309d3ccf2f9de56535e01f58c7af994426827 0 iEYEABECAAYFAk8iJKkACgkQ+zlz4UKpE6SjUQCgpItY5hW5NyVkfL5+nkRhJqaetQMAoJQQkNPL2jQLgJREfj3PtMBbn2VG 0e7f0c2451e55855b4a90efce8db0cafbf04b26f 0 iEYEABECAAYFAk8kb0cACgkQ+zlz4UKpE6RpxgCfQDV3zq2N+zle1XLKoXGMr7EK19IAnR3llz7WPf2j9lqXdZjw4xtl0XBk 9c262b875047f9acfedb63396a262ab5a5b101ca 0 iEYEABECAAYFAk80EvkACgkQ+zlz4UKpE6SUHQCeJuJMb8+k8ynIDPSmcKHL/a5gD6AAoPXMns9HF2c3XwtS1CMRf6rcZp3e 1ba641824000abbf1b22000772815c24e9b5f6d9 0 iEYEABECAAYFAk84LSUACgkQ+zlz4UKpE6RlPQCgiGZbYJFbeWEAdehVUrIZdU7sRe4AoOgESbvEWynP4X0OKbvdC9rLchYl 41e071bdd6ed970887d4ed4da59fdfa62003c39e 0 iEYEABECAAYFAk9FXikACgkQ+zlz4UKpE6TB8ACgt/VSo/kJMg9UVLKd5UUPBPjbgOIAn0DJuOla9GF85mW74sEkCOqE6Ada a45710f8db303c400200017242589562936fcf1b 0 iEYEABECAAYFAk/l/kgACgkQ+zlz4UKpE6QCkwCg049BpW4kSvaKuICyvKokeoXbNiAAoPWAaiDuK6xjZhqGSuuGVWMmCRwk 52d22b060501ab90a89b3a758aca8ce70ad05231 0 iEYEABECAAYFAlBDfn8ACgkQ+zlz4UKpE6R3GACgzLiZ+fyM4Hx8/Qp9fyWF+mHk4FQAn3P3Y06AHadVvKwyksrAgKk/33LV 384fcbc92366ca0108b6c0984d861ffef2d38570 0 
iEYEABECAAYFAlFRsM4ACgkQ+zlz4UKpE6TYXgCgksacYvWJ5dhx9oYFRR+oSH6wPgwAoJ3QO01zfiDbBz6Z9Mxy7tNi3jx6 0bfb6ff650be2d003af3d0fc6c75be16369605e1 0 iEYEABECAAYFAlFjCYkACgkQ+zlz4UKpE6RldACg77Rp2I4vYUXpFakUrYq6uSfPLLQAn266JL6CiQG44cSroW+Mgz4CZgJn 4e0606f5f5aab87855860a3eeaf4c9eaaea77f09 0 iEYEABECAAYFAlHuizwACgkQ+zlz4UKpE6QphACg4FNFwvVju9wk6PC6vwkY8cZRtvkAn1nDR0pbto9xMdMUqhJxOc5Dqisr eed93df350a6cc657d5151bd3aa29ab427fba7cc 0 iEYEABECAAYFAlI3Sy4ACgkQ+zlz4UKpE6ShLQCffDunkSEo5TCnzCx8PjVF9jetDxYAn02ZCfDJ2UPgojF+gjhHCGk9haFq 22d57e3a2b378a34675982a77e6daa643f38fa6e 0 iEYEABECAAYFAlKyO9QACgkQ+zlz4UKpE6QO/wCfWabZRMDkk/vNME1LK1cHCp4oOtMAoORYSAU8OTOxjhPW3IGDMFShHKHv c519766e3ec9a60b1960dcb008f01434f98a17b2 0 iEYEABECAAYFAlSoTtUACgkQ+zlz4UKpE6TIoQCg2nBKrFlaMtD1P4H1KuDxQJBsDkQAniIdmVBVhWvBU+pUfMHhPRBY+puR ba5aff64b5cbe818ddabaac924d0bee6ab27f1b0 0 iEYEABECAAYFAlSq+v4ACgkQ+zlz4UKpE6SfvwCg8cL68fxxt7k/yzTV7hLNyOovci0AnAoPXmKEYaoyWehwMUpxOitaVRwf 7d31b04e79134d276c1e8a3a64ee35b7002da1ef 0 iQIcBAABAgAGBQJVVO4yAAoJEGE7GvLhImG9r6cP/jMU8jKHKbFiyRLCz/IXw72bnORdGiOwZzIjFyRSpXnrZ9dkIF8Hjllv27XW2jiQ2eg+N+MQmchO3VAqNEgad782535p01LY2hmP8s6LAKM7GFCTi6yCVcavcGUS8GDwK1df1nLK0Sfi3TrRsaduhizd0BI0MPuVt2qjDE+8AA0/6DkIkPsohUbvpJXMMl8BiuZBM3IViHYn4janRdeUdSvv9hDo3gYqMH9OsihhacOVX1KoHirkeO14JGfrTN9P7wgtQeIa6VP/cC6ek3qsUhahGXqFPvMw5oApcGyBMmVdfw4rgVVCgVKK1XRLGstt1JozgFIB9Dcjppjcv5VnawuDBvrQDNpFChxyAW8coyssKYG4Mug2wpoJawsy3Mb+rmDyw5KHXJXdWMS0uf+2h6+6FG4Y+DDb4LM8PGgSilJPktS7f9CqY6pROT4bPyG0o0z2VNa+3pdnQ3J4LMap9cdhPtTArvc0S/GwxrffRzKlXZW6LH3Apu9dn9dVwf+fUr8yui2DxNaZ/l8u5dYOixbCOp6rFSdHq/SYKOMfi3DrvdoWTBrhsUfI3ulJQxa13fFWrKVGOcEswjBxnaYEd7sIBt3ij/z3/1bCz9Phhp8N8u+5wQjbHhLrVqkb/u0I7lM6WSG8o7zg5abeotLbL4ieDsO/BBw3WuKzZ9ylie8h 57291f1e96b95a2545f98ec95ba1b01f8d3a0cf5 0 
iQIcBAABAgAGBQJV6LWaAAoJEGE7GvLhImG9TMEP/jGHXPtiwWWb1xS+hL1i7b5J13IjciOHW+hGtp4lFb/J1jtF4o3JoPDdq+q1Ytuc0zo/lcYU73kw2gseGgO96MIEFdDcdCS1tbE5EP8456ADCn4TKykSSCdIuBXizhh/CTIJyry7i8VXpio1K26Uav2J2M2G91IADqmg2AWFtHmboGmaGRwU4TMuZbZPMFkiPyhFMMz9FH9VhVOEqF4KaEzUQM3RyKsfJ9RvJk7g3oxBS6vq/bPzQq3LNXVqirKfx4kSv8Rv+dyGHadKfdhigTXDWfzplnmuDcmOvhIcEnUsgPQyoPFfKM6RDaaNswFaLAXrGQXirx5hXDUhehXYjBuRB5iF372AACcnRJUJHV+mdW9L5jmJw64umZ7FuKOVqojumMLIEj16nz7ucAJpgOwbWKgLiUk+6vVr6QknjNYC6FDlgJ04nYfjovbzrT+HCC5UAVRBBX+w/khybhhsvvZUIZOzt6RPkriin7NQi3LST2ZN2AOolkDtSJd6esExXkUod7qGfTl/nKa8qWpeAQ7XSq+bv8/Wbj/bqN7kIDy6qYcy2J+aL/PNdrzuOSWKeQrOWhsb02mlsxC9bmRBEWJ1WbpdrnX7/6aVuPwF0LKsftitkFR6IqPza20qUebz+UF9Pd8lW4qn28BCRtwLprw/Oh0Qct1cVE9OUiB4GVXP da42b972b5ab3381b41a3a63cc2a65b42e5caa05 0 iQIcBAABAgAGBQJWRjUaAAoJEGE7GvLhImG9gFUP/34+eviBFlK2TPDBAp/AQz8aQp4dcPBZ9S5JCCXW1c2YE+UL6X7MpkRR3t/eXrzBJFSgiXmB+TzTkfz1DsFKKoAXymq5hP5AIf+5dpkvL+JH24f/+Jzv3qaNWYqJbUNYajy+GXMI8OGwmQ7x3EtynJmYpMVWdgtjcfCRGVRw38Zun+ePiluI83K/I52RptZenhcQP9I7wehdUtCp8bH7LX1nbeHH/HDY5OmkN22HkFzkPPLjYFgAzNfciZMI7bmxmTbLZ1wqGFyTHjGONEiKPW8vgnMK26QXm+/+DkPkg0RwqeA4oUwlT5+8m/pBlzJBY+Boz0+ffCBxpHOSto04hP2rCcBd1hihr6OWtZiZJ1S/YMsKW4vnZoIBVDr+z7fAOaLkZ6GX580BtoVH3Etr7/727ebaWYQfPknlAPn6lkO271/+r8X8GlTqxqlF/gvq5baqCLXvdjIgUgJAseuf4RWsSef+GxMaC/w9cScoqnr/v3DAcTKPY4FdomDUlEp/3HcjzothsXIDifrH1FhX0NjPzAMMvQm+jOsZWF7Z0ipfsPQGjx7enOdsUiUQzU+pYxiIZHdZ2vpkALFB8VhRB8QoO0hnyORLVrSqYHNQ+UdcV2lwwThi6qVfLjT0gKuxCG2e2u3pGvv28iW7nk7SYFCpHCRtaEpZh+4VDa8GPAOj 75d4f016a17f58cb049f1891aa4794c3308dc274 0 
iQIcBAABAgAGBQJWRjdlAAoJEGE7GvLhImG9cbIP/jSrGQnXeTML/pYtcVj/3DigVd7M03MHAX1hyIz4cFCE8yZHXkOzMgoMe+47OoC+bRANvmh9zJcgVcgIbA/ooXFP2AiiutH5aI20mKES9N5bTqEPyiMACqjs1eb4ZIBMbDEt6UTD1256l5xd9wCBVzlXahuNQN5FyDMxFyrKcsWRoB/vW1ano4jT+1+R8SkSJzf0reJaooJAif4HHM1mwRsgepWFH91dT766m63/jZV8TrHmQHxh+jrCCDhBtZCbrrYEq2FTzSD6ZyBYIKa7lGbJaDH86XuAnFGMszDAkdTGxp+riWmpPfmssh9e17aayzoG5wLWGKfRgiV7/18YuYBzFnbnyZ+VPep5XKnm20L08T3WPId/nK3IdnShROLLm/B8MIxSOlmLYouFGuWQ9LP9Wpgsk07qDRtA7W8R2ooQI3F3iU7UIspA4oPO/P509wVcTJpf1WSnfkJ3K/yRifiKFL+FLlklXF+B5HEZttRzmjzx8/Qvn9lMfYh5pzqhDGxTkt1L5hftEtxp5inWtT9a4HPaG/jcp8MJgmS0eXmw4hTXb1gKQmTACJfZSiitSWCwvaE4oIoVXJ6HZZUCEfHNlGxAQ643AaApNeOCAe0FmzcXfyuCJtwhM5lDXgPM7sWZuKsUxeLElQ2sWXLDsNvQ35yr4wKsi1n1hMU2DbX8 8beaa5d72670b40cbcaaf11d77a27cb9655ca00d 0 iQIcBAABAgAGBQJX406jAAoJEGE7GvLhImG9iVwQAKBeA2NODvHZLBLFjheeRBMjRbSGWV8lscY/bNnSiIu1n99tLjnRKXszAXowUJnYVa14IWB9U56aoNdc+yWm1e1V+x7q6UXLkC6Jjecra9pfxDmW2VTs4o9D1wL2IVVOOB+3XrgF4N1Jb9TyhbF6ya6kdt9UYHgtMYwL3+fe5s5cTjCwoQNlS9L4drmCTGndtE6CTGrW6I2+S5Soc34QhDp8+WVDi6BTtHNDz1QkK3sO2X3MIJZcfxLSeWegR1JaZ48/dgensvmDFTAnqBf66cjJpjBkhwotqfA54G5M4xOcqKC6SMCJ599UpA+RBs4ntBGuVeSoLyFDpoebrMF1A/xFRfzcnyQLu0/o4LJYBb1+XNUdhrfTLmHxTrgPB4z7iJuNOfgz8sTvFRd4Ip/2hq43JCTFVNpv6d1qFCgf75WAXAqi91LaUpNFr1DoUsXlm4OTBB2PAycGF5N6E4YQDUAdXp792k7DFJJ+n6zHxuhDP6dBbzJbHzWrri4nAQDO1O1RhSjOSgIMadPs8UsOWf/WhvZPJ5TVUJk9bnnSoKMa+CWysg8koxwFeT67EAAZOdeKDKgqomw4Rb76fGlAjVVR+SJZx522I8SY48cc0tVVJyeM88I94WdSCnOupSvrcwEZNeA14xbV//alAN+odUR0ffMPb4KBOtIBQo1Q2OdY 838985377b4829c61b45bfe61d2ec76e5f8e5672 0 
iQIcBAABCAAGBQJYwyEyAAoJEGE7GvLhImG9srMP/inukxZyEl/ZyA+gpqlWZegYvrE/Pyd6IinGIAzehbjuiixizZZMf35FYkF33TjVGKTBZyxeLV6UNaQMf6+cM9JHdLVf9HWbLcdCYelQtyvGpJvloVXK2twNMg0Gd/PP9nXaMbbGQ2a4j54zRpOZ28W37hM0pH94GNtRuN+wy4scHtFAHewK9K4GQWU1APf37EXq0Aoxf0OUs0BK5To8EvI6C9nCPpoD2VlxS3i+2UhLMogRhlYw7zBpBqZkdKnhRWIDP/Oc6WfMsxz7St5E7S/V5Lk6+iWnkdmdXSuWiUByWUckPGaSXa/IRa0S9LhBeUmZkVTHic//nWOA8uib8iaT7YlU9oSYmba4kYgHsUeNk035v9f4z6yJdxNdSrqPGtRp3EHGsCC8XuYzew7W8MwPVaN1CsscOJZKRAGNFJrwEMzo5pcg7dk0rJwalCwOzXAVNWAWAPw21cl3H03BsR5lQBDeZdjBbu37OWMFy/LYKQN3Be2znb6OHWla7FbsUtuFKesTGII56coSQVkrFgdoySOwsFd8V3DlTbYPPfd9SZyxwMgmyQzwVrBU95SrsLeQ4/WiEG9ZEr7Av+VOO+B3FFahED5Q7Bv279PMEhMWAh08zXz5/4OUMzvrTIhuYnjT2DKyqjxzkpif529zIbn3vOiK4ugK4pL7YVrnM9UV f275e318641f185b8a15a2220e7c189b1769f84c 0 iQIzBAABCAAdFiEEoYl52o0gA4yRUmHQ4yEXIpU5F6MFAlk/FR8ACgkQ4yEXIpU5F6PobRAAl1JEPMiyMSWGfMyu/h3OtL3xkOpwcONp0ySv4DQHjOh3u638FMWnEUMF+7pRTnlqQssctH4vO88gnhz5XJOfFtisn4xg64gT0JhR/OiOSOmp90pm//8cXwocjwoOotFKAHxM96b8KWSyGCoFXS+FmMmhCvKWxAp4+qwW133DdaOcDdQeLG19Dcp/ffKGEt83NSoNBCmmG2WsQ11TGAp5Bj8aK+4844HMZFGpvxK0Vl/AecWcxkuB9ql6cPZS0V1Z3Ndwh8mKPkrtAZgPFjZStLXT3iCjEszsQmD6LUQ3x5hWGtPODZpo7uWhi9jFrrcHEGO7/u6l8T3ho7UWqJ1lp6xapCeZH2ZFxKtVar9RzjRB0kvtkwjUBIgpJuVZJCHdTfivJkLWPeapDjiJ4P4NiVMef0KLRDAF5EHO4VYasU754U2/GSkZBrmQgHyYUX3x3VDkPPglfhEphLKMTY3wvM5+EnuO1DDGl0aGGsx+yI/RyJVzp5+jImEdfKHrXS2OTVFB9CGR7t8gcIbrbVaiGrTV4WJPn9Qi4RIoXCizd0rtrBEkd820dGtPhSpOkPY5WNrh9I2B+YkoL1OjR92qZsnx6ByIyJlwvg/A1qZ8PaGVjFk/XkMJnJLOwDC7nVZNs2x4+w91qyG40WcO3FycLogvBp6OsQ2rNSJ1A0UUIhc= fef434914848ccb2497776324dbf0850852b980d 0 
iQIzBAABCAAdFiEEoYl52o0gA4yRUmHQ4yEXIpU5F6MFAlpWN+QACgkQ4yEXIpU5F6O+/Q/+LkYHQHS2sbU+vKlCSMjNCl1wpc1MZ3obh2I+gjjqN5n+3QPNrZC+XMLPlfC54H6bsbuPo9r22Mln574RmX7W2ckc0OHWzNaCDzYLPI7NvnTxyLsVtM7lUvdkuK1YgOAqZE5uK62Dzo8F8Ou2GNUMRH+nsVgCEgT4liyYhSL3NoNlE+h4RAYnvkkBt8NpBSB661wNBZCkL2DxSzidwE8DT20gnhb5ngiRwNevP0DbUnZIs4CUxzkKgnZdyOL8cTftAGj/XnYxm0I9+rUBl0lfJrLhlw5sCtc+vMUHarF2g5CCyvT/dpeACkjOArDj7o3rgqEvwX5zoRUeaUbVv8k7CcJ1uu5R0G+5VLLs9o/a6ilC37tQiGkl4zDJtD7G3bQs4hxYNVvZEmj/SrebNeOjJkouNsIKWJ2tfVnAyI5hZt4+jNZFET8RPAsTvgOLI5u17zU90O4KS7RLuzcT8TxLb55CkKSKsn1qbn8WdObZsoOvk4VQA9Dek7ZH8ZY9v8KtHAzDH+ip5xc01doEJziybz41fibTVma5rgbvnNXXCMqPRRkkSE+k7ogLgC6R3kCoqZUu1V6qoMkPh9s3WzQoSBqGL17l9RBCTh2o2QriBxZNPS2jG+AUrFOKGusc9M8c1AcEO1Z2tbVZBEzKssOSQ22CGpO13ZVzKFJcs601qgw= ca83074366ac1610134deaee6f12e3a58981e757 0 iQIzBAABCAAdFiEEoYl52o0gA4yRUmHQ4yEXIpU5F6MFAluAfD0ACgkQ4yEXIpU5F6O7tw/7By3wuM1HaXnVADqL66w3xGm15GV5zTZCkIY/lx3s1Lfg314dWRM3V2X4gI/dhvnQZEpo361kmT9cVGs0ggN3uz/jigX1Anjy/5t1L3mz9OBWWHd9+NXCuW3QzoChLrFyq7QD9u+tYdls5mGDisB5PfbSukc0j+69N/4jIPD7kABP//eF8auaWGN1WKs65xjPgXoATkV6FjQL9wLjnzuLnXOVQspzN3G/YwzlJTKQ3CH/7UxffmTLqWGrhLZcwDlCt7QYmHxSVmxBsjQ1tQcLWYjDAdR+nNTSBB9whE4zW6rHsB0Qs7gtWypmTKrKZ7ikRQcFGj3enlK+gzspA+rc/NGDHTGYSSbHS1BhE3SpOD7zyR06UnrsYGk2M6Cg5cvE/9RGFUIZ7MPbSfW5a/RRV0ZZgihJjkvsL1w8rzsl/8eaXi7Nkp5KsgEb/hQ+o4V+TEBIxe+TpOZDjpvPwTHC5f0rbRLdWxTxVpUteHwps4N8I5S3+HrMNw+9ewB92V1SxUVMUIBYKoiVJSo92eVnaw/GKIZduZrrr8/XzKh69gvXOrshucxH7AsIvGo4WQR6VtWeFrtUrQpUXPJ1fXi7nc6ciksIMMWKCUaMDsv0nN7k/kuL4m/NRAjCDlbZRoJ0Pj2+ukiGz8+GOhXjshkoj4oatX9UMp4mueDEWdl5JIQ= 71d5c24f937e00c2348f8d5b9680b9abe8597618 0 
iQIzBAABCAAdFiEEoYl52o0gA4yRUmHQ4yEXIpU5F6MFAluEcKMACgkQ4yEXIpU5F6OLDg/8DKSI8HzShD47HQMiqMtRSanmlJz2qrKea9gY3Y9HgXnIODTFUdi1v8CLzOU/NyD+GtYl8AiJSCUQfT2W0IEZkf7AeAsliq/0edPKQvCbHbjdOcWTTe07x8E1BbQoP8sS8EyYj3nbjQrYaTu3qKxOIwYhN9h8DP6C4xDQeGS05kdMcEbOXcvL0wNRnLW6mBJK2fqNFKR5BSIWgZUlqqYC90URD546D/Y5a+zR9tzsxiFXP/yKkDKVXoaFWiMEt/PjClgt7pcaCiyt1ZUt0N398/oVLPrIgMPOSqlbQB3eFYcdx3I2DpWIvWm9NjUe7FhkJfRUOQEnZ6IiqiLb6UwaRY1B+yt7SVOPygv86B1sACwE2G9+tcBGaIcKILdxD2HAt1nVsxVOKGI/9prmrVGrIzX6AX4phPpVS+PZRbd+CwI+dcGYXIjhVs+oy/6G+soo3ayLqqzWgcu+kC4cf39mVr7kNLGGbg15xJUmMp49WEk8/tiLgUDySdcCJAjADzWFvQrjy40JMxmt8dJPZwHhIBUZ0mXsGZWD0OVunQT9B3yswRj7I4TV/ztyfNYN7OtCyzxs1gwzrKiS8T/COSp7cb3+hXXoI9doWOPTZbO1Na51ch+4nqR6mtOCOmR7sCqSXw/VRLn97DsPK0miKbEQUm7mNT/XcxWPwHsWUI3yg+Y= bbf57bf7e58354bc8052c9914da6d88940c0b493 0 iQIzBAABCAAdFiEEoYl52o0gA4yRUmHQ4yEXIpU5F6MFAluF62EACgkQ4yEXIpU5F6PEyxAAuA11YAWDZ6Bqag8h0IpxgbT9kcLZPo9s4XhQmFnKN7Ykga+UZK+mUs/pjcu9zk/CdZwEzLATtNiI9Zyvf4IkGkyNVj4qU4eVtvndClzflPP54fW/mxQCrALsIT7uqZQitZDomOJUnJPmbPni/dSjcbr/u77qh0rQifcABga3e+R3AmY8BSfsJbkEIV5A1wAC9O5ccCJRubvPLsQiQ0OhaMA+xjMtLA7/DVS/C9PIXexBrCK1vFtMkPN9RzPLQ3Y4hAxMwrYuMZ+l7l+Zngi+yIFEDJrTOIxAfP2dSxTrEwWEL/JfRPaN7BGFR9j/RG3KfG6aTggPIRc0ru4fAFVXhQ1zWPBYqnw+w8JM5w8A0vNuIV2fSQe7AmpE4d8obXqRKkvHTWvXQ1nqNbTK+X7DTLO+vTlcMbI7YJUzVZO2oiuJt5ZJ1irHtr/eIwgBEfLV7GvhmyZT74qTRJ8v+vCko0thjrfVOs/Fstfw4PB2QBb1f4LifKSL1Ol3yvygbLdbxXZr6SDjXUcG09ABb5Xk6yyVYn3/DivMGyNqU9e9ZK39i76Vmh156Ml/MHmA6ZQFEA0zAWB47imnkqigQHF/CEJW9yYEHAK5UXVC9uQcHboFNFHAbzRiDcaTCjBNrFk5vglIQG+IZR30KbYfkODcjp/NGekppii8fjLioq8cuL4= 6f611e78845adc38eac1fffe4793bea2d52bf099 0 
iQIzBAABCAAdFiEEoYl52o0gA4yRUmHQ4yEXIpU5F6MFAluRxokACgkQ4yEXIpU5F6Np6w/8D4R+QDUJN4mPGlbQNOSB8ew6do1ixP+uMZtMTgltYtbX+Uf9vNgCM+rmy0Tb+HjkMeNy47YZXaobfZ7ejmT/pFt9WXEr90mADAwrWcbqCCKC9OVmQG3tQQT2BGqxKsaB3t0s2+tNBGXs5o7AzHjqzain1nBe19y29EzpWfYusRA+exvIYWk8hf1JpY0wAaNIVBKdALSNcR/t0msHJRBkn2UxB+0e5te+A09atd0K1sXp9qii4WvGZBGJtpiFVK8Ayk+7Q2/RxqYoyJoIR2xcoZTA2e1no4uRjXa0ZTphPQpsDEJEwAQ5ymsFTtWPapWoJIgeyGopPckR85GT3c7MgkOGLLCJs1lKeQGWevjWquHV4lP3NZr4tmnKC9BHoqc16tlBbtbuaPXyGRkBP4ROn3NIm8rftixDGQtTZwlaNzJ8OoXCqj5PzTMUO8zbgVv9QrEHAcpxB+j+tINB5kdc4vICH2zoe0J7jK32ynwY9JH36cg8GVLkNIwzYWP5GKmeFm1GQKvv7Ptqfa4rTPKDy2G1/yTCwrzpg5vRNy+7ouhHRrVAxEaIvehdoYs+3K+SwGzgk0RoL0sw0n54zv/svTtIugO1inE9tKA31cuPujIcIrWPthHBY7bSPO18HGyoHkOcaS0rkkwQ0Ug85aBtyuFLjPUkUt4NL4kGmHOALFs= b86eef21886cbc08a29dbb1893c68c360fbec7cc 0 iQIzBAABCAAdFiEEoYl52o0gA4yRUmHQ4yEXIpU5F6MFAlw2ItgACgkQ4yEXIpU5F6OQEA//Y7qc4spxKxzAUEhXl8c1DGJ2CCHS1vMNoXTTgpCYGk6oDTCa00bHa4dUTyGQXUAbpjcAYD5uiyB8VTj1a8Qy7QFFPXofWx+Ljyfi3hx8isfjal7ktlHh7Y0PvnYBEH8a9zK7BUymDXF4n4qIGfKA3wgDB427yHGXlApIvXjmEk1C08GVzKCX03xWhAhOJyRk2aKwwlyZfYXJvpdhk18sI1DNhR925iz/e/wcS8cO0ESWR7gzTZrWPm4N32q3KoYWmWwp2msb3JsaEWpMdbY2/JJmwx6kkOwtn4GR+G4AujY6d7/XQQ8Yqtsld7x5LK7l44BG6RpHFgRWKZpWStxxp+VhuVpqZekXNuyfP1MIlR7w3B1p0wBIWekDGO8eEDGoK9TewzufZzJa1uCed9JulgGtvlcvpyIghwODLGjbGBr4YztPf9W16iOXt0Mtx1a9ni3C9xF3KgakvYYkLY3osFZG8my3AIXgFps5fNzfcq4GbJIPx3PcF9ka0iP6114/3g92vXpVKlvxczwil+lqPYBT2rxc3+9JtW77bG7tQOllKjnfAiQ0BK1fMBZRVDzN4QU8jifyDwUQvjB4tZMvGzXWxRW3TmTKOdqp+s6hJnBma9lAwUerGa5wtk1xZhO1udJYsk7PMt590bxgUl+1PjKJ6AG1Sj5MhgMct9euex4= pg-1.5.5/.travis.yml0000644000004100000410000000274014563476204014307 0ustar www-datawww-datasudo: required dist: focal services: - docker language: ruby matrix: include: # i386: Intel 32-bit - name: i386 language: generic env: - PGPATH="/usr/lib/postgresql/10/bin" before_install: | docker run --rm --privileged multiarch/qemu-user-static --reset -p yes && docker build --rm --build-arg PGPATH="${PGPATH}" -t ruby-pg -f 
spec/env/Dockerfile.i386 . script: | docker run --rm -t --network=host ruby-pg - rvm: "2.5" env: - "PGVERSION=9.3" # Use Ubuntu-16.04 since postgresql-9.3 depends on openssl-1.0.0, which isn't available in 20.04 dist: xenial - rvm: ruby-head env: - "PGVERSION=14" - rvm: truffleruby env: - "PGVERSION=14" allow_failures: - rvm: ruby-head fast_finish: true before_install: - bundle install # Download and install postgresql version to test against in /opt (for non-cross compile only) - echo "deb http://apt.postgresql.org/pub/repos/apt/ ${TRAVIS_DIST}-pgdg main $PGVERSION" | sudo tee -a /etc/apt/sources.list.d/pgdg.list - wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add - - sudo apt-get -y update - sudo apt-get -y --allow-downgrades install postgresql-$PGVERSION libpq5=$PGVERSION* libpq-dev=$PGVERSION* - export PATH=/usr/lib/postgresql/$PGVERSION/bin:$PATH script: - bundle exec rake compile test PG_DEBUG=0 after_failure: - "find tmp -name mkmf.log | xargs cat" pg-1.5.5/.gems0000644000004100000410000000034314563476204013127 0ustar www-datawww-data# .gems generated gem export file. Note that any env variable settings will be missing. Append these after using a ';' field separator rake-compiler -v1.1.0 rake-compiler-dock -v1.0.0 hoe-deveiate -v0.10.0 hoe-bundler -v1.3.0 pg-1.5.5/metadata.gz.sig0000444000004100000410000000040014563476204015070 0ustar www-datawww-data ## How To Install Install via RubyGems: gem install pg You may need to specify the path to the 'pg_config' program installed with Postgres: gem install pg -- --with-pg-config= If you're installing via Bundler, you can provide compile hints like so: bundle config build.pg --with-pg-config= See README-OS_X.rdoc for more information about installing under MacOS X, and README-Windows.rdoc for Windows build/installation instructions.
There's also [a Google+ group](http://goo.gl/TFy1U) and a [mailing list](http://groups.google.com/group/ruby-pg) if you get stuck, or just want to chat about something. If you want to install as a signed gem, the public certs of the gem signers can be found in [the `certs` directory](https://github.com/ged/ruby-pg/tree/master/certs) of the repository. ## Type Casts Pg can optionally type cast result values and query parameters in Ruby or native C code. This can speed up data transfers to and from the database, because String allocations are reduced and conversions in (slower) Ruby code can be omitted. Very basic type casting can be enabled by: ```ruby conn.type_map_for_results = PG::BasicTypeMapForResults.new conn # ... this works for result value mapping: conn.exec("select 1, now(), '{2,3}'::int[]").values # => [[1, 2014-09-21 20:51:56 +0200, [2, 3]]] conn.type_map_for_queries = PG::BasicTypeMapForQueries.new conn # ... and this for param value mapping: conn.exec_params("SELECT $1::text, $2::text, $3::text", [1, 1.23, [2,3]]).values # => [["1", "1.2300000000000000E+00", "{2,3}"]] ``` But Pg's type casting is highly customizable. That's why it's divided into 2 layers: ### Encoders / Decoders (ext/pg_*coder.c, lib/pg/*coder.rb) This is the lower layer, containing encoding classes that convert Ruby objects for transmission to the DBMS and decoding classes to convert received data back to Ruby objects. The classes are namespaced according to their format and direction in PG::TextEncoder, PG::TextDecoder, PG::BinaryEncoder and PG::BinaryDecoder. It is possible to assign a type OID, format code (text or binary) and optionally a name to an encoder or decoder object. It's also possible to build composite types by assigning an element encoder/decoder. PG::Coder objects can be used to set up a PG::TypeMap or alternatively to convert single values to/from their string representation. 
The following PostgreSQL column types are supported by ruby-pg (TE = Text Encoder, TD = Text Decoder, BE = Binary Encoder, BD = Binary Decoder):

* Integer: [TE](rdoc-ref:PG::TextEncoder::Integer), [TD](rdoc-ref:PG::TextDecoder::Integer), [BD](rdoc-ref:PG::BinaryDecoder::Integer) 💡 No links? Switch to [here](https://deveiate.org/code/pg/README_md.html#label-Type+Casts) 💡
  * BE: [Int2](rdoc-ref:PG::BinaryEncoder::Int2), [Int4](rdoc-ref:PG::BinaryEncoder::Int4), [Int8](rdoc-ref:PG::BinaryEncoder::Int8)
* Float: [TE](rdoc-ref:PG::TextEncoder::Float), [TD](rdoc-ref:PG::TextDecoder::Float), [BD](rdoc-ref:PG::BinaryDecoder::Float)
  * BE: [Float4](rdoc-ref:PG::BinaryEncoder::Float4), [Float8](rdoc-ref:PG::BinaryEncoder::Float8)
* Numeric: [TE](rdoc-ref:PG::TextEncoder::Numeric), [TD](rdoc-ref:PG::TextDecoder::Numeric)
* Boolean: [TE](rdoc-ref:PG::TextEncoder::Boolean), [TD](rdoc-ref:PG::TextDecoder::Boolean), [BE](rdoc-ref:PG::BinaryEncoder::Boolean), [BD](rdoc-ref:PG::BinaryDecoder::Boolean)
* String: [TE](rdoc-ref:PG::TextEncoder::String), [TD](rdoc-ref:PG::TextDecoder::String), [BE](rdoc-ref:PG::BinaryEncoder::String), [BD](rdoc-ref:PG::BinaryDecoder::String)
* Bytea: [TE](rdoc-ref:PG::TextEncoder::Bytea), [TD](rdoc-ref:PG::TextDecoder::Bytea), [BE](rdoc-ref:PG::BinaryEncoder::Bytea), [BD](rdoc-ref:PG::BinaryDecoder::Bytea)
* Base64: [TE](rdoc-ref:PG::TextEncoder::ToBase64), [TD](rdoc-ref:PG::TextDecoder::FromBase64), [BE](rdoc-ref:PG::BinaryEncoder::FromBase64), [BD](rdoc-ref:PG::BinaryDecoder::ToBase64)
* Timestamp:
  * TE: [local](rdoc-ref:PG::TextEncoder::TimestampWithoutTimeZone), [UTC](rdoc-ref:PG::TextEncoder::TimestampUtc), [with-TZ](rdoc-ref:PG::TextEncoder::TimestampWithTimeZone)
  * TD: [local](rdoc-ref:PG::TextDecoder::TimestampLocal), [UTC](rdoc-ref:PG::TextDecoder::TimestampUtc), [UTC-to-local](rdoc-ref:PG::TextDecoder::TimestampUtcToLocal)
  * BE: [local](rdoc-ref:PG::BinaryEncoder::TimestampLocal), [UTC](rdoc-ref:PG::BinaryEncoder::TimestampUtc)
  * BD:
    [local](rdoc-ref:PG::BinaryDecoder::TimestampLocal), [UTC](rdoc-ref:PG::BinaryDecoder::TimestampUtc), [UTC-to-local](rdoc-ref:PG::BinaryDecoder::TimestampUtcToLocal)
* Date: [TE](rdoc-ref:PG::TextEncoder::Date), [TD](rdoc-ref:PG::TextDecoder::Date), [BE](rdoc-ref:PG::BinaryEncoder::Date), [BD](rdoc-ref:PG::BinaryDecoder::Date)
* JSON and JSONB: [TE](rdoc-ref:PG::TextEncoder::JSON), [TD](rdoc-ref:PG::TextDecoder::JSON)
* Inet: [TE](rdoc-ref:PG::TextEncoder::Inet), [TD](rdoc-ref:PG::TextDecoder::Inet)
* Array: [TE](rdoc-ref:PG::TextEncoder::Array), [TD](rdoc-ref:PG::TextDecoder::Array)
* Composite Type (also called "Row" or "Record"): [TE](rdoc-ref:PG::TextEncoder::Record), [TD](rdoc-ref:PG::TextDecoder::Record)

The following text and binary formats can also be encoded although they are not used as column type:

* COPY input and output data: [TE](rdoc-ref:PG::TextEncoder::CopyRow), [TD](rdoc-ref:PG::TextDecoder::CopyRow), [BE](rdoc-ref:PG::BinaryEncoder::CopyRow), [BD](rdoc-ref:PG::BinaryDecoder::CopyRow)
* Literal for insertion into SQL string: [TE](rdoc-ref:PG::TextEncoder::QuotedLiteral)
* SQL-Identifier: [TE](rdoc-ref:PG::TextEncoder::Identifier), [TD](rdoc-ref:PG::TextDecoder::Identifier)

### PG::TypeMap and derivations (ext/pg_type_map*.c, lib/pg/type_map*.rb)

A TypeMap defines which value will be converted by which encoder/decoder. There are different type map strategies, implemented by several derivations of this class. They can be chosen and configured according to the particular needs for type casting. The default type map is PG::TypeMapAllStrings.

A type map can be assigned per connection or per query respectively per result set. Type maps can also be used for COPY in and out data streaming. See PG::Connection#copy_data .
The following base type maps are available:

* PG::TypeMapAllStrings - encodes and decodes all values to and from strings (default)
* PG::TypeMapByClass - selects encoder based on the class of the value to be sent
* PG::TypeMapByColumn - selects encoder and decoder by column order
* PG::TypeMapByOid - selects decoder by PostgreSQL type OID
* PG::TypeMapInRuby - define a custom type map in ruby

The following type maps are prefilled with type mappings from the PG::BasicTypeRegistry :

* PG::BasicTypeMapForResults - a PG::TypeMapByOid prefilled with decoders for common PostgreSQL column types
* PG::BasicTypeMapBasedOnResult - a PG::TypeMapByOid prefilled with encoders for common PostgreSQL column types
* PG::BasicTypeMapForQueries - a PG::TypeMapByClass prefilled with encoders for common Ruby value classes


## Thread support

PG is thread safe in such a way that different threads can use different PG::Connection objects concurrently. However it is not safe to access any Pg objects simultaneously from more than one thread. So make sure to open a new database server connection for every new thread or use a wrapper library like ActiveRecord that manages connections in a thread safe way.

If messages like the following are printed to stderr, you're probably using one connection from several threads:

    message type 0x31 arrived from server while idle
    message type 0x32 arrived from server while idle
    message type 0x54 arrived from server while idle
    message type 0x43 arrived from server while idle
    message type 0x5a arrived from server while idle


## Fiber IO scheduler support

Pg is fully compatible with `Fiber.scheduler` introduced in Ruby-3.0 since pg-1.3.0. On Windows support for `Fiber.scheduler` is available on Ruby-3.1 or newer. All possibly blocking IO operations are routed through the `Fiber.scheduler` if one is registered for the running thread. That is why pg internally uses the asynchronous libpq interface even for synchronous/blocking method calls.
It also uses Ruby's DNS resolution instead of libpq's builtin functions.

Internally Pg always uses the nonblocking connection mode of libpq. It then behaves like running in blocking mode but ensures, that all blocking IO is handled in Ruby through a possibly registered `Fiber.scheduler`. When `PG::Connection.setnonblocking(true)` is called then the nonblocking state stays enabled, but the additional handling of blocking states is disabled, so that the calling program has to handle blocking states on its own.

An exception to this rule are the methods for large objects like `PG::Connection#lo_create` and authentication methods using external libraries (like GSSAPI authentication). They are not compatible with `Fiber.scheduler`, so that blocking states are not passed to the registered IO scheduler. That means the operation will work properly, but IO waiting states can not be used to switch to another Fiber doing IO.


## Ractor support

Pg is fully compatible with Ractor introduced in Ruby-3.0 since pg-1.5.0. All type en/decoders and type maps are shareable between ractors if they are made frozen by `Ractor.make_shareable`. Also frozen PG::Result and PG::Tuple objects can be shared. All frozen objects (except PG::Connection) can still be used to do communication with the PostgreSQL server or to read retrieved data.

PG::Connection is not shareable and must be created within each Ractor to establish a dedicated connection.


## Contributing

To report bugs, suggest features, or check out the source with Git, [check out the project page](https://github.com/ged/ruby-pg).

After checking out the source, install all dependencies:

    $ bundle install

Cleanup extension files, packaging files, test databases.
Run this to change between PostgreSQL versions:

    $ rake clean

Compile extension:

    $ rake compile

Run tests/specs on the PostgreSQL version that `pg_config --bindir` points to:

    $ rake test

Or run a specific test per file and line number on a specific PostgreSQL version:

    $ PATH=/usr/lib/postgresql/14/bin:$PATH rspec -Ilib -fd spec/pg/connection_spec.rb:455

Generate the API documentation:

    $ rake docs

Make sure, that all bugs and new features are verified by tests.

The current maintainers are Michael Granger <ged@FaerieMUD.org> and Lars Kanis <lars@greiz-reinsdorf.de>.


## Copying

Copyright (c) 1997-2022 by the authors.

* Jeff Davis
* Guy Decoux (ts)
* Michael Granger
* Lars Kanis
* Dave Lee
* Eiji Matsumoto
* Yukihiro Matsumoto
* Noboru Saitou

You may redistribute this software under the same terms as Ruby itself; see https://www.ruby-lang.org/en/about/license.txt or the BSDL file in the source for details.

Portions of the code are from the PostgreSQL project, and are distributed under the terms of the PostgreSQL license, included in the file POSTGRES.

Portions copyright LAIKA, Inc.


## Acknowledgments

See Contributors.rdoc for the many additional fine people that have contributed to this library over the years.

We are thankful to the people at the ruby-list and ruby-dev mailing lists. And to the people who developed PostgreSQL.
pg-1.5.5/.gemtest0000644000004100000410000000000014563476204013632 0ustar www-datawww-data
pg-1.5.5/pg.gemspec0000644000004100000410000000307314563476204014151 0ustar www-datawww-data# frozen_string_literal: true
# -*- encoding: utf-8 -*-

require_relative 'lib/pg/version'

Gem::Specification.new do |spec|
	spec.name          = "pg"
	spec.version       = PG::VERSION
	spec.authors       = ["Michael Granger", "Lars Kanis"]
	spec.email         = ["ged@FaerieMUD.org", "lars@greiz-reinsdorf.de"]

	spec.summary       = "Pg is the Ruby interface to the PostgreSQL RDBMS"
	spec.description   = "Pg is the Ruby interface to the PostgreSQL RDBMS. It works with PostgreSQL 9.3 and later."
	spec.homepage      = "https://github.com/ged/ruby-pg"
	spec.license       = "BSD-2-Clause"
	spec.required_ruby_version = ">= 2.5"

	spec.metadata["homepage_uri"] = spec.homepage
	spec.metadata["source_code_uri"] = "https://github.com/ged/ruby-pg"
	spec.metadata["changelog_uri"] = "https://github.com/ged/ruby-pg/blob/master/History.md"
	spec.metadata["documentation_uri"] = "http://deveiate.org/code/pg"

	# Specify which files should be added to the gem when it is released.
	# The `git ls-files -z` loads the files in the RubyGem that have been added into git.
	spec.files = Dir.chdir(File.expand_path(__dir__)) do
		`git ls-files -z`.split("\x0").reject { |f| f.match(%r{\A(?:test|spec|features|translation)/}) }
	end
	spec.extensions = ["ext/extconf.rb"]
	spec.require_paths = ["lib"]
	spec.cert_chain = ["certs/ged.pem"]
	spec.rdoc_options = ["--main", "README.md", "--title", "PG: The Ruby PostgreSQL Driver"]
	spec.extra_rdoc_files = `git ls-files -z *.rdoc *.md lib/*.rb lib/*/*.rb lib/*/*/*.rb ext/*.c ext/*.h`.split("\x0")
end
pg-1.5.5/History.md0000644000004100000410000012036714563476204014163 0ustar www-datawww-data## v1.5.5 [2024-02-15] Lars Kanis

- Explicitly retype timespec fields to int64_t to fix compatibility with 32bit arches. [#547](https://github.com/ged/ruby-pg/pull/547)
- Fix possible buffer overflows in PG::BinaryDecoder::CopyRow on 32 bit systems. [#548](https://github.com/ged/ruby-pg/pull/548)
- Add binary Windows gems for Ruby 3.3.
- Update Windows fat binary gem to OpenSSL-3.2.1 and PostgreSQL-16.2.


## v1.5.4 [2023-09-01] Lars Kanis

- Fix compiling the pg extension with MSVC 2022. [#535](https://github.com/ged/ruby-pg/pull/535)
- Set PG::Connection's encoding even if setting client_encoding on connection startup fails. [#541](https://github.com/ged/ruby-pg/pull/541)
- Don't set the server's client_encoding if it's unnecessary. [#542](https://github.com/ged/ruby-pg/pull/542)
  This is important for connection proxies, who disallow configuration settings.
- Update Windows fat binary gem to OpenSSL-3.1.2 and PostgreSQL-15.4.


## v1.5.3 [2023-04-28] Lars Kanis

- Fix possible segfault when creating a new PG::Result with type map. [#530](https://github.com/ged/ruby-pg/pull/530)
- Add category to deprecation warnings of Coder.new, so that they are suppressed for most users. [#528](https://github.com/ged/ruby-pg/pull/528)


## v1.5.2 [2023-04-26] Lars Kanis

- Fix regression in copy_data regarding binary format when using no coder. [#527](https://github.com/ged/ruby-pg/pull/527)


## v1.5.1 [2023-04-24] Lars Kanis

- Don't overwrite flags of timestamp coders. [#524](https://github.com/ged/ruby-pg/pull/524)
  Fixes a regression in rails: https://github.com/rails/rails/issues/48049


## v1.5.0 [2023-04-24] Lars Kanis

Enhancements:

- Better support for binary format:
  - Extend PG::Connection#copy_data to better support binary transfers [#511](https://github.com/ged/ruby-pg/pull/511)
  - Add binary COPY encoder and decoder:
    * PG::BinaryEncoder::CopyRow
    * PG::BinaryDecoder::CopyRow
  - Add binary timestamp encoders:
    * PG::BinaryEncoder::TimestampUtc
    * PG::BinaryEncoder::TimestampLocal
    * PG::BinaryEncoder::Timestamp
  - Add PG::BinaryEncoder::Float4 and Float8
  - Add binary date type: [#515](https://github.com/ged/ruby-pg/pull/515)
    * PG::BinaryEncoder::Date
    * PG::BinaryDecoder::Date
  - Add PG::Result#binary_tuples [#511](https://github.com/ged/ruby-pg/pull/511)
    It is useful for COPY and not deprecated in that context.
  - Add PG::TextEncoder::Bytea to BasicTypeRegistry [#506](https://github.com/ged/ruby-pg/pull/506)
- Ractor support: [#519](https://github.com/ged/ruby-pg/pull/519)
  - Pg is now fully compatible with Ractor introduced in Ruby-3.0 and doesn't use any global mutable state.
  - All type en/decoders and type maps are shareable between ractors if they are made frozen by `Ractor.make_shareable`.
  - Also frozen PG::Result and PG::Tuple objects can be shared.
  - All frozen objects (except PG::Connection) can still be used to do communication with the PostgreSQL server or to read retrieved data.
  - PG::Connection is not shareable and must be created within each Ractor to establish a dedicated connection.
- Use keyword arguments instead of hashes for Coder initialization and #to_h. [#511](https://github.com/ged/ruby-pg/pull/511)
- Add PG::Result.res_status as a class method and extend Result#res_status to return the status of self. [#508](https://github.com/ged/ruby-pg/pull/508)
- Reduce the number of files loaded at `require 'pg'` by using autoload. [#513](https://github.com/ged/ruby-pg/pull/513)
  Previously stdlib libraries `date`, `json`, `ipaddr` and `bigdecimal` were static dependencies, but now only `socket` is mandatory.
- Improve garbage collector performance by adding write barriers to all PG classes. [#518](https://github.com/ged/ruby-pg/pull/518)
  Now they can be promoted to the old generation, which means they only get marked on major GC.
- New method PG::Connection#check_socket to check the socket state. [#521](https://github.com/ged/ruby-pg/pull/521)
- Mark many internal constants as private. [#522](https://github.com/ged/ruby-pg/pull/522)
- Update Windows fat binary gem to OpenSSL-3.1.0.

Bugfixes:

- Move nfields-check of stream-methods after result status check [#507](https://github.com/ged/ruby-pg/pull/507)
  This ensures that the nfield-check doesn't hide errors like statement timeout.

Removed:

- Remove deprecated PG::BasicTypeRegistry.register_type and co. [Part of #519](https://github.com/ged/ruby-pg/commit/2919ee1a0c6b216e18e1d06c95c2616ef69d2f97)
- Add deprecation warning about PG::Coder initialization per Hash argument. [#514](https://github.com/ged/ruby-pg/pull/514)
  It is recommended to use keyword arguments instead.
- The internal encoding cache was removed. [#516](https://github.com/ged/ruby-pg/pull/516)
  It shouldn't have a practical performance impact.
Repository:

- `rake test` tries to find PostgreSQL server commands by pg_config [#503](https://github.com/ged/ruby-pg/pull/503)
  So there's no need to set the PATH manually any longer.


## v1.4.6 [2023-02-26] Lars Kanis

- Add japanese README file. [#502](https://github.com/ged/ruby-pg/pull/502)
- Improve `discard_results` to not block under memory pressure. [#500](https://github.com/ged/ruby-pg/pull/500)
- Use a dedicated error class `PG::LostCopyState` for errors due to another query within `copy_data` and mention that it's probably due to another query.
  Previously the "no COPY in progress" `PG::Error` was less specific. [#499](https://github.com/ged/ruby-pg/pull/499)
- Make sure an error in `put_copy_end` of `copy_data` doesn't lose the original exception.
- Disable nonblocking mode while large object calls. [#498](https://github.com/ged/ruby-pg/pull/498)
  Since pg-1.3.0 libpq's "lo_*" calls failed when a bigger amount of data was transferred.
  This specifically forced the `active_storage-postgresql` gem to use pg-1.2.3.
- Add rdoc options to gemspec, so that "gem install" generates complete offline documentation.
- Add binary Windows gems for Ruby 3.2.
- Update Windows fat binary gem to PostgreSQL-15.2 and OpenSSL-3.0.8.


## v1.4.5 [2022-11-17] Lars Kanis

- Return the libpq default port when blank in conninfo. [#492](https://github.com/ged/ruby-pg/pull/492)
- Add PG::DEF_PGPORT constant and use it in specs. [#492](https://github.com/ged/ruby-pg/pull/492)
- Fix name resolution when empty or `nil` port is given.
- Update error codes to PostgreSQL-15.
- Update Windows fat binary gem to PostgreSQL-15.1 and OpenSSL-1.1.1s.


## v1.4.4 [2022-10-11] Lars Kanis

- Revert to let libpq do the host iteration while connecting. [#485](https://github.com/ged/ruby-pg/pull/485)
  Ensure that parameter `connect_timeout` is still respected.
- Handle multiple hosts in the connection string, where only one host has writable session.
  [#476](https://github.com/ged/ruby-pg/pull/476)
- Add some useful information to PG::Connection#inspect. [#487](https://github.com/ged/ruby-pg/pull/487)
- Support new pgresult_stream_any API in sequel_pg-1.17.0. [#481](https://github.com/ged/ruby-pg/pull/481)
- Update Windows fat binary gem to PostgreSQL-14.5.


## v1.4.3 [2022-08-09] Lars Kanis

- Avoid memory bloat possible in put_copy_data in pg-1.4.0 to 1.4.2. [#473](https://github.com/ged/ruby-pg/pull/473)
- Use Encoding::BINARY for JOHAB, removing some useless code. [#472](https://github.com/ged/ruby-pg/pull/472)


## v1.4.2 [2022-07-27] Lars Kanis

Bugfixes:

- Properly handle empty host parameter when connecting. [#471](https://github.com/ged/ruby-pg/pull/471)
- Update Windows fat binary gem to OpenSSL-1.1.1q.


## v1.4.1 [2022-06-24] Lars Kanis

Bugfixes:

- Fix another ruby-2.7 keyword warning. [#465](https://github.com/ged/ruby-pg/pull/465)
- Allow PG::Error to be created without arguments. [#466](https://github.com/ged/ruby-pg/pull/466)


## v1.4.0 [2022-06-20] Lars Kanis

Added:

- Add PG::Connection#hostaddr, present since PostgreSQL-12. [#453](https://github.com/ged/ruby-pg/pull/453)
- Add PG::Connection.conninfo_parse to wrap PQconninfoParse. [#453](https://github.com/ged/ruby-pg/pull/453)

Bugfixes:

- Try IPv6 and IPv4 addresses, if DNS resolves to both. [#452](https://github.com/ged/ruby-pg/pull/452)
- Re-add block-call semantics to PG::Connection.new accidentally removed in pg-1.3.0. [#454](https://github.com/ged/ruby-pg/pull/454)
- Handle client error after all data consumed in #copy_data for output. [#455](https://github.com/ged/ruby-pg/pull/455)
- Avoid spurious keyword argument warning on Ruby 2.7. [#456](https://github.com/ged/ruby-pg/pull/456)
- Change connection setup to respect connect_timeout parameter.
  [#459](https://github.com/ged/ruby-pg/pull/459)
- Fix indefinite hang in case of connection error on Windows [#458](https://github.com/ged/ruby-pg/pull/458)
- Set connection attribute of PG::Error in various places where it was missing. [#461](https://github.com/ged/ruby-pg/pull/461)
- Fix transaction leak on early break/return. [#463](https://github.com/ged/ruby-pg/pull/463)
- Update Windows fat binary gem to OpenSSL-1.1.1o and PostgreSQL-14.4.

Enhancements:

- Don't flush at each put_copy_data call, but flush at get_result. [#462](https://github.com/ged/ruby-pg/pull/462)


## v1.3.5 [2022-03-31] Lars Kanis

Bugfixes:

- Handle PGRES_COMMAND_OK in pgresult_stream_any. [#447](https://github.com/ged/ruby-pg/pull/447)
  Fixes usage when trying to stream the result of a procedure call that returns no results.

Enhancements:

- Rename BasicTypeRegistry#define_default_types to #register_default_types to use a more consistent terminology.
  Keeping define_default_types for compatibility.
- BasicTypeRegistry: return self instead of objects by accident.
  This allows call chaining.
- Add some April fun. [#449](https://github.com/ged/ruby-pg/pull/449)

Documentation:

- Refine documentation of conn.socket_io and conn.connect_poll


## v1.3.4 [2022-03-10] Lars Kanis

Bugfixes:

- Don't leak IO in case of connection errors. [#439](https://github.com/ged/ruby-pg/pull/439)
  Previously it was kept open until the PG::Connection was garbage collected.
- Fix a performance regression in conn.get_result noticed in single row mode. [#442](https://github.com/ged/ruby-pg/pull/442)
- Fix occasional error Errno::EBADF (Bad file descriptor) while connecting. [#444](https://github.com/ged/ruby-pg/pull/444)
- Fix compatibility of res.stream_each* methods with Fiber.scheduler. [#446](https://github.com/ged/ruby-pg/pull/446)
- Remove FL_TEST and FL_SET, which are MRI-internal. [#437](https://github.com/ged/ruby-pg/pull/437)

Enhancements:

- Allow pgresult_stream_any to be used by sequel_pg.
  [#443](https://github.com/ged/ruby-pg/pull/443)


## v1.3.3 [2022-02-22] Lars Kanis

Bugfixes:

- Fix omission of the third digit of IPv4 addresses in connection URI. [#435](https://github.com/ged/ruby-pg/pull/435)
- Fix wrong permission of certs/larskanis-2022.pem in the pg-1.3.2.gem. [#432](https://github.com/ged/ruby-pg/pull/432)


## v1.3.2 [2022-02-14] Lars Kanis

Bugfixes:

- Cancel only active query after failing transaction. [#430](https://github.com/ged/ruby-pg/pull/430)
  This avoids an incompatibility with pgbouncer since pg-1.3.0.
- Fix String objects with non-applied encoding when using COPY or record decoders. [#427](https://github.com/ged/ruby-pg/pull/427)
- Update Windows fat binary gem to PostgreSQL-14.2.

Enhancements:

- Improve extconf.rb checks to reduce the number of compiler calls.
- Add a check for PGRES_PIPELINE_SYNC, to make sure the library version and the header files are PostgreSQL-14+. [#429](https://github.com/ged/ruby-pg/pull/429)


## v1.3.1 [2022-02-01] Michael Granger

Bugfixes:

- Fix wrong handling of socket writability on Windows introduced in [#417](https://github.com/ged/ruby-pg/pull/417).
  This caused starvation in conn.put_copy_data.
- Fix error in PG.version_string(true). [#419](https://github.com/ged/ruby-pg/pull/419)
- Fix a regression in pg 1.3.0 where Ruby 2.x was busy-looping any fractional seconds for every wait. [#420](https://github.com/ged/ruby-pg/pull/420)

Enhancements:

- Raise an error when conn.copy_data is used in nonblocking mode.


## v1.3.0 [2022-01-20] Michael Granger

Install Enhancements:

- Print some install help if libpq wasn't found. [#396](https://github.com/ged/ruby-pg/pull/396)
  This should help to pick the necessary package without googling.
- Update Windows fat binary gem to OpenSSL-1.1.1m and PostgreSQL-14.1.
- Add binary Windows gems for Ruby 3.0 and 3.1.
- Make the library path of libpq available in ruby as PG::POSTGRESQL_LIB_PATH and add it to the search paths on Windows similar to +rpath+ on Unix systems.
  [#373](https://github.com/ged/ruby-pg/pull/373)
- Fall back to pkg-config if pg_config is not found. [#380](https://github.com/ged/ruby-pg/pull/380)
- Add option to extconf.rb to disable nogvl-wrapping of libpq functions.
  All methods (except PG::Connection.ping) are nonblocking now, so that GVL unlock is in theory no longer necessary.
  However it can have some advantage in concurrency, so that GVL unlock is still enabled by default.
  Use:
  - gem inst pg -- --disable-gvl-unlock

API Enhancements:

- Add full compatibility to Fiber.scheduler introduced in Ruby-3.0. [#397](https://github.com/ged/ruby-pg/pull/397)
  - Add async_connect and async_send methods and add specific specs for Fiber.scheduler [#342](https://github.com/ged/ruby-pg/pull/342)
  - Add async_get_result and async_get_last_result
  - Add async_get_copy_data
  - Implement async_put_copy_data/async_put_copy_end
  - Implement async_reset method using the nonblocking libpq API
  - Add async_set_client_encoding which is compatible to scheduler
  - Add async_cancel as a nonblocking version of conn#cancel
  - Add async_encrypt_password
  - Run Connection.ping in a second thread.
  - Make discard_results scheduler friendly
  - Do all socket waiting through the conn.socket_io object.
  - Avoid PG.connect blocking while address resolution by automatically providing the +hostaddr+ parameter and resolving in Ruby instead of libpq.
  - On Windows Fiber.scheduler support requires Ruby-3.1+.
    It is also only partly usable since many ruby IO methods are not yet scheduler aware on Windows.
- Add support for pipeline mode of PostgreSQL-14. [#401](https://github.com/ged/ruby-pg/pull/401)
- Allow specification of multiple hosts in PostgreSQL URI. [#387](https://github.com/ged/ruby-pg/pull/387)
- Add new method conn.backend_key - used to implement our own cancel method.

Type cast enhancements:

- Add PG::BasicTypeMapForQueries::BinaryData for encoding of bytea columns.
  [#348](https://github.com/ged/ruby-pg/pull/348)
- Reduce time to build coder maps and permit to reuse them for several type maps per PG::BasicTypeRegistry::CoderMapsBundle.new(conn) . [#376](https://github.com/ged/ruby-pg/pull/376)
- Make BasicTypeRegistry a class and use a global default instance of it.
  Now a local type registry can be instantiated and given to the type map, to avoid changing shared global states.
- Allow PG::BasicTypeMapForQueries to take a Proc as callback for undefined types.

Other Enhancements:

- Convert all PG classes implemented in C to TypedData objects. [#349](https://github.com/ged/ruby-pg/pull/349)
- Support ObjectSpace.memsize_of(obj) on all classes implemented in C. [#393](https://github.com/ged/ruby-pg/pull/393)
- Make all PG objects implemented in C memory moveable and therefore GC.compact friendly. [#349](https://github.com/ged/ruby-pg/pull/349)
- Update errorcodes and error classes to PostgreSQL-14.0.
- Add PG::CONNECTION_* constants for conn.status of newer PostgreSQL versions.
- Add better support for logical replication. [#339](https://github.com/ged/ruby-pg/pull/339)
- Change conn.socket_io to read+write mode and to a BasicSocket object instead of IO.
- Use rb_io_wait() and the conn.socket_io object if available for better compatibility to Fiber.scheduler .
  Fall back to rb_wait_for_single_fd() on ruby < 3.0.
- On Windows use a specialized wait function as a workaround for very poor performance of rb_io_wait(). [#416](https://github.com/ged/ruby-pg/pull/416)

Bugfixes:

- Release GVL while calling PQping which is a blocking method, but it didn't release GVL so far.
- Fix Connection#transaction to no longer block on interrupts, for instance when pressing Ctrl-C and cancel a running query. [#390](https://github.com/ged/ruby-pg/pull/390)
- Avoid casting of OIDs to fix compat with Redshift database.
  [#369](https://github.com/ged/ruby-pg/pull/369)
- Call conn.block before each conn.get_result call to avoid possible blocking in case of a slow network and multiple query results.
- Sporadic Errno::ENOTSOCK when using conn.socket_io on Windows [#398](https://github.com/ged/ruby-pg/pull/398)

Deprecated:

- Add deprecation warning to PG::BasicTypeRegistry.register_type and siblings.

Removed:

- Remove support of ruby-2.2, 2.3 and 2.4. Minimum is ruby-2.5 now.
- Remove support for PostgreSQL-9.2. Minimum is PostgreSQL-9.3 now.
- Remove constant PG::REVISION, which was broken since pg-1.1.4.

Repository:

- Replace Hoe by Bundler for gem packaging
- Add Github Actions CI and testing of source and binary gems.


## v1.2.3 [2020-03-18] Michael Granger

Bugfixes:

- Fix possible segfault at `PG::Coder#encode`, `decode` or their implicit calls through a typemap after GC.compact. [#327](https://github.com/ged/ruby-pg/pull/327)
- Fix possible segfault in `PG::TypeMapByClass` after GC.compact. [#328](https://github.com/ged/ruby-pg/pull/328)


## v1.2.2 [2020-01-06] Michael Granger

Enhancements:

- Add a binary gem for Ruby 2.7.


## v1.2.1 [2020-01-02] Michael Granger

Enhancements:

- Added internal API for sequel_pg compatibility.


## v1.2.0 [2019-12-20] Michael Granger

Repository:

- Our primary repository has been moved to Github https://github.com/ged/ruby-pg .
  Most of the issues from https://bitbucket.org/ged/ruby-pg have been migrated. [#43](https://github.com/ged/ruby-pg/pull/43)

API enhancements:

- Add PG::Result#field_name_type= and siblings to allow symbols to be used as field names. [#306](https://github.com/ged/ruby-pg/pull/306)
- Add new methods for error reporting:
  - PG::Connection#set_error_context_visibility
  - PG::Result#verbose_error_message
  - PG::Result#result_verbose_error_message (alias)
- Update errorcodes and error classes to PostgreSQL-12.0.
- New constants: PG_DIAG_SEVERITY_NONLOCALIZED, PQERRORS_SQLSTATE, PQSHOW_CONTEXT_NEVER, PQSHOW_CONTEXT_ERRORS, PQSHOW_CONTEXT_ALWAYS

Type cast enhancements:

- Add PG::TextEncoder::Record and PG::TextDecoder::Record for en/decoding of Composite Types. [#258](https://github.com/ged/ruby-pg/pull/258), [#36](https://github.com/ged/ruby-pg/pull/36)
- Add PG::BasicTypeRegistry.register_coder to register instances instead of classes.
  This is useful to register parametrized en/decoders like PG::TextDecoder::Record .
- Add PG::BasicTypeMapForQueries#encode_array_as= to switch between various interpretations of ruby arrays.
- Add Time, Array