# influxdb-ruby
[Gem Version](https://badge.fury.io/rb/influxdb)
[Build Status](https://github.com/influxdata/influxdb-ruby/actions)
The official Ruby client library for [InfluxDB](https://influxdata.com/time-series-platform/influxdb/).
Maintained by [@toddboom](https://github.com/toddboom) and [@dmke](https://github.com/dmke).
#### Note: This library is for use with InfluxDB 1.x. For connecting to InfluxDB 2.x instances, please use the [influxdb-client-ruby](https://github.com/influxdata/influxdb-client-ruby) client.
## Contents
- [Platform support](#platform-support)
- [Ruby support](#ruby-support)
- [Installation](#installation)
- [Usage](#usage)
- [Creating a client](#creating-a-client)
- [Writing data](#writing-data)
- [A Note About Time Precision](#a-note-about-time-precision)
- [Querying](#querying)
- [Advanced Topics](#advanced-topics)
- [Administrative tasks](#administrative-tasks)
- [Continuous queries](#continuous-queries)
- [Retention policies](#retention-policies)
- [Reading data](#reading-data)
- [De-normalization](#de--normalization)
- [Streaming response](#streaming-response)
- [Retry](#retry)
- [List of configuration options](#list-of-configuration-options)
- [Testing](#testing)
- [Contributing](#contributing)
## Platform support
> **Support for InfluxDB v0.8.x is now deprecated**. The final version of this
> library that will support the older InfluxDB interface is `v0.1.9`, which is
> available as a gem and tagged on this repository.
>
> If you're reading this message, then you should only expect support for
> InfluxDB v0.9.1 and higher.
## Ruby support
Since v0.7.0, this gem requires Ruby >= 2.3.0. MRI 2.2 *should* still work;
however, we are unable to test this properly, since our toolchain (Bundler)
has dropped support for it. Support for MRI < 2.2 is still available in the
v0.3.x series; see the [stable-03 branch](https://github.com/influxdata/influxdb-ruby/tree/stable-03)
for documentation.
## Installation
```
$ [sudo] gem install influxdb
```
Or add it to your `Gemfile`, and run `bundle install`.
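If you use Bundler, the `Gemfile` entry is a one-liner (add a version constraint if you want to pin a release):

``` ruby
# Gemfile
gem "influxdb"
```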
## Usage
*All examples assume you have a `require "influxdb"` in your code.*
### Creating a client
Connecting to a single host:
``` ruby
influxdb = InfluxDB::Client.new # default connects to localhost:8086
# or
influxdb = InfluxDB::Client.new host: "influxdb.domain.com"
```
Connecting to multiple hosts (with built-in load balancing and failover):
``` ruby
influxdb = InfluxDB::Client.new hosts: ["influxdb1.domain.com", "influxdb2.domain.com"]
```
#### Using a configuration URL
You can also provide a URL to connect to your server. This is particularly
useful for 12-factor apps, i.e. you can put the configuration in an environment
variable:
``` ruby
url = ENV["INFLUXDB_URL"] || "https://influxdb.example.com:8086/database_name?retry=3"
influxdb = InfluxDB::Client.new url: url
```
Please note that the config options found in the URL have lower precedence
than those explicitly given in the options hash. This means that the following
sample will use an open timeout of 10 seconds:
``` ruby
url = "https://influxdb.example.com:8086/database_name?open_timeout=3"
influxdb = InfluxDB::Client.new url: url, open_timeout: 10
```
#### Using a custom HTTP Proxy
By default, the `Net::HTTP` proxy behavior is used (see [Net::HTTP Proxy][proxy]).
You can optionally set a proxy address and port via the `proxy_addr` and
`proxy_port` options:
``` ruby
influxdb = InfluxDB::Client.new database,
  host: "influxdb.domain.com",
  proxy_addr: "your.proxy.addr",
  proxy_port: 8080
```
[proxy]: https://docs.ruby-lang.org/en/2.7.0/Net/HTTP.html#class-Net::HTTP-label-Proxies
### Writing data
Write some data:
``` ruby
username = 'foo'
password = 'bar'
database = 'site_development'
name = 'foobar'
influxdb = InfluxDB::Client.new database, username: username, password: password
# Enumerator that emits a sine wave
Value = (0..360).to_a.map {|i| Math.send(:sin, i / 10.0) * 10 }.each
loop do
  data = {
    values: { value: Value.next },
    tags: { wave: 'sine' } # tags are optional
  }
  influxdb.write_point(name, data)
  sleep 1
end
```
Write data with time precision (precision can be set in 2 ways):
``` ruby
username = 'foo'
password = 'bar'
database = 'site_development'
name = 'foobar'
time_precision = 's'
# either in the client initialization:
influxdb = InfluxDB::Client.new database,
  username: username,
  password: password,
  time_precision: time_precision

data = {
  values: { value: 0 },
  timestamp: Time.now.to_i # timestamp is optional; if omitted, the point is saved with the current time
}
influxdb.write_point(name, data)
# or in a method call:
influxdb.write_point(name, data, time_precision)
```
> **Attention:** Please also read the
> [note about time precision](#a-note-about-time-precision) below.
Allowed values for `time_precision` are:
- `"ns"` or `nil` for nanosecond
- `"u"` for microsecond
- `"ms"` for millisecond
- `"s"` for second
- `"m"` for minute
- `"h"` for hour
Write data with a specific retention policy:
``` ruby
database = 'site_development'
name = 'foobar'
precision = 's'
retention = '1h.cpu'
influxdb = InfluxDB::Client.new database,
  username: "foo",
  password: "bar"

data = {
  values: { value: 0 },
  tags: { foo: 'bar', bar: 'baz' },
  timestamp: Time.now.to_i
}
influxdb.write_point(name, data, precision, retention)
```
Write data while choosing the database:
``` ruby
database = 'site_development'
name = 'foobar'
precision = 's'
retention = '1h.cpu'
influxdb = InfluxDB::Client.new username: "foo",
  password: "bar"

data = {
  values: { value: 0 },
  tags: { foo: 'bar', bar: 'baz' },
  timestamp: Time.now.to_i
}
influxdb.write_point(name, data, precision, retention, database)
```
Write multiple points in a batch (performance boost):
``` ruby
data = [
  {
    series: 'cpu',
    tags: { host: 'server_1', region: 'us' },
    values: { internal: 5, external: 0.453345 }
  },
  {
    series: 'gpu',
    values: { value: 0.9999 },
  }
]
influxdb.write_points(data)
# you can also specify precision in method call
precision = 'm'
influxdb.write_points(data, precision)
```
Write multiple points in a batch with a specific retention policy:
``` ruby
data = [
  {
    series: 'cpu',
    tags: { host: 'server_1', region: 'us' },
    values: { internal: 5, external: 0.453345 }
  },
  {
    series: 'gpu',
    values: { value: 0.9999 },
  }
]
precision = 'm'
retention = '1h.cpu'
influxdb.write_points(data, precision, retention)
```
Write asynchronously:
``` ruby
database = 'site_development'
name = 'foobar'
influxdb = InfluxDB::Client.new database,
  username: "foo",
  password: "bar",
  async: true

data = {
  values: { value: 0 },
  tags: { foo: 'bar', bar: 'baz' },
  timestamp: Time.now.to_i
}
influxdb.write_point(name, data)
```
Using `async: true` is a shortcut for the following:
``` ruby
async_options = {
  # number of points to write to the server at once
  max_post_points: 1000,
  # queue capacity
  max_queue_size: 10_000,
  # number of threads
  num_worker_threads: 3,
  # max. time (in seconds) a thread sleeps before
  # checking if there are new jobs in the queue
  sleep_interval: 5,
  # whether client will block if queue is full
  block_on_full_queue: false,
  # max. time (in seconds) the main thread will wait for worker threads
  # to stop on shutdown; defaults to 2x sleep_interval
  shutdown_timeout: 10
}
influxdb = InfluxDB::Client.new database, async: async_options
```
Write data via UDP (note that a retention policy cannot be specified for UDP writes):
``` ruby
influxdb = InfluxDB::Client.new udp: { host: "127.0.0.1", port: 4444 }
name = 'hitchhiker'
data = {
  values: { value: 666 },
  tags: { foo: 'bar', bar: 'baz' }
}
influxdb.write_point(name, data)
```
Discard write errors:
``` ruby
influxdb = InfluxDB::Client.new(
  udp: { host: "127.0.0.1", port: 4444 },
  discard_write_errors: true
)
influxdb.write_point('hitchhiker', { values: { value: 666 } })
```
### A Note About Time Precision
The default precision in this gem is `"s"` (second), as Ruby's `Time#to_i`
operates on this resolution.
If you write data points with sub-second resolution, you _have_ to configure
your client instance with a more granular `time_precision` option **and** you
need to provide timestamp values which reflect this precision. **If you don't do
this, your points will be squashed!**
> A point is uniquely identified by the measurement name, tag set, and
> timestamp. If you submit a new point with the same measurement, tag set, and
> timestamp as an existing point, the field set becomes the union of the old
> field set and the new field set, where any ties go to the new field set. This
> is the intended behavior.
See [How does InfluxDB handle duplicate points?][docs-faq] for details.
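The field-set union described above can be modeled with a plain Ruby `Hash#merge` — an illustration of the rule only, not part of this gem's API:

``` ruby
# Two writes with identical measurement, tag set and timestamp:
# the field set becomes the union, and ties go to the new write.
old_fields = { "value" => 1.0, "count" => 10 }
new_fields = { "value" => 2.0, "note"  => "resubmitted" }

merged = old_fields.merge(new_fields)
# => {"value"=>2.0, "count"=>10, "note"=>"resubmitted"}
```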
For example, this is how to specify millisecond precision (which moves the
pitfall from the second- to the millisecond barrier):
```ruby
client = InfluxDB::Client.new(time_precision: "ms")
time = (Time.now.to_r * 1000).to_i
client.write_point("foobar", { values: { n: 42 }, timestamp: time })
```
For convenience, InfluxDB provides a few helper methods:
```ruby
# to get a timestamp with the precision configured in the client:
client.now
# to get a timestamp with the given precision:
InfluxDB.now(time_precision)
# to convert a Time into a timestamp with the given precision:
InfluxDB.convert_timestamp(Time.now, time_precision)
```
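These helpers boil down to simple arithmetic on `Time`. A stand-alone sketch of that math in plain Ruby (not using the gem):

``` ruby
# One fixed point in time, converted to the integer timestamp
# InfluxDB expects for each precision.
t = Time.utc(2015, 7, 9, 9, 3, 31)

timestamps = {
  "s"  => t.to_i,
  "ms" => (t.to_r * 1_000).to_i,
  "u"  => (t.to_r * 1_000_000).to_i,
  "ns" => (t.to_r * 1_000_000_000).to_i,
}
# each precision to the right is a factor of 1000 finer
```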
As with `client.write_point`, allowed values for `time_precision` are:
- `"ns"` or `nil` for nanosecond
- `"u"` for microsecond
- `"ms"` for millisecond
- `"s"` for second
- `"m"` for minute
- `"h"` for hour
[docs-faq]: http://docs.influxdata.com/influxdb/v1.7/troubleshooting/frequently-asked-questions/#how-does-influxdb-handle-duplicate-points
### Querying
``` ruby
database = 'site_development'
influxdb = InfluxDB::Client.new database,
  username: "foo",
  password: "bar"
# without a block:
influxdb.query 'select * from time_series_1 group by region'
# results are grouped by name, but also their tags:
#
# [
#   {
#     "name"=>"time_series_1",
#     "tags"=>{"region"=>"uk"},
#     "values"=>[
#       {"time"=>"2015-07-09T09:03:31Z", "count"=>32, "value"=>0.9673},
#       {"time"=>"2015-07-09T09:03:49Z", "count"=>122, "value"=>0.4444}
#     ]
#   },
#   {
#     "name"=>"time_series_1",
#     "tags"=>{"region"=>"us"},
#     "values"=>[
#       {"time"=>"2015-07-09T09:02:54Z", "count"=>55, "value"=>0.4343}
#     ]
#   }
# ]
# with a block:
influxdb.query 'select * from time_series_1 group by region' do |name, tags, points|
  printf "%s [ %p ]\n", name, tags
  points.each do |pt|
    printf "  -> %p\n", pt
  end
end
# result:
# time_series_1 [ {"region"=>"uk"} ]
#   -> {"time"=>"2015-07-09T09:03:31Z", "count"=>32, "value"=>0.9673}
#   -> {"time"=>"2015-07-09T09:03:49Z", "count"=>122, "value"=>0.4444}
# time_series_1 [ {"region"=>"us"} ]
#   -> {"time"=>"2015-07-09T09:02:54Z", "count"=>55, "value"=>0.4343}
```
If you would rather receive points with integer timestamps, you can set the
`epoch` parameter:
``` ruby
# globally, on client initialization:
influxdb = InfluxDB::Client.new database, epoch: 's'
influxdb.query 'select * from time_series group by region'
# [
#   {
#     "name"=>"time_series",
#     "tags"=>{"region"=>"uk"},
#     "values"=>[
#       {"time"=>1438411376, "count"=>32, "value"=>0.9673}
#     ]
#   }
# ]
# or for a specific query call:
influxdb.query 'select * from time_series group by region', epoch: 'ms'
# [
#   {
#     "name"=>"time_series",
#     "tags"=>{"region"=>"uk"},
#     "values"=>[
#       {"time"=>1438411376000, "count"=>32, "value"=>0.9673}
#     ]
#   }
# ]
```
Parameterized query strings work as expected:
``` ruby
influxdb = InfluxDB::Client.new database
named_parameter_query = "select * from time_series_0 where time > %{min_time}"
influxdb.query named_parameter_query, params: { min_time: 0 }
# compiles to:
# select * from time_series_0 where time > 0
positional_params_query = "select * from time_series_0 where f = %{1} and i < %{2}"
influxdb.query positional_params_query, params: ["foobar", 42]
# compiles to (note the automatic escaping):
# select * from time_series_0 where f = 'foobar' and i < 42
```
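The substitution shown above can be sketched stand-alone; `quote_param` and `build_query` below are hypothetical helpers for illustration, not the gem's internals:

``` ruby
# Strings get single-quoted, numbers are interpolated verbatim.
def quote_param(value)
  value.is_a?(String) ? "'#{value}'" : value.to_s
end

# Replace positional placeholders %{1}, %{2}, ... with quoted params.
def build_query(template, params)
  params.each_with_index do |value, i|
    template = template.gsub("%{#{i + 1}}", quote_param(value))
  end
  template
end

build_query("select * from time_series_0 where f = %{1} and i < %{2}", ["foobar", 42])
# => "select * from time_series_0 where f = 'foobar' and i < 42"
```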
## Advanced Topics
### Administrative tasks
Create a database:
``` ruby
database = 'site_development'
influxdb.create_database(database)
```
Delete a database:
``` ruby
database = 'site_development'
influxdb.delete_database(database)
```
List databases:
``` ruby
influxdb.list_databases
```
Create a user for a database:
``` ruby
database = 'site_development'
new_username = 'foo'
new_password = 'bar'
permission = :write
# with all permissions
influxdb.create_database_user(database, new_username, new_password)
# with specified permission - options are: :read, :write, :all
influxdb.create_database_user(database, new_username, new_password, permissions: permission)
```
Update a user password:
``` ruby
username = 'foo'
new_password = 'bar'
influxdb.update_user_password(username, new_password)
```
Grant user privileges on database:
``` ruby
username = 'foobar'
database = 'foo'
permission = :read # options are :read, :write, :all
influxdb.grant_user_privileges(username, database, permission)
```
Revoke user privileges from database:
``` ruby
username = 'foobar'
database = 'foo'
permission = :write # options are :read, :write, :all
influxdb.revoke_user_privileges(username, database, permission)
```
Delete a user:
``` ruby
username = 'foobar'
influxdb.delete_user(username)
```
List users:
``` ruby
influxdb.list_users
```
Create cluster admin:
``` ruby
username = 'foobar'
password = 'pwd'
influxdb.create_cluster_admin(username, password)
```
List cluster admins:
``` ruby
influxdb.list_cluster_admins
```
Revoke cluster admin privileges from user:
``` ruby
username = 'foobar'
influxdb.revoke_cluster_admin_privileges(username)
```
### Continuous Queries
List continuous queries of a database:
``` ruby
database = 'foo'
influxdb.list_continuous_queries(database)
```
Create a continuous query for a database:
``` ruby
database = 'foo'
name = 'clicks_count'
query = 'SELECT COUNT(name) INTO clicksCount_1h FROM clicks GROUP BY time(1h)'
influxdb.create_continuous_query(name, database, query)
```
Additionally, you can specify the resample interval and the time range over
which the CQ runs:
``` ruby
influxdb.create_continuous_query(name, database, query, resample_every: "10m", resample_for: "65m")
```
Delete a continuous query from a database:
``` ruby
database = 'foo'
name = 'clicks_count'
influxdb.delete_continuous_query(name, database)
```
### Retention Policies
List retention policies of a database:
``` ruby
database = 'foo'
influxdb.list_retention_policies(database)
```
Create a retention policy for a database:
``` ruby
database = 'foo'
name = '1h.cpu'
duration = '10m'
replication = 2
influxdb.create_retention_policy(name, database, duration, replication)
```
Delete a retention policy from a database:
``` ruby
database = 'foo'
name = '1h.cpu'
influxdb.delete_retention_policy(name, database)
```
Alter a retention policy for a database:
``` ruby
database = 'foo'
name = '1h.cpu'
duration = '10m'
replication = 2
influxdb.alter_retention_policy(name, database, duration, replication)
```
### Reading data
#### (De-) Normalization
By default, InfluxDB::Client will denormalize points (received from InfluxDB as
columns and rows). If you want the *raw* data, add `denormalize: false` to
the initialization options or to the query itself:
``` ruby
influxdb.query 'select * from time_series_1 group by region', denormalize: false
# [
#   {
#     "name"=>"time_series_1",
#     "tags"=>{"region"=>"uk"},
#     "columns"=>["time", "count", "value"],
#     "values"=>[
#       ["2015-07-09T09:03:31Z", 32, 0.9673],
#       ["2015-07-09T09:03:49Z", 122, 0.4444]
#     ]
#   },
#   {
#     "name"=>"time_series_1",
#     "tags"=>{"region"=>"us"},
#     "columns"=>["time", "count", "value"],
#     "values"=>[
#       ["2015-07-09T09:02:54Z", 55, 0.4343]
#     ]
#   }
# ]
influxdb.query 'select * from time_series_1 group by region', denormalize: false do |name, tags, points|
  printf "%s [ %p ]\n", name, tags
  points.each do |key, values|
    printf "  %p -> %p\n", key, values
  end
end

# time_series_1 [ {"region"=>"uk"} ]
#   columns -> ["time", "count", "value"]
#   values -> [["2015-07-09T09:03:31Z", 32, 0.9673], ["2015-07-09T09:03:49Z", 122, 0.4444]]
# time_series_1 [ {"region"=>"us"} ]
#   columns -> ["time", "count", "value"]
#   values -> [["2015-07-09T09:02:54Z", 55, 0.4343]]
```
You can also pick the database to query from:
``` ruby
influxdb.query 'select * from time_series_1', database: 'database'
```
#### Streaming response
If you expect large quantities of data in a response, you may want to enable
JSON streaming by setting a `chunk_size`:
``` ruby
influxdb = InfluxDB::Client.new database,
  username: username,
  password: password,
  chunk_size: 10000
```
See the [official documentation][docs-chunking] for more details.
#### Retry
By default, InfluxDB::Client will keep trying (with exponential fall-off) to
connect to the database until it gets a connection. If you want to retry only
a finite number of times (or disable retries altogether), you can pass the
`:retry` option.
`:retry` can be `true`, `false`, or an `Integer`, to retry an infinite number
of times, disable retries, or retry a finite number of times, respectively.
Passing `0` is equivalent to `false`, and `-1` is equivalent to `true`.
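The wait times in the session below follow a doubling schedule, starting at the default `:initial_delay` of 0.01 s and capped at `:max_delay` (see the configuration table). A sketch of that arithmetic, not the gem's internals:

``` ruby
# Delay before each of 8 retry attempts with the default settings.
def retry_delays(attempts, initial_delay: 0.01, max_delay: 30)
  (0...attempts).map { |n| [initial_delay * 2**n, max_delay].min }
end

retry_delays(8)
# => [0.01, 0.02, 0.04, 0.08, 0.16, 0.32, 0.64, 1.28]
```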
```
$ irb -r influxdb
> influxdb = InfluxDB::Client.new 'database', retry: 8
=> #
> influxdb.query 'select * from serie limit 1'
E, [2016-08-31T23:55:18.287947 #23476] WARN -- InfluxDB: Failed to contact host localhost: # - retrying in 0.01s.
E, [2016-08-31T23:55:18.298455 #23476] WARN -- InfluxDB: Failed to contact host localhost: # - retrying in 0.02s.
E, [2016-08-31T23:55:18.319122 #23476] WARN -- InfluxDB: Failed to contact host localhost: # - retrying in 0.04s.
E, [2016-08-31T23:55:18.359785 #23476] WARN -- InfluxDB: Failed to contact host localhost: # - retrying in 0.08s.
E, [2016-08-31T23:55:18.440422 #23476] WARN -- InfluxDB: Failed to contact host localhost: # - retrying in 0.16s.
E, [2016-08-31T23:55:18.600936 #23476] WARN -- InfluxDB: Failed to contact host localhost: # - retrying in 0.32s.
E, [2016-08-31T23:55:18.921740 #23476] WARN -- InfluxDB: Failed to contact host localhost: # - retrying in 0.64s.
E, [2016-08-31T23:55:19.562428 #23476] WARN -- InfluxDB: Failed to contact host localhost: # - retrying in 1.28s.
InfluxDB::ConnectionError: Tried 8 times to reconnect but failed.
```
## List of configuration options
This index might be out of date. Please refer to `InfluxDB::DEFAULT_CONFIG_OPTIONS`
in `lib/influxdb/config.rb` for the source of truth.
| Category | Option | Default value | Notes
|:----------------|:------------------------|:--------------|:-----
| HTTP connection | `:host` or `:hosts` | "localhost" | can be an array and can include port
| | `:port` | 8086 | fallback port, unless provided by `:host` option
| | `:prefix` | "" | URL path prefix (e.g. server is behind reverse proxy)
| | `:username` | "root" | user credentials
| | `:password` | "root" | user credentials
| | `:open_timeout` | 5 | socket timeout
| | `:read_timeout` | 300 | socket timeout
| | `:auth_method` | "params" | "params", "basic_auth" or "none"
| Retry | `:retry` | -1 | max. number of retry attempts (reading and writing)
| | `:initial_delay` | 0.01 | initial wait time (doubles every retry attempt)
| | `:max_delay` | 30 | max. wait time when retrying
| SSL/HTTPS | `:use_ssl` | false | whether or not to use SSL (HTTPS)
| | `:verify_ssl` | true | verify certificate when using SSL
| | `:ssl_ca_cert` | false | path to or name of CA cert
| Database | `:database` | *empty* | name of database
| | `:time_precision` | "s" | time resolution for data sent to the server
| | `:epoch` | false | time resolution for server responses (false = server default)
| Writer | `:async` | false | Async options hash, [details here](#async-options)
| | `:udp` | false | UDP connection info, [details here](#udp-options)
| | `:discard_write_errors` | false | suppress UDP socket errors
| Query | `:chunk_size` | *empty* | [details here](#streaming-response)
| | `:denormalize` | true | format of result
## Testing
```
git clone git@github.com:influxdata/influxdb-ruby.git
cd influxdb-ruby
bundle
bundle exec rake
```
## Contributing
- Fork this repository on GitHub.
- Make your changes.
- Add tests.
- Add an entry in the `CHANGELOG.md` in the "unreleased" section on top.
- Run the tests: `bundle exec rake`.
- Send a pull request.
- Please rebase against the master branch.
- If your changes look good, we'll merge them.
#!/bin/sh -e
# influxdb-0.8.1/bin/provision.sh
if [ -z "$influx_version" ]; then
  echo "== Provisioning InfluxDB: Skipping, influx_version is empty"
  exit 0
else
  echo "== Provisioning InfluxDB ${influx_version}"
fi
package_name="influxdb_${influx_version}_amd64.deb"
[ -z "${channel}" ] && channel="releases"
download_url="https://dl.influxdata.com/influxdb/${channel}/${package_name}"
echo "== Downloading package"
if which curl >/dev/null 2>&1; then
  curl "${download_url}" > "${HOME}/${package_name}"
else
  echo >&2 "E: Could not find curl"
  exit 1
fi
echo "== Download verification"
sha2_sum=$(sha256sum "${HOME}/${package_name}" | awk '{ print $1 }')
if [ -z "${pkghash}" ]; then
  echo "-- Skipping, pkghash is empty"
elif [ -n "${pkghash}" ] && [ "${sha2_sum}" != "${pkghash}" ]; then
  echo >&2 "E: Hash sum mismatch (got ${sha2_sum}, expected ${pkghash})"
  exit 1
fi
echo "-- Download has SHA256 hash: ${sha2_sum}"
echo "== Installing"
sudo dpkg -i "${HOME}/${package_name}"
sudo service influxdb start || true
echo "-- waiting for daemon to start"
while ! curl --head --fail --silent http://localhost:8086/ping; do
  echo -n "."
  sleep 1
done
echo "== Configuring"
echo "-- create admin user"
/usr/bin/influx -execute "CREATE USER root WITH PASSWORD 'toor' WITH ALL PRIVILEGES"
echo "-- create non-admin user"
/usr/bin/influx -execute "CREATE USER test_user WITH PASSWORD 'resu_tset'"
echo "-- create databases"
/usr/bin/influx -execute "CREATE DATABASE db_one"
/usr/bin/influx -execute "CREATE DATABASE db_two"
echo "-- grant access"
/usr/bin/influx -execute "GRANT ALL ON db_two TO test_user"
echo "== Download and import NOAA sample data"
curl https://s3.amazonaws.com/noaa.water-database/NOAA_data.txt > noaa.txt
/usr/bin/influx -import -path noaa.txt -precision s
echo "-- grant access"
/usr/bin/influx -execute "GRANT ALL ON NOAA_water_database TO test_user"
echo "== Enable authentication"
if [ ! -f /etc/influxdb/influxdb.conf ]; then
  echo >&2 "E: config file not found"
  exit 1
fi
sudo sed -i 's/auth-enabled = false/auth-enabled = true/' /etc/influxdb/influxdb.conf
sudo service influxdb restart || true
echo "-- waiting for daemon to restart"
while ! curl --head --fail --silent http://localhost:8086/ping; do
  echo -n "."
  sleep 1
done
echo "== Done"
# influxdb-0.8.1/influxdb.gemspec
lib = File.expand_path('lib', __dir__)
$LOAD_PATH.unshift(lib) unless $LOAD_PATH.include?(lib)
require 'influxdb/version'
Gem::Specification.new do |spec|
  spec.name          = "influxdb"
  spec.version       = InfluxDB::VERSION
  spec.authors       = ["Todd Persen"]
  spec.email         = ["influxdb@googlegroups.com"]
  spec.description   = "This is the official Ruby library for InfluxDB."
  spec.summary       = "Ruby library for InfluxDB."
  spec.homepage      = "http://influxdb.org"
  spec.license       = "MIT"

  spec.files         = `git ls-files`.split($/) # rubocop:disable Style/SpecialGlobalVars
  spec.test_files    = spec.files.grep(%r{^(test|spec|features|smoke)/})
  spec.require_paths = ["lib"]

  spec.required_ruby_version = ">= 2.2.0"

  spec.add_development_dependency "bundler"
  spec.add_development_dependency "rake"
  spec.add_development_dependency "rspec", "~> 3.6"
  spec.add_development_dependency "rubocop", "~> 0.61.1"
  spec.add_development_dependency "webmock", "~> 3.0"
end
# influxdb-0.8.1/spec/smoke/smoke_batch_spec.rb
require "spec_helper"
describe InfluxDB::Client, smoke: true, if: min_influx_version("1.2.0") do
  before do
    WebMock.allow_net_connect!
  end

  after do
    WebMock.disable_net_connect!
  end

  let(:client) do
    InfluxDB::Client.new \
      database: "NOAA_water_database",
      username: "test_user",
      password: "resu_tset",
      retry: 4
  end

  let :queries do
    [
      "select count(water_level) from h2o_feet where location = 'santa_monica'",
      "select * from h2o_feet where time > now()", # empty result
      "select count(water_level) from h2o_feet where location = 'coyote_creek'",
    ]
  end

  it "#query filters empty results incorrect result" do
    results = client.query(queries.join(";"))
    expect(results.size).to be 2 # but should be 3!
    expect(results[0]["values"][0]["count"]).to be 7654
    expect(results[1]["values"][0]["count"]).to be 7604
  end

  context "#batch.execute" do
    it "returns expected results" do
      results = client.batch do |b|
        queries.each { |q| b.add(q) }
      end.execute

      expect(results.size).to be 3
      expect(results[0][0]["values"][0]["count"]).to be 7654
      expect(results[1]).to eq []
      expect(results[2][0]["values"][0]["count"]).to be 7604
    end

    it "with block yields statement id" do
      batch = client.batch do |b|
        queries.each { |q| b.add(q) }
      end

      batch.execute do |sid, _, _, values|
        case sid
        when 0
          expect(values[0]["count"]).to be 7654
        when 1
          expect(values).to eq []
        when 2
          expect(values[0]["count"]).to be 7604
        end
      end
    end

    context "with tags" do
      let :queries do
        [
          "select count(water_level) from h2o_feet group by location",
          "select * from h2o_feet where time > now()", # empty result
        ]
      end

      it "returns expected results" do
        results = client.batch do |b|
          queries.each { |q| b.add(q) }
        end.execute

        expect(results.size).to be 2
        results[0].each do |res|
          location = res["tags"]["location"]
          expect(%w[coyote_creek santa_monica]).to include location
          value = location == "santa_monica" ? 7654 : 7604
          expect(res["values"][0]["count"]).to be value
        end
      end

      it "with block yields statement id" do
        batch = client.batch do |b|
          queries.each { |q| b.add(q) }
        end

        got_santa_monica = got_coyote_creek = got_empty_result = false

        batch.execute do |sid, _, tags, values|
          case [sid, tags["location"]]
          when [0, "santa_monica"]
            expect(values[0]["count"]).to be 7654
            got_santa_monica = true
          when [0, "coyote_creek"]
            expect(values[0]["count"]).to be 7604
            got_coyote_creek = true
          when [1, nil]
            expect(values).to eq []
            got_empty_result = true
          end
        end

        expect(got_coyote_creek).to be true
        expect(got_santa_monica).to be true
        expect(got_empty_result).to be true
      end
    end
  end
end
# influxdb-0.8.1/spec/smoke/smoke_spec.rb
require "spec_helper"
describe InfluxDB::Client, smoke: true do
  before do
    WebMock.allow_net_connect!
  end

  after do
    WebMock.disable_net_connect!
  end

  let(:client) do
    InfluxDB::Client.new \
      database: "NOAA_water_database",
      username: "test_user",
      password: "resu_tset",
      retry: 4
  end

  context "connects to the database" do
    it "returns the version number" do
      expect(client.version).to be_truthy
    end
  end

  context "retrieves data from the NOAA database" do
    sample_data1 = {
      "time"              => "2019-08-17T00:00:00Z",
      "level description" => "below 3 feet",
      "location"          => "santa_monica",
      "water_level"       => 2.064
    }

    sample_data2 = {
      "time"              => "2019-08-17T00:12:00Z",
      "level description" => "below 3 feet",
      "location"          => "santa_monica",
      "water_level"       => 2.028
    }

    it "returns all five measurements" do
      result = client.query("show measurements")[0]["values"].map { |v| v["name"] }
      expect(result).to eq(%w[average_temperature h2o_feet h2o_pH h2o_quality h2o_temperature])
    end

    it "counts the number of non-null values of water level in h2o feet" do
      result = client.query("select count(water_level) from h2o_feet")[0]["values"][0]["count"]
      expect(result).to eq(15_258)
    end

    it "selects the first five observations in the measurement h2o_feet" do
      result = client
               .query("select * from h2o_feet WHERE location = 'santa_monica'")
               .first["values"]
      expect(result.size).to eq(7654)
      expect(result).to include(sample_data1)
      expect(result).to include(sample_data2)
    end
  end

  context "batch query" do
    let :queries do
      [
        "select count(water_level) from h2o_feet where location = 'santa_monica'",
        "select * from h2o_feet where time > now()", # empty result
        "select count(water_level) from h2o_feet where location = 'coyote_creek'",
      ]
    end

    it "#query filters empty results incorrect result" do
      results = client.query(queries.join(";"))
      expect(results.size).to be 2 # but should be 3!
      expect(results[0]["values"][0]["count"]).to be 7654
      expect(results[1]["values"][0]["count"]).to be 7604
    end
  end
end
# influxdb-0.8.1/spec/spec_helper.rb
require "influxdb"
require "webmock/rspec"

# rubocop:disable Lint/HandleExceptions
begin
  require "pry-byebug"
rescue LoadError
end
# rubocop:enable Lint/HandleExceptions

def min_influx_version(version)
  v = ENV.fetch("influx_version", "0")
  return true if v == "nightly"

  current = Gem::Version.new(v)
  current >= Gem::Version.new(version)
end

RSpec.configure do |config|
  config.color = ENV["CI"] != "true"
  config.filter_run_excluding smoke: ENV["CI"] != "true" || !ENV.key?("influx_version")

  puts "SMOKE TESTS ARE NOT CURRENTLY RUNNING" if ENV["CI"] != "true"

  # rubocop:disable Style/ConditionalAssignment
  if config.files_to_run.one? || ENV["CI"] == "true"
    config.formatter = :documentation
  else
    config.formatter = :progress
  end
  # rubocop:enable Style/ConditionalAssignment

  if ENV["LOG"]
    Dir.mkdir("tmp") unless Dir.exist?("tmp")
    logfile = File.open("tmp/spec.log", File::WRONLY | File::TRUNC | File::CREAT)

    InfluxDB::Logging.logger = Logger.new(logfile).tap do |logger|
      logger.formatter = proc { |severity, _datetime, progname, message|
        format "%-5s - %s: %s\n", severity, progname, message
      }
    end

    config.before(:each) do
      InfluxDB::Logging.logger.info("RSpec") { self.class }
      InfluxDB::Logging.logger.info("RSpec") { @__inspect_output }
      InfluxDB::Logging.log_level = Logger.const_get(ENV["LOG"].upcase)
    end

    config.after(:each) do
      logfile.write "\n"
    end
  end
end
# influxdb-0.8.1/spec/influxdb/cases/show_field_keys_spec.rb
require "spec_helper"
require "json"

describe InfluxDB::Client do
  let(:subject) do
    described_class.new \
      database: "database",
      host: "influxdb.test",
      port: 9999,
      username: "username",
      password: "password"
  end

  let(:query)    { nil }
  let(:response) { { "results" => [{ "statement_id" => 0 }] } }

  before do
    stub_request(:get, "http://influxdb.test:9999/query")
      .with(query: { u: "username", p: "password", q: query, db: "database" })
      .to_return(body: JSON.generate(response))
  end

  describe "#show_field_keys" do
    let(:query) { "SHOW FIELD KEYS" }
    let(:response) do
      {
        "results" => [{
          "series" => [{
            "name"    => "measurement_a",
            "columns" => %w[fieldKey fieldType],
            "values"  => [%w[a_string_field string],
                          %w[a_boolean_field boolean],
                          %w[a_float_field float],
                          %w[an_integer_field integer]]
          }, {
            "name"    => "measurement_b",
            "columns" => %w[fieldKey fieldType],
            "values"  => [%w[another_string string]]
          }]
        }]
      }
    end

    let(:expected_result) do
      {
        "measurement_a" => {
          "a_string_field"   => ["string"],
          "a_boolean_field"  => ["boolean"],
          "a_float_field"    => ["float"],
          "an_integer_field" => ["integer"],
        },
        "measurement_b" => {
          "another_string" => ["string"],
        }
      }
    end

    it "should GET a list of field/type pairs per measurement" do
      expect(subject.show_field_keys).to eq(expected_result)
    end
  end
end
# influxdb-0.8.1/spec/influxdb/cases/query_retention_policy_spec.rb
require "spec_helper"
require "json"
describe InfluxDB::Client do
let(:subject) do
described_class.new \
database: "database",
host: "influxdb.test",
port: 9999,
username: "username",
password: "password",
time_precision: "s"
end
let(:query) { nil }
let(:response) { { "results" => [{ "statement_id" => 0 }] } }
before do
stub_request(:get, "http://influxdb.test:9999/query")
.with(query: { u: "username", p: "password", q: query })
.to_return(body: JSON.generate(response))
end
describe "#list_retention_policies" do
let(:query) { "SHOW RETENTION POLICIES ON \"database\"" }
context "database with RPs" do
let(:response) { { "results" => [{ "statement_id" => 0, "series" => [{ "columns" => %w[name duration replicaN default], "values" => [["default", "0", 1, true], ["another", "1", 2, false]] }] }] } }
let(:expected_result) { [{ "name" => "default", "duration" => "0", "replicaN" => 1, "default" => true }, { "name" => "another", "duration" => "1", "replicaN" => 2, "default" => false }] }
it "should GET a list of retention policies" do
expect(subject.list_retention_policies('database')).to eq(expected_result)
end
end
context "database without RPs" do
let(:response) { { "results" => [{ "statement_id" => 0, "series" => [{ "columns" => %w[name duration shardGroupDuration replicaN default] }] }] } }
let(:expected_result) { [] }
it "should GET a list of retention policies" do
expect(subject.list_retention_policies('database')).to eq(expected_result)
end
end
end
describe "#create_retention_policy" do
context "default" do
let(:query) { "CREATE RETENTION POLICY \"1h.cpu\" ON \"foo\" DURATION 1h REPLICATION 2 DEFAULT" }
it "should GET to create a retention policy" do
expect(subject.create_retention_policy('1h.cpu', 'foo', '1h', 2, true)).to be_a(Net::HTTPOK)
end
end
context "non-default" do
let(:query) { "CREATE RETENTION POLICY \"1h.cpu\" ON \"foo\" DURATION 1h REPLICATION 2" }
it "should GET to create a retention policy" do
expect(subject.create_retention_policy('1h.cpu', 'foo', '1h', 2)).to be_a(Net::HTTPOK)
end
end
context "default_with_shard_duration" do
let(:query) { "CREATE RETENTION POLICY \"1h.cpu\" ON \"foo\" DURATION 48h REPLICATION 2 SHARD DURATION 1h DEFAULT" }
it "should GET to create a retention policy" do
expect(subject.create_retention_policy('1h.cpu', 'foo', '48h', 2, true, shard_duration: '1h')).to be_a(Net::HTTPOK)
end
end
context "non-default_with_shard_duration" do
let(:query) { "CREATE RETENTION POLICY \"1h.cpu\" ON \"foo\" DURATION 48h REPLICATION 2 SHARD DURATION 1h" }
it "should GET to create a retention policy" do
expect(subject.create_retention_policy('1h.cpu', 'foo', '48h', 2, shard_duration: '1h')).to be_a(Net::HTTPOK)
end
end
end
describe "#delete_retention_policy" do
let(:query) { "DROP RETENTION POLICY \"1h.cpu\" ON \"foo\"" }
it "should GET to remove a retention policy" do
expect(subject.delete_retention_policy('1h.cpu', 'foo')).to be_a(Net::HTTPOK)
end
end
describe "#alter_retention_policy" do
context "default" do
let(:query) { "ALTER RETENTION POLICY \"1h.cpu\" ON \"foo\" DURATION 1h REPLICATION 2 DEFAULT" }
it "should GET to alter a retention policy" do
expect(subject.alter_retention_policy('1h.cpu', 'foo', '1h', 2, true)).to be_a(Net::HTTPOK)
end
end
context "non-default" do
let(:query) { "ALTER RETENTION POLICY \"1h.cpu\" ON \"foo\" DURATION 1h REPLICATION 2" }
it "should GET to alter a retention policy" do
expect(subject.alter_retention_policy('1h.cpu', 'foo', '1h', 2)).to be_a(Net::HTTPOK)
end
end
context "default_with_shard_duration" do
let(:query) { "ALTER RETENTION POLICY \"1h.cpu\" ON \"foo\" DURATION 48h REPLICATION 2 SHARD DURATION 1h DEFAULT" }
it "should GET to alter a retention policy" do
expect(subject.alter_retention_policy('1h.cpu', 'foo', '48h', 2, true, shard_duration: '1h')).to be_a(Net::HTTPOK)
end
end
context "non-default_with_shard_duration" do
let(:query) { "ALTER RETENTION POLICY \"1h.cpu\" ON \"foo\" DURATION 48h REPLICATION 2 SHARD DURATION 1h" }
it "should GET to alter a retention policy" do
expect(subject.alter_retention_policy('1h.cpu', 'foo', '48h', 2, shard_duration: '1h')).to be_a(Net::HTTPOK)
end
end
end
end
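The stubs above all pin down the exact InfluxQL strings the client is expected to send. A hedged sketch of how such a query string can be assembled (the method name here is hypothetical; the real client builds these internally) is:

```ruby
# Build a CREATE/ALTER RETENTION POLICY statement matching the stubbed
# queries in the spec above. `verb` is "CREATE" or "ALTER".
def retention_policy_query(verb, name, database, duration, replication,
                           default: false, shard_duration: nil)
  q = "#{verb} RETENTION POLICY \"#{name}\" ON \"#{database}\" " \
      "DURATION #{duration} REPLICATION #{replication}"
  q += " SHARD DURATION #{shard_duration}" if shard_duration
  q += " DEFAULT" if default
  q
end
```

Note the ordering: `SHARD DURATION` precedes `DEFAULT`, matching the `default_with_shard_duration` stubs.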
# File: influxdb-0.8.1/spec/influxdb/cases/query_database_spec.rb
require "spec_helper"
require "json"
describe InfluxDB::Client do
let(:subject) do
described_class.new \
database: "database",
host: "influxdb.test",
port: 9999,
username: "username",
password: "password",
time_precision: "s"
end
let(:query) { nil }
let(:response) { { "results" => [{ "statement_id" => 0 }] } }
before do
stub_request(:get, "http://influxdb.test:9999/query")
.with(query: { u: "username", p: "password", q: query })
.to_return(body: JSON.generate(response))
end
describe "#create_database" do
describe "from param" do
let(:query) { "CREATE DATABASE \"foo\"" }
it "should GET to create a new database" do
expect(subject.create_database("foo")).to be_a(Net::HTTPOK)
end
end
describe "from config" do
let(:query) { "CREATE DATABASE \"database\"" }
it "should GET to create a new database using database name from config" do
expect(subject.create_database).to be_a(Net::HTTPOK)
end
end
end
describe "#delete_database" do
describe "from param" do
let(:query) { "DROP DATABASE \"foo\"" }
it "should GET to remove a database" do
expect(subject.delete_database("foo")).to be_a(Net::HTTPOK)
end
end
describe "from config" do
let(:query) { "DROP DATABASE \"database\"" }
it "should GET to remove a database using database name from config" do
expect(subject.delete_database).to be_a(Net::HTTPOK)
end
end
end
describe "#list_databases" do
let(:query) { "SHOW DATABASES" }
let(:response) { { "results" => [{ "series" => [{ "name" => "databases", "columns" => ["name"], "values" => [["foobar"]] }] }] } }
let(:expected_result) { [{ "name" => "foobar" }] }
it "should GET a list of databases" do
expect(subject.list_databases).to eq(expected_result)
end
end
end
# File: influxdb-0.8.1/spec/influxdb/cases/udp_client_spec.rb
require "spec_helper"
describe InfluxDB::Client do
let(:socket) { UDPSocket.new.tap { |s| s.bind "localhost", 0 } }
after { socket.close rescue nil }
let(:client) { described_class.new(udp: { host: "localhost", port: socket.addr[1] }) }
specify { expect(client.writer).to be_a(InfluxDB::Writer::UDP) }
describe "#write" do
let(:message) { 'responses,region=eu value=5i' }
it "sends a UDP packet" do
client.write_point("responses", values: { value: 5 }, tags: { region: 'eu' })
rec_message = socket.recvfrom(30).first
expect(rec_message).to eq message
end
end
describe "#write with discard_write_errors" do
let(:client) do
described_class.new \
udp: { host: "localhost", port: socket.addr[1] },
discard_write_errors: true
end
it "doesn't raise" do
client.write_point("responses", values: { value: 5 }, tags: { region: 'eu' })
socket.close
client.write_point("responses", values: { value: 7 }, tags: { region: 'eu' })
allow(client).to receive(:log)
expect do
client.write_point("responses", values: { value: 7 }, tags: { region: 'eu' })
end.not_to raise_error
end
end
end
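The UDP specs above assert that `write_point` serializes to line protocol (`responses,region=eu value=5i`) and ships it over a datagram socket. A simplified sketch of that round trip, assuming naive serialization with no tag/field escaping (the real writer handles escaping), is:

```ruby
require "socket"

# Serialize a point to line protocol and send it over UDP.
# Integer field values get the "i" suffix, as in the spec's expected message.
def write_udp(host, port, series, tags, values)
  tag_part   = tags.map   { |k, v| "#{k}=#{v}" }.join(",")
  field_part = values.map { |k, v| "#{k}=#{v.is_a?(Integer) ? "#{v}i" : v}" }.join(",")
  line = "#{series},#{tag_part} #{field_part}"
  sock = UDPSocket.new
  sock.send(line, 0, host, port)
  sock.close
  line
end
```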
# File: influxdb-0.8.1/spec/influxdb/cases/query_series_spec.rb
require "spec_helper"
require "json"
describe InfluxDB::Client do
let(:subject) do
described_class.new(
"database",
**{
host: "influxdb.test",
port: 9999,
username: "username",
password: "password",
time_precision: "s"
}.merge(args)
)
end
let(:args) { {} }
describe "GET #list_series" do
let(:response) { { "results" => [{ "series" => [{ "columns" => "key", "values" => [["series1,name=default,duration=0"], ["series2,name=another,duration=1"]] }] }] } }
let(:data) { %w[series1 series2] }
let(:query) { "SHOW SERIES" }
before do
stub_request(:get, "http://influxdb.test:9999/query")
.with(query: { u: "username", p: "password", q: query, db: "database" })
.to_return(body: JSON.generate(response))
end
it "returns a list of all series names" do
expect(subject.list_series).to eq data
end
end
describe "GET empty #list_series" do
let(:response) { { "results" => [{ "series" => [] }] } }
let(:query) { "SHOW SERIES" }
before do
stub_request(:get, "http://influxdb.test:9999/query")
.with(query: { u: "username", p: "password", q: query, db: "database" })
.to_return(body: JSON.generate(response))
end
it "returns an empty list" do
expect(subject.list_series).to eq []
end
end
describe "#delete_series" do
describe "without a where clause" do
let(:name) { "events" }
let(:query) { "DROP SERIES FROM \"#{name}\"" }
before do
stub_request(:get, "http://influxdb.test:9999/query")
.with(query: { u: "username", p: "password", q: query, db: "database" })
end
it "should GET to remove a series" do
expect(subject.delete_series(name)).to be_a(Net::HTTPOK)
end
end
describe "with a where clause" do
let(:name) { "events" }
let(:query) { "DROP SERIES FROM \"#{name}\" WHERE \"tag\"='value'" }
before do
stub_request(:get, "http://influxdb.test:9999/query")
.with(query: { u: "username", p: "password", q: query, db: "database" })
end
it "should GET to remove a series" do
expect(subject.delete_series(name, where: "\"tag\"='value'")).to be_a(Net::HTTPOK)
end
end
end
end
# File: influxdb-0.8.1/spec/influxdb/cases/query_measurements.rb
require "spec_helper"
require "json"
describe InfluxDB::Client do
let(:subject) do
described_class.new \
database: "database",
host: "influxdb.test",
port: 9999,
username: "username",
password: "password",
time_precision: "s"
end
let(:query) { nil }
let(:response) { { "results" => [{ "statement_id" => 0 }] } }
before do
stub_request(:get, "http://influxdb.test:9999/query")
.with(query: { u: "username", p: "password", q: query })
.to_return(body: JSON.generate(response))
end
describe "#list_measurements" do
let(:query) { "SHOW MEASUREMENTS" }
context "with measurements" do
let(:response) { { "results" => [{ "statement_id" => 0, "series" => [{ "columns" => "name", "values" => [["average_temperature"], ["h2o_feet"], ["h2o_pH"], ["h2o_quality"], ["h2o_temperature"]] }] }] } }
let(:expected_result) { %w[average_temperature h2o_feet h2o_pH h2o_quality h2o_temperature] }
it "should GET a list of measurements" do
expect(subject.list_measurements).to eq(expected_result)
end
end
context "without measurements" do
let(:response) { { "results" => [{ "statement_id" => 0 }] } }
let(:expected_result) { nil }
it "should GET a list of measurements" do
expect(subject.list_measurements).to eq(expected_result)
end
end
end
describe "#delete_measurement" do
let(:query) { "DROP MEASUREMENT \"foo\"" }
it "should GET to remove a measurement" do
expect(subject.delete_measurement('foo')).to be true
end
end
end
# File: influxdb-0.8.1/spec/influxdb/cases/query_cluster_spec.rb
require "spec_helper"
require "json"
describe InfluxDB::Client do
let(:subject) do
described_class.new "database", **{
host: "influxdb.test",
port: 9999,
username: "username",
password: "password",
time_precision: "s",
}.merge(args)
end
let(:args) { {} }
describe "#create_cluster_admin" do
let(:user) { 'adminadmin' }
let(:pass) { 'passpass' }
let(:query) { "CREATE USER \"#{user}\" WITH PASSWORD '#{pass}' WITH ALL PRIVILEGES" }
context 'with existing admin user' do
before do
stub_request(:get, "http://influxdb.test:9999/query").with(
query: { u: "username", p: "password", q: query }
)
end
it "should GET to create a new cluster admin" do
expect(subject.create_cluster_admin(user, pass)).to be_a(Net::HTTPOK)
end
end
context 'with no admin user' do
let(:args) { { auth_method: 'none' } }
before do
stub_request(:get, "http://influxdb.test:9999/query").with(
query: { q: query }
)
end
it "should GET to create a new cluster admin" do
expect(subject.create_cluster_admin(user, pass)).to be_a(Net::HTTPOK)
end
end
end
describe "#list_cluster_admins" do
let(:response) do
{ "results" => [{ "statement_id" => 0,
"series" => [{ "columns" => %w[user admin],
"values" => [["dbadmin", true], ["foobar", false]] }] }] }
end
let(:expected_result) { ["dbadmin"] }
before do
stub_request(:get, "http://influxdb.test:9999/query").with(
query: { u: "username", p: "password", q: "SHOW USERS" }
).to_return(body: JSON.generate(response), status: 200)
end
it "should GET a list of cluster admins" do
expect(subject.list_cluster_admins).to eq(expected_result)
end
end
describe "#revoke_cluster_admin_privileges" do
let(:user) { 'useruser' }
let(:query) { "REVOKE ALL PRIVILEGES FROM \"#{user}\"" }
before do
stub_request(:get, "http://influxdb.test:9999/query").with(
query: { u: "username", p: "password", q: query }
)
end
it "should GET to revoke cluster admin privileges from a user" do
expect(subject.revoke_cluster_admin_privileges(user)).to be_a(Net::HTTPOK)
end
end
end
# File: influxdb-0.8.1/spec/influxdb/cases/async_client_spec.rb
require "spec_helper"
require "timeout"
describe InfluxDB::Client do
let(:async_options) { { sleep_interval: 0.1 } }
let(:client) { described_class.new(async: async_options) }
let(:subject) { client }
let(:stub_url) { "http://localhost:8086/write?db=&p=root&precision=s&u=root" }
let(:worker) { client.writer.worker }
specify { expect(subject.writer).to be_a(InfluxDB::Writer::Async) }
describe "#write_point" do
it "sends writes to client" do
post_request = stub_request(:post, stub_url).to_return(status: 204)
(worker.max_post_points + 100).times do |i|
subject.write_point('a', values: { i: i })
end
sleep 1 until worker.threads.none? { |t| t[:influxdb].nil? }
subject.stop!
worker.threads.each do |t|
expect(t.stop?).to be true
end
# exact times can be 2 or 3 (because we have 3 worker threads),
# but cannot be less than 2 due to MAX_POST_POINTS limit
expect(post_request).to have_been_requested.at_least_times(2)
end
context 'when precision, retention_policy and database are given' do
let(:series) { 'test_series' }
let(:precision) { 'test_precision' }
let(:retention_policy) { 'test_period' }
let(:database) { 'test_database' }
let(:async_options) { { num_worker_threads: 1, sleep_interval: 0.1 } }
it "writes aggregate payload to the client" do
queue = Queue.new
allow(client).to receive(:write) do |*args|
queue.push(args)
end
subject.write_point(series, { values: { t: 60 } }, precision, retention_policy, database)
subject.write_point(series, { values: { t: 61 } }, precision, retention_policy, database)
sleep 1 until worker.threads.none? { |t| t[:influxdb].nil? }
subject.stop!
expect(queue.pop).to eq ["#{series} t=60i\n#{series} t=61i", precision, retention_policy, database]
end
context 'when different precision, retention_policy and database are given' do
let(:precision2) { 'test_precision2' }
let(:retention_policy2) { 'test_period2' }
let(:database2) { 'test_database2' }
it "writes separated payloads for each {precision, retention_policy, database} set" do
queue = Queue.new
allow(client).to receive(:write) do |*args|
queue.push(args)
end
subject.write_point(series, { values: { t: 60 } }, precision, retention_policy, database)
subject.write_point(series, { values: { t: 61 } }, precision2, retention_policy, database)
subject.write_point(series, { values: { t: 62 } }, precision, retention_policy2, database)
subject.write_point(series, { values: { t: 63 } }, precision, retention_policy, database2)
sleep 1 until worker.threads.none? { |t| t[:influxdb].nil? }
subject.stop!
expect(queue.pop).to eq ["#{series} t=60i", precision, retention_policy, database]
expect(queue.pop).to eq ["#{series} t=61i", precision2, retention_policy, database]
expect(queue.pop).to eq ["#{series} t=62i", precision, retention_policy2, database]
expect(queue.pop).to eq ["#{series} t=63i", precision, retention_policy, database2]
end
end
end
end
describe "async options" do
subject { worker }
before { worker.stop! }
context 'when all options are given' do
let(:async_options) do
{
max_post_points: 10,
max_queue_size: 100,
num_worker_threads: 1,
sleep_interval: 0.5,
block_on_full_queue: false,
shutdown_timeout: 0.6,
}
end
it "uses the specified values" do
expect(subject.max_post_points).to be 10
expect(subject.max_queue_size).to be 100
expect(subject.num_worker_threads).to be 1
expect(subject.sleep_interval).to be_within(0.0001).of(0.5)
expect(subject.block_on_full_queue).to be false
expect(subject.queue).to be_kind_of(InfluxDB::MaxQueue)
expect(subject.shutdown_timeout).to be_within(0.0001).of(0.6)
end
end
context 'when only sleep_interval is given' do
let(:async_options) { { sleep_interval: 0.2 } }
it "uses a value for shutdown_timeout that is 2x sleep_interval" do
expect(subject.sleep_interval).to be_within(0.0001).of(0.2)
expect(subject.shutdown_timeout).to be_within(0.0001).of(0.4)
end
end
context 'when only shutdown_timeout is given' do
let(:async_options) { { shutdown_timeout: 0.3 } }
it "uses that value" do
expect(subject.sleep_interval).to be_within(0.0001).of(5)
expect(subject.shutdown_timeout).to be_within(0.0001).of(0.3)
end
end
end
end
# File: influxdb-0.8.1/spec/influxdb/cases/querying_spec.rb
require "spec_helper"
require "json"
describe InfluxDB::Client do
let(:subject) do
described_class.new "database", **{
host: "influxdb.test",
port: 9999,
username: "username",
password: "password",
time_precision: "s"
}.merge(args)
end
let(:args) { {} }
let(:database) { subject.config.database }
let(:extra_params) { {} }
let(:response) {}
before do
stub_request(:get, "http://influxdb.test:9999/query")
.with(query: { q: query, u: "username", p: "password", precision: 's', db: database }.merge(extra_params))
.to_return(body: JSON.generate(response))
end
describe "#query" do
context "with single series with multiple points" do
let(:response) do
{ "results" => [{ "statement_id" => 0,
"series" => [{ "name" => "cpu", "tags" => { "region" => "us" },
"columns" => %w[time temp value],
"values" => [["2015-07-07T14:58:37Z", 92, 0.3445], ["2015-07-07T14:59:09Z", 68, 0.8787]] }] }] }
end
let(:expected_result) do
[{ "name" => "cpu", "tags" => { "region" => "us" },
"values" => [{ "time" => "2015-07-07T14:58:37Z", "temp" => 92, "value" => 0.3445 },
{ "time" => "2015-07-07T14:59:09Z", "temp" => 68, "value" => 0.8787 }] }]
end
let(:query) { 'SELECT * FROM cpu' }
it "should return array with single hash containing multiple values" do
expect(subject.query(query)).to eq(expected_result)
end
end
context "with series with different tags" do
let(:response) do
{ "results" => [{ "statement_id" => 0,
"series" => [{ "name" => "cpu",
"tags" => { "region" => "pl" },
"columns" => %w[time temp value],
"values" => [["2015-07-07T15:13:04Z", 34, 0.343443]] },
{ "name" => "cpu",
"tags" => { "region" => "us" },
"columns" => %w[time temp value],
"values" => [["2015-07-07T14:58:37Z", 92, 0.3445], ["2015-07-07T14:59:09Z", 68, 0.8787]] }] }] }
end
let(:expected_result) do
[{ "name" => "cpu",
"tags" => { "region" => "pl" },
"values" => [{ "time" => "2015-07-07T15:13:04Z", "temp" => 34, "value" => 0.343443 }] },
{ "name" => "cpu",
"tags" => { "region" => "us" },
"values" => [{ "time" => "2015-07-07T14:58:37Z", "temp" => 92, "value" => 0.3445 },
{ "time" => "2015-07-07T14:59:09Z", "temp" => 68, "value" => 0.8787 }] }]
end
let(:query) { 'SELECT * FROM cpu' }
it "should return array with 2 elements grouped by tags" do
expect(subject.query(query)).to eq(expected_result)
end
end
context "with multiple series with different tags" do
let(:response) do
{ "results" => [{ "statement_id" => 0,
"series" => [{ "name" => "access_times.service_1",
"tags" => { "code" => "200", "result" => "failure", "status" => "OK" },
"columns" => %w[time value],
"values" => [["2015-07-08T07:15:22Z", 327]] },
{ "name" => "access_times.service_1",
"tags" => { "code" => "500", "result" => "failure", "status" => "Internal Server Error" },
"columns" => %w[time value],
"values" => [["2015-07-08T06:15:22Z", 873]] },
{ "name" => "access_times.service_2",
"tags" => { "code" => "200", "result" => "failure", "status" => "OK" },
"columns" => %w[time value],
"values" => [["2015-07-08T07:15:22Z", 943]] },
{ "name" => "access_times.service_2",
"tags" => { "code" => "500", "result" => "failure", "status" => "Internal Server Error" },
"columns" => %w[time value],
"values" => [["2015-07-08T06:15:22Z", 606]] }] }] }
end
let(:expected_result) do
[{ "name" => "access_times.service_1",
"tags" => { "code" => "200", "result" => "failure", "status" => "OK" },
"values" => [{ "time" => "2015-07-08T07:15:22Z", "value" => 327 }] },
{ "name" => "access_times.service_1",
"tags" => { "code" => "500", "result" => "failure", "status" => "Internal Server Error" },
"values" => [{ "time" => "2015-07-08T06:15:22Z", "value" => 873 }] },
{ "name" => "access_times.service_2",
"tags" => { "code" => "200", "result" => "failure", "status" => "OK" },
"values" => [{ "time" => "2015-07-08T07:15:22Z", "value" => 943 }] },
{ "name" => "access_times.service_2",
"tags" => { "code" => "500", "result" => "failure", "status" => "Internal Server Error" },
"values" => [{ "time" => "2015-07-08T06:15:22Z", "value" => 606 }] }]
end
let(:query) { "SELECT * FROM /access_times.*/" }
it "should return array with 4 elements grouped by name and tags" do
expect(subject.query(query)).to eq(expected_result)
end
end
context "with multiple series for explicit value only" do
let(:response) do
{ "results" => [{ "statement_id" => 0,
"series" => [{ "name" => "access_times.service_1",
"columns" => %w[time value],
"values" => [["2015-07-08T06:15:22Z", 873], ["2015-07-08T07:15:22Z", 327]] },
{ "name" => "access_times.service_2",
"columns" => %w[time value],
"values" => [["2015-07-08T06:15:22Z", 606], ["2015-07-08T07:15:22Z", 943]] }] }] }
end
let(:expected_result) do
[{ "name" => "access_times.service_1",
"tags" => nil,
"values" => [{ "time" => "2015-07-08T06:15:22Z", "value" => 873 }, { "time" => "2015-07-08T07:15:22Z", "value" => 327 }] },
{ "name" => "access_times.service_2",
"tags" => nil,
"values" => [{ "time" => "2015-07-08T06:15:22Z", "value" => 606 }, { "time" => "2015-07-08T07:15:22Z", "value" => 943 }] }]
end
let(:query) { "SELECT value FROM /access_times.*/" }
it "should return array with 2 elements grouped by name only and no tags" do
expect(subject.query(query)).to eq(expected_result)
end
end
context "with a block" do
let(:response) do
{ "results" => [{ "statement_id" => 0,
"series" => [{ "name" => "cpu",
"tags" => { "region" => "pl" },
"columns" => %w[time temp value],
"values" => [["2015-07-07T15:13:04Z", 34, 0.343443]] },
{ "name" => "cpu",
"tags" => { "region" => "us" },
"columns" => %w[time temp value],
"values" => [["2015-07-07T14:58:37Z", 92, 0.3445], ["2015-07-07T14:59:09Z", 68, 0.8787]] }] }] }
end
let(:expected_result) do
[{ "name" => "cpu",
"tags" => { "region" => "pl" },
"values" => [{ "time" => "2015-07-07T15:13:04Z", "temp" => 34, "value" => 0.343443 }] },
{ "name" => "cpu",
"tags" => { "region" => "us" },
"values" => [{ "time" => "2015-07-07T14:58:37Z", "temp" => 92, "value" => 0.3445 },
{ "time" => "2015-07-07T14:59:09Z", "temp" => 68, "value" => 0.8787 }] }]
end
let(:query) { 'SELECT * FROM cpu' }
it "should accept a block and yield name, tags and points" do
results = []
subject.query(query) do |name, tags, points|
results << { 'name' => name, 'tags' => tags, 'values' => points }
end
expect(results).to eq(expected_result)
end
end
context "with epoch set to seconds" do
let(:args) { { epoch: 's' } }
let(:extra_params) { { epoch: 's' } }
let(:response) do
{ "results" => [{ "statement_id" => 0,
"series" => [{ "name" => "cpu",
"tags" => { "region" => "pl" },
"columns" => %w[time temp value],
"values" => [[1_438_580_576, 34, 0.343443]] },
{ "name" => "cpu",
"tags" => { "region" => "us" },
"columns" => %w[time temp value],
"values" => [[1_438_612_976, 92, 0.3445], [1_438_612_989, 68, 0.8787]] }] }] }
end
let(:expected_result) do
[{ "name" => "cpu",
"tags" => { "region" => "pl" },
"values" => [{ "time" => 1_438_580_576, "temp" => 34, "value" => 0.343443 }] },
{ "name" => "cpu",
"tags" => { "region" => "us" },
"values" => [{ "time" => 1_438_612_976, "temp" => 92, "value" => 0.3445 },
{ "time" => 1_438_612_989, "temp" => 68, "value" => 0.8787 }] }]
end
let(:query) { 'SELECT * FROM cpu' }
it "should return results with integer timestamp" do
expect(subject.query(query)).to eq(expected_result)
end
end
context "with chunk_size set to 100" do
let(:args) { { chunk_size: 100 } }
let(:extra_params) { { chunked: "true", chunk_size: "100" } }
let(:response) do
{ "results" => [{ "statement_id" => 0,
"series" => [{ "name" => "cpu",
"tags" => { "region" => "pl" },
"columns" => %w[time temp value],
"values" => [[1_438_580_576, 34, 0.343443]] }] }] }
end
let(:expected_result) do
[{ "name" => "cpu",
"tags" => { "region" => "pl" },
"values" => [{ "time" => 1_438_580_576, "temp" => 34, "value" => 0.343443 }] }]
end
let(:query) { 'SELECT * FROM cpu' }
it "should set 'chunked' and 'chunk_size' parameters" do
expect(subject.query(query)).to eq(expected_result)
end
end
context "with database" do
let(:extra_params) { { db: 'overriden_db' } }
let(:response) do
{ "results" => [{ "series" => [{ "name" => "cpu",
"tags" => { "region" => "us" },
"columns" => %w[time temp value],
"values" => [["2015-07-07T14:58:37Z", 92, 0.3445], ["2015-07-07T14:59:09Z", 68, 0.8787]] }] }] }
end
let(:expected_result) do
[{ "name" => "cpu",
"tags" => { "region" => "us" },
"values" => [{ "time" => "2015-07-07T14:58:37Z", "temp" => 92, "value" => 0.3445 },
{ "time" => "2015-07-07T14:59:09Z", "temp" => 68, "value" => 0.8787 }] }]
end
let(:query) { 'SELECT * FROM cpu' }
it "should return array with single hash containing multiple values" do
expect(subject.query(query, database: 'overriden_db')).to eq(expected_result)
end
end
end
describe "multiple select queries" do
context "with single series with multiple points" do
let(:response) do
{ "results" => [{ "statement_id" => 0,
"series" => [{ "name" => "cpu",
"tags" => { "region" => "us" },
"columns" => %w[time temp value],
"values" => [["2015-07-07T14:58:37Z", 92, 0.3445], ["2015-07-07T14:59:09Z", 68, 0.8787]] }] },
{ "statement_id" => 1,
"series" => [{ "name" => "memory",
"tags" => { "region" => "us" },
"columns" => %w[time free total],
"values" => [["2015-07-07T14:58:37Z", 96_468_992, 134_217_728], ["2015-07-07T14:59:09Z", 71_303_168, 134_217_728]] }] }] }
end
let(:expected_result) do
[{ "name" => "cpu",
"tags" => { "region" => "us" },
"values" => [{ "time" => "2015-07-07T14:58:37Z", "temp" => 92, "value" => 0.3445 },
{ "time" => "2015-07-07T14:59:09Z", "temp" => 68, "value" => 0.8787 }] },
{ "name" => "memory",
"tags" => { "region" => "us" },
"values" => [{ "time" => "2015-07-07T14:58:37Z", "free" => 92 * 2**20, "total" => 128 * 2**20 },
{ "time" => "2015-07-07T14:59:09Z", "free" => 68 * 2**20, "total" => 128 * 2**20 }] }]
end
let(:query) { 'SELECT * FROM cpu; SELECT * FROM memory' }
it "should return array with single hash containing multiple values" do
expect(subject.query(query)).to eq(expected_result)
end
end
context "with series with different tags" do
let(:response) do
{ "results" => [{ "statement_id" => 0,
"series" => [{ "name" => "cpu",
"tags" => { "region" => "pl" },
"columns" => %w[time temp value],
"values" => [["2015-07-07T15:13:04Z", 34, 0.343443]] },
{ "name" => "cpu",
"tags" => { "region" => "us" },
"columns" => %w[time temp value],
"values" => [["2015-07-07T14:58:37Z", 92, 0.3445], ["2015-07-07T14:59:09Z", 68, 0.8787]] }] },
{ "statement_id" => 1,
"series" => [{ "name" => "memory",
"tags" => { "region" => "pl" },
"columns" => %w[time free total],
"values" => [["2015-07-07T15:13:04Z", 35_651_584, 134_217_728]] },
{ "name" => "memory",
"tags" => { "region" => "us" },
"columns" => %w[time free total],
"values" => [["2015-07-07T14:58:37Z", 96_468_992, 134_217_728], ["2015-07-07T14:59:09Z", 71_303_168, 134_217_728]] }] }] }
end
let(:expected_result) do
[{ "name" => "cpu",
"tags" => { "region" => "pl" },
"values" => [{ "time" => "2015-07-07T15:13:04Z", "temp" => 34, "value" => 0.343443 }] },
{ "name" => "cpu",
"tags" => { "region" => "us" },
"values" => [{ "time" => "2015-07-07T14:58:37Z", "temp" => 92, "value" => 0.3445 },
{ "time" => "2015-07-07T14:59:09Z", "temp" => 68, "value" => 0.8787 }] },
{ "name" => "memory",
"tags" => { "region" => "pl" },
"values" => [{ "time" => "2015-07-07T15:13:04Z", "free" => 34 * 2**20, "total" => 128 * 2**20 }] },
{ "name" => "memory",
"tags" => { "region" => "us" },
"values" => [{ "time" => "2015-07-07T14:58:37Z", "free" => 92 * 2**20, "total" => 128 * 2**20 },
{ "time" => "2015-07-07T14:59:09Z", "free" => 68 * 2**20, "total" => 128 * 2**20 }] }]
end
let(:query) { 'SELECT * FROM cpu; SELECT * FROM memory' }
it "should return array with 2 elements grouped by tags" do
expect(subject.query(query)).to eq(expected_result)
end
end
context "with a block" do
let(:response) do
{ "results" => [{ "statement_id" => 0,
"series" => [{ "name" => "cpu",
"tags" => { "region" => "pl" },
"columns" => %w[time temp value],
"values" => [["2015-07-07T15:13:04Z", 34, 0.343443]] },
{ "name" => "cpu",
"tags" => { "region" => "us" },
"columns" => %w[time temp value],
"values" => [["2015-07-07T14:58:37Z", 92, 0.3445], ["2015-07-07T14:59:09Z", 68, 0.8787]] }] },
{ "statement_id" => 1,
"series" => [{ "name" => "memory",
"tags" => { "region" => "pl" },
"columns" => %w[time free total],
"values" => [["2015-07-07T15:13:04Z", 35_651_584, 134_217_728]] },
{ "name" => "memory",
"tags" => { "region" => "us" },
"columns" => %w[time free total],
"values" => [["2015-07-07T14:58:37Z", 96_468_992, 134_217_728], ["2015-07-07T14:59:09Z", 71_303_168, 134_217_728]] }] }] }
end
let(:expected_result) do
[{ "name" => "cpu",
"tags" => { "region" => "pl" },
"values" => [{ "time" => "2015-07-07T15:13:04Z", "temp" => 34, "value" => 0.343443 }] },
{ "name" => "cpu",
"tags" => { "region" => "us" },
"values" => [{ "time" => "2015-07-07T14:58:37Z", "temp" => 92, "value" => 0.3445 },
{ "time" => "2015-07-07T14:59:09Z", "temp" => 68, "value" => 0.8787 }] },
{ "name" => "memory",
"tags" => { "region" => "pl" },
"values" => [{ "time" => "2015-07-07T15:13:04Z", "free" => 34 * 2**20, "total" => 128 * 2**20 }] },
{ "name" => "memory",
"tags" => { "region" => "us" },
"values" => [{ "time" => "2015-07-07T14:58:37Z", "free" => 92 * 2**20, "total" => 128 * 2**20 },
{ "time" => "2015-07-07T14:59:09Z", "free" => 68 * 2**20, "total" => 128 * 2**20 }] }]
end
let(:query) { 'SELECT * FROM cpu; SELECT * FROM memory' }
it "should accept a block and yield name, tags and points" do
results = []
subject.query(query) do |name, tags, points|
results << { 'name' => name, 'tags' => tags, 'values' => points }
end
expect(results).to eq(expected_result)
end
end
end
end
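Every `expected_result` in the spec above applies the same transformation: each raw series (a `columns` array plus a `values` matrix) becomes a list of row hashes, keeping `name` and `tags`. As an illustrative sketch (not the library's internal code) of that de-normalization:

```ruby
# Turn each series' columns/values matrix into an array of row hashes,
# preserving "name" and "tags" (nil when the series carries no tags).
def denormalize(series)
  series.map do |s|
    { "name"   => s["name"],
      "tags"   => s["tags"],
      "values" => s["values"].map { |row| s["columns"].zip(row).to_h } }
  end
end
```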
# File: influxdb-0.8.1/spec/influxdb/cases/querying_issue_7000_spec.rb
# This test spec addresses closed issue https://github.com/influxdata/influxdb/issues/7000 where
# it was confirmed that when chunking is enabled, the InfluxDB REST API returns multi-line JSON.
require "spec_helper"
require "json"
describe InfluxDB::Client do
let(:subject) do
described_class.new "database", **{
host: "influxdb.test",
port: 9999,
username: "username",
password: "password",
time_precision: "s"
}.merge(args)
end
let(:args) { {} }
let(:database) { subject.config.database }
let(:extra_params) { {} }
let(:response) { "" }
before do
stub_request(:get, "http://influxdb.test:9999/query")
.with(query: { q: query, u: "username", p: "password", precision: 's', db: database }.merge(extra_params))
.to_return(body: response)
end
describe "#query" do
context "with series with different tags (multi-line)" do
let(:args) { { chunk_size: 100 } }
let(:extra_params) { { chunked: "true", chunk_size: "100" } }
let(:response_line_1) do
{ "results" => [{ "statement_id" => 0,
"series" => [{ "name" => "cpu", "tags" => { "region" => "pl" }, "columns" => %w[time temp value], "values" => [["2015-07-07T15:13:04Z", 34, 0.343443]] }] }] }
end
let(:response_line_2) do
{ "results" => [{ "statement_id" => 0,
"series" => [{ "name" => "cpu", "tags" => { "region" => "us" }, "columns" => %w[time temp value], "values" => [["2015-07-07T14:58:37Z", 92, 0.3445], ["2015-07-07T14:59:09Z", 68, 0.8787]] }] }] }
end
let(:response) do
JSON.generate(response_line_1) + "\n" + JSON.generate(response_line_2)
end
let(:expected_result) do
[{ "name" => "cpu", "tags" => { "region" => "pl" },
"values" => [{ "time" => "2015-07-07T15:13:04Z", "temp" => 34, "value" => 0.343443 }] },
{ "name" => "cpu", "tags" => { "region" => "us" },
"values" => [{ "time" => "2015-07-07T14:58:37Z", "temp" => 92, "value" => 0.3445 },
{ "time" => "2015-07-07T14:59:09Z", "temp" => 68, "value" => 0.8787 }] }]
end
let(:query) { 'SELECT * FROM cpu' }
it "should return array with 2 elements grouped by tags" do
expect(subject.query(query)).to eq(expected_result)
end
end
context "with multiple series with different tags" do
let(:args) { { chunk_size: 100 } }
let(:extra_params) { { chunked: "true", chunk_size: "100" } }
let(:response_line_1) do
{ "results" => [{ "statement_id" => 0,
"series" => [{ "name" => "access_times.service_1",
"tags" => { "code" => "200", "result" => "failure", "status" => "OK" },
"columns" => %w[time value],
"values" => [["2015-07-08T07:15:22Z", 327]] }] }] }
end
let(:response_line_2) do
{ "results" => [{ "statement_id" => 0,
"series" => [{ "name" => "access_times.service_1",
"tags" => { "code" => "500", "result" => "failure", "status" => "Internal Server Error" },
"columns" => %w[time value],
"values" => [["2015-07-08T06:15:22Z", 873]] }] }] }
end
let(:response_line_3) do
{ "results" => [{ "statement_id" => 0,
"series" => [{ "name" => "access_times.service_2",
"tags" => { "code" => "200", "result" => "failure", "status" => "OK" },
"columns" => %w[time value],
"values" => [["2015-07-08T07:15:22Z", 943]] }] }] }
end
let(:response_line_4) do
{ "results" => [{ "statement_id" => 0,
"series" => [{ "name" => "access_times.service_2",
"tags" => { "code" => "500", "result" => "failure", "status" => "Internal Server Error" },
"columns" => %w[time value],
"values" => [["2015-07-08T06:15:22Z", 606]] }] }] }
end
let(:response) do
JSON.generate(response_line_1) + "\n" + JSON.generate(response_line_2) + "\n" + JSON.generate(response_line_3) + "\n" + JSON.generate(response_line_4)
end
let(:expected_result) do
[{ "name" => "access_times.service_1",
"tags" => { "code" => "200", "result" => "failure", "status" => "OK" },
"values" => [{ "time" => "2015-07-08T07:15:22Z", "value" => 327 }] },
{ "name" => "access_times.service_1",
"tags" => { "code" => "500", "result" => "failure", "status" => "Internal Server Error" },
"values" => [{ "time" => "2015-07-08T06:15:22Z", "value" => 873 }] },
{ "name" => "access_times.service_2",
"tags" => { "code" => "200", "result" => "failure", "status" => "OK" },
"values" => [{ "time" => "2015-07-08T07:15:22Z", "value" => 943 }] },
{ "name" => "access_times.service_2",
"tags" => { "code" => "500", "result" => "failure", "status" => "Internal Server Error" },
"values" => [{ "time" => "2015-07-08T06:15:22Z", "value" => 606 }] }]
end
let(:query) { "SELECT * FROM /access_times.*/" }
it "should return array with 4 elements grouped by name and tags" do
expect(subject.query(query)).to eq(expected_result)
end
end
end
end
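The spec above stubs a chunked response as one JSON document per line (NDJSON). The sketch below shows how such a body has to be parsed line by line — a single `JSON.parse` over the whole body would fail. This is an illustrative stdlib-only example, not the gem's internal parser.

```ruby
require "json"

# Two chunks, each a complete JSON document on its own line, mirroring the
# stubbed response in the spec above.
body = <<~NDJSON
  {"results":[{"statement_id":0,"series":[{"name":"cpu","tags":{"region":"pl"},"values":[["2015-07-07T15:13:04Z",34]]}]}]}
  {"results":[{"statement_id":0,"series":[{"name":"cpu","tags":{"region":"us"},"values":[["2015-07-07T14:58:37Z",92]]}]}]}
NDJSON

# Parse each line separately, then collect the series from every chunk.
chunks  = body.each_line.map { |line| JSON.parse(line) }
series  = chunks.flat_map { |c| c["results"].flat_map { |r| r.fetch("series", []) } }
regions = series.map { |s| s.dig("tags", "region") }
```

Grouping the collected series by name and tags then yields the merged result the client returns.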
influxdb-0.8.1/spec/influxdb/cases/query_shard_spec.rb
require "spec_helper"
require "json"
describe InfluxDB::Client do
let(:subject) do
described_class.new \
database: "database",
host: "influxdb.test",
port: 9999,
username: "username",
password: "password",
time_precision: "s"
end
### TODO ###
# describe "GET #list_shards" do
# it "returns a list of shards" do
# shard_list = { "longTerm" => [], "shortTerm" => [] }
# stub_request(:get, "http://influxdb.test:9999/cluster/shards").with(
# query: { u: "username", p: "password" }
# ).to_return(body: JSON.generate(shard_list), status: 200)
# expect(subject.list_shards).to eq shard_list
# end
# end
# describe "DELETE #delete_shard" do
# it "removes shard by id" do
# shard_id = 1
# stub_request(:delete, "http://influxdb.test:9999/cluster/shards/#{shard_id}").with(
# query: { u: "username", p: "password" }
# )
# expect(subject.delete_shard(shard_id, [1, 2])).to be_a(Net::HTTPOK)
# end
# end
end
influxdb-0.8.1/spec/influxdb/cases/write_points_spec.rb
require "spec_helper"
require "json"
describe InfluxDB::Client do
let(:subject) do
described_class.new \
database: "database",
host: "influxdb.test",
port: 9999,
username: "username",
password: "password",
time_precision: "s"
end
let(:database) { subject.config.database }
describe "#write_point" do
let(:series) { "cpu" }
let(:data) do
{ tags: { region: 'us', host: 'server_1' },
values: { temp: 88, value: 54 } }
end
let(:body) do
InfluxDB::PointValue.new(data.merge(series: series)).dump
end
before do
stub_request(:post, "http://influxdb.test:9999/write").with(
query: { u: "username", p: "password", precision: 's', db: database },
headers: { "Content-Type" => "application/octet-stream" },
body: body
).to_return(status: 204)
end
it "should POST to add single point" do
expect(subject.write_point(series, data)).to be_a(Net::HTTPNoContent)
end
it "should not mutate data object" do
original_data = data
subject.write_point(series, data)
expect(data[:series]).to be_nil
expect(original_data).to eql(data)
end
end
describe "#write_points" do
context "with multiple series" do
let(:data) do
[{ series: 'cpu',
tags: { region: 'us', host: 'server_1' },
values: { temp: 88, value: 54 } },
{ series: 'gpu',
tags: { region: 'uk', host: 'server_5' },
values: { value: 0.5435345 } }]
end
let(:body) do
data.map do |point|
InfluxDB::PointValue.new(point).dump
end.join("\n")
end
before do
stub_request(:post, "http://influxdb.test:9999/write").with(
query: { u: "username", p: "password", precision: 's', db: database },
headers: { "Content-Type" => "application/octet-stream" },
body: body
).to_return(status: 204)
end
it "should POST multiple points" do
expect(subject.write_points(data)).to be_a(Net::HTTPNoContent)
end
end
context "with no tags" do
let(:data) do
[{ series: 'cpu',
values: { temp: 88, value: 54 } },
{ series: 'gpu',
values: { value: 0.5435345 } }]
end
let(:body) do
data.map do |point|
InfluxDB::PointValue.new(point).dump
end.join("\n")
end
before do
stub_request(:post, "http://influxdb.test:9999/write").with(
query: { u: "username", p: "password", precision: 's', db: database },
headers: { "Content-Type" => "application/octet-stream" },
body: body
).to_return(status: 204)
end
it "should POST multiple points" do
expect(subject.write_points(data)).to be_a(Net::HTTPNoContent)
end
end
context "with time precision set to milliseconds" do
let(:data) do
[{ series: 'cpu',
values: { temp: 88, value: 54 },
timestamp: (Time.now.to_f * 1000).to_i },
{ series: 'gpu',
values: { value: 0.5435345 },
timestamp: (Time.now.to_f * 1000).to_i }]
end
let(:body) do
data.map do |point|
InfluxDB::PointValue.new(point).dump
end.join("\n")
end
before do
stub_request(:post, "http://influxdb.test:9999/write").with(
query: { u: "username", p: "password", precision: 'ms', db: database },
headers: { "Content-Type" => "application/octet-stream" },
body: body
).to_return(status: 204)
end
it "should POST multiple points" do
expect(subject.write_points(data, 'ms')).to be_a(Net::HTTPNoContent)
end
end
context "with retention policy" do
let(:data) do
[{ series: 'cpu',
values: { temp: 88, value: 54 } },
{ series: 'gpu',
values: { value: 0.5435345 } }]
end
let(:body) do
data.map do |point|
InfluxDB::PointValue.new(point).dump
end.join("\n")
end
before do
stub_request(:post, "http://influxdb.test:9999/write").with(
query: { u: "username", p: "password", precision: 's', db: database, rp: 'rp_1_hour' },
headers: { "Content-Type" => "application/octet-stream" },
body: body
).to_return(status: 204)
end
it "should POST multiple points" do
expect(subject.write_points(data, nil, 'rp_1_hour')).to be_a(Net::HTTPNoContent)
end
end
context "with database" do
let(:data) do
[{ series: 'cpu',
values: { temp: 88, value: 54 } },
{ series: 'gpu',
values: { value: 0.5435345 } }]
end
let(:body) do
data.map do |point|
InfluxDB::PointValue.new(point).dump
end.join("\n")
end
before do
stub_request(:post, "http://influxdb.test:9999/write").with(
query: { u: "username", p: "password", precision: 's', db: 'overridden_db' },
headers: { "Content-Type" => "application/octet-stream" },
body: body
).to_return(status: 204)
end
it "should POST multiple points" do
expect(subject.write_points(data, nil, nil, 'overridden_db')).to be_a(Net::HTTPNoContent)
end
end
end
end
influxdb-0.8.1/spec/influxdb/cases/retry_requests_spec.rb
require "spec_helper"
require "json"
describe InfluxDB::Client do
let(:client) do
described_class.new(
"database",
**{
host: "influxdb.test",
port: 9999,
username: "username",
password: "password",
time_precision: "s"
}.merge(args)
)
end
let(:args) { {} }
let(:database) { client.config.database }
describe "retrying requests" do
let(:series) { "cpu" }
let(:data) do
{ tags: { region: 'us', host: 'server_1' },
values: { temp: 88, value: 54 } }
end
let(:body) do
InfluxDB::PointValue.new(data.merge(series: series)).dump
end
subject { client.write_point(series, data) }
before do
allow(client).to receive(:log)
stub_request(:post, "http://influxdb.test:9999/write").with(
query: { u: "username", p: "password", precision: 's', db: database },
headers: { "Content-Type" => "application/octet-stream" },
body: body
).to_raise(Timeout::Error)
end
it "raises when stopped" do
client.stop!
expect(client).not_to receive(:sleep)
expect { subject }.to raise_error(InfluxDB::ConnectionError) do |e|
expect(e.cause).to be_an_instance_of(Timeout::Error)
end
end
context "when retry is 0" do
let(:args) { { retry: 0 } }
it "raises the error directly" do
expect(client).not_to receive(:sleep)
expect { subject }.to raise_error(InfluxDB::ConnectionError) do |e|
expect(e.cause).to be_an_instance_of(Timeout::Error)
end
end
end
context "when retry is 'n'" do
let(:args) { { retry: 3 } }
it "raises the error after 'n' attempts" do
expect(client).to receive(:sleep).exactly(3).times
expect { subject }.to raise_error(InfluxDB::ConnectionError) do |e|
expect(e.cause).to be_an_instance_of(Timeout::Error)
end
end
end
context "when retry is -1" do
let(:args) { { retry: -1 } }
before do
stub_request(:post, "http://influxdb.test:9999/write")
.with(
query: { u: "username", p: "password", precision: 's', db: database },
headers: { "Content-Type" => "application/octet-stream" },
body: body
)
.to_raise(Timeout::Error).then
.to_raise(Timeout::Error).then
.to_raise(Timeout::Error).then
.to_raise(Timeout::Error).then
.to_return(status: 204)
end
it "keeps retrying until the connection succeeds" do
expect(client).to receive(:sleep).exactly(4).times
expect { subject }.to_not raise_error
end
end
it "raises an exception if the server didn't return 200" do
stub_request(:post, "http://influxdb.test:9999/write").with(
query: { u: "username", p: "password", precision: 's', db: database },
headers: { "Content-Type" => "application/octet-stream" },
body: body
).to_return(status: 401)
expect { client.write_point(series, data) }.to raise_error(InfluxDB::AuthenticationError)
end
end
end
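The three retry modes exercised above (`retry: -1` retries forever, `retry: 0` raises immediately, `retry: n` gives up after n sleeps) can be sketched as a small backoff loop. The method name and structure here are illustrative, not the gem's internals.

```ruby
require "timeout"

# Retry a block on Timeout::Error with capped exponential backoff.
# max_retries: -1 retries forever, 0 raises immediately, n allows n retries.
def attempt_with_retry(max_retries, initial_delay: 0.01, max_delay: 30)
  delay = initial_delay
  attempts = 0
  begin
    yield
  rescue Timeout::Error
    # With a non-negative retry budget, give up once it is exhausted.
    raise if max_retries >= 0 && attempts >= max_retries
    attempts += 1
    delay = [delay * 2, max_delay].min # exponential backoff, capped
    # sleep(delay) would go here; omitted so the sketch runs instantly
    retry
  end
end
```

Wrapping the re-raised error in a `ConnectionError` that keeps the original as `cause` (as the specs assert) would happen around this loop.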
influxdb-0.8.1/spec/influxdb/cases/query_with_params_spec.rb
require "spec_helper"
describe InfluxDB::Client do
let(:subject) do
described_class.new \
database: "database",
host: "influxdb.test",
port: 9999,
username: "username",
password: "password",
time_precision: "s"
end
describe "#query with parameters" do
let(:query) { "SELECT value FROM requests_per_minute WHERE time > %{start}" }
let(:query_params) { { start: 1_437_019_900 } }
let(:query_compiled) { "SELECT value FROM requests_per_minute WHERE time > 1437019900" }
let(:response) do
{ "results" => [{ "statement_id" => 0,
"series" => [{ "name" => "requests_per_minute",
"columns" => %w[time value] }] }] }
end
before do
stub_request(:get, "http://influxdb.test:9999/query")
.with(query: { db: "database", precision: "s", u: "username", p: "password", q: query_compiled })
.to_return(body: JSON.generate(response), status: 200)
end
it "should handle responses with no values" do
# Some requests (such as trying to retrieve values from the future)
# return a result with no "values" key set.
expected_result = [{ "name" => "requests_per_minute", "tags" => nil, "values" => [] }]
expect(subject.query(query, params: query_params)).to eq(expected_result)
end
end
describe "#query_with_params" do
let(:query) { "select * from foo where bar > %{param}" }
let(:compiled_query) { subject.builder.build(query, query_params) }
context "with empty params hash" do
let(:query_params) { {} }
it { expect { compiled_query }.to raise_error ArgumentError }
end
context "with empty params array" do
let(:query_params) { [] }
it { expect { compiled_query }.to raise_error ArgumentError }
end
context "with empty params" do
let(:query_params) { nil }
it { expect { compiled_query }.to raise_error ArgumentError }
end
context "with simple params" do
let(:query_params) { { param: 42 } }
it { expect(compiled_query).to eq "select * from foo where bar > 42" }
end
context "string escaping" do
let(:query_params) { { param: "string" } }
it { expect(compiled_query).to eq "select * from foo where bar > 'string'" }
end
end
end
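The `%{name}` substitution behavior tested above — numbers interpolated verbatim, strings single-quoted, empty or nil params rejected — can be sketched with a hypothetical helper. `build_query` is a stand-in for the gem's builder and uses naive quoting (no escaping of embedded quotes).

```ruby
# Substitute %{name} placeholders in an InfluxQL query string.
def build_query(query, params)
  raise ArgumentError, "query parameters missing" if params.nil? || params.empty?

  substitutions = params.transform_values do |value|
    value.is_a?(String) ? "'#{value}'" : value.to_s
  end
  # Kernel#format resolves %{name} references from a hash of symbol keys.
  format(query, substitutions)
end
```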
influxdb-0.8.1/spec/influxdb/cases/query_core_spec.rb
require "spec_helper"
describe InfluxDB::Client do
let(:subject) do
described_class.new \
database: "database",
host: "influxdb.test",
port: 9999,
username: "username",
password: "password",
time_precision: "s"
end
describe "#query" do
let(:query) { "SELECT value FROM requests_per_minute WHERE time > 1437019900" }
let :response do
{ "results" => [{ "statement_id" => 0,
"series" => [{ "name" => "requests_per_minute",
"columns" => %w[time value] }] }] }
end
before do
stub_request(:get, "http://influxdb.test:9999/query")
.with(query: { db: "database", precision: "s", u: "username", p: "password", q: query })
.to_return(body: JSON.generate(response), status: 200)
end
it "should handle responses with no values" do
# Some requests (such as trying to retrieve values from the future)
# return a result with no "values" key set.
expected_result = [{ "name" => "requests_per_minute", "tags" => nil, "values" => [] }]
expect(subject.query(query)).to eq(expected_result)
end
end
end
influxdb-0.8.1/spec/influxdb/cases/query_batch_spec.rb
require "spec_helper"
describe InfluxDB::Client do
let :client do
described_class.new \
database: "database",
host: "influxdb.test",
port: 9999,
username: "username",
password: "password",
time_precision: "s"
end
describe "#batch" do
it { expect(client.batch).to be_a InfluxDB::Query::Batch }
describe "#execute" do
# it also doesn't perform a network request
it { expect(client.batch.execute).to eq [] }
end
describe "#add" do
let :queries do
[
"select * from foo",
"create user bar",
"drop measurement grok",
]
end
it "returns statement id" do
batch = client.batch
ids = queries.map { |q| batch.add(q) }
expect(ids).to eq [0, 1, 2]
expect(batch.statements.size).to be 3
end
context "block form" do
it "returns statement id" do
batch = client.batch do |b|
ids = queries.map { |q| b.add(q) }
expect(ids).to eq [0, 1, 2]
end
expect(batch.statements.size).to be 3
end
end
end
end
describe "#batch.execute" do
context "with multiple queries when there is no data for a query" do
let :queries do
[
"SELECT value FROM requests_per_minute WHERE time > 1437019900",
"SELECT value FROM requests_per_minute WHERE time > now()",
"SELECT value FROM requests_per_minute WHERE time > 1437019900",
]
end
subject do
client.batch do |b|
queries.each { |q| b.add(q) }
end
end
let :response do
{ "results" => [{ "statement_id" => 0,
"series" => [{ "name" => "requests_per_minute",
"columns" => %w[time value],
"values" => [%w[2018-04-02T00:00:00Z 204]] }] },
{ "statement_id" => 1 },
{ "statement_id" => 2,
"series" => [{ "name" => "requests_per_minute",
"columns" => %w[time value],
"values" => [%w[2018-04-02T00:00:00Z 204]] }] }] }
end
let :expected_result do
[
[{ "name" => "requests_per_minute",
"tags" => nil,
"values" => [{ "time" => "2018-04-02T00:00:00Z",
"value" => "204" }] }],
[],
[{ "name" => "requests_per_minute",
"tags" => nil,
"values" => [{ "time" => "2018-04-02T00:00:00Z",
"value" => "204" }] }],
]
end
before do
stub_request(:get, "http://influxdb.test:9999/query")
.with(query: hash_including(db: "database", precision: "s", u: "username", p: "password"))
.to_return(body: JSON.generate(response), status: 200)
end
it "should return responses for all statements" do
result = subject.execute
expect(result.length).to eq(response["results"].length)
expect(result).to eq expected_result
end
end
context "with a group by tag query" do
let :queries do
["SELECT value FROM requests_per_minute WHERE time > now() - 1d GROUP BY status_code"]
end
subject do
client.batch { |b| queries.each { |q| b.add q } }
end
let :response do
{ "results" => [{ "statement_id" => 0,
"series" => [{ "name" => "requests_per_minute",
"tags" => { "status_code" => "200" },
"columns" => %w[time value],
"values" => [%w[2018-04-02T00:00:00Z 204]] },
{ "name" => "requests_per_minute",
"tags" => { "status_code" => "500" },
"columns" => %w[time value],
"values" => [%w[2018-04-02T00:00:00Z 204]] }] }] }
end
let :expected_result do
[[{ "name" => "requests_per_minute",
"tags" => { "status_code" => "200" },
"values" => [{ "time" => "2018-04-02T00:00:00Z",
"value" => "204" }] },
{ "name" => "requests_per_minute",
"tags" => { "status_code" => "500" },
"values" => [{ "time" => "2018-04-02T00:00:00Z",
"value" => "204" }] }]]
end
before do
stub_request(:get, "http://influxdb.test:9999/query")
.with(query: hash_including(db: "database", precision: "s", u: "username", p: "password"))
.to_return(body: JSON.generate(response), status: 200)
end
it "should return a single result" do
result = subject.execute
expect(result.length).to eq(response["results"].length)
expect(result).to eq expected_result
end
end
end
end
influxdb-0.8.1/spec/influxdb/cases/query_continuous_query_spec.rb
require "spec_helper"
require "json"
describe InfluxDB::Client do
let(:subject) do
described_class.new \
database: "database",
host: "influxdb.test",
port: 9999,
username: "username",
password: "password",
time_precision: "s"
end
describe "#list_continuous_queries" do
let(:query) { "SHOW CONTINUOUS QUERIES" }
let(:database) { "testdb" }
let(:response) do
{ "results" => [{ "statement_id" => 0,
"series" => [{ "name" => "otherdb",
"columns" => %w[name query],
"values" => [["clicks_per_hour", "CREATE CONTINUOUS QUERY clicks_per_hour ON otherdb BEGIN SELECT count(name) INTO \"otherdb\".\"default\".clicksCount_1h FROM \"otherdb\".\"default\".clicks GROUP BY time(1h) END"]] },
{ "name" => "testdb",
"columns" => %w[name query],
"values" => [["event_counts", "CREATE CONTINUOUS QUERY event_counts ON testdb BEGIN SELECT count(type) INTO \"testdb\".\"default\".typeCount_10m_byType FROM \"testdb\".\"default\".events GROUP BY time(10m), type END"]] }] }] }
end
let(:expected_result) do
[{ "name" => "event_counts", "query" => "CREATE CONTINUOUS QUERY event_counts ON testdb BEGIN SELECT count(type) INTO \"testdb\".\"default\".typeCount_10m_byType FROM \"testdb\".\"default\".events GROUP BY time(10m), type END" }]
end
before do
stub_request(:get, "http://influxdb.test:9999/query").with(
query: { u: "username", p: "password", q: query }
).to_return(body: JSON.generate(response), status: 200)
end
it "should GET a list of continuous queries for specified db only" do
expect(subject.list_continuous_queries(database)).to eq(expected_result)
end
end
describe "#create_continuous_query" do
let(:name) { "event_counts_per_10m_by_type" }
let(:database) { "testdb" }
let(:query) { "SELECT COUNT(type) INTO typeCount_10m_byType FROM events GROUP BY time(10m), type" }
let(:every_interval) { nil }
let(:for_interval) { nil }
let(:clause) do
c = "CREATE CONTINUOUS QUERY #{name} ON #{database}"
if every_interval && for_interval
c << " RESAMPLE EVERY #{every_interval} FOR #{for_interval}"
elsif every_interval
c << " RESAMPLE EVERY #{every_interval}"
elsif for_interval
c << " RESAMPLE FOR #{for_interval}"
end
c << " BEGIN\n#{query}\nEND"
end
before do
stub_request(:get, "http://influxdb.test:9999/query").with(
query: { u: "username", p: "password", q: clause }
)
end
context "without resampling" do
it "should GET to create a new continuous query" do
expect(subject.create_continuous_query(name, database, query)).to be_a(Net::HTTPOK)
end
end
context "with resampling" do
context "EVERY" do
let(:every_interval) { "10m" }
it "should GET to create a new continuous query" do
expect(subject.create_continuous_query(name, database, query, resample_every: every_interval)).to be_a(Net::HTTPOK)
end
end
context "FOR" do
let(:for_interval) { "7d" }
it "should GET to create a new continuous query" do
expect(subject.create_continuous_query(name, database, query, resample_for: for_interval)).to be_a(Net::HTTPOK)
end
end
context "EVERY FOR" do
let(:every_interval) { "5m" }
let(:for_interval) { "3w" }
it "should GET to create a new continuous query" do
expect(subject.create_continuous_query(name, database, query, resample_for: for_interval, resample_every: every_interval)).to be_a(Net::HTTPOK)
end
end
end
end
describe "#delete_continuous_query" do
let(:name) { "event_counts_per_10m_by_type" }
let(:database) { "testdb" }
let(:query) { "DROP CONTINUOUS QUERY \"#{name}\" ON \"#{database}\"" }
before do
stub_request(:get, "http://influxdb.test:9999/query")
.with(query: { u: "username", p: "password", q: query })
end
it "should GET to remove continuous query" do
expect(subject.delete_continuous_query(name, database)).to be_a(Net::HTTPOK)
end
end
end
influxdb-0.8.1/spec/influxdb/cases/query_user_spec.rb
require "spec_helper"
require "json"
describe InfluxDB::Client do
let(:subject) do
described_class.new \
database: "database",
host: "influxdb.test",
port: 9999,
username: "username",
password: "password",
time_precision: "s"
end
let(:query) { nil }
let(:response) { { "results" => [{ "statement_id" => 0 }] } }
before do
stub_request(:get, "http://influxdb.test:9999/query")
.with(query: { u: "username", p: "password", q: query })
.to_return(body: JSON.generate(response))
end
describe "#update_user_password" do
let(:user) { 'useruser' }
let(:pass) { 'passpass' }
let(:query) { "SET PASSWORD FOR \"#{user}\" = '#{pass}'" }
it "should GET to update user password" do
expect(subject.update_user_password(user, pass)).to be_a(Net::HTTPOK)
end
end
describe "#grant_user_privileges" do
let(:user) { 'useruser' }
let(:perm) { :write }
let(:db) { 'foo' }
let(:query) { "GRANT #{perm.to_s.upcase} ON \"#{db}\" TO \"#{user}\"" }
it "should GET to grant privileges for a user on a database" do
expect(subject.grant_user_privileges(user, db, perm)).to be_a(Net::HTTPOK)
end
end
describe "#grant_user_admin_privileges" do
let(:user) { 'useruser' }
let(:query) { "GRANT ALL PRIVILEGES TO \"#{user}\"" }
it "should GET to grant privileges for a user on a database" do
expect(subject.grant_user_admin_privileges(user)).to be_a(Net::HTTPOK)
end
end
describe "#revoke_user_privileges" do
let(:user) { 'useruser' }
let(:perm) { :write }
let(:db) { 'foo' }
let(:query) { "REVOKE #{perm.to_s.upcase} ON \"#{db}\" FROM \"#{user}\"" }
it "should GET to revoke privileges from a user on a database" do
expect(subject.revoke_user_privileges(user, db, perm)).to be_a(Net::HTTPOK)
end
end
describe "#create_database_user" do
let(:user) { 'useruser' }
let(:pass) { 'passpass' }
let(:db) { 'foo' }
let(:query) { "CREATE user \"#{user}\" WITH PASSWORD '#{pass}'; GRANT ALL ON \"#{db}\" TO \"#{user}\"" }
context "without specifying permissions" do
it "should GET to create a new database user with all permissions" do
expect(subject.create_database_user(db, user, pass)).to be_a(Net::HTTPOK)
end
end
context "with passing permission as argument" do
let(:permission) { :read }
let(:query) { "CREATE user \"#{user}\" WITH PASSWORD '#{pass}'; GRANT #{permission.to_s.upcase} ON \"#{db}\" TO \"#{user}\"" }
it "should GET to create a new database user with permission set" do
expect(subject.create_database_user(db, user, pass, permissions: permission)).to be_a(Net::HTTPOK)
end
end
end
describe "#delete_user" do
let(:user) { 'useruser' }
let(:query) { "DROP USER \"#{user}\"" }
it "should GET to delete a user" do
expect(subject.delete_user(user)).to be_a(Net::HTTPOK)
end
end
describe "#list_users" do
let(:query) { "SHOW USERS" }
let(:response) { { "results" => [{ "statement_id" => 0, "series" => [{ "columns" => %w[user admin], "values" => [["dbadmin", true], ["foobar", false]] }] }] } }
let(:expected_result) { [{ "username" => "dbadmin", "admin" => true }, { "username" => "foobar", "admin" => false }] }
it "should GET a list of database users" do
expect(subject.list_users).to eq(expected_result)
end
end
describe "#list_user_grants" do
let(:user) { 'useruser' }
let(:list_query) { "SHOW GRANTS FOR \"#{user}\"" }
before do
stub_request(:get, "http://influxdb.test:9999/query")
.with(query: { u: "username", p: "password", q: list_query })
.to_return(status: 200, body: "", headers: {})
end
it "should GET for a user" do
expect(subject.list_user_grants(user)).to be_a(Net::HTTPOK)
end
end
end
influxdb-0.8.1/spec/influxdb/worker_spec.rb
require "spec_helper"
require 'timeout'
describe InfluxDB::Writer::Async::Worker do
let(:fake_client) { double(stopped?: false) }
let(:worker) { described_class.new(fake_client, {}) }
describe "#push" do
let(:payload1) { "responses,region=eu value=5" }
let(:payload2) { "responses,region=eu value=6" }
let(:aggregate) { "#{payload1}\n#{payload2}" }
it "writes aggregate payload to the client" do
queue = Queue.new
allow(fake_client).to receive(:write) do |data, _precision|
queue.push(data)
end
worker.push(payload1)
worker.push(payload2)
Timeout.timeout(described_class::SLEEP_INTERVAL) do
result = queue.pop
expect(result).to eq aggregate
end
end
end
end
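The worker behavior asserted above — draining queued line-protocol points and writing them as one newline-joined payload — reduces to a small aggregation step. A stdlib-only sketch of that step:

```ruby
# Points pushed to the worker accumulate in a queue.
queue = Queue.new
queue << "responses,region=eu value=5"
queue << "responses,region=eu value=6"

# On each flush the worker drains the queue and joins the points with
# newlines, producing a single line-protocol write payload.
batch = []
batch << queue.pop until queue.empty?
payload = batch.join("\n")
```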
influxdb-0.8.1/spec/influxdb/config_spec.rb
require 'spec_helper'
describe InfluxDB::Config do
after { client.stop! }
let(:client) do
kwargs = args.last.is_a?(Hash) ? args.pop : {}
InfluxDB::Client.new(*args, **kwargs)
end
let(:conf) { client.config }
let(:args) { [] }
context "with no parameters specified" do
specify { expect(conf.database).to be_nil }
specify { expect(conf.hosts).to eq ["localhost"] }
specify { expect(conf.port).to eq 8086 }
specify { expect(conf.username).to eq "root" }
specify { expect(conf.password).to eq "root" }
specify { expect(conf.use_ssl).to be_falsey }
specify { expect(conf.time_precision).to eq "s" }
specify { expect(conf.auth_method).to eq "params" }
specify { expect(conf.denormalize).to be_truthy }
specify { expect(conf).not_to be_udp }
specify { expect(conf).not_to be_async }
specify { expect(conf.epoch).to be_falsey }
specify { expect(conf.proxy_addr).to be_nil }
specify { expect(conf.proxy_port).to be_nil }
end
context "with no database specified" do
let(:args) do
[
host: "host",
port: "port",
username: "username",
password: "password",
time_precision: "m"
]
end
specify { expect(conf.database).to be_nil }
specify { expect(conf.hosts).to eq ["host"] }
specify { expect(conf.port).to eq "port" }
specify { expect(conf.username).to eq "username" }
specify { expect(conf.password).to eq "password" }
specify { expect(conf.time_precision).to eq "m" }
specify { expect(conf.epoch).to be_falsey }
end
context "with both a database and options specified" do
let(:args) do
[
"database",
host: "host",
port: "port",
username: "username",
password: "password",
time_precision: "m"
]
end
specify { expect(conf.database).to eq "database" }
specify { expect(conf.hosts).to eq ["host"] }
specify { expect(conf.port).to eq "port" }
specify { expect(conf.username).to eq "username" }
specify { expect(conf.password).to eq "password" }
specify { expect(conf.time_precision).to eq "m" }
specify { expect(conf.epoch).to be_falsey }
end
context "with ssl option specified" do
let(:args) { [use_ssl: true] }
specify { expect(conf.database).to be_nil }
specify { expect(conf.hosts).to eq ["localhost"] }
specify { expect(conf.port).to eq 8086 }
specify { expect(conf.username).to eq "root" }
specify { expect(conf.password).to eq "root" }
specify { expect(conf.use_ssl).to be_truthy }
end
context "with multiple hosts specified" do
let(:args) { [hosts: ["1.1.1.1", "2.2.2.2"]] }
specify { expect(conf.database).to be_nil }
specify { expect(conf.port).to eq 8086 }
specify { expect(conf.username).to eq "root" }
specify { expect(conf.password).to eq "root" }
specify { expect(conf.hosts).to eq ["1.1.1.1", "2.2.2.2"] }
end
context "with auth_method basic auth specified" do
let(:args) { [auth_method: 'basic_auth'] }
specify { expect(conf.database).to be_nil }
specify { expect(conf.hosts).to eq ["localhost"] }
specify { expect(conf.port).to eq 8086 }
specify { expect(conf.username).to eq "root" }
specify { expect(conf.password).to eq "root" }
specify { expect(conf.auth_method).to eq "basic_auth" }
end
context "with udp specified with params" do
let(:args) { [udp: { host: 'localhost', port: 4444 }] }
specify { expect(conf).to be_udp }
end
context "with udp specified as true" do
let(:args) { [udp: true] }
specify { expect(conf).to be_udp }
end
context "with async specified with params" do
let(:args) { [async: { max_queue: 20_000 }] }
specify { expect(conf).to be_async }
end
context "with async specified as true" do
let(:args) { [async: true] }
specify { expect(conf).to be_async }
end
context "with epoch specified as seconds" do
let(:args) { [epoch: 's'] }
specify { expect(conf.epoch).to eq 's' }
end
context "given a config URL" do
let(:url) { "https://foo:bar@influx.example.com:8765/testdb?open_timeout=42&unknown=false&denormalize=false" }
let(:args) { [url: url] }
it "applies values found in URL" do
expect(conf.database).to eq "testdb"
expect(conf.hosts).to eq ["influx.example.com"]
expect(conf.port).to eq 8765
expect(conf.username).to eq "foo"
expect(conf.password).to eq "bar"
expect(conf.use_ssl).to be true
expect(conf.denormalize).to be false
expect(conf.open_timeout).to eq 42
end
it "applies defaults" do
expect(conf.prefix).to eq ""
expect(conf.read_timeout).to be 300
expect(conf.max_delay).to be 30
expect(conf.initial_delay).to be_within(0.0001).of(0.01)
expect(conf.verify_ssl).to be true
expect(conf.ssl_ca_cert).to be false
expect(conf.epoch).to be false
expect(conf.discard_write_errors).to be false
expect(conf.retry).to be(-1)
expect(conf.chunk_size).to be nil
expect(conf).not_to be_udp
expect(conf.auth_method).to eq "params"
expect(conf).not_to be_async
end
context "with encoded values" do
let(:url) { "https://weird%24user:weird%25pass@influx.example.com:8765/testdb" }
it "decodes encoded values" do
expect(conf.username).to eq "weird$user"
expect(conf.password).to eq "weird%pass"
end
end
context "UDP" do
let(:url) { "udp://test.localhost:2345?discard_write_errors=1" }
specify { expect(conf).to be_udp }
specify { expect(conf.udp[:port]).to be 2345 }
specify { expect(conf.discard_write_errors).to be true }
end
end
context "given a config URL and explicit options" do
let(:url) { "https://foo:bar@influx.example.com:8765/testdb?open_timeout=42&unknown=false&denormalize=false" }
let(:args) do
[
"primarydb",
url: url,
open_timeout: 20,
read_timeout: 30,
]
end
it "applies values found in URL" do
expect(conf.hosts).to eq ["influx.example.com"]
expect(conf.port).to eq 8765
expect(conf.username).to eq "foo"
expect(conf.password).to eq "bar"
expect(conf.use_ssl).to be true
expect(conf.denormalize).to be false
end
it "applies values found in opts hash" do
expect(conf.database).to eq "primarydb"
expect(conf.open_timeout).to eq 20
expect(conf.read_timeout).to be 30
end
it "applies defaults" do
expect(conf.prefix).to eq ""
expect(conf.max_delay).to be 30
expect(conf.initial_delay).to be_within(0.0001).of(0.01)
expect(conf.verify_ssl).to be true
expect(conf.ssl_ca_cert).to be false
expect(conf.epoch).to be false
expect(conf.discard_write_errors).to be false
expect(conf.retry).to be(-1)
expect(conf.chunk_size).to be nil
expect(conf).not_to be_udp
expect(conf.auth_method).to eq "params"
expect(conf).not_to be_async
end
end
context "given explicit proxy information" do
let(:args) do
[host: "host",
port: "port",
username: "username",
password: "password",
time_precision: "m",
proxy_addr: "my.proxy.addr",
proxy_port: 8080]
end
specify { expect(conf.proxy_addr).to eq("my.proxy.addr") }
specify { expect(conf.proxy_port).to eq(8080) }
end
end
influxdb-0.8.1/spec/influxdb/client_spec.rb
require "spec_helper"
require "json"
describe InfluxDB::Client do
let(:subject) do
described_class.new(
"database",
**{
host: "influxdb.test",
port: 9999,
username: "username",
password: "password",
time_precision: "s"
}.merge(args)
)
end
let(:args) { {} }
specify { is_expected.not_to be_stopped }
context "with basic auth" do
let(:args) { { auth_method: 'basic_auth' } }
let(:credentials) { "username:password" }
let(:auth_header) { { "Authorization" => "Basic " + Base64.encode64(credentials).chomp } }
let(:stub_url) { "http://influxdb.test:9999/" }
let(:url) { subject.send(:full_url, '/') }
it "GET" do
stub_request(:get, stub_url).with(headers: auth_header).to_return(body: '[]')
expect(subject.get(url, parse: true)).to eq []
end
it "POST" do
stub_request(:post, stub_url).with(headers: auth_header).to_return(status: 204)
expect(subject.post(url, {})).to be_a(Net::HTTPNoContent)
end
end
describe "#full_url" do
it "returns String" do
expect(subject.send(:full_url, "/unknown")).to be_a String
end
it "escapes params" do
url = subject.send(:full_url, "/unknown", value: ' !@#$%^&*()/\\_+-=?|`~')
encoded_fragment = "value=+%21%40%23%24%25%5E%26%2A%28%29%2F%5C_%2B-%3D%3F%7C%60"
encoded_fragment << (RUBY_VERSION >= "2.5.0" ? "~" : "%7E")
expect(url).to include(encoded_fragment)
end
context "with prefix" do
let(:args) { { prefix: '/dev' } }
it "returns path with prefix" do
expect(subject.send(:full_url, "/series")).to start_with("/dev")
end
end
end
describe "GET #ping" do
it "returns OK" do
stub_request(:get, "http://influxdb.test:9999/ping")
.to_return(status: 204)
expect(subject.ping).to be_a(Net::HTTPNoContent)
end
context "with prefix" do
let(:args) { { prefix: '/dev' } }
it "returns OK with prefix" do
stub_request(:get, "http://influxdb.test:9999/dev/ping")
.to_return(status: 204)
expect(subject.ping).to be_a(Net::HTTPNoContent)
end
end
end
describe "GET #version" do
it "returns 1.1.1" do
stub_request(:get, "http://influxdb.test:9999/ping")
.to_return(status: 204, headers: { 'x-influxdb-version' => '1.1.1' })
expect(subject.version).to eq('1.1.1')
end
context "with prefix" do
let(:args) { { prefix: '/dev' } }
it "returns 1.1.1 with prefix" do
stub_request(:get, "http://influxdb.test:9999/dev/ping")
.to_return(status: 204, headers: { 'x-influxdb-version' => '1.1.1' })
expect(subject.version).to eq('1.1.1')
end
end
end
describe "Load balancing" do
let(:args) { { hosts: hosts } }
let(:hosts) do
[
"influxdb.test0",
"influxdb.test1",
"influxdb.test2"
]
end
let(:cycle) { 3 }
let!(:stubs) do
hosts.map { |host| stub_request(:get, "http://#{host}:9999/ping").to_return(status: 204) }
end
it "balances requests" do
(hosts.size * cycle).times { subject.ping }
stubs.cycle(cycle) { |stub| expect(stub).to have_been_requested.times(cycle) }
end
end
end
influxdb-0.8.1/spec/influxdb/point_value_spec.rb
require "spec_helper"
describe InfluxDB::PointValue do
describe "escaping" do
let(:data) do
point = {
series: '1= ,"\\1',
tags: {
'2= ,"\\2' => '3= ,"\\3',
'4' => "5\\", # issue #225
},
values: {
'4= ,"\\4' => '5= ,"\\5',
intval: 5,
floatval: 7.0,
invalid_encoding: "a\255 b", # issue #171
non_latin: "Улан-Удэ",
backslash: "C:\\", # issue #200
}
}
point
end
it 'should escape correctly' do
point = InfluxDB::PointValue.new(data)
series = [
%(1=\\ \\,"\\1),
%(2\\=\\ \\,"\\2=3\\=\\ \\,"\\3),
%(4=5\\ ),
]
fields = [
%(4\\=\\ \\,\\"\\4="5= ,\\"\\\\5"),
%(intval=5i),
%(floatval=7.0),
%(invalid_encoding="a b"),
%(non_latin="Улан-Удэ"),
%(backslash="C:\\\\"),
]
expected = series.join(",") + " " + fields.join(",")
expect(point.dump).to eq(expected)
end
context 'with empty values' do
let(:empty_values_data) { { series: 'test_series', values: {} } }
it 'should raise an exception' do
expect { InfluxDB::PointValue.new(empty_values_data) }.to raise_error(InfluxDB::LineProtocolError)
end
end
end
describe 'dump' do
context "with all possible data passed" do
let(:expected_value) do
'responses,region=eu,status=200 value=5i,threshold=0.54 1436349652'
end
it 'should have proper form' do
point = InfluxDB::PointValue.new \
series: "responses",
values: { value: 5, threshold: 0.54 },
tags: { region: 'eu', status: 200 },
timestamp: 1_436_349_652
expect(point.dump).to eq(expected_value)
end
end
context "without tags" do
let(:expected_value) do
"responses value=5i,threshold=0.54 1436349652"
end
it 'should have proper form' do
point = InfluxDB::PointValue.new \
series: "responses",
values: { value: 5, threshold: 0.54 },
timestamp: 1_436_349_652
expect(point.dump).to eq(expected_value)
end
end
context "without tags and timestamp" do
let(:expected_value) do
"responses value=5i,threshold=0.54"
end
it 'should have proper form' do
point = InfluxDB::PointValue.new \
series: "responses",
values: { value: 5, threshold: 0.54 }
expect(point.dump).to eq(expected_value)
end
end
context "empty tag values" do
let(:expected_value) do
"responses,region=eu value=5i"
end
it "should be omitted" do
point = InfluxDB::PointValue.new \
series: "responses",
values: { value: 5 },
tags: { region: "eu", status: nil, other: "", nil => "ignored", "" => "ignored" }
expect(point.dump).to eq(expected_value)
end
end
end
end
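The dump specs above pin down the gem's line-protocol output. As a standalone illustration, here is a minimal sketch of that serialization for a simple point — the Hash layout matches `InfluxDB::PointValue.new`, but `line_protocol` is a simplified stand-in that skips the escaping rules exercised in the first describe block:

```ruby
# Minimal line-protocol sketch: measurement[,tags] fields [timestamp].
# Integer field values get an "i" suffix, matching the dump specs above.
def line_protocol(point)
  tags   = (point[:tags] || {}).map { |k, v| ",#{k}=#{v}" }.join
  fields = point[:values].map { |k, v| "#{k}=#{v.is_a?(Integer) ? "#{v}i" : v}" }.join(",")
  ts     = point[:timestamp] ? " #{point[:timestamp]}" : ""
  "#{point[:series]}#{tags} #{fields}#{ts}"
end

point = {
  series:    "responses",
  tags:      { region: "eu" },
  values:    { value: 5, threshold: 0.54 },
  timestamp: 1_436_349_652,
}
line_protocol(point) # => "responses,region=eu value=5i,threshold=0.54 1436349652"
```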
influxdb-0.8.1/spec/influxdb/logging_spec.rb
require 'spec_helper'
require 'logger'
describe InfluxDB::Logging do
class LoggerTest # :nodoc:
include InfluxDB::Logging
def write_to_log(level, message)
log(level, message)
end
def block_log(level, &block)
log(level, &block)
end
end
around do |example|
old_logger = InfluxDB::Logging.logger
example.call
InfluxDB::Logging.logger = old_logger
end
it "has a default logger" do
expect(InfluxDB::Logging.logger).to be_a(Logger)
end
it "allows setting of a logger" do
new_logger = Logger.new(STDOUT)
InfluxDB::Logging.logger = new_logger
expect(InfluxDB::Logging.logger).to eq(new_logger)
end
it "allows disabling of a logger" do
InfluxDB::Logging.logger = false
expect(InfluxDB::Logging.logger).to eql false
end
context "when logging is disabled" do
subject { LoggerTest.new }
it "does not log" do
pending "The test doesn't work since bugfix in rspec-mocks 3.10.1 " \
"(https://github.com/rspec/rspec-mocks/pull/1357)"
InfluxDB::Logging.logger = false
expect(InfluxDB::Logging.logger).not_to receive(:debug)
subject.write_to_log(:debug, 'test')
end
end
context "when included in classes" do
subject { LoggerTest.new }
it "logs with string message" do
expect(InfluxDB::Logging.logger).to receive(:info).with(an_instance_of(String)).once
subject.write_to_log(:info, 'test')
end
it "logs with block message" do
msg = double("message")
expect(msg).to receive(:expensive_message).and_return("42")
expect(InfluxDB::Logging.logger).to receive(:info).and_yield.once
subject.block_log(:info) { msg.expensive_message }
end
end
end
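The block-message spec above relies on Logger's lazy evaluation: the message block only runs when the severity passes the level check, so expensive message construction is skipped for filtered levels. A self-contained sketch of that pattern using the plain stdlib `Logger` (not the gem's module):

```ruby
require "logger"
require "stringio"

# Capture output in a StringIO so the sketch stays quiet.
out = StringIO.new
logger = Logger.new(out)
logger.level = Logger::INFO

calls = 0
expensive = -> { calls += 1; "expensive details" }

logger.debug("InfluxDB") { expensive.call } # below level: block never runs
logger.info("InfluxDB")  { expensive.call } # at level: block runs once

calls # => 1
```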
influxdb-0.8.1/spec/influxdb/query_builder_spec.rb
require "spec_helper"
RSpec.describe InfluxDB::Query::Builder do
let(:builder) { described_class.new }
describe "#quote" do
subject { builder }
it "should quote parameters properly" do
expect(subject.quote(3.14)).to eq "3.14"
expect(subject.quote(14)).to eq "14"
expect(subject.quote("3.14")).to eq "'3.14'"
expect(subject.quote("Ben Hur's Carriage")).to eq "'Ben Hur\\'s Carriage'"
expect(subject.quote(true)).to eq "true"
expect(subject.quote(false)).to eq "false"
expect(subject.quote(0 || 1)).to eq "0"
expect(subject.quote(:symbol)).to eq "'symbol'"
expect { subject.quote(/regex/) }.to raise_error(ArgumentError, /Unexpected parameter type Regex/)
end
end
describe "#build" do
subject { builder.build(query, params) }
context "named parameters" do
let(:query) { "SELECT value FROM rpm WHERE f = %{f_val} group by time(%{minutes}m)" }
let(:params) { { f_val: "value", minutes: 5 } }
it { is_expected.to eq "SELECT value FROM rpm WHERE f = 'value' group by time(5m)" }
context "with string keys" do
let(:params) { { "f_val" => "value", "minutes" => 5 } }
it { is_expected.to eq "SELECT value FROM rpm WHERE f = 'value' group by time(5m)" }
end
end
context "positional parameter" do
let(:query) { "SELECT value FROM rpm WHERE time > %{1}" }
let(:params) { [1_437_019_900] }
it { is_expected.to eq "SELECT value FROM rpm WHERE time > 1437019900" }
end
context "missing parameters" do
let(:query) { "SELECT value FROM rpm WHERE time > %{1}" }
let(:params) { [] }
it { expect { subject }.to raise_error(/key.1. not found/) }
end
context "extra parameters" do
let(:query) { "SELECT value FROM rpm WHERE time > %{a}" }
let(:params) { { "a" => 0, "b" => 2 } }
it { is_expected.to eq "SELECT value FROM rpm WHERE time > 0" }
end
end
end
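The `#quote` expectations above imply a type-dispatched quoting rule: numerics and booleans pass through, strings and symbols are single-quoted with embedded quotes escaped, and anything else raises. This sketch is an illustrative reconstruction, not the gem's `InfluxDB::Query::Builder#quote`:

```ruby
# Quote a query parameter for interpolation into InfluxQL.
def quote(param)
  case param
  when Numeric, true, false
    param.to_s
  when String, Symbol
    "'" + param.to_s.gsub(/['"]/) { |c| "\\#{c}" } + "'"
  else
    raise ArgumentError, "Unexpected parameter type #{param.class} (#{param.inspect})"
  end
end

quote(3.14)    # => "3.14"
quote(:symbol) # => "'symbol'"
```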
influxdb-0.8.1/spec/influxdb/max_queue_spec.rb
require 'spec_helper'
describe InfluxDB::MaxQueue do
specify { is_expected.to be_a(Queue) }
context "#new" do
it "allows max_depth to be set" do
expect(described_class.new(500).max).to eq 500
end
end
context "#push" do
let(:queue) { described_class.new(5) }
it "allows an item to be added if the queue is not full" do
expect(queue.size).to be_zero
queue.push(1)
expect(queue.size).to eq 1
end
it "doesn't allow items to be added if the queue is full" do
expect(queue.size).to be_zero
5.times { |n| queue.push(n) }
expect(queue.size).to eq 5
queue.push(6)
expect(queue.size).to eq 5
end
end
end
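The behaviour asserted above — pushes silently dropped once the queue holds `max` items — can be sketched with a small `Queue` subclass. This is a simplified stand-in for `InfluxDB::MaxQueue`; the name `BoundedQueue` is made up here:

```ruby
# A Queue that caps its depth: pushes beyond `max` are dropped silently.
class BoundedQueue < Queue
  attr_reader :max

  def initialize(max = 10_000)
    raise ArgumentError, "max must be positive" unless max.positive?
    @max = max
    super()
  end

  def push(item)
    super(item) if size < @max
  end
end

q = BoundedQueue.new(2)
3.times { |n| q.push(n) }
q.size # => 2
```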
influxdb-0.8.1/spec/influxdb/time_conversion_spec.rb
require "spec_helper"
RSpec.describe InfluxDB do
describe ".convert_timestamp" do
let(:sometime) { Time.parse("2017-12-11 16:20:29.111222333 UTC") }
{
"ns" => 1_513_009_229_111_222_333,
nil => 1_513_009_229_111_222_333,
"u" => 1_513_009_229_111_222,
"ms" => 1_513_009_229_111,
"s" => 1_513_009_229,
"m" => 25_216_820,
"h" => 420_280,
}.each do |precision, converted_value|
it "should return the timestamp in #{precision.inspect}" do
expect(described_class.convert_timestamp(sometime, precision)).to eq(converted_value)
end
end
it "should raise an exception when precision is unrecognized" do
expect { described_class.convert_timestamp(sometime, "whatever") }
.to raise_exception(/invalid time precision.*whatever/i)
end
end
describe ".now" do
{
"ns" => [:nanosecond, 1_513_009_229_111_222_333],
nil => [:nanosecond, 1_513_009_229_111_222_333],
"u" => [:microsecond, 1_513_009_229_111_222],
"ms" => [:millisecond, 1_513_009_229_111],
"s" => [:second, 1_513_009_229],
"m" => [:second, 25_216_820, 1_513_009_229],
"h" => [:second, 420_280, 1_513_009_229],
}.each do |precision, (name, expected, stub)|
it "should return the current time in #{precision.inspect}" do
expect(Process).to receive(:clock_gettime)
.with(Process::CLOCK_REALTIME, name)
.and_return(stub || expected)
expect(described_class.now(precision)).to eq(expected)
end
end
end
end
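The expected values above amount to scaling a nanosecond epoch down by a per-precision divisor. A hedged sketch — the `DIVISORS` table and `convert_timestamp` helper are illustrative reconstructions matching the spec's expectations, not the gem's implementation:

```ruby
require "time"

# Nanoseconds per unit of each supported precision; nil defaults to "ns".
DIVISORS = {
  "ns" => 1,
  nil  => 1,
  "u"  => 1_000,
  "ms" => 1_000_000,
  "s"  => 1_000_000_000,
  "m"  => 60 * 1_000_000_000,
  "h"  => 3600 * 1_000_000_000,
}.freeze

def convert_timestamp(time, precision = "ns")
  div = DIVISORS.fetch(precision) do
    raise ArgumentError, "invalid time precision: #{precision}"
  end
  # Rational arithmetic keeps the nanosecond value exact before truncating.
  (time.to_r * 1_000_000_000 / div).to_i
end

t = Time.parse("2017-12-11 16:20:29.111222333 UTC")
convert_timestamp(t, "s") # => 1513009229
```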
influxdb-0.8.1/CHANGELOG.md
# Changelog
For the full commit log, [see here](https://github.com/influxdata/influxdb-ruby/commits/master).
## v0.8.1, release 2021-02-17
- Ensure workers can send data popped off the queue at shutdown (#239,
@onlynone)
- Add support for special characters in password when using url keyword (#242,
@swistak35)
- Add Ruby 3.0 support (#249, @dentarg, @paul and @track8)
- Support characters that need quoting for usernames and database names (#248, @estheruary)
## v0.8.0, released 2020-02-05
- Allow dropping of specific series from specific DBs (#233, @cantino)
- Add support for MRI 2.7 (#235, @jeffkowalski)
- Raise a LineProtocolError if attempting to write empty values as field
set is required. This adds descriptive feedback to the error "unable to
parse '{series},{tags} ': invalid field format" (#236, @todtb)
- Add support for configuring HTTP Proxy (#238, @epchris)
## v0.7.0, released 2019-01-11
- Drop support for Ruby 2.2, since Bundler dropped it and we want to use
Bundler in the development cycle as well.
- Fix issue with tag values ending in a backslash.
## v0.6.4, released 2018-12-02
- Fix newly introduced `InfluxDB.now(precision)` for precisions larger
than "s".
## v0.6.3, released 2018-11-30
- Added `InfluxDB.now(precision)` and `InfluxDB::Client#now` as companions
to `InfluxDB.convert_timestamp`.
## v0.6.2, released 2018-11-30
- Added `InfluxDB.convert_timestamp` utility to convert a `Time` to a
timestamp in a given precision (taken from PR influxdb-rails#53 by @ChrisBr).
## v0.6.1, released 2018-08-23
- Fix `InfluxDB::Client#delete_retention_policy`: the database name
argument is now quoted (#221, #222 @AishwaryaRK).
- Add `InfluxDB::Client#list_measurements` and `#delete_measurement`
(#220)
## v0.6.0, released 2018-07-10
- Add batch query support via `InfluxDB::Client#batch` (and using
`InfluxDB::Query::Batch`). Using multiple queries joined with `;`
will cause issues with `Client#query` in combination with either
`GROUP BY` clauses or empty results, as discussed in #217.
Initial code and PR#218 from @satyanash.
## v0.5.3, released 2018-01-19
- Fix `NoMethodError` in `InfluxDB::Client#list_retention_policies` when
the database has no RPs defined (#213, @djoos)
## v0.5.2, released 2017-11-28
- Add async option to block on full queue (#209, @davemt)
## v0.5.1, released 2017-10-31
- Add support for `SHARD DURATION` in retention policy (#203, @ljagiello)
## v0.5.0, released 2017-10-21
- Add support for precision, retention policy and database parameters
to async writer (#140, #202 @rockclimber73)
**Attention** You may need to validate that your calls to the write
API (`InfluxDB::Client#write`, `#write_point`, `#write_points`) don't
accidentally include a precision, RP, and/or database argument. These
arguments were ignored until now. This is likely if you have changed
your client instance configuration in the past, when you added a
`async: true`. **Updating might cause data inconsistencies!**
## v0.4.2, released 2017-09-26
- Bugfix in `InfluxDB::PointValue`: Properly encode backslashes (#200)
## v0.4.1, released 2017-08-30
- Bugfix in async client: Flush queue before exit (#198, #199 @onlynone)
## v0.4.0, released 2017-08-19
- **Dropped support for Ruby < 2.2.**
- Updated dependencies.
- Refactor some method declarations, to take kwargs instead of an
options hash (this shouldn't break call sites).
- Allow configuration by an URL (idea by @carlhoerberg in #188).
- Improved logging (#180).
## v0.3.17, released 2017-09-27
- (Backport from v0.4.1) Bugfix in async client: Flush queue before exit
(#198, #199 @onlynone)
- (Backport from v0.4.2) Bugfix in `InfluxDB::PointValue`: Properly
encode backslashes (#200)
## v0.3.16, released 2017-08-17
- **This is probably the last release in the 0.3.x series.**
- Typo fix in README (#196, @MichaelSp).
## v0.3.15, released 2017-07-17
- Bugfix for `InfluxDB::Client#list_series` when no series available
(#195, @skladd).
- Clarified/expanded docs (also #190, @paneq).
- Added preliminary `show_field_keys` method to `InfluxDB::Client` (note:
the API for this is not stable yet).
- Degraded dependency on "cause" from runtime to development.
## v0.3.14, released 2017-02-06
- Added option `discard_write_errors` to silently ignore errors when writing
to the server (#182, @mickey).
- Added `#list_series` and `#delete_series` to `InfluxDB::Client` (#183-186,
@wrcola).
## v0.3.13, released 2016-11-23
- `InfluxDB::Client#query`, `#write_points`, `#write_point` and
  `#write` now accept an additional parameter to override the database at
  invocation time (#173, #176, @jfragoulis).
## v0.3.12, released 2016-11-15
- Bugfix for broken Unicode support (regression introduced in #169).
Please note, this is only properly tested on Ruby 2.1+ (#171).
## v0.3.11, released 2016-10-12
- Bugfix/Enhancement in `PointValue#escape`. Input strings are now scrubbed
of invalid UTF byte sequences (#169, @ton31337).
## v0.3.10, released 2016-10-03
- Bugfix in `Query::Builder#quote` (#168, @cthulhu666).
## v0.3.9, released 2016-09-20
- Changed retry behaviour slightly. When the server responds with an incomplete
response, we now assume a major server-side problem (insufficient resources,
e.g. out-of-memory) and cancel any retry attempts (#165, #166).
## v0.3.8, released 2016-08-31
- Added support for named and positional query parameters (#160, @retorquere).
## v0.3.7, released 2016-08-14
- Fixed `prefix` handling for `#ping` and `#version` (#157, @dimiii).
## v0.3.6, released 2016-07-24
- Added feature for JSON streaming response, via `"chunk_size"` parameter
(#155, @mhodson-qxbranch).
## v0.3.5, released 2016-06-09
- Reintroduced full dependency on "cause" (for Ruby 1.9 compat).
- Extended `Client#create_database` and `#delete_database` to fallback on `config.database` (#153, #154, @anthonator).
## v0.3.4, released 2016-06-07
- Added resample options to `Client#create_continuous_query` (#149).
- Fixed resample options to be Ruby 1.9 compatible (#150, @SebastianCoetzee).
- Mentioned in README, that 0.3.x series is the last one to support Ruby 1.9.
## v0.3.3, released 2016-06-06 (yanked)
- Added resample options to `Client#create_continuous_query` (#149).
## v0.3.2, released 2016-06-02
- Added config option to authenticate without credentials (#146, @pmenglund).
## v0.3.1, released 2016-05-26
- Fixed #130 (again). Integer values are now really written as Integers to InfluxDB.
## v0.3.0, released 2016-04-24
- Write queries are now checked against 204 No Content responses, in accordance with the official documentation (#128).
- Async options are now configurable (#107).
## v0.2.6, released 2016-04-14
- Empty tag keys/values are now omitted (#124).
## v0.2.5, released 2016-04-14
- Async writer now behaves when stopping the client (#73).
- Update development dependencies and started enforcing Rubocop styles.
## v0.2.4, released 2016-04-12
- Added `InfluxDB::Client#version`, returning the server version (#117).
- Fixed escaping issues (#119, #121, #135).
- Integer values are now written as Integer, not as Float value (#131).
- Return all result series when querying multiple selects (#134).
- Made host cycling thread safe (#136).
## v0.2.3, released 2015-10-27
- Added `epoch` option to client constructor and write methods (#104).
- Added `#list_user_grants` (#111), `#grant_user_admin_privileges` (#112) and `#alter_retention_policy` (#114) methods.
## v0.2.2, released 2015-07-29
- Fixed issues with Async client (#101)
- Avoid usage of `gsub!` (#102)
## v0.2.1, released 2015-07-25
- Fix double quote tags escaping (#98)
## v0.2.0, released 2015-07-20
- Large library refactoring (#88, #90)
- Extract config from client
- Extract HTTP functionality to separate module
- Extract InfluxDB management functions to separate modules
- Add writer concept
- Refactor specs (add cases)
- Add 'denormalize' option to config
- Recognize SeriesNotFound error
- Update README
- Add Rubocop config
- Break support for Ruby < 2
- Added support for InfluxDB 0.9+ (#92)
## v0.1.9, released 2015-07-04
- last version to support InfluxDB 0.8.x
influxdb-0.8.1/.rubocop.yml
inherit_mode:
merge:
- Exclude
AllCops:
Include:
- 'Rakefile'
- '*.gemspec'
- 'lib/**/*.rb'
- 'spec/**/*.rb'
Exclude:
- 'bin/**/*'
- 'smoke/**/*'
- 'Gemfile'
DisplayCopNames: true
StyleGuideCopsOnly: false
TargetRubyVersion: 2.2
Rails:
Enabled: false
Layout/EmptyLinesAroundArguments:
Enabled: false
Layout/SpaceBeforeBlockBraces:
EnforcedStyleForEmptyBraces: space
Layout/AlignHash:
EnforcedColonStyle: table
EnforcedHashRocketStyle: table
Metrics/AbcSize:
Max: 20
Metrics/BlockLength:
Exclude:
- 'spec/**/*.rb'
Metrics/LineLength:
Max: 100
Exclude:
- 'spec/**/*.rb'
Metrics/ModuleLength:
CountComments: false # count full line comments?
Max: 120
Metrics/ParameterLists:
Max: 6
Naming/UncommunicativeMethodParamName:
AllowedNames: [io, id, db]
Style/FormatStringToken:
Enabled: false
Style/FrozenStringLiteralComment:
Enabled: false
Style/NumericPredicate:
Enabled: false
Style/RescueModifier:
Enabled: false
Style/StringLiterals:
Enabled: false
Style/TrailingCommaInArrayLiteral:
EnforcedStyleForMultiline: comma
Exclude:
- "spec/**/*.rb"
Style/TrailingCommaInHashLiteral:
EnforcedStyleForMultiline: comma
Exclude:
- "spec/**/*.rb"
influxdb-0.8.1/.gitignore
*.gem
*.rbc
.bundle
.config
coverage
InstalledFiles
lib/bundler/man
pkg
rdoc
spec/reports
test/tmp
test/version_tmp
tmp
Gemfile.lock
.rvmrc
.ruby-version
.ruby-gemset
# YARD artifacts
.yardoc
_yardoc
doc/
*.local
/noaa.txt
influxdb-0.8.1/Rakefile
require "rake/testtask"
require "bundler/gem_tasks"
require "rubocop/rake_task"
RuboCop::RakeTask.new
targeted_files = ARGV.drop(1)
file_pattern = targeted_files.empty? ? "spec/**/*_spec.rb" : targeted_files
require "rspec/core"
require "rspec/core/rake_task"
RSpec::Core::RakeTask.new(:spec) do |t|
t.pattern = FileList[file_pattern]
end
if ENV.key?("CI")
task default: %i[spec]
else
task default: %i[spec rubocop]
end
task :console do
lib = File.expand_path("lib", __dir__)
$LOAD_PATH.unshift(lib) unless $LOAD_PATH.include?(lib)
require "influxdb"
begin
require "pry-byebug"
Pry.start
rescue LoadError
puts \
"Could not load pry-byebug. Create a file Gemfile.local with",
"the following line, if you want to get rid of this message:",
"",
"\tgem \"pry-byebug\"",
"",
"(don't forget to run bundle afterwards). Falling back to IRB.",
""
require "irb"
require "irb/completion"
ARGV.clear
IRB.start
end
end
influxdb-0.8.1/lib/influxdb.rb
require "influxdb/version"
require "influxdb/errors"
require "influxdb/logging"
require "influxdb/max_queue"
require "influxdb/point_value"
require "influxdb/config"
require "influxdb/timestamp_conversion"
require "influxdb/writer/async"
require "influxdb/writer/udp"
require "influxdb/query/batch"
require "influxdb/query/builder"
require "influxdb/query/cluster"
require "influxdb/query/continuous_query"
require "influxdb/query/core"
require "influxdb/query/database"
require "influxdb/query/measurement"
require "influxdb/query/retention_policy"
require "influxdb/query/series"
require "influxdb/query/user"
require "influxdb/client/http"
require "influxdb/client"
influxdb-0.8.1/lib/influxdb/client/http.rb
require 'uri'
require 'cgi'
require 'net/http'
require 'net/https'
module InfluxDB
# rubocop:disable Metrics/MethodLength
# rubocop:disable Metrics/AbcSize
module HTTP # :nodoc:
def get(url, options = {})
connect_with_retry do |http|
response = do_request http, Net::HTTP::Get.new(url)
case response
when Net::HTTPSuccess
handle_successful_response(response, options)
when Net::HTTPUnauthorized
raise InfluxDB::AuthenticationError, response.body
else
resolve_error(response.body)
end
end
end
def post(url, data)
headers = { "Content-Type" => "application/octet-stream" }
connect_with_retry do |http|
response = do_request http, Net::HTTP::Post.new(url, headers), data
case response
when Net::HTTPNoContent
return response
when Net::HTTPUnauthorized
raise InfluxDB::AuthenticationError, response.body
else
resolve_error(response.body)
end
end
end
private
def connect_with_retry
host = config.next_host
delay = config.initial_delay
retry_count = 0
begin
http = build_http(host, config.port)
http.open_timeout = config.open_timeout
http.read_timeout = config.read_timeout
http = setup_ssl(http)
yield http
rescue *InfluxDB::NON_RECOVERABLE_EXCEPTIONS
raise InfluxDB::ConnectionError, InfluxDB::NON_RECOVERABLE_MESSAGE
rescue Timeout::Error, *InfluxDB::RECOVERABLE_EXCEPTIONS => e
retry_count += 1
unless (config.retry == -1 || retry_count <= config.retry) && !stopped?
raise InfluxDB::ConnectionError, "Tried #{retry_count - 1} times to reconnect but failed."
end
log(:warn) { "Failed to contact host #{host}: #{e.inspect} - retrying in #{delay}s." }
sleep delay
delay = [config.max_delay, delay * 2].min
retry
ensure
http.finish if http.started?
end
end
def do_request(http, req, data = nil)
req.basic_auth config.username, config.password if basic_auth?
req.body = data if data
http.request(req)
end
def basic_auth?
config.auth_method == 'basic_auth'
end
def resolve_error(response)
case response
when /Couldn\'t find series/
raise InfluxDB::SeriesNotFound, response
else
raise InfluxDB::Error, response
end
end
def handle_successful_response(response, options)
if options.fetch(:json_streaming, false)
parsed_response = response.body.each_line.with_object({}) do |line, parsed|
parsed.merge!(JSON.parse(line)) { |_key, oldval, newval| oldval + newval }
end
elsif (body = response.body) && (body != "")
parsed_response = JSON.parse(response.body)
end
errors = errors_from_response(parsed_response)
raise InfluxDB::QueryError, errors if errors
options.fetch(:parse, false) ? parsed_response : response
end
def errors_from_response(parsed_resp)
return unless parsed_resp.is_a?(Hash)
parsed_resp
.fetch('results', [])
.fetch(0, {})
.fetch('error', nil)
end
def setup_ssl(http)
http.use_ssl = config.use_ssl
http.verify_mode = OpenSSL::SSL::VERIFY_NONE unless config.verify_ssl
return http unless config.use_ssl
http.cert_store = generate_cert_store
http
end
def generate_cert_store
store = OpenSSL::X509::Store.new
store.set_default_paths
if config.ssl_ca_cert
if File.directory?(config.ssl_ca_cert)
store.add_path(config.ssl_ca_cert)
else
store.add_file(config.ssl_ca_cert)
end
end
store
end
# Builds an http instance, taking into account any configured
# proxy configuration
def build_http(host, port)
if config.proxy_addr
Net::HTTP.new(host, port, config.proxy_addr, config.proxy_port)
else
Net::HTTP.new(host, port)
end
end
end
# rubocop:enable Metrics/MethodLength
# rubocop:enable Metrics/AbcSize
end
influxdb-0.8.1/lib/influxdb/version.rb
module InfluxDB # :nodoc:
VERSION = "0.8.1".freeze
end
influxdb-0.8.1/lib/influxdb/logging.rb
require 'logger'
module InfluxDB
module Logging # :nodoc:
PREFIX = "InfluxDB".freeze
class << self
attr_writer :logger
attr_writer :log_level
def logger
return false if @logger == false
@logger ||= ::Logger.new(STDERR).tap { |logger| logger.level = Logger::INFO }
end
def log_level
@log_level || Logger::INFO
end
def log?(level)
case level
when :debug then log_level <= Logger::DEBUG
when :info then log_level <= Logger::INFO
when :warn then log_level <= Logger::WARN
when :error then log_level <= Logger::ERROR
when :fatal then log_level <= Logger::FATAL
else true
end
end
end
private
def log(level, message = nil, &block)
return unless InfluxDB::Logging.logger
return unless InfluxDB::Logging.log?(level)
if block_given?
InfluxDB::Logging.logger.send(level.to_sym, PREFIX, &block)
else
InfluxDB::Logging.logger.send(level.to_sym, PREFIX) { message }
end
end
end
end
influxdb-0.8.1/lib/influxdb/errors.rb
require "net/http"
require "zlib"
module InfluxDB # :nodoc:
Error = Class.new StandardError
AuthenticationError = Class.new Error
ConnectionError = Class.new Error
LineProtocolError = Class.new Error
SeriesNotFound = Class.new Error
JSONParserError = Class.new Error
QueryError = Class.new Error
# When executing queries via HTTP, some errors can more or less safely
# be ignored, and we can retry the query. The following
# exception classes shall be deemed "safe".
#
# Taken from: https://github.com/lostisland/faraday/blob/master/lib/faraday/adapter/net_http.rb
RECOVERABLE_EXCEPTIONS = [
Errno::ECONNABORTED,
Errno::ECONNREFUSED,
Errno::ECONNRESET,
Errno::EHOSTUNREACH,
Errno::EINVAL,
Errno::ENETUNREACH,
Net::HTTPBadResponse,
Net::HTTPHeaderSyntaxError,
Net::ProtocolError,
SocketError,
(OpenSSL::SSL::SSLError if defined?(OpenSSL)),
].compact.freeze
# Exception classes which hint to a larger problem on the server side,
# like insufficient resources. If we encounter one of the following, we
# _don't_ retry a query but escalate it upwards.
NON_RECOVERABLE_EXCEPTIONS = [
EOFError,
Zlib::Error,
].freeze
NON_RECOVERABLE_MESSAGE = "The server has sent incomplete data" \
" (insufficient resources are a possible cause).".freeze
end
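A sketch of how lists like these typically drive retry decisions in the HTTP layer: recoverable errors may be retried, non-recoverable ones abort and escalate. The abbreviated lists and the `recoverable?` helper below are illustrative, not part of the gem's API:

```ruby
# Abbreviated stand-ins for the exception lists defined above.
RECOVERABLE     = [Errno::ECONNREFUSED, SocketError].freeze
NON_RECOVERABLE = [EOFError].freeze

# Non-recoverable errors win: they indicate a server-side problem where
# retrying would only make things worse.
def recoverable?(error)
  return false if NON_RECOVERABLE.any? { |klass| error.is_a?(klass) }

  RECOVERABLE.any? { |klass| error.is_a?(klass) }
end

recoverable?(SocketError.new) # => true
recoverable?(EOFError.new)    # => false
```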
influxdb-0.8.1/lib/influxdb/client.rb
require 'json'
module InfluxDB
# InfluxDB client class
class Client
attr_reader :config, :writer
include InfluxDB::Logging
include InfluxDB::HTTP
include InfluxDB::Query::Core
include InfluxDB::Query::Cluster
include InfluxDB::Query::Database
include InfluxDB::Query::User
include InfluxDB::Query::ContinuousQuery
include InfluxDB::Query::RetentionPolicy
include InfluxDB::Query::Series
include InfluxDB::Query::Measurement
# Initializes a new InfluxDB client
#
# === Examples:
#
# # connect to localhost using root/root
# # as the credentials and doesn't connect to a db
#
# InfluxDB::Client.new
#
# # connect to localhost using root/root
# # as the credentials and 'db' as the db name
#
# InfluxDB::Client.new 'db'
#
# # override username, other defaults remain unchanged
#
# InfluxDB::Client.new username: 'username'
#
# # override username, use 'db' as the db name
# InfluxDB::Client.new 'db', username: 'username'
#
# === Valid options in hash
#
# +:host+:: the hostname to connect to
# +:port+:: the port to connect to
# +:prefix+:: the specified path prefix when building the url e.g.: /prefix/db/dbname...
# +:username+:: the username to use when executing commands
# +:password+:: the password associated with the username
# +:use_ssl+:: use ssl to connect
# +:verify_ssl+:: verify ssl server certificate?
# +:ssl_ca_cert+:: ssl CA certificate, chainfile or CA path.
# The system CA path is automatically included
# +:retry+:: number of times a failed request should be retried. Defaults to infinite.
def initialize(database = nil, **opts)
opts[:database] = database if database.is_a? String
@config = InfluxDB::Config.new(**opts)
@stopped = false
@writer = find_writer
at_exit { stop! }
end
def stop!
if @writer == self
@stopped = true
else
@writer.stop!
end
end
def stopped?
if @writer == self
@stopped
else
@writer.stopped?
end
end
def now
InfluxDB.now(config.time_precision)
end
private
def find_writer
if config.async?
InfluxDB::Writer::Async.new(self, config.async)
elsif config.udp.is_a?(Hash)
InfluxDB::Writer::UDP.new(self, **config.udp)
elsif config.udp?
InfluxDB::Writer::UDP.new(self)
else
self
end
end
end
end
influxdb-0.8.1/lib/influxdb/point_value.rb
module InfluxDB
# Convert data point to string using Line protocol
class PointValue
attr_reader :series, :values, :tags, :timestamp
def initialize(data)
@series = escape data[:series], :measurement
@values = escape_values data[:values]
@tags = escape_tags data[:tags]
@timestamp = data[:timestamp]
end
def dump
dump = @series.dup
dump << ",#{@tags}" if @tags
dump << " ".freeze if dump[-1] == "\\"
dump << " #{@values}"
dump << " #{@timestamp}" if @timestamp
dump
end
private
ESCAPES = {
measurement: [' '.freeze, ','.freeze],
tag_key: ['='.freeze, ' '.freeze, ','.freeze],
tag_value: ['='.freeze, ' '.freeze, ','.freeze],
field_key: ['='.freeze, ' '.freeze, ','.freeze, '"'.freeze],
field_value: ["\\".freeze, '"'.freeze],
}.freeze
private_constant :ESCAPES
def escape(str, type)
# rubocop:disable Layout/AlignParameters
str = str.encode "UTF-8".freeze, "UTF-8".freeze,
invalid: :replace,
undef: :replace,
replace: "".freeze
# rubocop:enable Layout/AlignParameters
ESCAPES[type].each do |ch|
str = str.gsub(ch) { "\\#{ch}" }
end
str
end
def escape_values(values)
if values.nil? || values.empty?
raise InfluxDB::LineProtocolError, "Cannot create point with empty values".freeze
end
values.map do |k, v|
key = escape(k.to_s, :field_key)
val = escape_value(v)
"#{key}=#{val}"
end.join(",".freeze)
end
def escape_value(value)
if value.is_a?(String)
'"'.freeze + escape(value, :field_value) + '"'.freeze
elsif value.is_a?(Integer)
"#{value}i"
else
value.to_s
end
end
def escape_tags(tags)
return if tags.nil?
tags = tags.map do |k, v|
key = escape(k.to_s, :tag_key)
val = escape(v.to_s, :tag_value)
"#{key}=#{val}" unless key == "".freeze || val == "".freeze
end.compact
tags.join(",") unless tags.empty?
end
end
end
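The `ESCAPES` table and `dump` above implement the InfluxDB line protocol. The same rules can be shown as a self-contained sketch — `to_line` is a hypothetical helper, not part of this gem:

```ruby
# Minimal line-protocol serializer mirroring PointValue#dump above.
# Integers get an "i" suffix, strings are double-quoted, and special
# characters are backslash-escaped depending on their position.
def to_line(series:, values:, tags: {}, timestamp: nil)
  esc = ->(s, chars) { chars.reduce(s.to_s) { |str, ch| str.gsub(ch) { "\\#{ch}" } } }
  key = esc.call(series, [" ", ","])
  unless tags.empty?
    key << "," << tags.map { |k, v|
      "#{esc.call(k, ['=', ' ', ','])}=#{esc.call(v, ['=', ' ', ','])}"
    }.join(",")
  end
  fields = values.map { |k, v|
    val = case v
          when String  then %("#{esc.call(v, ['\\', '"'])}")
          when Integer then "#{v}i"
          else v.to_s
          end
    "#{esc.call(k, ['=', ' ', ',', '"'])}=#{val}"
  }.join(",")
  [key, fields, timestamp].compact.join(" ")
end

puts to_line(series: "cpu load", tags: { host: "server_nl" },
             values: { internal: 5, external: 6.2 },
             timestamp: 1_422_568_543_702_900_257)
# => cpu\ load,host=server_nl internal=5i,external=6.2 1422568543702900257
```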
influxdb-0.8.1/lib/influxdb/query/cluster.rb
module InfluxDB
module Query
module Cluster # :nodoc:
def create_cluster_admin(username, password)
execute("CREATE USER \"#{username}\" WITH PASSWORD '#{password}' WITH ALL PRIVILEGES")
end
def list_cluster_admins
list_users.select { |u| u["admin".freeze] }.map { |u| u["username".freeze] }
end
def revoke_cluster_admin_privileges(username)
execute("REVOKE ALL PRIVILEGES FROM \"#{username}\"")
end
end
end
end
influxdb-0.8.1/lib/influxdb/query/continuous_query.rb
module InfluxDB
module Query
module ContinuousQuery # :nodoc:
def list_continuous_queries(database)
resp = execute("SHOW CONTINUOUS QUERIES", parse: true)
fetch_series(resp)
.select { |v| v['name'] == database }
.fetch(0, {})
.fetch('values', [])
.map { |v| { 'name' => v.first, 'query' => v.last } }
end
def create_continuous_query(name, database, query, resample_every: nil, resample_for: nil)
clause = ["CREATE CONTINUOUS QUERY", name, "ON", database]
if resample_every || resample_for
clause << "RESAMPLE".freeze
clause << "EVERY #{resample_every}" if resample_every
clause << "FOR #{resample_for}" if resample_for
end
clause = clause.join(" ".freeze) << " BEGIN\n".freeze << query << "\nEND".freeze
execute(clause)
end
def delete_continuous_query(name, database)
execute("DROP CONTINUOUS QUERY \"#{name}\" ON \"#{database}\"")
end
end
end
end
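For reference, the clause assembled by `create_continuous_query` can be seen in isolation — `cq_clause` is a hypothetical helper and the query is illustrative:

```ruby
# Mirror of the clause assembly in create_continuous_query above.
def cq_clause(name, database, query, resample_every: nil, resample_for: nil)
  clause = ["CREATE CONTINUOUS QUERY", name, "ON", database]
  if resample_every || resample_for
    clause << "RESAMPLE"
    clause << "EVERY #{resample_every}" if resample_every
    clause << "FOR #{resample_for}" if resample_for
  end
  clause.join(" ") << " BEGIN\n" << query << "\nEND"
end

puts cq_clause("cpu_mean", "mydb",
               "SELECT mean(value) INTO cpu_mean FROM cpu GROUP BY time(30m)",
               resample_every: "10m")
# CREATE CONTINUOUS QUERY cpu_mean ON mydb RESAMPLE EVERY 10m BEGIN
# SELECT mean(value) INTO cpu_mean FROM cpu GROUP BY time(30m)
# END
```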
influxdb-0.8.1/lib/influxdb/query/user.rb
module InfluxDB
module Query
module User # :nodoc:
# create_database_user('testdb', 'user', 'pass') - grants all privileges by default
# create_database_user('testdb', 'user', 'pass', permissions: :read) - use [:read|:write|:all]
def create_database_user(database, username, password, options = {})
permissions = options.fetch(:permissions, :all)
execute(
"CREATE user \"#{username}\" WITH PASSWORD '#{password}'; "\
"GRANT #{permissions.to_s.upcase} ON \"#{database}\" TO \"#{username}\""
)
end
def update_user_password(username, password)
execute("SET PASSWORD FOR \"#{username}\" = '#{password}'")
end
# permission => [:all]
def grant_user_admin_privileges(username)
execute("GRANT ALL PRIVILEGES TO \"#{username}\"")
end
# permission => [:read|:write|:all]
def grant_user_privileges(username, database, permission)
execute("GRANT #{permission.to_s.upcase} ON \"#{database}\" TO \"#{username}\"")
end
def list_user_grants(username)
execute("SHOW GRANTS FOR \"#{username}\"")
end
# permission => [:read|:write|:all]
def revoke_user_privileges(username, database, permission)
execute("REVOKE #{permission.to_s.upcase} ON \"#{database}\" FROM \"#{username}\"")
end
def delete_user(username)
execute("DROP USER \"#{username}\"")
end
# => [{"username"=>"usr", "admin"=>true}, {"username"=>"justauser", "admin"=>false}]
def list_users
resp = execute("SHOW USERS".freeze, parse: true)
fetch_series(resp)
.fetch(0, {})
.fetch('values'.freeze, [])
.map { |v| { 'username' => v.first, 'admin' => v.last } }
end
end
end
end
influxdb-0.8.1/lib/influxdb/query/batch.rb
module InfluxDB
module Query
# Batch collects multiple queries and executes them together.
#
# You shouldn't use Batch directly, instead call Client.batch, which
# constructs a new batch for you.
class Batch
attr_reader :client, :statements
def initialize(client)
@client = client
@statements = []
yield self if block_given?
end
def add(query, params: nil)
statements << client.builder.build(query.chomp(";"), params)
statements.size - 1
end
def execute(
denormalize: config.denormalize,
chunk_size: config.chunk_size,
**opts,
&block
)
return [] if statements.empty?
url = full_url "/query".freeze, **query_params(statements.join(";"), **opts)
series = fetch_series get(url, parse: true, json_streaming: !chunk_size.nil?)
if denormalize
build_denormalized_result(series, &block)
else
build_result(series, &block)
end
end
private
def build_result(series)
return series.values unless block_given?
series.each do |id, statement_results|
statement_results.each do |s|
yield id, s["name".freeze], s["tags".freeze], raw_values(s)
end
# indicate empty result: yield useful amount of "nothing"
yield id, nil, {}, [] if statement_results.empty?
end
end
def build_denormalized_result(series)
return series.map { |_, s| denormalized_series_list(s) } unless block_given?
series.each do |id, statement_results|
statement_results.each do |s|
yield id, s["name".freeze], s["tags".freeze], denormalize_series(s)
end
# indicate empty result: yield useful amount of "nothing"
yield id, nil, {}, [] if statement_results.empty?
end
end
def fetch_series(response)
response.fetch("results".freeze).each_with_object({}) do |result, list|
sid = result["statement_id".freeze]
list[sid] = result.fetch("series".freeze, [])
end
end
# build simple method delegators
%i[
config
full_url
query_params
get
raw_values
denormalize_series
denormalized_series_list
].each do |method_name|
if RUBY_VERSION < "2.7"
define_method(method_name) do |*args|
client.send method_name, *args
end
else
define_method(method_name) do |*args, **kwargs|
client.send method_name, *args, **kwargs
end
end
end
end
end
end
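The `statement_id` bookkeeping above is what keeps batched results aligned with the statements that produced them. A sketch of that grouping on an illustrative response hash:

```ruby
# Mirror of Batch#fetch_series above: index each result by its
# statement_id, defaulting to an empty series list.
def group_by_statement(response)
  response.fetch("results").each_with_object({}) do |result, list|
    list[result["statement_id"]] = result.fetch("series", [])
  end
end

response = {
  "results" => [
    { "statement_id" => 0, "series" => [{ "name" => "cpu", "values" => [[1, 0.64]] }] },
    { "statement_id" => 1 }, # the second statement matched nothing
  ],
}
grouped = group_by_statement(response)
puts grouped.keys.inspect # => [0, 1]
```

An empty slot (`grouped[1]` above) is why `Batch#execute` can still yield "a useful amount of nothing" for statements without results.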
influxdb-0.8.1/lib/influxdb/query/database.rb
module InfluxDB
module Query
module Database # :nodoc:
def create_database(name = nil)
execute("CREATE DATABASE \"#{name || config.database}\"")
end
def delete_database(name = nil)
execute("DROP DATABASE \"#{name || config.database}\"")
end
def list_databases
resp = execute("SHOW DATABASES".freeze, parse: true)
fetch_series(resp)
.fetch(0, {})
.fetch("values".freeze, [])
.flatten
.map { |v| { "name".freeze => v } }
end
def show_field_keys
query("SHOW FIELD KEYS".freeze, precision: nil).each_with_object({}) do |collection, keys|
name = collection.fetch("name")
values = collection.fetch("values", [])
keys[name] = values.each_with_object({}) do |row, types|
types[row.fetch("fieldKey")] = [row.fetch("fieldType")]
end
end
end
end
end
end
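The reshaping done by `show_field_keys` is easiest to see on a small, hand-written result set (standalone sketch, not gem API):

```ruby
# Mirror of show_field_keys above: SHOW FIELD KEYS rows (one hash per
# field) become {measurement => {fieldKey => [fieldType]}}.
def field_keys(results)
  results.each_with_object({}) do |collection, keys|
    rows = collection.fetch("values", [])
    keys[collection.fetch("name")] = rows.each_with_object({}) do |row, types|
      types[row.fetch("fieldKey")] = [row.fetch("fieldType")]
    end
  end
end

results = [
  { "name" => "cpu", "values" => [
    { "fieldKey" => "internal", "fieldType" => "integer" },
    { "fieldKey" => "external", "fieldType" => "float" },
  ] },
]
puts field_keys(results)["cpu"]["internal"].first # => integer
```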
influxdb-0.8.1/lib/influxdb/query/series.rb
module InfluxDB
module Query
module Series # :nodoc:
def delete_series(name, where: nil, db: config.database)
if where
execute("DROP SERIES FROM \"#{name}\" WHERE #{where}", db: db)
else
execute("DROP SERIES FROM \"#{name}\"", db: db)
end
end
def list_series
resp = execute("SHOW SERIES".freeze, parse: true, db: config.database)
resp = fetch_series(resp)
return [] if resp.empty?
raw_values(resp[0])
.fetch('values'.freeze, [])
.map { |val| val[0].split(',')[0] }
.uniq
end
end
end
end
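`list_series` relies on SHOW SERIES returning full series keys; the measurement name is everything before the first comma. As a standalone sketch (`measurement_names` is a hypothetical helper):

```ruby
# Mirror of the key parsing in list_series above: a SHOW SERIES key
# like "cpu,host=a" names its measurement before the first comma.
def measurement_names(series_keys)
  series_keys.map { |key| key.split(",").first }.uniq
end

p measurement_names(["cpu,host=a", "cpu,host=b", "mem,host=a", "uptime"])
# => ["cpu", "mem", "uptime"]
```

Note that, like the gem's `split(',')`, this naive split does not account for commas escaped inside measurement names.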
influxdb-0.8.1/lib/influxdb/query/builder.rb
module InfluxDB
module Query # :nodoc: all
class Builder
def build(query, params)
case params
when Array then params = params_from_array(params)
when Hash then params = params_from_hash(params)
when NilClass then params = {}
else raise ArgumentError, "Unsupported #{params.class} params"
end
query % params
rescue KeyError => e
raise ArgumentError, e.message
end
def quote(param)
case param
when String, Symbol
"'".freeze + param.to_s.gsub(/['"\\\x0]/, '\\\\\0') + "'".freeze
when Integer, Float, TrueClass, FalseClass
param.to_s
else
raise ArgumentError, "Unexpected parameter type #{param.class} (#{param.inspect})"
end
end
private
def params_from_hash(params)
params.each_with_object({}) do |(k, v), hash|
hash[k.to_sym] = quote(v)
end
end
def params_from_array(params)
params.each_with_object({}).with_index do |(param, hash), i|
hash[(i + 1).to_s.to_sym] = quote(param)
end
end
end
end
end
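The binding above can be exercised standalone. `quote` and `bind` below mirror `Builder#quote` and `Builder#build`: named parameters come from a Hash, positional parameters bind as `%{1}`, `%{2}`, and so on, and strings are single-quoted with `'`, `"`, `\` and NUL escaped:

```ruby
# Standalone mirror of Builder#build and #quote above (hypothetical
# helpers, not gem API).
def quote(param)
  case param
  when String, Symbol
    "'" + param.to_s.gsub(/['"\\\x0]/, '\\\\\0') + "'"
  when Integer, Float, TrueClass, FalseClass
    param.to_s
  else
    raise ArgumentError, "Unexpected parameter type #{param.class}"
  end
end

def bind(query, params)
  bound =
    case params
    when Array then params.each_with_index.map { |p, i| [(i + 1).to_s.to_sym, quote(p)] }.to_h
    when Hash  then params.map { |k, v| [k.to_sym, quote(v)] }.to_h
    else {}
    end
  query % bound
end

puts bind("SELECT * FROM cpu WHERE host = %{host}", host: "server_nl")
# => SELECT * FROM cpu WHERE host = 'server_nl'
puts bind("SELECT * FROM events WHERE comment = %{1}", ["it's"])
# => SELECT * FROM events WHERE comment = 'it\'s'
```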
influxdb-0.8.1/lib/influxdb/query/measurement.rb
module InfluxDB
module Query
module Measurement # :nodoc:
def list_measurements(database = config.database)
data = execute("SHOW MEASUREMENTS", db: database, parse: true)
return nil if data.nil? || data["results"][0]["series"].nil?
data["results"][0]["series"][0]["values"].flatten
end
def delete_measurement(measurement_name, database = config.database)
execute "DROP MEASUREMENT \"#{measurement_name}\"", db: database
true
end
end
end
end
influxdb-0.8.1/lib/influxdb/query/core.rb
require_relative 'batch'
require_relative 'builder'
module InfluxDB
module Query # :nodoc: all
module Core
def builder
@builder ||= Builder.new
end
def ping
url = URI::Generic.build(path: File.join(config.prefix, '/ping')).to_s
get url
end
def version
ping.header['x-influxdb-version']
end
def query( # rubocop:disable Metrics/MethodLength
query,
params: nil,
denormalize: config.denormalize,
chunk_size: config.chunk_size,
**opts
)
query = builder.build(query, params)
url = full_url("/query".freeze, **query_params(query, **opts))
series = fetch_series(get(url, parse: true, json_streaming: !chunk_size.nil?))
if block_given?
series.each do |s|
values = denormalize ? denormalize_series(s) : raw_values(s)
yield s['name'.freeze], s['tags'.freeze], values
end
else
denormalize ? denormalized_series_list(series) : series
end
end
def batch(&block)
Batch.new self, &block
end
# Example:
# write_points([
# {
# series: 'cpu',
# tags: { host: 'server_nl', region: 'us' },
# values: {internal: 5, external: 6},
# timestamp: 1422568543702900257
# },
# {
# series: 'gpu',
# values: {value: 0.9999},
# }
# ])
def write_points(data, precision = nil, retention_policy = nil, database = nil)
data = data.is_a?(Array) ? data : [data]
payload = generate_payload(data)
writer.write(payload, precision, retention_policy, database)
rescue StandardError => e
raise e unless config.discard_write_errors
log :error, "Cannot write data: #{e.inspect}"
end
# Example:
# write_point('cpu', tags: {region: 'us'}, values: {internal: 60})
def write_point(series, data, precision = nil, retention_policy = nil, database = nil)
write_points(data.merge(series: series), precision, retention_policy, database)
end
def write(data, precision, retention_policy = nil, database = nil)
params = {
db: database || config.database,
precision: precision || config.time_precision,
}
params[:rp] = retention_policy if retention_policy
url = full_url("/write", **params)
post(url, data)
end
private
def query_params(
query,
precision: config.time_precision,
epoch: config.epoch,
chunk_size: config.chunk_size,
database: config.database
)
params = { q: query, db: database }
params[:precision] = precision if precision
params[:epoch] = epoch if epoch
if chunk_size
params[:chunked] = 'true'.freeze
params[:chunk_size] = chunk_size
end
params
end
def denormalized_series_list(series)
series.map do |s|
{
"name".freeze => s["name".freeze],
"tags".freeze => s["tags".freeze],
"values".freeze => denormalize_series(s),
}
end
end
def fetch_series(response)
response.fetch('results'.freeze, []).flat_map do |result|
result.fetch('series'.freeze, [])
end
end
def generate_payload(data)
data.map { |point| generate_point(point) }.join("\n".freeze)
end
def generate_point(point)
InfluxDB::PointValue.new(point).dump
rescue InfluxDB::LineProtocolError => e
(log :error, "Cannot write data: #{e.inspect}") && nil
end
def execute(query, db: nil, **options)
params = { q: query }
params[:db] = db if db
url = full_url("/query".freeze, **params)
get(url, options)
end
def denormalize_series(series)
Array(series["values".freeze]).map do |values|
Hash[series["columns".freeze].zip(values)]
end
end
def raw_values(series)
series.select { |k, _| %w[columns values].include?(k) }
end
def full_url(path, **params)
if config.auth_method == "params".freeze
params[:u] = config.username
params[:p] = config.password
end
URI::Generic.build(
path: File.join(config.prefix, path),
query: cgi_escape_params(params)
).to_s
end
def cgi_escape_params(params)
params.map do |k, v|
[CGI.escape(k.to_s), "=".freeze, CGI.escape(v.to_s)].join
end.join("&".freeze)
end
end
end
end
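`full_url` and `cgi_escape_params` above combine into URLs like the following (a standalone sketch without the param-auth branch):

```ruby
require "cgi"
require "uri"

# Mirror of Core#full_url + #cgi_escape_params above, minus the
# auth_method == "params" branch.
def build_url(prefix, path, params)
  query = params.map { |k, v| "#{CGI.escape(k.to_s)}=#{CGI.escape(v.to_s)}" }.join("&")
  URI::Generic.build(path: File.join(prefix, path), query: query).to_s
end

puts build_url("", "/query", q: "SELECT * FROM cpu", db: "mydb", precision: "s")
# => /query?q=SELECT+%2A+FROM+cpu&db=mydb&precision=s
```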
influxdb-0.8.1/lib/influxdb/query/retention_policy.rb
module InfluxDB
module Query
module RetentionPolicy # :nodoc:
def create_retention_policy(name,
database,
duration,
replication,
default = false,
shard_duration: nil)
execute(
"CREATE RETENTION POLICY \"#{name}\" ON \"#{database}\" " \
"DURATION #{duration} REPLICATION #{replication}" \
"#{shard_duration ? " SHARD DURATION #{shard_duration}" : ''}" \
"#{default ? ' DEFAULT' : ''}"
)
end
def list_retention_policies(database)
resp = execute("SHOW RETENTION POLICIES ON \"#{database}\"", parse: true)
data = fetch_series(resp).fetch(0, {})
data.fetch("values".freeze, []).map do |policy|
policy.each.with_index.inject({}) do |hash, (value, index)|
hash.tap { |h| h[data['columns'.freeze][index]] = value }
end
end
end
def delete_retention_policy(name, database)
execute("DROP RETENTION POLICY \"#{name}\" ON \"#{database}\"")
end
def alter_retention_policy(name,
database,
duration,
replication,
default = false,
shard_duration: nil)
execute(
"ALTER RETENTION POLICY \"#{name}\" ON \"#{database}\" " \
"DURATION #{duration} REPLICATION #{replication}" \
"#{shard_duration ? " SHARD DURATION #{shard_duration}" : ''}" \
"#{default ? ' DEFAULT' : ''}"
)
end
end
end
end
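The statement built by `create_retention_policy` can be inspected as pure string assembly — `rp_clause` is a hypothetical helper; `alter_retention_policy` differs only in the leading keyword:

```ruby
# Mirror of the statement built by create_retention_policy above.
def rp_clause(name, database, duration, replication, default = false, shard_duration: nil)
  "CREATE RETENTION POLICY \"#{name}\" ON \"#{database}\" " \
    "DURATION #{duration} REPLICATION #{replication}" \
    "#{shard_duration ? " SHARD DURATION #{shard_duration}" : ''}" \
    "#{default ? ' DEFAULT' : ''}"
end

puts rp_clause("one_week", "mydb", "7d", 1, true, shard_duration: "1d")
# CREATE RETENTION POLICY "one_week" ON "mydb" DURATION 7d REPLICATION 1 SHARD DURATION 1d DEFAULT
```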
influxdb-0.8.1/lib/influxdb/writer/udp.rb
module InfluxDB
module Writer
# Writes data to InfluxDB through UDP
class UDP
attr_accessor :socket
attr_reader :host, :port
def initialize(client, host: "localhost".freeze, port: 4444)
@client = client
@host = host
@port = port
end
# No-op for UDP writers
def stop!; end
def write(payload, _precision = nil, _retention_policy = nil, _database = nil)
with_socket { |sock| sock.send(payload, 0) }
end
private
def with_socket
unless socket
self.socket = UDPSocket.new
socket.connect(host, port)
end
yield socket
end
end
end
end
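UDP writes as above are fire-and-forget: there is no response to check, hence no error handling. A self-contained loopback round trip makes the behaviour observable (the receiver stands in for influxd's UDP listener; the OS picks the port):

```ruby
require "socket"

# A loopback receiver stands in for influxd's UDP listener.
receiver = UDPSocket.new
receiver.bind("127.0.0.1", 0)
port = receiver.addr[1]

# The sending side mirrors Writer::UDP#write: connect once, then send.
writer = UDPSocket.new
writer.connect("127.0.0.1", port)
writer.send("cpu,host=server_nl internal=5i", 0)

raise "no datagram received" unless IO.select([receiver], nil, nil, 2)
payload, = receiver.recvfrom(1024)
puts payload # => cpu,host=server_nl internal=5i
```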
influxdb-0.8.1/lib/influxdb/writer/async.rb
require "net/http"
require "uri"
module InfluxDB
module Writer # :nodoc: all
class Async
attr_reader :config, :client
def initialize(client, config)
@client = client
@config = config
@stopped = false
end
def stopped?
@stopped
end
def stop!
worker.stop!
@stopped = true
end
def write(data, precision = nil, retention_policy = nil, database = nil)
data = data.is_a?(Array) ? data : [data]
data.map { |payload| worker.push(payload, precision, retention_policy, database) }
end
WORKER_MUTEX = Mutex.new
def worker
return @worker if @worker
WORKER_MUTEX.synchronize do
# this return is necessary because the previous mutex holder
# might have already assigned the @worker
return @worker if @worker
@worker = Worker.new(client, config)
end
end
class Worker # rubocop:disable Metrics/ClassLength
attr_reader :client,
:queue,
:threads,
:max_post_points,
:max_queue_size,
:num_worker_threads,
:sleep_interval,
:block_on_full_queue,
:shutdown_timeout
include InfluxDB::Logging
MAX_POST_POINTS = 1000
MAX_QUEUE_SIZE = 10_000
NUM_WORKER_THREADS = 3
SLEEP_INTERVAL = 5
BLOCK_ON_FULL_QUEUE = false
def initialize(client, config) # rubocop:disable Metrics/MethodLength
@client = client
config = config.is_a?(Hash) ? config : {}
@max_post_points = config.fetch(:max_post_points, MAX_POST_POINTS)
@max_queue_size = config.fetch(:max_queue_size, MAX_QUEUE_SIZE)
@num_worker_threads = config.fetch(:num_worker_threads, NUM_WORKER_THREADS)
@sleep_interval = config.fetch(:sleep_interval, SLEEP_INTERVAL)
@block_on_full_queue = config.fetch(:block_on_full_queue, BLOCK_ON_FULL_QUEUE)
@shutdown_timeout = config.fetch(:shutdown_timeout, 2 * @sleep_interval)
queue_class = @block_on_full_queue ? SizedQueue : InfluxDB::MaxQueue
@queue = queue_class.new max_queue_size
@should_stop = false
spawn_threads!
end
def push(payload, precision = nil, retention_policy = nil, database = nil)
queue.push([payload, precision, retention_policy, database])
end
def current_threads
@threads
end
def current_thread_count
@threads.count
end
# rubocop:disable Metrics/CyclomaticComplexity
# rubocop:disable Metrics/MethodLength
# rubocop:disable Metrics/AbcSize
def spawn_threads!
@threads = []
num_worker_threads.times do |thread_num|
log(:debug) { "Spawning background worker thread #{thread_num}." }
@threads << Thread.new do
Thread.current[:influxdb] = object_id
until @should_stop
check_background_queue(thread_num)
sleep rand(sleep_interval)
end
log(:debug) { "Exit background worker thread #{thread_num}." }
end
end
end
def check_background_queue(thread_num = -1)
log(:debug) do
"Checking background queue on thread #{thread_num} (#{current_thread_count} active)"
end
loop do
data = {}
while data.all? { |_, points| points.size < max_post_points } && !queue.empty?
begin
payload, precision, retention_policy, database = queue.pop(true)
key = {
db: database,
pr: precision,
rp: retention_policy,
}
data[key] ||= []
data[key] << payload
rescue ThreadError
next
end
end
return if data.values.flatten.empty?
begin
log(:debug) { "Found data in the queue! (#{sizes(data)}) on thread #{thread_num}" }
write(data)
rescue StandardError => e
log :error, "Cannot write data: #{e.inspect} on thread #{thread_num}"
end
break if queue.length > max_post_points
end
end
# rubocop:enable Metrics/CyclomaticComplexity
# rubocop:enable Metrics/MethodLength
# rubocop:enable Metrics/AbcSize
def stop!
log(:debug) { "Worker is being stopped, flushing queue." }
# If retry was infinite (-1), set it to zero to give the threads one
# last chance to write their data
client.config.retry = 0 if client.config.retry < 0
# Signal the background threads that they should exit.
@should_stop = true
# Wait for the threads to exit and then kill them
@threads.each do |t|
r = t.join(shutdown_timeout)
t.kill if r.nil?
end
# Flush any remaining items in the queue on the main thread
check_background_queue until queue.empty?
end
private
def write(data)
data.each do |key, points|
client.write(points.join("\n"), key[:pr], key[:rp], key[:db])
end
end
def sizes(data)
data.map do |key, points|
without_nils = key.reject { |_, v| v.nil? }
if without_nils.empty?
"#{points.size} points"
else
"#{key} => #{points.size} points"
end
end.join(', ')
end
end
end
end
end
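The per-(database, precision, retention-policy) grouping inside `check_background_queue` determines how many HTTP writes a flush costs. Isolated as a pure function, with illustrative queue entries:

```ruby
# Mirror of the grouping in Worker#check_background_queue above:
# one group per (database, precision, retention policy) target, so
# each group becomes a single HTTP write.
def group_payloads(entries)
  entries.each_with_object({}) do |(payload, precision, rp, db), data|
    key = { db: db, pr: precision, rp: rp }
    (data[key] ||= []) << payload
  end
end

entries = [
  ["cpu value=1", "s", nil, "metrics"],
  ["cpu value=2", "s", nil, "metrics"],
  ["events count=1i", "ms", "one_week", "events"],
]
grouped = group_payloads(entries)
puts grouped.size # => 2
```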
influxdb-0.8.1/lib/influxdb/max_queue.rb
module InfluxDB
# Queue with max length limit
class MaxQueue < Queue
attr_reader :max
def initialize(max = 10_000)
raise ArgumentError, "queue size must be positive" if max <= 0
@max = max
super()
end
def push(obj)
super if length < @max
end
end
end
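Unlike `SizedQueue`, which blocks producers when full, `MaxQueue` silently drops pushes beyond the limit. The same behaviour as a standalone sketch (`BoundedQueue` is a stand-in name):

```ruby
# Stand-in for InfluxDB::MaxQueue above: pushes beyond the limit
# are dropped instead of blocking the producer.
class BoundedQueue < Queue
  def initialize(max)
    raise ArgumentError, "queue size must be positive" if max <= 0
    @max = max
    super()
  end

  def push(obj)
    super if length < @max
  end
end

q = BoundedQueue.new(2)
3.times { |i| q.push(i) } # the third push is dropped
puts q.length # => 2
```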
influxdb-0.8.1/lib/influxdb/timestamp_conversion.rb
module InfluxDB #:nodoc:
# Converts a Time to a timestamp with the given precision.
#
# === Example
#
# InfluxDB.convert_timestamp(Time.now, "ms")
# #=> 1543533308243
def self.convert_timestamp(time, precision = "s")
factor = TIME_PRECISION_FACTORS.fetch(precision) do
raise ArgumentError, "invalid time precision: #{precision}"
end
(time.to_r * factor).to_i
end
# Returns the current timestamp with the given precision.
#
# Implementation detail: This does not create an intermediate Time
# object with `Time.now`, but directly requests the CLOCK_REALTIME,
# which in general is a bit faster.
#
# This is useful, if you want or need to shave off a few microseconds
# from your measurement.
#
# === Examples
#
# InfluxDB.now("ns") #=> 1543612126401392625
# InfluxDB.now("u") #=> 1543612126401392
# InfluxDB.now("ms") #=> 1543612126401
# InfluxDB.now("s") #=> 1543612126
# InfluxDB.now("m") #=> 25726868
# InfluxDB.now("h") #=> 428781
def self.now(precision = "s")
name, divisor = CLOCK_NAMES.fetch(precision) do
raise ArgumentError, "invalid time precision: #{precision}"
end
time = Process.clock_gettime Process::CLOCK_REALTIME, name
(time / divisor).to_i
end
TIME_PRECISION_FACTORS = {
"ns" => 1e9.to_r,
nil => 1e9.to_r,
"u" => 1e6.to_r,
"ms" => 1e3.to_r,
"s" => 1.to_r,
"m" => 1.to_r / 60,
"h" => 1.to_r / 60 / 60,
}.freeze
private_constant :TIME_PRECISION_FACTORS
CLOCK_NAMES = {
"ns" => [:nanosecond, 1],
nil => [:nanosecond, 1],
"u" => [:microsecond, 1],
"ms" => [:millisecond, 1],
"s" => [:second, 1],
"m" => [:second, 60.to_r],
"h" => [:second, (60 * 60).to_r],
}.freeze
private_constant :CLOCK_NAMES
end
influxdb-0.8.1/lib/influxdb/config.rb
require "uri"
module InfluxDB
# DEFAULT_CONFIG_OPTIONS maps (most) of the configuration options to
# their default value. Each option (except for "async" and "udp") can
# be changed at runtime through the InfluxDB::Client instance.
#
# If you need to change the writer to be asynchronous or use UDP, you
# need to get a new InfluxDB::Client instance.
DEFAULT_CONFIG_OPTIONS = {
# HTTP connection options
port: 8086,
prefix: "".freeze,
username: "root".freeze,
password: "root".freeze,
open_timeout: 5,
read_timeout: 300,
auth_method: nil,
proxy_addr: nil,
proxy_port: nil,
# SSL options
use_ssl: false,
verify_ssl: true,
ssl_ca_cert: false,
# Database options
database: nil,
time_precision: "s".freeze,
epoch: false,
# Writer options
async: false,
udp: false,
discard_write_errors: false,
# Retry options
retry: -1,
max_delay: 30,
initial_delay: 0.01,
# Query options
chunk_size: nil,
denormalize: true,
}.freeze
# InfluxDB client configuration
class Config
# Valid values for the "auth_method" option.
AUTH_METHODS = [
"params".freeze,
"basic_auth".freeze,
"none".freeze,
].freeze
ATTR_READER = %i[async udp].freeze
private_constant :ATTR_READER
ATTR_ACCESSOR = (DEFAULT_CONFIG_OPTIONS.keys - ATTR_READER).freeze
private_constant :ATTR_ACCESSOR
attr_reader(*ATTR_READER)
attr_accessor(*ATTR_ACCESSOR)
# Creates a new instance. See `DEFAULT_CONFIG_OPTIONS` for available
# config options and their default values.
#
# If you provide a "url" option, either as String (hint: ENV) or as
# URI instance, you can override the defaults. The precedence for a
# config value is as follows (first found wins):
#
# - values given in the options hash
# - values found in URL (if given)
# - default values
def initialize(url: nil, **opts)
opts = opts_from_url(url).merge(opts) if url
DEFAULT_CONFIG_OPTIONS.each do |name, value|
set_ivar! name, opts.fetch(name, value)
end
configure_hosts! opts[:hosts] || opts[:host] || "localhost".freeze
end
def udp?
udp != false
end
def async?
async != false
end
def next_host
host = @hosts_queue.pop
@hosts_queue.push(host)
host
end
def hosts
Array.new(@hosts_queue.length) do
host = @hosts_queue.pop
@hosts_queue.push(host)
host
end
end
private
def set_ivar!(name, value)
case name
when :auth_method
value = "params".freeze unless AUTH_METHODS.include?(value)
when :retry
value = normalize_retry_option(value)
end
instance_variable_set "@#{name}", value
end
def normalize_retry_option(value)
case value
when Integer then value
when true, nil then -1
when false then 0
end
end
# load the hosts into a Queue for thread safety
def configure_hosts!(hosts)
@hosts_queue = Queue.new
Array(hosts).each do |host|
@hosts_queue.push(host)
end
end
# merges URI options into opts
def opts_from_url(url)
url = URI.parse(url) unless url.is_a?(URI)
opts_from_non_params(url).merge opts_from_params(url.query)
rescue URI::InvalidURIError
{}
end
# rubocop:disable Metrics/AbcSize
# rubocop:disable Metrics/CyclomaticComplexity
def opts_from_non_params(url)
{}.tap do |o|
o[:host] = url.host if url.host
o[:port] = url.port if url.port
o[:username] = URI.decode_www_form_component(url.user) if url.user
o[:password] = URI.decode_www_form_component(url.password) if url.password
o[:database] = url.path[1..-1] if url.path.length > 1
o[:use_ssl] = url.scheme == "https".freeze
o[:udp] = { host: o[:host], port: o[:port] } if url.scheme == "udp"
end
end
# rubocop:enable Metrics/AbcSize
# rubocop:enable Metrics/CyclomaticComplexity
OPTIONS_FROM_PARAMS = (DEFAULT_CONFIG_OPTIONS.keys - %i[
host port username password database use_ssl udp
]).freeze
private_constant :OPTIONS_FROM_PARAMS
def opts_from_params(query)
params = CGI.parse(query || "").tap { |h| h.default = [] }
OPTIONS_FROM_PARAMS.each_with_object({}) do |k, opts|
next unless params[k.to_s].size == 1
opts[k] = coerce(k, params[k.to_s].first)
end
end
def coerce(name, value)
case name
when :open_timeout, :read_timeout, :max_delay, :retry, :chunk_size
value.to_i
when :initial_delay
value.to_f
when :verify_ssl, :denormalize, :async, :discard_write_errors
%w[true 1 yes on].include?(value.downcase)
else
value
end
end
end
end
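The URL precedence above means a single connection string can configure most options. A trimmed sketch of the parsing (of the query-string coercions, only `retry` is shown here for brevity):

```ruby
require "cgi"
require "uri"

# Trimmed mirror of Config#opts_from_url / #opts_from_non_params above.
def opts_from_url(url)
  url = URI.parse(url)
  opts = {}
  opts[:host]     = url.host if url.host
  opts[:port]     = url.port if url.port
  opts[:username] = URI.decode_www_form_component(url.user) if url.user
  opts[:password] = URI.decode_www_form_component(url.password) if url.password
  opts[:database] = url.path[1..-1] if url.path.length > 1
  opts[:use_ssl]  = url.scheme == "https"
  params = CGI.parse(url.query || "")
  opts[:retry] = params["retry"].first.to_i unless params["retry"].empty?
  opts
end

o = opts_from_url("https://user:secret@influx.example.com:8086/mydb?retry=4")
puts o[:database] # => mydb
```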
influxdb-0.8.1/Gemfile
source "https://rubygems.org"
gemspec
local_gemfile = 'Gemfile.local'
if File.exist?(local_gemfile)
eval(File.read(local_gemfile)) # rubocop:disable Lint/Eval
end
influxdb-0.8.1/.github/workflows/tests.yml
name: Tests
# workflow_dispatch enables running workflow manually
on: [push, pull_request, workflow_dispatch]
jobs:
rubocop:
name: RuboCop
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
matrix:
os: [ubuntu-20.04]
ruby: [2.7]
steps:
- uses: actions/checkout@v2
- uses: ruby/setup-ruby@v1
with:
bundler-cache: true
ruby-version: ${{ matrix.ruby }}
- run: bundle exec rake rubocop
specs:
name: ${{ matrix.os }} ${{ matrix.ruby }}
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
matrix:
os:
- ubuntu-16.04
- ubuntu-18.04
- ubuntu-20.04
ruby:
- 2.3
- 2.4
- 2.5
- 2.6
- 2.7
# YAML gotcha: https://github.com/actions/runner/issues/849
- '3.0'
- head
include:
- os: ubuntu-20.04
ruby: jruby-9.1
jruby_opts: '--client -J-XX:+TieredCompilation -J-XX:TieredStopAtLevel=1 -J-Xss2m -J-Xmx256M'
allow-failure: true
- os: ubuntu-20.04
ruby: jruby-9.2
jruby_opts: '--client -J-XX:+TieredCompilation -J-XX:TieredStopAtLevel=1 -J-Xss2m -J-Xmx256M'
allow-failure: true
- os: ubuntu-20.04
ruby: jruby-head
jruby_opts: '--client -J-XX:+TieredCompilation -J-XX:TieredStopAtLevel=1 -J-Xss2m -J-Xmx256M'
allow-failure: true
steps:
- uses: actions/checkout@v2
- uses: ruby/setup-ruby@v1
with:
bundler-cache: true
ruby-version: ${{ matrix.ruby }}
- run: bundle exec rake
continue-on-error: ${{ matrix.allow-failure || false }}
smoketests:
name: smoketest with influx ${{ matrix.influx_version }}
runs-on: ubuntu-20.04
strategy:
fail-fast: false
matrix:
include:
- { ruby: 2.7, influx_version: 1.0.2, pkghash: 88f6c30fec2c6e612e802e23b9161fdfc7c5c29f6be036f0376326445aff0037 }
- { ruby: 2.7, influx_version: 1.1.5, pkghash: 0ecb9385cc008f6e5094e6e8f8ea70522023a16d4397e401898f3973176d3b21 }
- { ruby: 2.7, influx_version: 1.2.4, pkghash: 2fac8391e04aa1bec9151e8f0d8f18df030c866af2b4963ab7d86c6ddc172182 }
- { ruby: 2.7, influx_version: 1.3.8, pkghash: 35c9cb2943bbde04aa5e94ad6d8caf5fc9b1480bdbcde7c34078de135cc4f788 }
- { ruby: 2.7, influx_version: 1.4.3, pkghash: 0477080f1d1cf8e1242dc7318280b9010c4c45cf6a415a2a5de607ae17fa0359 }
- { ruby: 2.7, influx_version: 1.5.4, pkghash: fa6f8d3196d13ffc376d533581b534692c738181ce3427c53484c138d9e6b902 }
- { ruby: 2.7, influx_version: 1.6.4, pkghash: dbfa13a0f9e38a8e7b19294c30144903bb681ac0aba0a3a8f4f349c37d5de5f9 }
- { ruby: 2.7, influx_version: 1.7.9, pkghash: 02759d70cef670d336768fd38a9cf2f046a1bf40618be78ba215e7ce75b5075f }
- { ruby: 2.7, influx_version: nightly, channel: nightlies, allow-failure: true }
env:
influx_version: ${{ matrix.influx_version }}
pkghash: ${{ matrix.pkghash }}
channel: ${{ matrix.channel }}
steps:
- uses: actions/checkout@v2
- uses: ruby/setup-ruby@v1
with:
bundler-cache: true
ruby-version: ${{ matrix.ruby }}
- run: bin/provision.sh
- run: bundle exec rake spec
continue-on-error: ${{ matrix.allow-failure || false }}
influxdb-0.8.1/LICENSE.txt
Copyright (c) 2013 Todd Persen
MIT License
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.