module Sequel
Top level module for Sequel
There are some module methods that are added via metaprogramming, one for each supported adapter. For example:
DB = Sequel.sqlite # Memory database
DB = Sequel.sqlite('blog.db')
DB = Sequel.postgres('database_name', :user=>'user', :password=>'password',
  :host=>'host', :port=>5432, :max_connections=>10)
If a block is given to these methods, it is passed the opened Database object, which is closed (disconnected) when the block exits, just like a block passed to connect. For example:
Sequel.sqlite('blog.db'){|db| puts db[:users].count}
For a more expanded introduction, see the README. For a quicker introduction, see the cheat sheet.
This _pretty_table extension is only for internal use. It adds the Sequel::PrettyTable class without modifying Sequel::Dataset.
To load the extension:
Sequel.extension :_pretty_table
The arbitrary_servers extension allows you to connect to arbitrary servers/shards that were not defined when you created the database. To use it, you first load the extension into the Database object:
DB.extension :arbitrary_servers
Then you can pass arbitrary connection options for the server/shard to use as a hash:
DB[:table].server(:host=>'...', :database=>'...').all
Because Sequel can never be sure that the connection will be reused, arbitrary connections are disconnected as soon as the outermost block that uses them exits. So this example uses the same connection:
DB.transaction(:server=>{:host=>'...', :database=>'...'}) do |c|
  DB.transaction(:server=>{:host=>'...', :database=>'...'}) do |c2|
    # c == c2
  end
end
But this example does not:
DB.transaction(:server=>{:host=>'...', :database=>'...'}) do |c|
end
DB.transaction(:server=>{:host=>'...', :database=>'...'}) do |c2|
  # c != c2
end
You can use this extension in conjunction with the server_block extension:
DB.with_server(:host=>'...', :database=>'...') do
  DB.synchronize do
    # All of these use the host/database given to with_server
    DB[:table].insert(...)
    DB[:table].update(...)
    DB.tables
    DB[:table].all
  end
end
Anyone using this extension in conjunction with the server_block extension may want to do the following so that they don't need to call synchronize separately:
def DB.with_server(*)
  super{synchronize{yield}}
end
Note that this extension only works with the sharded threaded connection pool. If you are using the sharded single connection pool, you need to switch to the sharded threaded connection pool before using this extension.
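A minimal sketch of forcing a sharded threaded pool, assuming that passing a :servers option (even an empty hash) at connect time selects the sharded pool in a threaded program:

DB = Sequel.connect('postgres://host/database_name', :servers=>{})
DB.extension :arbitrary_servers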
The columns_introspection extension attempts to introspect the selected columns for a dataset before issuing a query. If it thinks it can guess correctly at the columns the query will use, it will return the columns without issuing a database query.
This introspection is not fool-proof; it's possible that some databases will use column names that Sequel does not expect, and it may not correctly handle all cases.
To attempt to introspect columns for a single dataset:
ds = ds.extension(:columns_introspection)
To attempt to introspect columns for all datasets on a single database:
DB.extension(:columns_introspection)
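For example, once the extension is loaded, a dataset with a simple explicit selection can return its columns without a query (a sketch; the exact cases the introspection can handle may vary):

ds = DB[:table].select(:id, :name)
ds.columns # => [:id, :name], without sending a query to the database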
The connection_validator extension modifies a database's connection pool to validate that connections checked out from the pool are still valid, before yielding them for use. If it detects an invalid connection, it removes it from the pool and tries the next available connection, creating a new connection if no available connection is valid. Example of use:
DB.extension(:connection_validator)
As checking connections for validity involves issuing a query, which is potentially an expensive operation, the validation checks are only run if the connection has been idle for longer than a certain threshold. By default, that threshold is 3600 seconds (1 hour), but it can be modified by the user, set to -1 to always validate connections on checkout:
DB.pool.connection_validation_timeout = -1
Note that if you set the timeout to validate connections on every checkout, you should probably manually control connection checkouts on a coarse basis, using Sequel::Database#synchronize. In a web application, the optimal place for that would be a rack middleware. Validating connections on every checkout without setting up coarse connection checkouts will hurt performance, in some cases significantly. Note that setting up coarse connection checkouts reduces the concurrency level achievable. For example, in a web application, using Sequel::Database#synchronize in a rack middleware will limit the number of concurrent web requests to the number of connections in the database connection pool.
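A minimal sketch of such a middleware, assuming a DB constant for the Database object (the class name is hypothetical):

class CoarseConnectionCheckout
  def initialize(app)
    @app = app
  end

  def call(env)
    # Hold a single connection for the whole request, so the
    # validation check runs at most once per request.
    DB.synchronize{@app.call(env)}
  end
end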
Note that this extension only affects the default threaded and the sharded threaded connection pool. The single threaded and sharded single threaded connection pools are not affected. As the only reason to use the single threaded pools is for speed, and this extension makes the connection pool slower, there's not much point in modifying this extension to work with the single threaded pools. The threaded pools work fine even in single threaded code, so if you are currently using a single threaded pool and want to use this extension, switch to using a threaded pool.
The constraint_validations extension is designed to easily create database constraints inside create_table and alter_table blocks. It also adds relevant metadata about the constraints to a separate table, which the constraint_validations model plugin uses to setup automatic validations.
To use this extension, you first need to load it into the database:
DB.extension(:constraint_validations)
Note that you should only need to do this when modifying the constraint validations (i.e. when migrating). You should probably not load this extension in general application code.
You also need to make sure to add the metadata table for the automatic validations. By default, this table is called sequel_constraint_validations.
DB.create_constraint_validations_table
This table should only be created once. For new applications, you generally want to create it first, before creating any other application tables.
Because migrations instance_eval the up and down blocks on a database, using this extension in a migration can be done via:
Sequel.migration do
  up do
    extension(:constraint_validations)
    # ...
  end

  down do
    extension(:constraint_validations)
    # ...
  end
end
However, note that you cannot use change migrations with this extension, you need to use separate up/down migrations.
The API for creating the constraints with automatic validations is similar to the validation_helpers model plugin API. However, instead of having separate validates_* methods, it just adds a validate method that accepts a block to the schema generators. Like the create_table and alter_table blocks, this block is instance_evaled and offers its own DSL. Example:
DB.create_table(:table) do
  Integer :id
  String :name

  validate do
    presence :id
    min_length 5, :name
  end
end
instance_eval is used in this case because create_table and alter_table already use instance_eval, so losing access to the surrounding receiver is not an issue.
Here's a breakdown of the constraints created for each constraint validation method:
All constraints except unique, unless :allow_nil is true :: CHECK column IS NOT NULL
presence (String column) :: CHECK trim(column) != ''
exact_length 5 :: CHECK char_length(column) = 5
min_length 5 :: CHECK char_length(column) >= 5
max_length 5 :: CHECK char_length(column) <= 5
length_range 3..5 :: CHECK char_length(column) >= 3 AND char_length(column) <= 5
length_range 3...5 :: CHECK char_length(column) >= 3 AND char_length(column) < 5
format /foo\d+/ :: CHECK column ~ 'foo\d+'
format /foo\d+/i :: CHECK column ~* 'foo\d+'
like 'foo%' :: CHECK column LIKE 'foo%' ESCAPE '\'
ilike 'foo%' :: CHECK column ILIKE 'foo%' ESCAPE '\'
includes ['a', 'b'] :: CHECK column IN ('a', 'b')
includes [1, 2] :: CHECK column IN (1, 2)
includes 3..5 :: CHECK column >= 3 AND column <= 5
includes 3...5 :: CHECK column >= 3 AND column < 5
unique :: UNIQUE (column)
There are some additional API differences:
- Only the :message and :allow_nil options are respected. The :allow_blank and :allow_missing options are not respected.
- A new option, :name, is respected, for providing the name of the constraint. It is highly recommended that you provide a name for all constraint validations, as otherwise, it is difficult to drop the constraints later.
- The includes validation only supports an array of strings, an array of integers, or a range of integers.
- There are like and ilike validations, which are similar to the format validation but use a case sensitive or case insensitive LIKE pattern. LIKE patterns are very simple, so many regexp patterns cannot be expressed by them, but only a couple of databases (PostgreSQL and MySQL) support regexp patterns.
- When using the unique validation, column names cannot have embedded commas. For similar reasons, when using an includes validation with an array of strings, none of the strings in the array can have embedded commas.
- The unique validation does not support an arbitrary number of columns. For a single column, just the symbol should be used, and for an array of columns, an array of symbols should be used. There is no support for creating two separate unique validations for separate columns in a single call.
- A drop method can be called with a constraint name in an alter_table validate block to drop an existing constraint and the related validation metadata.
- While it is allowed to create a presence constraint with :allow_nil set to true, doing so does not create a constraint unless the column has String type.
Note that this extension has the following issues on certain databases:
- MySQL does not support check constraints (they are parsed but ignored), so using this extension does not actually set up constraints on MySQL, except for the unique constraint. It can still be used on MySQL to add the validation metadata so that the plugin can set up automatic validations.
- On SQLite, adding constraints to a table is not supported, so it must be emulated by dropping the table and recreating it with the constraints. If you want to use this plugin on SQLite with an alter_table block, you should drop all constraint validation metadata using drop_constraint_validations_for(:table=>'table'), and then readd all constraints you want to use inside the alter_table block, making no other changes inside the alter_table block.
The current_datetime_timestamp extension makes Sequel::Dataset#current_datetime return an object that operates like Sequel.datetime_class.now, but will be literalized as CURRENT_TIMESTAMP.
This allows you to use the defaults_setter, timestamps, and touch model plugins and make sure that CURRENT_TIMESTAMP is used instead of a literalized timestamp value.
The reason that CURRENT_TIMESTAMP is better than a literalized version of the timestamp is that it obeys correct transactional semantics (all calls to CURRENT_TIMESTAMP in the same transaction return the same timestamp, at least on some databases).
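A sketch of using it with the timestamps plugin (the Album model is an assumption):

Album.plugin :timestamps
Album.dataset = Album.dataset.extension(:current_datetime_timestamp)
Album.create(:name=>'RF')
# INSERT INTO albums (name, created_at) VALUES ('RF', CURRENT_TIMESTAMP)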
To have current_datetime be literalized as CURRENT_TIMESTAMP for a single dataset:
ds = ds.extension(:current_datetime_timestamp)
To have current_datetime be literalized as CURRENT_TIMESTAMP for all datasets of a given database:
DB.extension(:current_datetime_timestamp)
The dataset_source_alias extension changes Sequel's default behavior of automatically aliasing datasets from using t1, t2, etc. to using an alias based on the source of the dataset. Example:
DB.from(DB.from(:a))
# default:        SELECT * FROM (SELECT * FROM a) AS t1
# with extension: SELECT * FROM (SELECT * FROM a) AS a
This also works when joining:
DB[:a].join(DB[:b], [:id])
# SELECT * FROM a INNER JOIN (SELECT * FROM b) AS b USING (id)
To avoid conflicting aliases, this attempts to alias tables uniquely if it detects a conflict:
DB.from(:a, DB.from(:a))
# SELECT * FROM a, (SELECT * FROM a) AS a_0
Note that not all conflicts are correctly detected and handled. It is encouraged to alias your datasets manually instead of relying on the auto-aliasing if there would be a conflict.
In the places where Sequel cannot determine the appropriate alias to use for the dataset, it will fall back to the standard t1, t2, etc. aliasing.
You can load this extension into specific datasets:
ds = DB[:table]
ds = ds.extension(:dataset_source_alias)
Or you can load it into all of a database's datasets, which is probably the desired behavior if you are using this extension:
DB.extension(:dataset_source_alias)
The date_arithmetic extension adds the ability to perform database-independent addition/subtraction of intervals to/from dates and timestamps.
First, you need to load the extension into the database:
DB.extension :date_arithmetic
Then you can use the Sequel.date_add and Sequel.date_sub methods to return Sequel expressions:
add = Sequel.date_add(:date_column, :years=>1, :months=>2, :days=>3)
sub = Sequel.date_sub(:date_column, :hours=>1, :minutes=>2, :seconds=>3)
In addition to specifying the interval as a hash, there is also support for specifying the interval as an ActiveSupport::Duration object:
require 'active_support/all'
add = Sequel.date_add(:date_column, 1.years + 2.months + 3.days)
sub = Sequel.date_sub(:date_column, 1.hours + 2.minutes + 3.seconds)
These expressions can be used in your datasets, or anywhere else that Sequel expressions are allowed:
DB[:table].select(add.as(:d)).where(sub > Sequel::CURRENT_TIMESTAMP)
The empty_array_ignore_nulls extension changes Sequel's literalization of IN/NOT IN with an empty array value to not return NULL even if one of the referenced columns is NULL:
DB[:test].where(:name=>[])
# SELECT * FROM test WHERE (1 = 0)
DB[:test].exclude(:name=>[])
# SELECT * FROM test WHERE (1 = 1)
The default Sequel behavior is to respect NULLs, so that when name is NULL, the expression returns NULL.
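For comparison, a sketch of the default literalization, which uses a self-comparison so the expression is NULL when the column is NULL (the exact SQL may vary by Sequel version):

DB[:test].where(:name=>[])
# SELECT * FROM test WHERE (name != name)
DB[:test].exclude(:name=>[])
# SELECT * FROM test WHERE (name = name)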
You can load this extension into specific datasets:
ds = DB[:table]
ds = ds.extension(:empty_array_ignore_nulls)
Or you can load it into all of a database's datasets, which is probably the desired behavior if you are using this extension:
DB.extension(:empty_array_ignore_nulls)
The error_sql extension adds a Sequel::DatabaseError#sql method that you can use to get the SQL that caused the error to be raised.
begin
  DB.run "Invalid SQL"
rescue => e
  puts e.sql # "Invalid SQL"
end
On some databases, the error message contains part or all of the SQL used, but on other databases, none of the SQL used is displayed in the error message, so it can be difficult to track down what is causing the error without using a logger. This extension should hopefully make debugging easier on databases that have bad error messages.
This extension may not work correctly in the following cases:
- log_yield is not used when executing the query.
- The underlying exception is frozen or reused.
- The underlying exception doesn't correctly record instance variables set on it (seems to happen on JRuby when underlying exception objects are Java exceptions).
To load the extension into the database:
DB.extension :error_sql
The eval_inspect extension changes inspect for Sequel::SQL::Expression subclasses to return a string suitable for ruby's eval, such that
eval(obj.inspect) == obj
is true. The above code is true for most of ruby's simple classes such as String, Integer, Float, and Symbol, but it's not true for classes such as Time, Date, and BigDecimal. Sequel attempts to handle situations where instances of these classes are a component of a Sequel expression.
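For example, after loading the extension (a sketch; the exact inspect output format is illustrative):

expr = Sequel.expr(:a) + 1
eval(expr.inspect) == expr # => true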
To load the extension:
Sequel.extension :eval_inspect
The filter_having extension allows Sequel::Dataset#filter, #and, #or, and #exclude to operate on the HAVING clause if the dataset already has a HAVING clause, which was the historical behavior before Sequel 4. It is only recommended to use this for backwards compatibility.
You can load this extension into specific datasets:
ds = DB[:table]
ds = ds.extension(:filter_having)
Or you can load it into all of a database's datasets, which is probably the desired behavior if you are using this extension:
DB.extension(:filter_having)
The from_block extension changes Sequel::Database#from so that blocks given to it are treated as virtual rows applying to the FROM clause, instead of virtual rows applying to the WHERE clause. This will probably be made the default in the next major version of Sequel.
This makes it easier to use table returning functions:
DB.from{table_function(1)} # SELECT * FROM table_function(1)
To load the extension into the database:
DB.extension :from_block
The graph_each extension adds Dataset#graph_each and makes Sequel::Dataset#each call graph_each if the dataset has been graphed. Dataset#graph_each splits result hashes into subhashes per table:
DB[:a].graph(:b, :id=>:b_id).all
# => [{:a=>{:id=>1, :b_id=>2}, :b=>{:id=>2}}]
You can load this extension into specific datasets:
ds = DB[:table]
ds = ds.extension(:graph_each)
Or you can load it into all of a database's datasets, which is probably the desired behavior if you are using this extension:
DB.extension(:graph_each)
The hash_aliases extension allows Sequel::Dataset#select and Sequel::Dataset#from to treat a hash argument as an alias specification, with keys being the expressions and values being the aliases, which was the historical behavior before Sequel 4. It is only recommended to use this for backwards compatibility.
You can load this extension into specific datasets:
ds = DB[:table]
ds = ds.extension(:hash_aliases)
Or you can load it into all of a database's datasets, which is probably the desired behavior if you are using this extension:
DB.extension(:hash_aliases)
The LooserTypecasting extension loosens the default database typecasting for the following types:
:float :: use to_f instead of Float()
:integer :: use to_i instead of Integer()
:decimal :: don't check string conversion with Float()
:string :: silently allow hash and array conversion to string
To load the extension into the database:
DB.extension :looser_typecasting
The meta_def extension is designed for backwards compatibility with older Sequel code that uses the meta_def method on Database, Dataset, and Model classes and/or instances. It is not recommended for usage in new code. To load this extension:
Sequel.extension :meta_def
Adds the Sequel::Migration and Sequel::Migrator classes, which allow the user to easily group schema changes and migrate the database to a newer version or revert to a previous version.
To load the extension:
Sequel.extension :migration
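Once loaded, you can run all migrations in a directory with the Migrator (the directory path is an assumption):

Sequel::Migrator.run(DB, 'db/migrations')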
The mssql_emulate_lateral_with_apply extension converts queries that use LATERAL into queries that use CROSS/OUTER APPLY, allowing code that works on databases that support LATERAL via Sequel::Dataset#lateral to run on Microsoft SQL Server and Sybase SQLAnywhere.
This is available as a separate extension instead of integrated into the Microsoft SQL Server and Sybase SQLAnywhere support because few people need it and there is a performance hit to code that doesn't use it.
It is possible there are cases where this emulation does not work. Users should probably verify that correct results are returned when using this extension.
You can load this extension into specific datasets:
ds = DB[:table]
ds = ds.extension(:mssql_emulate_lateral_with_apply)
Or you can load it into all of a database's datasets:
DB.extension(:mssql_emulate_lateral_with_apply)
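A sketch of the conversion (the exact emulated SQL may differ):

DB[:a].cross_join(DB[:b].where(:b_id=>:a__id).lateral)
# With native LATERAL support:
#   SELECT * FROM a CROSS JOIN LATERAL (SELECT * FROM b WHERE (b_id = a.id)) AS t1
# With this extension on Microsoft SQL Server:
#   SELECT * FROM a CROSS APPLY (SELECT * FROM b WHERE (b_id = a.id)) AS t1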
The null_dataset extension adds the Dataset#nullify method, which returns a cloned dataset that will never issue a query to the database. It implements the null object pattern for datasets.
The most common usage is probably in a method that must return a dataset, where the method knows the dataset shouldn't return anything. With standard Sequel, you'd probably just add a WHERE condition that is always false, but that still results in a query being sent to the database, and can be overridden using unfiltered, the OR operator, or a UNION.
Usage:
ds = DB[:items].nullify.where(:a=>:b).select(:c)
ds.sql # => "SELECT c FROM items WHERE (a = b)"
ds.all # => [] # no query sent to the database
Note that there is one case where a null dataset will send a query to the database. If you call columns on a nulled dataset and the dataset doesn't have an already cached version of the columns, it will create a new dataset with the same options to get the columns.
This extension uses Object#extend at runtime, which can hurt performance.
To add the nullify method to a single dataset:
ds = ds.extension(:null_dataset)
To add the nullify method to all datasets on a single database:
DB.extension(:null_dataset)
The pagination extension adds the Sequel::Dataset#paginate and each_page methods, which return paginated (limited and offset) datasets with some helpful methods that make creating a paginated display easier.
This extension uses Object#extend at runtime, which can hurt performance.
You can load this extension into specific datasets:
ds = DB[:table]
ds = ds.extension(:pagination)
Or you can load it into all of a database's datasets, which is probably the desired behavior if you are using this extension:
DB.extension(:pagination)
The pg_array_ops extension adds support to Sequel's DSL to make it easier to call PostgreSQL array functions and operators.
To load the extension:
Sequel.extension :pg_array_ops
The most common usage is passing an expression to Sequel.pg_array_op:
ia = Sequel.pg_array_op(:int_array_column)
If you have also loaded the pg_array extension, you can use Sequel.pg_array as well:
ia = Sequel.pg_array(:int_array_column)
Also, on most Sequel expression objects, you can call the pg_array method:
ia = Sequel.expr(:int_array_column).pg_array
If you have loaded the core_extensions extension, or you have loaded the core_refinements extension and have activated refinements for the file, you can also use Sequel::Postgres::ArrayOpMethods#pg_array:
ia = :int_array_column.pg_array
This creates a Sequel::Postgres::ArrayOp object that can be used for easier querying:
ia[1] # int_array_column[1]
ia[1][2] # int_array_column[1][2]
ia.contains(:other_int_array_column) # @>
ia.contained_by(:other_int_array_column) # <@
ia.overlaps(:other_int_array_column) # &&
ia.concat(:other_int_array_column) # ||
ia.push(1) # int_array_column || 1
ia.unshift(1) # 1 || int_array_column
ia.any # ANY(int_array_column)
ia.all # ALL(int_array_column)
ia.cardinality # cardinality(int_array_column)
ia.dims # array_dims(int_array_column)
ia.hstore # hstore(int_array_column)
ia.hstore(:a) # hstore(int_array_column, a)
ia.length # array_length(int_array_column, 1)
ia.length(2) # array_length(int_array_column, 2)
ia.lower # array_lower(int_array_column, 1)
ia.lower(2) # array_lower(int_array_column, 2)
ia.join # array_to_string(int_array_column, '', NULL)
ia.join(':') # array_to_string(int_array_column, ':', NULL)
ia.join(':', ' ') # array_to_string(int_array_column, ':', ' ')
ia.unnest # unnest(int_array_column)
ia.unnest(:b) # unnest(int_array_column, b)
See the PostgreSQL array function and operator documentation for more details on what these functions and operators do.
If you are also using the pg_array extension, you should load it before loading this extension. Doing so will allow you to use PGArray#op to get an ArrayOp, allowing you to perform array operations on array literals.
In order for hstore to automatically wrap the returned value correctly in an HStoreOp, you need to load the pg_hstore_ops extension.
The pg_enum extension adds support for Sequel to handle PostgreSQL's enum types. To use this extension, first load it into your Database instance:
DB.extension :pg_enum
It allows creation of enum types using create_enum:
DB.create_enum(:type_name, %w'value1 value2 value3')
You can also add values to existing enums via add_enum_value:
DB.add_enum_value(:enum_type_name, 'value4')
If you want to drop an enum type, you can use drop_enum:
DB.drop_enum(:type_name)
Just like any user-created type, after creating the type, you can create tables that have a column of that type:
DB.create_table(:table_name) do
  enum_type_name :column_name
end
When parsing the schema, enum types are recognized, and available values returned in the schema hash:
DB.schema(:table_name)
# [[:column_name, {:type=>:enum, :enum_values=>['value1', 'value2']}]]
If the pg_array extension is used, arrays of enums are returned as a PGArray:
DB.create_table(:table_name) do
  column :column_name, 'enum_type_name[]'
end

DB[:table_name].get(:column_name) # ['value1', 'value2']
Finally, typecasting for enums is set up to cast to strings, which allows you to use symbols in your model code. Similarly, you can provide the enum values as symbols when creating enums using create_enum or add_enum_value.
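A sketch of using symbols for enum values:

DB.create_enum(:type_name, %i[value1 value2 value3])
DB[:table_name].insert(:column_name=>:value2) # typecast to 'value2'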
The pg_hstore_ops extension adds support to Sequel's DSL to make it easier to call PostgreSQL hstore functions and operators.
To load the extension:
Sequel.extension :pg_hstore_ops
The most common usage is taking an object that represents an SQL expression (such as a :symbol), and calling Sequel.hstore_op with it:
h = Sequel.hstore_op(:hstore_column)
If you have also loaded the pg_hstore extension, you can use Sequel.hstore as well:
h = Sequel.hstore(:hstore_column)
Also, on most Sequel expression objects, you can call the hstore method:
h = Sequel.expr(:hstore_column).hstore
If you have loaded the core_extensions extension, or you have loaded the core_refinements extension and have activated refinements for the file, you can also use Sequel::Postgres::HStoreOpMethods#hstore:
h = :hstore_column.hstore
This creates a Sequel::Postgres::HStoreOp object that can be used for easier querying:
h - 'a' # hstore_column - CAST('a' AS text)
h['a'] # hstore_column -> 'a'
h.concat(:other_hstore_column) # ||
h.has_key?('a') # ?
h.contain_all(:array_column) # ?&
h.contain_any(:array_column) # ?|
h.contains(:other_hstore_column) # @>
h.contained_by(:other_hstore_column) # <@
h.defined # defined(hstore_column)
h.delete('a') # delete(hstore_column, 'a')
h.each # each(hstore_column)
h.keys # akeys(hstore_column)
h.populate(:a) # populate_record(a, hstore_column)
h.record_set(:a) # (a #= hstore_column)
h.skeys # skeys(hstore_column)
h.slice(:a) # slice(hstore_column, a)
h.svals # svals(hstore_column)
h.to_array # hstore_to_array(hstore_column)
h.to_matrix # hstore_to_matrix(hstore_column)
h.values # avals(hstore_column)
See the PostgreSQL hstore function and operator documentation for more details on what these functions and operators do.
If you are also using the pg_hstore extension, you should load it before loading this extension. Doing so will allow you to use HStore#op to get an HStoreOp, allowing you to perform hstore operations on hstore literals.
Some of these methods will accept ruby arrays and convert them automatically to PostgreSQL arrays if you have the pg_array extension loaded. Some of these methods will accept ruby hashes and convert them automatically to PostgreSQL hstores if the pg_hstore extension is loaded. Methods representing expressions that return PostgreSQL arrays will have the returned expression automatically wrapped in a Postgres::ArrayOp if the pg_array_ops extension is loaded.
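A sketch of the automatic array conversion, assuming the pg_array extension is loaded:

h.contain_all(%w[a b]) # hstore_column ?& ARRAY['a','b']
h.contain_any(%w[a b]) # hstore_column ?| ARRAY['a','b']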
The pg_json_ops extension adds support to Sequel's DSL to make it easier to call PostgreSQL JSON functions and operators (first added in PostgreSQL 9.3). It also supports the JSONB functions and operators added in PostgreSQL 9.4.
To load the extension:
Sequel.extension :pg_json_ops
The most common usage is passing an expression to Sequel.pg_json_op or Sequel.pg_jsonb_op:
j = Sequel.pg_json_op(:json_column)
jb = Sequel.pg_jsonb_op(:jsonb_column)
If you have also loaded the pg_json extension, you can use Sequel.pg_json or Sequel.pg_jsonb as well:
j = Sequel.pg_json(:json_column)
jb = Sequel.pg_jsonb(:jsonb_column)
Also, on most Sequel expression objects, you can call the pg_json or pg_jsonb method:
j = Sequel.expr(:json_column).pg_json
jb = Sequel.expr(:jsonb_column).pg_jsonb
If you have loaded the core_extensions extension, or you have loaded the core_refinements extension and have activated refinements for the file, you can also use Sequel::Postgres::JSONOpMethods#pg_json or Sequel::Postgres::JSONOpMethods#pg_jsonb:
j = :json_column.pg_json
jb = :jsonb_column.pg_jsonb
This creates a Sequel::Postgres::JSONOp or Sequel::Postgres::JSONBOp object that can be used for easier querying:
j[1] # (json_column -> 1)
j[%w'a b'] # (json_column #> ARRAY['a','b'])
j.get_text(1) # (json_column ->> 1)
j.get_text(%w'a b') # (json_column #>> ARRAY['a','b'])
j.extract('a', 'b') # json_extract_path(json_column, 'a', 'b')
j.extract_text('a', 'b') # json_extract_path_text(json_column, 'a', 'b')
j.array_length # json_array_length(json_column)
j.array_elements # json_array_elements(json_column)
j.array_elements_text # json_array_elements_text(json_column)
j.each # json_each(json_column)
j.each_text # json_each_text(json_column)
j.keys # json_object_keys(json_column)
j.typeof # json_typeof(json_column)
j.populate(:a) # json_populate_record(:a, json_column)
j.populate_set(:a) # json_populate_recordset(:a, json_column)
j.to_record # json_to_record(json_column)
j.to_recordset # json_to_recordset(json_column)
If you are also using the pg_json extension, you should load it before loading this extension. Doing so will allow you to use the op method on JSONHash, JSONArray, JSONBHash, and JSONBArray, allowing you to perform json/jsonb operations on json/jsonb literals.
In order to get the automatic conversion from a ruby array to a PostgreSQL array (as shown in the [] and get_text examples above), you need to load the pg_array extension.
The pg_loose_count extension looks at the table statistics in the PostgreSQL system tables to get a fast approximate count of the number of rows in a given table:
DB.loose_count(:table) # => 123456
It can also support schema qualified tables:
DB.loose_count(:schema__table) # => 123456
How accurate this count is depends on the number of rows added/deleted from the table since the last time it was analyzed.
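If the statistics are stale, you can refresh them before getting the count; a sketch:

DB.run 'ANALYZE table'
DB.loose_count(:table) # now based on fresh statistics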
To load the extension into the database:
DB.extension :pg_loose_count
The pg_range_ops extension adds support to Sequel's DSL to make it easier to call PostgreSQL range functions and operators.
To load the extension:
Sequel.extension :pg_range_ops
The most common usage is passing an expression to Sequel.pg_range_op:
r = Sequel.pg_range_op(:range)
If you have also loaded the pg_range extension, you can use Sequel.pg_range as well:
r = Sequel.pg_range(:range)
Also, on most Sequel expression objects, you can call the pg_range method:
r = Sequel.expr(:range).pg_range
If you have loaded the core_extensions extension, or you have loaded the core_refinements extension and have activated refinements for the file, you can also use Sequel::Postgres::RangeOpMethods#pg_range:
r = :range.pg_range
This creates a Sequel::Postgres::RangeOp object that can be used for easier querying:
r.contains(:other) # range @> other
r.contained_by(:other) # range <@ other
r.overlaps(:other) # range && other
r.left_of(:other) # range << other
r.right_of(:other) # range >> other
r.starts_after(:other) # range &> other
r.ends_before(:other) # range &< other
r.adjacent_to(:other) # range -|- other
r.lower # lower(range)
r.upper # upper(range)
r.isempty # isempty(range)
r.lower_inc # lower_inc(range)
r.upper_inc # upper_inc(range)
r.lower_inf # lower_inf(range)
r.upper_inf # upper_inf(range)
See the PostgreSQL range function and operator documentation for more details on what these functions and operators do.
If you are also using the pg_range extension, you should load it before loading this extension. Doing so will allow you to use PGRange#op to get a RangeOp, allowing you to perform range operations on range literals.
The pg_row_ops extension adds support to Sequel's DSL to make it easier to deal with PostgreSQL row-valued/composite types.
To load the extension:
Sequel.extension :pg_row_ops
The most common usage is passing an expression to Sequel.pg_row_op:
r = Sequel.pg_row_op(:row_column)
If you have also loaded the pg_row extension, you can use Sequel.pg_row as well:
r = Sequel.pg_row(:row_column)
Also, on most Sequel expression objects, you can call the pg_row method:
r = Sequel.expr(:row_column).pg_row
If you have loaded the core_extensions extension, or you have loaded the core_refinements extension and have activated refinements for the file, you can also use Sequel::Postgres::PGRowOp::ExpressionMethods#pg_row:
r = :row_column.pg_row
There's only fairly basic support currently. You can use the [] method to access a member of the composite type:
r[:a] # (row_column).a
This can be chained:
r[:a][:b] # ((row_column).a).b
If you've loaded the pg_array_ops extension, there is also support for composite types that include arrays, or arrays of composite types:
r[1][:a] # (row_column[1]).a
r[:a][1] # (row_column).a[1]
The only other support is the splat method:
r.splat # (row_column.*)
The splat method is necessary if you are trying to reference a table's type when the table has the same name as one of its columns. For example:
DB.create_table(:a){Integer :a; Integer :b}
Let's say you want to reference the composite type for the table:
a = Sequel.pg_row_op(:a)
DB[:a].select(a[:b]) # SELECT (a).b FROM a
Unfortunately, that doesn't work, as it references the integer column, not the table. The splat method works around this:
DB[:a].select(a.splat[:b]) # SELECT (a.*).b FROM a
Splat also takes an argument which is used for casting. This is necessary if you want to return the composite type itself, instead of the columns in the composite type. For example:
DB[:a].select(a.splat).first
# SELECT (a.*) FROM a
# => {:a=>1, :b=>2}
By casting the expression, you can get a composite type returned:
DB[:a].select(a.splat(:a)).first
# SELECT (a.*)::a FROM a
# => {:a=>"(1,2)"}
# or {:a=>{:a=>1, :b=>2}} if the "a" type has been registered
# with the pg_row extension
This feature is mostly useful for a different way to graph tables:
DB[:a].join(:b, :id=>:b_id).
  select(Sequel.pg_row_op(:a).splat(:a), Sequel.pg_row_op(:b).splat(:b))
# SELECT (a.*)::a, (b.*)::b FROM a INNER JOIN b ON (b.id = a.b_id)
# => {:a=>{:id=>1, :b_id=>2}, :b=>{:id=>2}}
The pg_static_cache_updater extension is designed to automatically update the caches in the models using the static_cache plugin when changes to the underlying tables are detected.
Before using the extension in production, you have to add triggers to the tables for the classes where you want the caches updated automatically. You would generally do this during a migration:
Sequel.migration do
  up do
    extension :pg_static_cache_updater
    create_static_cache_update_function
    create_static_cache_update_trigger(:table_1)
    create_static_cache_update_trigger(:table_2)
  end

  down do
    extension :pg_static_cache_updater
    drop_trigger(:table_2, default_static_cache_update_name)
    drop_trigger(:table_1, default_static_cache_update_name)
    drop_function(default_static_cache_update_name)
  end
end
After the triggers have been added, in your application process, after setting up your models, you need to listen for changes to the underlying tables:
class Model1 < Sequel::Model(:table_1)
  plugin :static_cache
end

class Model2 < Sequel::Model(:table_2)
  plugin :static_cache
end

DB.extension :pg_static_cache_updater
DB.listen_for_static_cache_updates([Model1, Model2])
When an INSERT/UPDATE/DELETE happens on the underlying table, the trigger will send a notification with the table's OID. The application(s) listening on that channel will receive the notification, check the OID to see if it matches one for the model tables it is interested in, and tell that model to reload the cache if there is a match.
Note that listen_for_static_cache_updates spawns a new thread which will reserve its own database connection. This thread runs until the application process is shut down.
Also note that PostgreSQL does not send notifications to channels until after the transaction including the changes is committed. Also, because a separate thread is used to listen for notifications, there may be a slight delay between when the transaction is committed and when the cache is reloaded.
Requirements:
- PostgreSQL 9.0+
- Listening Database object must be using the postgres adapter with the pg driver (the model classes do not have to use the same Database).
- Must be using a thread-safe connection pool (the default).
The pretty_table extension adds Sequel::Dataset#print and the Sequel::PrettyTable class for creating nice-looking plain-text tables. Example:
+--+-------+
|id|name   |
|--+-------|
|1 |fasdfas|
|2 |test   |
+--+-------+
You can load this extension into specific datasets:
ds = DB[:table]
ds = ds.extension(:pretty_table)
Or you can load it into all of a database's datasets, which is probably the desired behavior if you are using this extension:
DB.extension(:pretty_table)
The query extension adds a query method which allows a different way to construct queries instead of the usual method chaining:
dataset = DB[:items].query do
  select :x, :y, :z
  filter{(x > 1) & (y > 2)}
  reverse :z
end
You can load this extension into specific datasets:
ds = DB[:table]
ds = ds.extension(:query)
Or you can load it into all of a database's datasets, which is probably the desired behavior if you are using this extension:
DB.extension(:query)
The query_literals extension changes Sequel's default behavior of the select, order and group methods so that if the first argument is a regular string, it is treated as a literal string, with the rest of the arguments (if any) treated as placeholder values. This allows you to write code such as:
DB[:table].select('a, b, ?', 2).group('a, b').order('c')
The default Sequel behavior would literalize that as:
SELECT 'a, b, ?', 2 FROM table GROUP BY 'a, b' ORDER BY 'c'
Using this extension changes the literalization to:
SELECT a, b, 2 FROM table GROUP BY a, b ORDER BY c
This extension makes select, group, and order methods operate like filter methods, which support the same interface.
There are very few places where Sequel's default behavior is desirable in this area, but for backwards compatibility, the defaults won't be changed until the next major release.
You can load this extension into specific datasets:
ds = DB[:table]
ds = ds.extension(:query_literals)
Or you can load it into all of a database's datasets, which is probably the desired behavior if you are using this extension:
DB.extension(:query_literals)
The schema_caching extension adds a few methods to Sequel::Database that make it easy to dump the parsed schema information to a file, and load it from that file. Loading the schema information from a dumped file is faster than parsing it from the database, so this can save bootup time for applications with large numbers of models.
Basic usage in application code:
DB = Sequel.connect('...')
DB.extension :schema_caching
DB.load_schema_cache('/path/to/schema.dump')

# load model files
Then, whenever the database schema is modified, write a new cached file. You can do that with bin/sequel's -S option:
bin/sequel -S /path/to/schema.dump postgres://...
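You can also dump the cache from application code:

DB.dump_schema_cache('/path/to/schema.dump')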
Alternatively, if you don't want to dump the schema information for all tables, and you aren't worried about race conditions, you can choose to use the following in your application code:
DB = Sequel.connect('...')
DB.extension :schema_caching
DB.load_schema_cache?('/path/to/schema.dump')

# load model files

DB.dump_schema_cache?('/path/to/schema.dump')
With this method, you just have to delete the schema dump file if the schema is modified, and the application will recreate it for you using just the tables that your models use.
Note that it is up to the application to ensure that the dumped cached schema reflects the current state of the database. Sequel does no checking to ensure this, as checking would take time and the purpose of this code is to take a shortcut.
The cached schema is dumped in Marshal format, since it is the fastest and it handles all ruby objects used in the schema hash. Because of this, you should not attempt to load the schema from an untrusted file.
The select_remove extension adds Sequel::Dataset#select_remove for removing existing selected columns from a dataset. It's not part of Sequel core as it is rarely needed and has some corner cases where it can't work correctly.
You can load this extension into specific datasets:
ds = DB[:table]
ds = ds.extension(:select_remove)
Or you can load it into all of a database's datasets, which is probably the desired behavior if you are using this extension:
DB.extension(:select_remove)
The sequel_3_dataset_methods extension adds the following dataset methods:
[]= :: filter with the first argument, update with the second
insert_multiple :: insert multiple rows at once
set :: alias for update
to_csv :: return string in csv format for the dataset
db= :: change the dataset's database
opts= :: change the dataset's opts
It is only recommended to use this for backwards compatibility.
You can load this extension into specific datasets:
ds = DB[:table]
ds = ds.extension(:sequel_3_dataset_methods)
Or you can load it into all of a database's datasets, which is probably the desired behavior if you are using this extension:
DB.extension(:sequel_3_dataset_methods)
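A sketch of the legacy API in use:

ds = DB[:table].extension(:sequel_3_dataset_methods)
ds[:id=>1] = {:name=>'a'} # UPDATE table SET name = 'a' WHERE (id = 1)
ds.set(:name=>'b')        # UPDATE table SET name = 'b'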
The server_block extension adds the Database#with_server method, which takes a shard argument and a block, and makes it so that access inside the block will use the specified shard by default.
First, you need to enable it on the database object:
DB.extension :server_block
Then you can call with_server:
DB.with_server(:shard1) do
  DB[:a].all # Uses shard1
  DB[:a].server(:shard2).all # Uses shard2
end
DB[:a].all # Uses default
You can even nest calls to with_server:
DB.with_server(:shard1) do
  DB[:a].all # Uses shard1
  DB.with_server(:shard2) do
    DB[:a].all # Uses shard2
  end
  DB[:a].all # Uses shard1
end
DB[:a].all # Uses default
Note that if you pass nil, :default, or :read_only as the server/shard name to Sequel::Dataset#server inside a with_server block, it will be ignored and the server/shard given to with_server will be used:
DB.with_server(:shard1) do
  DB[:a].all # Uses shard1
  DB[:a].server(:shard2).all # Uses shard2
  DB[:a].server(nil).all # Uses shard1
  DB[:a].server(:default).all # Uses shard1
  DB[:a].server(:read_only).all # Uses shard1
end
The set_overrides extension adds the Dataset#set_overrides and Dataset#set_defaults methods which provide a crude way to control the values used in INSERT/UPDATE statements if a hash of values is passed to Sequel::Dataset#insert or Sequel::Dataset#update. It is only recommended to use this for backwards compatibility.
You can load this extension into specific datasets:
ds = DB[:table]
ds = ds.extension(:set_overrides)
Or you can load it into all of a database's datasets, which is probably the desired behavior if you are using this extension:
DB.extension(:set_overrides)
The split_array_nil extension overrides Sequel's default handling of IN/NOT IN with arrays of values to do specific nil checking. For example,
ds = DB[:table].where(:column=>[1, nil])
By default, that produces the following SQL:
SELECT * FROM table WHERE (column IN (1, NULL))
However, because NULL = NULL is not true in SQL (it is NULL), this will not return rows in the table where the column is NULL. This extension allows for an alternative behavior more similar to ruby, which will return rows in the table where the column is NULL, using a query like:
SELECT * FROM table WHERE ((column IN (1)) OR (column IS NULL))
Similarly, for NOT IN queries:
ds = DB[:table].exclude(:column=>[1, nil])
# Default:
# SELECT * FROM table WHERE (column NOT IN (1, NULL))
# with the split_array_nil extension:
# SELECT * FROM table WHERE ((column NOT IN (1)) AND (column IS NOT NULL))
To use this extension with a single dataset:
ds = ds.extension(:split_array_nil)
To use this extension for all of a database's datasets:
DB.extension(:split_array_nil)
The thread_local_timezones extension allows you to set a per-thread timezone that will override the default global timezone while the thread is executing. The main use case is for web applications that execute each request in its own thread, and want to set the timezones based on the request.
To load the extension:
Sequel.extension :thread_local_timezones
The most common example is having the database always store time in UTC, but have the application deal with the timezone of the current user. That can be done with:
Sequel.database_timezone = :utc

# In each thread:
Sequel.thread_application_timezone = current_user.timezone
This extension is designed to work with the named_timezones extension.
This extension adds the thread_application_timezone=, thread_database_timezone=, and thread_typecast_timezone= methods to the Sequel module. It overrides the application_timezone, database_timezone, and typecast_timezone methods to check the related thread local timezone first, and use it if present. If the related thread local timezone is not present, it falls back to the default global timezone.
There is one special case of note. If you have a default global timezone and you want to have a nil thread local timezone, you have to set the thread local value to :nil instead of nil:
Sequel.application_timezone = :utc
Sequel.thread_application_timezone = nil
Sequel.application_timezone # => :utc
Sequel.thread_application_timezone = :nil
Sequel.application_timezone # => nil
This adds a Sequel::Dataset#to_dot method. The to_dot method returns a string that can be processed by graphviz's dot program in order to get a visualization of the dataset. Basically, it shows a version of the dataset's abstract syntax tree.
To load the extension:
Sequel.extension :to_dot
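A sketch of typical usage once loaded (the dot invocation depends on your graphviz setup):

File.write('dataset.dot', DB[:table].where(:a=>1).to_dot)
# Then, from a shell: dot -Tpng dataset.dot -o dataset.png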
Constants
ADAPTER_MAP :: Hash of adapters that have been used. The key is the adapter scheme symbol, and the value is the Database subclass.
BeforeHookFailed :: Exception class raised when raise_on_save_failure is set and a before hook returns false or an around hook doesn't call super or yield.
COLUMN_REF_RE1
COLUMN_REF_RE2
COLUMN_REF_RE3
DATABASES :: Array of all databases to which Sequel has connected. If you are developing an application that can connect to an arbitrary number of databases, delete the database objects from this or they will not get garbage collected.
DEFAULT_INFLECTIONS_PROC :: Proc that is instance evaled to create the default inflections for both the model inflector and the inflector extension.
MAJOR :: The major version of Sequel. Only bumped for major changes.
MINOR :: The minor version of Sequel. Bumped for every non-patch level release, generally around once a month.
OPTS :: Frozen hash used as the default options hash for most options.
SPLIT_SYMBOL_CACHE
TINY :: The tiny version of Sequel. Usually 0, only bumped for bugfix releases that fix regressions from previous versions.
VERSION :: The version of Sequel you are using, as a string (e.g. "2.11.0").
VIRTUAL_ROW
Attributes
cache_anonymous_models :: Whether to cache the anonymous models created by Sequel::Model(). This is required for reloading them correctly (avoiding the superclass mismatch). True by default for backwards compatibility.
convert_two_digit_years :: Sequel converts two digit years in Dates and DateTimes by default, so 01/02/03 is interpreted as January 2nd, 2003, and 12/13/99 is interpreted as December 13, 1999. You can override this to treat those dates as January 2nd, 0003 and December 13, 0099, respectively, by:
Sequel.convert_two_digit_years = false
datetime_class :: Sequel can use either Time or DateTime for times returned from the database. It defaults to Time. To change it to DateTime:
Sequel.datetime_class = DateTime
For ruby versions less than 1.9.2, Time has a limited range (1901 to 2038), so if you use datetimes out of that range, you need to switch to DateTime. Also, before 1.9.2, Time can only handle local and UTC times, not other timezones. Note that Time and DateTime objects have a different API, and in cases where they implement the same methods, they often implement them differently (e.g. + using seconds on Time and days on DateTime).
Public Class Methods
Lets you create a Model subclass with its dataset already set. source should be an instance of one of the following classes:
Database :: Sets the database for this model to source. Generally only useful when subclassing directly from the returned class, where the name of the subclass sets the table name (which is combined with the Database in source to create the dataset to use).
Dataset :: Sets the dataset for this model to source.
other :: Sets the table name for this model to source. The class will use the default database for model classes in order to create the dataset.
The purpose of this method is to set the dataset/database automatically for a model class, if the table name doesn't match the implicit name. This is neater than using set_dataset inside the class, and doesn't require a bogus query for the schema.
# Using a symbol
class Comment < Sequel::Model(:something)
  table_name # => :something
end

# Using a dataset
class Comment < Sequel::Model(DB1[:something])
  dataset # => DB1[:something]
end

# Using a database
class Comment < Sequel::Model(DB1)
  dataset # => DB1[:comments]
end
# File lib/sequel/model.rb, line 37
def self.Model(source)
  if cache_anonymous_models && (klass = Model::ANONYMOUS_MODEL_CLASSES_MUTEX.synchronize{Model::ANONYMOUS_MODEL_CLASSES[source]})
    return klass
  end
  klass = if source.is_a?(Database)
    c = Class.new(Model)
    c.db = source
    c
  else
    Class.new(Model).set_dataset(source)
  end
  Model::ANONYMOUS_MODEL_CLASSES_MUTEX.synchronize{Model::ANONYMOUS_MODEL_CLASSES[source] = klass} if cache_anonymous_models
  klass
end
Returns true if the passed object could be a specifier of conditions, false otherwise. Currently, Sequel considers hashes and arrays of two element arrays as condition specifiers.
Sequel.condition_specifier?({})       # => true
Sequel.condition_specifier?([[1, 2]]) # => true
Sequel.condition_specifier?([])       # => false
Sequel.condition_specifier?([1])      # => false
Sequel.condition_specifier?(1)        # => false
# File lib/sequel/core.rb, line 62
def self.condition_specifier?(obj)
  case obj
  when Hash
    true
  when Array
    !obj.empty? && !obj.is_a?(SQL::ValueList) && obj.all?{|i| i.is_a?(Array) && (i.length == 2)}
  else
    false
  end
end
Creates a new database object based on the supplied connection string and optional arguments. The specified scheme determines the database class used, and the rest of the string specifies the connection options. For example:
DB = Sequel.connect('sqlite:/') # Memory database
DB = Sequel.connect('sqlite://blog.db') # ./blog.db
DB = Sequel.connect('sqlite:///blog.db') # /blog.db
DB = Sequel.connect('postgres://user:password@host:port/database_name')
DB = Sequel.connect('sqlite:///blog.db', :max_connections=>10)
If a block is given, it is passed the opened Database object, which is closed when the block exits. For example:
Sequel.connect('sqlite://blog.db'){|db| puts db[:users].count}
For details, see the “Connecting to a Database” guide. To set up a master/slave or sharded database connection, see the “Master/Slave Databases and Sharding” guide.
# File lib/sequel/core.rb, line 94
def self.connect(*args, &block)
  Database.connect(*args, &block)
end
Convert the exception to the given class. The given class should be Sequel::Error or a subclass. Returns an instance of klass with the message and backtrace of exception.
# File lib/sequel/core.rb, line 107
def self.convert_exception_class(exception, klass)
  return exception if exception.is_a?(klass)
  e = klass.new("#{exception.class}: #{exception.message}")
  e.wrapped_exception = exception
  e.set_backtrace(exception.backtrace)
  e
end
Assume the core extensions are not loaded by default; if the core_extensions extension is loaded, this will be overridden.
# File lib/sequel/core.rb, line 100
def self.core_extensions?
  false
end
Load all Sequel extensions given. Extensions are just files that exist under sequel/extensions in the load path, and are just required. Generally, extensions modify the behavior of Database and/or Dataset, but Sequel ships with some extensions that modify other classes that exist for backwards compatibility. In some cases, requiring an extension modifies classes directly, and in others, it just loads a module that you can extend other classes with. Consult the documentation for each extension you plan on using for usage.
Sequel.extension(:schema_dumper)
Sequel.extension(:pagination, :query)
# File lib/sequel/core.rb, line 125
def self.extension(*extensions)
  extensions.each{|e| Kernel.require "sequel/extensions/#{e}"}
end
Set the method to call on identifiers going into the database. This affects the literalization of identifiers by calling this method on them before they are input. Sequel upcases identifiers in all SQL strings for most databases, so to turn that off:
Sequel.identifier_input_method = nil
to downcase instead:
Sequel.identifier_input_method = :downcase
Other String instance methods work as well.
# File lib/sequel/core.rb, line 140
def self.identifier_input_method=(value)
  Database.identifier_input_method = value
end
Set the method to call on identifiers coming out of the database. This affects the literalization of identifiers by calling this method on them when they are retrieved from the database. Sequel downcases identifiers retrieved for most databases, so to turn that off:
Sequel.identifier_output_method = nil
to upcase instead:
Sequel.identifier_output_method = :upcase
Other String instance methods work as well.
# File lib/sequel/core.rb, line 156
def self.identifier_output_method=(value)
  Database.identifier_output_method = value
end
Yield the Inflections module if a block is given, and return the Inflections module.
# File lib/sequel/model/inflections.rb, line 4
def self.inflections
  yield Inflections if block_given?
  Inflections
end
The exception class raised if there is an error parsing JSON. This can be overridden to use an alternative json implementation.
# File lib/sequel/core.rb, line 162
def self.json_parser_error_class
  JSON::ParserError
end
The preferred method for writing Sequel migrations, using a DSL:
Sequel.migration do
  up do
    create_table(:artists) do
      primary_key :id
      String :name
    end
  end

  down do
    drop_table(:artists)
  end
end
Designed to be used with the Migrator class, part of the migration extension.
# File lib/sequel/extensions/migration.rb, line 280
def self.migration(&block)
  MigrationDSL.create(&block)
end
Convert given object to json and return the result. This can be overridden to use an alternative json implementation.
# File lib/sequel/core.rb, line 168
def self.object_to_json(obj, *args)
  obj.to_json(*args)
end
Parse the string as JSON and return the result. This can be overridden to use an alternative json implementation.
# File lib/sequel/core.rb, line 174
def self.parse_json(json)
  JSON.parse(json, :create_additions=>false)
end
Convert each item in the array to the correct type, handling multi-dimensional arrays. For each element in the array or subarrays, call the converter, unless the value is nil.
# File lib/sequel/core.rb, line 189
def self.recursive_map(array, converter)
  array.map do |i|
    if i.is_a?(Array)
      recursive_map(i, converter)
    elsif i
      converter.call(i)
    end
  end
end
Require all given files, which should be in the same or a subdirectory of this file. If a subdir is given, assume all files are in that subdir. This is used to ensure that the files loaded are from the same version of Sequel as this file.
# File lib/sequel/core.rb, line 203
def self.require(files, subdir=nil)
  Array(files).each{|f| super("#{File.dirname(__FILE__).untaint}/#{"#{subdir}/" if subdir}#{f}")}
end
Set whether Sequel is being used in single threaded mode. By default, Sequel uses a thread-safe connection pool, which isn't as fast as the single threaded connection pool, and also has some additional thread safety checks. If your program will only have one thread, and speed is a priority, you should set this to true:
Sequel.single_threaded = true
# File lib/sequel/core.rb, line 214
def self.single_threaded=(value)
  @single_threaded = value
  Database.single_threaded = value
end
Splits the symbol into three parts. Each part will either be a string or nil.
For columns, these parts are the table, column, and alias. For tables, these parts are the schema, table, and alias.
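A sketch of the results for the implicit qualification and aliasing syntax:

Sequel.split_symbol(:column)                # => [nil, "column", nil]
Sequel.split_symbol(:table__column)         # => ["table", "column", nil]
Sequel.split_symbol(:table__column___alias) # => ["table", "column", "alias"]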
# File lib/sequel/core.rb, line 229
def self.split_symbol(sym)
  unless v = Sequel.synchronize{SPLIT_SYMBOL_CACHE[sym]}
    v = case s = sym.to_s
    when COLUMN_REF_RE1
      [$1.freeze, $2.freeze, $3.freeze].freeze
    when COLUMN_REF_RE2
      [nil, $1.freeze, $2.freeze].freeze
    when COLUMN_REF_RE3
      [$1.freeze, $2.freeze, nil].freeze
    else
      [nil, s.freeze, nil].freeze
    end
    Sequel.synchronize{SPLIT_SYMBOL_CACHE[sym] = v}
  end
  v
end
Converts the given string into a Date object.
Sequel.string_to_date('2010-09-10') # Date.civil(2010, 09, 10)
# File lib/sequel/core.rb, line 249
def self.string_to_date(string)
  begin
    Date.parse(string, Sequel.convert_two_digit_years)
  rescue => e
    raise convert_exception_class(e, InvalidValue)
  end
end
Converts the given string into a Time or DateTime object, depending on the value of Sequel.datetime_class.
Sequel.string_to_datetime('2010-09-10 10:20:30') # Time.local(2010, 09, 10, 10, 20, 30)
# File lib/sequel/core.rb, line 261
def self.string_to_datetime(string)
  begin
    if datetime_class == DateTime
      DateTime.parse(string, convert_two_digit_years)
    else
      datetime_class.parse(string)
    end
  rescue => e
    raise convert_exception_class(e, InvalidValue)
  end
end
Converts the given string into a Sequel::SQLTime object.
v = Sequel.string_to_time('10:20:30') # Sequel::SQLTime.parse('10:20:30')
DB.literal(v) # => '10:20:30'
# File lib/sequel/core.rb, line 277
def self.string_to_time(string)
  begin
    SQLTime.parse(string)
  rescue => e
    raise convert_exception_class(e, InvalidValue)
  end
end
Unless in single threaded mode, protects access to any mutable global data structure in Sequel. Uses a non-reentrant mutex, so calling code should be careful.
# File lib/sequel/core.rb, line 293
def self.synchronize(&block)
  @single_threaded ? yield : @data_mutex.synchronize(&block)
end
Uses a transaction on all given databases with the given options. This:
Sequel.transaction([DB1, DB2, DB3]){...}
is equivalent to:
DB1.transaction do
  DB2.transaction do
    DB3.transaction do
      ...
    end
  end
end
except that if Sequel::Rollback is raised by the block, the transaction is rolled back on all databases instead of just the last one.
Note that this method cannot guarantee that all databases will commit or rollback. For example, if DB3 commits but attempting to commit on DB2 fails (maybe because foreign key checks are deferred), there is no way to uncommit the changes on DB3. For that kind of support, you need to have two-phase commit/prepared transactions (which Sequel supports on some databases).
# File lib/sequel/core.rb, line 328
def self.transaction(dbs, opts=OPTS, &block)
  unless opts[:rollback]
    rescue_rollback = true
    opts = opts.merge(:rollback=>:reraise)
  end
  pr = dbs.reverse.inject(block){|bl, db| proc{db.transaction(opts, &bl)}}
  if rescue_rollback
    begin
      pr.call
    rescue Sequel::Rollback
      nil
    end
  else
    pr.call
  end
end
The version of Sequel you are using, as a string (e.g. “2.11.0”)
# File lib/sequel/version.rb, line 15
def self.version
  VERSION
end
If the supplied block takes a single argument, yield an SQL::VirtualRow instance to the block argument. Otherwise, evaluate the block in the context of a SQL::VirtualRow instance.
Sequel.virtual_row{a} # Sequel::SQL::Identifier.new(:a)
Sequel.virtual_row{|o| o.a{}} # Sequel::SQL::Function.new(:a)
# File lib/sequel/core.rb, line 352
def self.virtual_row(&block)
  vr = VIRTUAL_ROW
  case block.arity
  when -1, 0
    vr.instance_exec(&block)
  else
    block.call(vr)
  end
end
Private Class Methods
Helper method used by the database adapter class methods (added to Sequel via metaprogramming) to parse arguments.
# File lib/sequel/core.rb, line 366
def self.adapter_method(adapter, *args, &block)
  options = args.last.is_a?(Hash) ? args.pop : {}
  opts = {:adapter => adapter.to_sym}
  opts[:database] = args.shift if args.first.is_a?(String)
  if args.any?
    raise ::Sequel::Error, "Wrong format of arguments, either use (), (String), (Hash), or (String, Hash)"
  end
  connect(opts.merge(options), &block)
end