Release: 1.1.0b1 | Release Date: not released

SQLAlchemy 1.1 Documentation

Column and Data Types

SQLAlchemy provides abstractions for most common database data types, and a mechanism for specifying your own custom data types.

The methods and attributes of type objects are rarely used directly. Type objects are supplied to Table definitions and can be supplied as type hints to functions for occasions where the database driver returns an incorrect type.

>>> users = Table('users', metadata,
...               Column('id', Integer, primary_key=True),
...               Column('login', String(32))
...              )

SQLAlchemy will use the Integer and String(32) type information when issuing a CREATE TABLE statement and will use it again when reading back rows SELECTed from the database. Functions that accept a type (such as Column()) will typically accept a type class or instance; Integer is equivalent to Integer() with no construction arguments in this case.

Generic Types

Generic types specify a column that can read, write and store a particular type of Python data. SQLAlchemy will choose the best database column type available on the target database when issuing a CREATE TABLE statement. For complete control over which column type is emitted in CREATE TABLE, such as VARCHAR, see SQL Standard and Multiple Vendor Types and the other sections of this chapter.

class sqlalchemy.types.BigInteger

Bases: sqlalchemy.types.Integer

A type for bigger int integers.

Typically generates a BIGINT in DDL, and otherwise acts like a normal Integer on the Python side.

class sqlalchemy.types.Boolean(create_constraint=True, name=None, _create_events=True)

Bases: sqlalchemy.types.TypeEngine, sqlalchemy.types.SchemaType

A bool datatype.

Boolean typically uses BOOLEAN or SMALLINT on the DDL side, and on the Python side deals in True or False.

__init__(create_constraint=True, name=None, _create_events=True)

Construct a Boolean.

Parameters:
  • create_constraint – defaults to True. If the boolean is generated as an int/smallint, also create a CHECK constraint on the table that ensures 1 or 0 as a value.
  • name – if a CHECK constraint is generated, specify the name of the constraint.
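
For example, a minimal sketch (the table, column, and constraint names here are illustrative, not taken from this documentation):

from sqlalchemy import Table, Column, Integer, MetaData, Boolean

accounts = Table(
    'accounts', MetaData(),
    Column('id', Integer, primary_key=True),
    Column('is_active', Boolean(create_constraint=True, name='ck_accounts_is_active'))
)
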
class sqlalchemy.types.Date

Bases: sqlalchemy.types._DateAffinity, sqlalchemy.types.TypeEngine

A type for datetime.date() objects.

class sqlalchemy.types.DateTime(timezone=False)

Bases: sqlalchemy.types._DateAffinity, sqlalchemy.types.TypeEngine

A type for datetime.datetime() objects.

Date and time types return objects from the Python datetime module. Most DBAPIs have built in support for the datetime module, with the noted exception of SQLite. In the case of SQLite, date and time types are stored as strings which are then converted back to datetime objects when rows are returned.

__init__(timezone=False)

Construct a new DateTime.

Parameters: timezone – boolean. If True, and supported by the backend, will produce ‘TIMESTAMP WITH TIME ZONE’. For backends that don’t support timezone aware timestamps, has no effect.
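
A minimal sketch (the table and column names are illustrative; the commented execute() assumes an existing connection):

import datetime

from sqlalchemy import Table, Column, Integer, MetaData, DateTime

events = Table(
    'events', MetaData(),
    Column('id', Integer, primary_key=True),
    Column('created_at', DateTime(timezone=True))
)

# connection.execute(events.insert(), {"created_at": datetime.datetime.utcnow()})
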
class sqlalchemy.types.Enum(*enums, **kw)

Bases: sqlalchemy.types.String, sqlalchemy.types.SchemaType

Generic Enum Type.

The Enum type provides a set of possible string values which the column is constrained towards.

The Enum type will make use of the backend’s native “ENUM” type if one is available; otherwise, it uses a VARCHAR datatype and produces a CHECK constraint. Use of the backend-native enum type can be disabled using the Enum.native_enum flag, and the production of the CHECK constraint is configurable using the Enum.create_constraint flag.

The Enum type also provides in-Python validation of both input values and database-returned values. A LookupError is raised for any Python value that’s not located in the given list of possible values.

Changed in version 1.1: the Enum type now provides in-Python validation of input values as well as on data being returned by the database.
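
For example, a plain-string enumeration might look like the following sketch (the table and type names are illustrative):

from sqlalchemy import Table, Column, MetaData, Enum

shirts = Table(
    'shirts', MetaData(),
    Column('size', Enum('small', 'medium', 'large', name='size_enum'))
)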

The source of enumerated values may be a list of string values, or alternatively a PEP-435-compliant enumerated class. For the purposes of the Enum datatype, this class need only provide a __members__ attribute.

When using an enumerated class, the enumerated objects are used both for input and output, rather than strings as is the case with a plain-string enumerated type:

import enum
class MyEnum(enum.Enum):
    one = "one"
    two = "two"
    three = "three"


t = Table(
    'data', MetaData(),
    Column('value', Enum(MyEnum))
)

connection.execute(t.insert(), {"value": MyEnum.two})
assert connection.scalar(t.select()) is MyEnum.two

New in version 1.1: support for PEP-435-style enumerated classes.

See also

ENUM - PostgreSQL-specific type, which has additional functionality.

__init__(*enums, **kw)

Construct an enum.

Keyword arguments which don’t apply to a specific backend are ignored by that backend.

Parameters:
  • *enums

    either exactly one PEP-435 compliant enumerated type or one or more string or unicode enumeration labels. If unicode labels are present, the convert_unicode flag is auto-enabled.

    New in version 1.1: a PEP-435 style enumerated class may be passed.

  • convert_unicode – Enable unicode-aware bind parameter and result-set processing for this Enum’s data. This is set automatically based on the presence of unicode label strings.
  • create_constraint

    defaults to True. When creating a non-native enumerated type, also build a CHECK constraint on the database against the valid values.

    New in version 1.1: added Enum.create_constraint, which provides the option to disable the production of the CHECK constraint for a non-native enumerated type.

  • metadata – Associate this type directly with a MetaData object. For types that exist on the target database as an independent schema construct (Postgresql), this type will be created and dropped within create_all() and drop_all() operations. If the type is not associated with any MetaData object, it will associate itself with each Table in which it is used, and will be created when any of those individual tables are created, after a check is performed for its existence. The type is only dropped when drop_all() is called for that Table object’s metadata, however.
  • name – The name of this type. This is required for Postgresql and any future supported database which requires an explicitly named type, or an explicitly named constraint in order to generate the type and/or a table that uses it. If a PEP-435 enumerated class was used, its name (converted to lower case) is used by default.
  • native_enum – Use the database’s native ENUM type when available. Defaults to True. When False, uses VARCHAR + check constraint for all backends.
  • schema

    Schema name of this type. For types that exist on the target database as an independent schema construct (Postgresql), this parameter specifies the named schema in which the type is present.

    Note

    The schema of the Enum type does not by default make use of the schema established on the owning Table. If this behavior is desired, set the inherit_schema flag to True.

  • quote – Set explicit quoting preferences for the type’s name.
  • inherit_schema – When True, the “schema” from the owning Table will be copied to the “schema” attribute of this Enum, replacing whatever value was passed for the schema attribute. This also takes effect when using the Table.tometadata() operation.
create(bind=None, checkfirst=False)
inherited from the create() method of SchemaType

Issue CREATE ddl for this type, if applicable.

drop(bind=None, checkfirst=False)
inherited from the drop() method of SchemaType

Issue DROP ddl for this type, if applicable.

class sqlalchemy.types.Float(precision=None, asdecimal=False, decimal_return_scale=None, **kwargs)

Bases: sqlalchemy.types.Numeric

Type representing floating point types, such as FLOAT or REAL.

This type returns Python float objects by default, unless the Float.asdecimal flag is set to True, in which case they are coerced to decimal.Decimal objects.

Note

The Float type is designed to receive data from a database type that is explicitly known to be a floating point type (e.g. FLOAT, REAL, others) and not a decimal type (e.g. DECIMAL, NUMERIC, others). If the database column on the server is in fact a Numeric type, such as DECIMAL or NUMERIC, use the Numeric type or a subclass, otherwise numeric coercion between float/Decimal may or may not function as expected.

__init__(precision=None, asdecimal=False, decimal_return_scale=None, **kwargs)

Construct a Float.

Parameters:
  • precision – the numeric precision for use in DDL CREATE TABLE.
  • asdecimal – the same flag as that of Numeric, but defaults to False. Note that setting this flag to True results in floating point conversion.
  • decimal_return_scale

    Default scale to use when converting from floats to Python decimals. Floating point values will typically be much longer due to decimal inaccuracy, and most floating point database types don’t have a notion of “scale”, so by default the float type looks for the first ten decimal places when converting. Specifying this value will override that length. Note that the MySQL float types, which do include “scale”, will use “scale” as the default for decimal_return_scale, if not otherwise specified.

    New in version 0.9.0.

  • **kwargs – deprecated. Additional arguments here are ignored by the default Float type. For database specific floats that support additional arguments, see that dialect’s documentation for details, such as sqlalchemy.dialects.mysql.FLOAT.
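
A minimal sketch contrasting the default float return style with asdecimal=True (the table and column names are illustrative):

from sqlalchemy import Table, Column, Integer, MetaData, Float

measurements = Table(
    'measurements', MetaData(),
    Column('id', Integer, primary_key=True),
    Column('reading', Float(precision=10)),        # returned as Python float
    Column('reading_dec', Float(asdecimal=True))   # coerced to decimal.Decimal
)
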
class sqlalchemy.types.Integer

Bases: sqlalchemy.types._DateAffinity, sqlalchemy.types.TypeEngine

A type for int integers.

class sqlalchemy.types.Interval(native=True, second_precision=None, day_precision=None)

Bases: sqlalchemy.types._DateAffinity, sqlalchemy.types.TypeDecorator

A type for datetime.timedelta() objects.

The Interval type deals with datetime.timedelta objects. In PostgreSQL, the native INTERVAL type is used; for others, the value is stored as a date which is relative to the “epoch” (Jan. 1, 1970).

Note that the Interval type does not currently provide date arithmetic operations on platforms which do not support interval types natively. Such operations usually require transformation of both sides of the expression (such as, conversion of both sides into integer epoch values first) which currently is a manual procedure (such as via func).

__init__(native=True, second_precision=None, day_precision=None)

Construct an Interval object.

Parameters:
  • native – when True, use the actual INTERVAL type provided by the database, if supported (currently Postgresql, Oracle). Otherwise, represent the interval data as an epoch value regardless.
  • second_precision – For native interval types which support a “fractional seconds precision” parameter, i.e. Oracle and Postgresql.
  • day_precision – for native interval types which support a “day precision” parameter, i.e. Oracle.
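
A minimal sketch storing datetime.timedelta values (the table and column names are illustrative; the commented execute() assumes an existing connection):

import datetime

from sqlalchemy import Table, Column, Integer, MetaData, Interval

tasks = Table(
    'tasks', MetaData(),
    Column('id', Integer, primary_key=True),
    Column('duration', Interval(second_precision=6))
)

# connection.execute(tasks.insert(), {"duration": datetime.timedelta(hours=1, minutes=30)})
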
coerce_compared_value(op, value)

See TypeEngine.coerce_compared_value() for a description.

impl

alias of DateTime

class sqlalchemy.types.LargeBinary(length=None)

Bases: sqlalchemy.types._Binary

A type for large binary byte data.

The LargeBinary type corresponds to a large and/or unlengthed binary type for the target platform, such as BLOB on MySQL and BYTEA for Postgresql. It also handles the necessary conversions for the DBAPI.

__init__(length=None)

Construct a LargeBinary type.

Parameters: length – optional, a length for the column for use in DDL statements, for those binary types that accept a length, such as the MySQL BLOB type.
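
A minimal sketch (the table and column names are illustrative):

from sqlalchemy import Table, Column, Integer, MetaData, LargeBinary

attachments = Table(
    'attachments', MetaData(),
    Column('id', Integer, primary_key=True),
    Column('contents', LargeBinary())   # BLOB on MySQL, BYTEA on Postgresql
)
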
class sqlalchemy.types.MatchType(create_constraint=True, name=None, _create_events=True)

Bases: sqlalchemy.types.Boolean

Refers to the return type of the MATCH operator.

As the ColumnOperators.match() is probably the most open-ended operator in generic SQLAlchemy Core, we can’t assume the return type at SQL evaluation time, as MySQL returns a floating point, not a boolean, and other backends might do something different. So this type acts as a placeholder, currently subclassing Boolean. The type allows dialects to inject result-processing functionality if needed, and on MySQL will return floating-point values.

New in version 1.0.0.
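
A minimal sketch of an expression whose return type is MatchType (the table here is illustrative):

from sqlalchemy import Table, Column, Integer, MetaData, Text, select

documents = Table(
    'documents', MetaData(),
    Column('id', Integer, primary_key=True),
    Column('body', Text)
)

# the type of the match() expression is MatchType; SQL rendering varies by backend
stmt = select([documents.c.id]).where(documents.c.body.match('keyword'))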

class sqlalchemy.types.Numeric(precision=None, scale=None, decimal_return_scale=None, asdecimal=True)

Bases: sqlalchemy.types._DateAffinity, sqlalchemy.types.TypeEngine

A type for fixed precision numbers, such as NUMERIC or DECIMAL.

This type returns Python decimal.Decimal objects by default, unless the Numeric.asdecimal flag is set to False, in which case they are coerced to Python float objects.

Note

The Numeric type is designed to receive data from a database type that is explicitly known to be a decimal type (e.g. DECIMAL, NUMERIC, others) and not a floating point type (e.g. FLOAT, REAL, others). If the database column on the server is in fact a floating-point type, such as FLOAT or REAL, use the Float type or a subclass, otherwise numeric coercion between float/Decimal may or may not function as expected.

Note

The Python decimal.Decimal class is generally slow performing; cPython 3.3 has now switched to use the cdecimal library natively. For older Python versions, the cdecimal library can be patched into any application where it will replace the decimal library fully, however this needs to be applied globally and before any other modules have been imported, as follows:

import sys
import cdecimal
sys.modules["decimal"] = cdecimal

Note that the cdecimal and decimal libraries are not compatible with each other, so patching cdecimal at the global level is the only way it can be used effectively with various DBAPIs that hardcode to import the decimal library.

__init__(precision=None, scale=None, decimal_return_scale=None, asdecimal=True)

Construct a Numeric.

Parameters:
  • precision – the numeric precision for use in DDL CREATE TABLE.
  • scale – the numeric scale for use in DDL CREATE TABLE.
  • asdecimal – default True. Whether values should be returned as Python Decimal objects or as floats. Different DBAPIs send one or the other based on datatypes - the Numeric type will ensure that return values are one or the other across DBAPIs consistently.
  • decimal_return_scale

    Default scale to use when converting from floats to Python decimals. Floating point values will typically be much longer due to decimal inaccuracy, and most floating point database types don’t have a notion of “scale”, so by default the float type looks for the first ten decimal places when converting. Specifying this value will override that length. Types which do include an explicit “.scale” value, such as the base Numeric as well as the MySQL float types, will use the value of “.scale” as the default for decimal_return_scale, if not otherwise specified.

    New in version 0.9.0.

When using the Numeric type, care should be taken to ensure that the asdecimal setting is appropriate for the DBAPI in use - when Numeric applies a conversion from Decimal->float or float->Decimal, this conversion incurs an additional performance overhead for all result columns received.

DBAPIs that return Decimal natively (e.g. psycopg2) will have better accuracy and higher performance with a setting of True, as the native translation to Decimal reduces the amount of floating-point issues at play, and the Numeric type itself doesn’t need to apply any further conversions. However, another DBAPI which returns floats natively will incur an additional conversion overhead, and is still subject to floating point data loss - in which case asdecimal=False will at least remove the extra conversion overhead.
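
A minimal sketch showing both return styles (the table and column names are illustrative):

from sqlalchemy import Table, Column, Integer, MetaData, Numeric

invoices = Table(
    'invoices', MetaData(),
    Column('id', Integer, primary_key=True),
    Column('amount', Numeric(precision=10, scale=2)),        # returned as decimal.Decimal
    Column('amount_float', Numeric(10, 2, asdecimal=False))  # returned as float
)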

class sqlalchemy.types.PickleType(protocol=2, pickler=None, comparator=None)

Bases: sqlalchemy.types.TypeDecorator

Holds Python objects, which are serialized using pickle.

PickleType builds upon the Binary type to apply Python’s pickle.dumps() to incoming objects, and pickle.loads() on the way out, allowing any pickleable Python object to be stored as a serialized binary field.

To allow ORM change events to propagate for elements associated with PickleType, see Mutation Tracking.

__init__(protocol=2, pickler=None, comparator=None)

Construct a PickleType.

Parameters:
  • protocol – defaults to pickle.HIGHEST_PROTOCOL.
  • pickler – defaults to cPickle.pickle or pickle.pickle if cPickle is not available. May be any object with pickle-compatible dumps and loads methods.
  • comparator – a 2-arg callable predicate used to compare values of this type. If left as None, the Python “equals” operator is used to compare values.
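
A minimal sketch (the table and column names are illustrative; the commented execute() assumes an existing connection):

from sqlalchemy import Table, Column, Integer, MetaData, PickleType

jobs = Table(
    'jobs', MetaData(),
    Column('id', Integer, primary_key=True),
    Column('payload', PickleType())
)

# any pickleable Python object may be stored:
# connection.execute(jobs.insert(), {"payload": {"retries": 3, "args": [1, 2]}})
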
impl

alias of LargeBinary

class sqlalchemy.types.SchemaType(name=None, schema=None, metadata=None, inherit_schema=False, quote=None, _create_events=True)

Bases: sqlalchemy.sql.expression.SchemaEventTarget

Mark a type as possibly requiring schema-level DDL for usage.

Supports types that must be explicitly created/dropped (i.e. PG ENUM type) as well as types that are complemented by table or schema level constraints, triggers, and other rules.

SchemaType classes can also be targets for the DDLEvents.before_parent_attach() and DDLEvents.after_parent_attach() events, where the events fire off surrounding the association of the type object with a parent Column.

See also

Enum

Boolean

adapt(impltype, **kw)
bind
copy(**kw)
create(bind=None, checkfirst=False)

Issue CREATE ddl for this type, if applicable.

drop(bind=None, checkfirst=False)

Issue DROP ddl for this type, if applicable.
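
A minimal sketch of explicit create()/drop() calls (the engine URL and type name are illustrative; on backends with no schema-level construct for the type, these calls do nothing):

from sqlalchemy import create_engine, Enum, MetaData

engine = create_engine("postgresql://scott:tiger@localhost/test")

status = Enum('draft', 'published', name='status_enum', metadata=MetaData())
status.create(bind=engine, checkfirst=True)   # emits CREATE TYPE on Postgresql if not present
status.drop(bind=engine, checkfirst=True)     # emits DROP TYPE if present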

class sqlalchemy.types.SmallInteger

Bases: sqlalchemy.types.Integer

A type for smaller int integers.

Typically generates a SMALLINT in DDL, and otherwise acts like a normal Integer on the Python side.

class sqlalchemy.types.String(length=None, collation=None, convert_unicode=False, unicode_error=None, _warn_on_bytestring=False)

Bases: sqlalchemy.types.Concatenable, sqlalchemy.types.TypeEngine

The base for all string and character types.

In SQL, corresponds to VARCHAR. Can also take Python unicode objects and encode to the database’s encoding in bind params (and the reverse for result sets.)

The length field is usually required when the String type is used within a CREATE TABLE statement, as VARCHAR requires a length on most databases.

__init__(length=None, collation=None, convert_unicode=False, unicode_error=None, _warn_on_bytestring=False)

Create a string-holding type.

Parameters:
  • length – optional, a length for the column for use in DDL and CAST expressions. May be safely omitted if no CREATE TABLE will be issued. Certain databases may require a length for use in DDL, and will raise an exception when the CREATE TABLE DDL is issued if a VARCHAR with no length is included. Whether the value is interpreted as bytes or characters is database specific.
  • collation

    Optional, a column-level collation for use in DDL and CAST expressions. Renders using the COLLATE keyword supported by SQLite, MySQL, and Postgresql. E.g.:

    >>> from sqlalchemy import cast, select, String
    >>> print(select([cast('some string', String(collation='utf8'))]))
    SELECT CAST(:param_1 AS VARCHAR COLLATE utf8) AS anon_1

    New in version 0.8: Added support for COLLATE to all string types.

  • convert_unicode

    When set to True, the String type will assume that input is to be passed as Python unicode objects, and results returned as Python unicode objects. If the DBAPI in use does not support Python unicode (which is fewer and fewer these days), SQLAlchemy will encode/decode the value, using the value of the encoding parameter passed to create_engine() as the encoding.

    When using a DBAPI that natively supports Python unicode objects, this flag generally does not need to be set. For columns that are explicitly intended to store non-ASCII data, the Unicode or UnicodeText types should be used regardless, which feature the same behavior of convert_unicode but also indicate an underlying column type that directly supports unicode, such as NVARCHAR.

    For the extremely rare case that Python unicode is to be encoded/decoded by SQLAlchemy on a backend that does natively support Python unicode, the value force can be passed here which will cause SQLAlchemy’s encode/decode services to be used unconditionally.

  • unicode_error – Optional, a method to use to handle Unicode conversion errors. Behaves like the errors keyword argument to the standard library’s string.decode() functions. This flag requires that convert_unicode is set to force - otherwise, SQLAlchemy is not guaranteed to handle the task of unicode conversion. Note that this flag adds significant performance overhead to row-fetching operations for backends that already return unicode objects natively (which most DBAPIs do). This flag should only be used as a last resort for reading strings from a column with varied or corrupted encodings.
class sqlalchemy.types.Text(length=None, collation=None, convert_unicode=False, unicode_error=None, _warn_on_bytestring=False)

Bases: sqlalchemy.types.String

A variably sized string type.

In SQL, usually corresponds to CLOB or TEXT. Can also take Python unicode objects and encode to the database’s encoding in bind params (and the reverse for result sets.) In general, TEXT objects do not have a length; while some databases will accept a length argument here, it will be rejected by others.

class sqlalchemy.types.Time(timezone=False)

Bases: sqlalchemy.types._DateAffinity, sqlalchemy.types.TypeEngine

A type for datetime.time() objects.

class sqlalchemy.types.Unicode(length=None, **kwargs)

Bases: sqlalchemy.types.String

A variable length Unicode string type.

The Unicode type is a String subclass that assumes input and output as Python unicode data, and in that regard is equivalent to the usage of the convert_unicode flag with the String type. However, unlike plain String, it also implies an underlying column type that is explicitly supporting of non-ASCII data, such as NVARCHAR on Oracle and SQL Server. This can impact the output of CREATE TABLE statements and CAST functions at the dialect level, and can also affect the handling of bound parameters in some specific DBAPI scenarios.

The encoding used by the Unicode type is usually determined by the DBAPI itself; most modern DBAPIs feature support for Python unicode objects as bound values and result set values, and the encoding should be configured as detailed in the notes for the target DBAPI in the Dialects section.

For those DBAPIs which do not support, or are not configured to accommodate Python unicode objects directly, SQLAlchemy does the encoding and decoding outside of the DBAPI. The encoding in this scenario is determined by the encoding flag passed to create_engine().

When using the Unicode type, it is only appropriate to pass Python unicode objects, and not plain str. If a plain str is passed under Python 2, a warning is emitted. If you notice your application emitting these warnings but you’re not sure of the source of them, the Python warnings filter, documented at http://docs.python.org/library/warnings.html, can be used to turn these warnings into exceptions which will illustrate a stack trace:

import warnings
warnings.simplefilter('error')

For an application that wishes to pass plain bytestrings and Python unicode objects to the Unicode type equally, the bytestrings must first be decoded into unicode. The recipe at Coercing Encoded Strings to Unicode illustrates how this is done.

See also:

UnicodeText - unlengthed textual counterpart to Unicode.
__init__(length=None, **kwargs)

Create a Unicode object.

Parameters are the same as that of String, with the exception that convert_unicode defaults to True.
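
A minimal sketch (the table and column names are illustrative; the commented execute() assumes an existing connection):

from sqlalchemy import Table, Column, Integer, MetaData, Unicode

people = Table(
    'people', MetaData(),
    Column('id', Integer, primary_key=True),
    Column('name', Unicode(100))   # e.g. NVARCHAR(100) on backends that support it
)

# connection.execute(people.insert(), {"name": u'Ivan Petrov'})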

class sqlalchemy.types.UnicodeText(length=None, **kwargs)

Bases: sqlalchemy.types.Text

An unbounded-length Unicode string type.

See Unicode for details on the unicode behavior of this object.

Like Unicode, usage of the UnicodeText type implies a unicode-capable type being used on the backend, such as NCLOB or NTEXT.

__init__(length=None, **kwargs)

Create a Unicode-converting Text type.

Parameters are the same as that of Text, with the exception that convert_unicode defaults to True.

SQL Standard and Multiple Vendor Types

This category of types refers to types that are either part of the SQL standard, or are potentially found within a subset of database backends. Unlike the “generic” types, the SQL standard/multi-vendor types have no guarantee of working on all backends, and will only work on those backends that explicitly support them by name. That is, the type will always emit its exact name in DDL when CREATE TABLE is issued.

class sqlalchemy.types.ARRAY(item_type, as_tuple=False, dimensions=None, zero_indexes=False)

Bases: sqlalchemy.types.Indexable, sqlalchemy.types.Concatenable, sqlalchemy.types.TypeEngine

Represent a SQL Array type.

Note

This type serves as the basis for all ARRAY operations. However, currently only the Postgresql backend has support for SQL arrays in SQLAlchemy. It is recommended to use the postgresql.ARRAY type directly when using ARRAY types with PostgreSQL, as it provides additional operators specific to that backend.

types.ARRAY is part of the Core in support of various SQL standard functions such as array_agg which explicitly involve arrays; however, with the exception of the PostgreSQL backend and possibly some third-party dialects, no other SQLAlchemy built-in dialect has support for this type.

A types.ARRAY type is constructed given the “type” of element:

mytable = Table("mytable", metadata,
        Column("data", ARRAY(Integer))
    )

The above type represents an N-dimensional array, meaning a supporting backend such as Postgresql will interpret values with any number of dimensions automatically. To produce an INSERT construct that passes in a 1-dimensional array of integers:

connection.execute(
        mytable.insert(),
        data=[1,2,3]
)

The types.ARRAY type can be constructed given a fixed number of dimensions:

mytable = Table("mytable", metadata,
        Column("data", ARRAY(Integer, dimensions=2))
    )

Sending a number of dimensions is optional, but recommended if the datatype is to represent arrays of more than one dimension. This number is used:

  • When emitting the type declaration itself to the database, e.g. INTEGER[][]

  • When translating Python values to database values, and vice versa, e.g. an ARRAY of Unicode objects uses this number to efficiently access the string values inside of array structures without resorting to per-row type inspection

  • When used with the Python getitem accessor, the number of dimensions serves to define the kind of type that the [] operator should return, e.g. for an ARRAY of INTEGER with two dimensions:

    >>> expr = table.c.column[5]  # returns ARRAY(Integer, dimensions=1)
    >>> expr = expr[6]  # returns Integer

For 1-dimensional arrays, a types.ARRAY instance with no dimension parameter will generally assume single-dimensional behaviors.

SQL expressions of type types.ARRAY have support for “index” and “slice” behavior. The Python [] operator works normally here, given integer indexes or slices. Arrays default to 1-based indexing. The operator produces binary expression constructs which will produce the appropriate SQL, both for SELECT statements:

select([mytable.c.data[5], mytable.c.data[2:7]])

as well as UPDATE statements when the Update.values() method is used:

mytable.update().values({
    mytable.c.data[5]: 7,
    mytable.c.data[2:7]: [1, 2, 3]
})

The types.ARRAY type also provides for the operators types.ARRAY.Comparator.any() and types.ARRAY.Comparator.all(). The PostgreSQL-specific version of types.ARRAY also provides additional operators.

New in version 1.1.0.

See also

postgresql.ARRAY

class Comparator(expr)

Bases: sqlalchemy.types.Indexable.Comparator, sqlalchemy.types.Concatenable.Comparator

Define comparison operations for types.ARRAY.

More operators are available on the dialect-specific form of this type. See postgresql.ARRAY.Comparator.

all(other, operator=None)

Return other operator ALL (array) clause.

Argument places are switched, because ALL requires the array expression to be on the right-hand side.

E.g.:

from sqlalchemy.sql import operators

conn.execute(
    select([table.c.data]).where(
            table.c.data.all(7, operator=operators.lt)
        )
)
Parameters:
  • other – expression to be compared
  • operator – an operator object from the sqlalchemy.sql.operators package, defaults to operators.eq().
any(other, operator=None)

Return other operator ANY (array) clause.

Argument places are switched, because ANY requires the array expression to be on the right-hand side.

E.g.:

from sqlalchemy.sql import operators

conn.execute(
    select([table.c.data]).where(
            table.c.data.any(7, operator=operators.lt)
        )
)
Parameters:
  • other – expression to be compared
  • operator – an operator object from the sqlalchemy.sql.operators package, defaults to operators.eq().
ARRAY.__init__(item_type, as_tuple=False, dimensions=None, zero_indexes=False)

Construct a types.ARRAY.

E.g.:

Column('myarray', ARRAY(Integer))

Arguments are:

Parameters:
  • item_type – The data type of items of this array. Note that dimensionality is irrelevant here, so multi-dimensional arrays like INTEGER[][] are constructed as ARRAY(Integer), not as ARRAY(ARRAY(Integer)) or such.
  • as_tuple=False – Specify whether return results should be converted to tuples from lists. This parameter is not generally needed as a Python list corresponds well to a SQL array.
  • dimensions – if non-None, the ARRAY will assume a fixed number of dimensions. This impacts how the array is declared on the database, how it goes about interpreting Python and result values, as well as how expression behavior in conjunction with the “getitem” operator works. See the description at types.ARRAY for additional detail.
  • zero_indexes=False – when True, index values will be converted between Python zero-based and SQL one-based indexes, e.g. a value of one will be added to all index values before passing to the database.
ARRAY.comparator_factory

alias of Comparator

ARRAY.zero_indexes = False

if True, Python zero-based indexes should be interpreted as one-based on the SQL expression side.

class sqlalchemy.types.BIGINT

Bases: sqlalchemy.types.BigInteger

The SQL BIGINT type.

class sqlalchemy.types.BINARY(length=None)

Bases: sqlalchemy.types._Binary

The SQL BINARY type.

class sqlalchemy.types.BLOB(length=None)

Bases: sqlalchemy.types.LargeBinary

The SQL BLOB type.

class sqlalchemy.types.BOOLEAN(create_constraint=True, name=None, _create_events=True)

Bases: sqlalchemy.types.Boolean

The SQL BOOLEAN type.

class sqlalchemy.types.CHAR(length=None, collation=None, convert_unicode=False, unicode_error=None, _warn_on_bytestring=False)

Bases: sqlalchemy.types.String

The SQL CHAR type.

class sqlalchemy.types.CLOB(length=None, collation=None, convert_unicode=False, unicode_error=None, _warn_on_bytestring=False)

Bases: sqlalchemy.types.Text

The CLOB type.

This type is found in Oracle and Informix.

class sqlalchemy.types.DATE

Bases: sqlalchemy.types.Date

The SQL DATE type.

class sqlalchemy.types.DATETIME(timezone=False)

Bases: sqlalchemy.types.DateTime

The SQL DATETIME type.

class sqlalchemy.types.DECIMAL(precision=None, scale=None, decimal_return_scale=None, asdecimal=True)

Bases: sqlalchemy.types.Numeric

The SQL DECIMAL type.

class sqlalchemy.types.FLOAT(precision=None, asdecimal=False, decimal_return_scale=None, **kwargs)

Bases: sqlalchemy.types.Float

The SQL FLOAT type.

sqlalchemy.types.INT

alias of INTEGER

class sqlalchemy.types.JSON(none_as_null=False)

Bases: sqlalchemy.types.Indexable, sqlalchemy.types.TypeEngine

Represent a SQL JSON type.

Note

types.JSON is provided as a facade for vendor-specific JSON types. Since it supports JSON SQL operations, it only works on backends that have an actual JSON type, currently Postgresql as well as certain versions of MySQL.

types.JSON is part of the Core in support of the growing popularity of native JSON datatypes.

The types.JSON type stores arbitrary JSON format data, e.g.:

data_table = Table('data_table', metadata,
    Column('id', Integer, primary_key=True),
    Column('data', JSON)
)

with engine.connect() as conn:
    conn.execute(
        data_table.insert(),
        data = {"key1": "value1", "key2": "value2"}
    )

The base types.JSON provides these operations:

  • Keyed index operations:

    data_table.c.data['some key']
  • Integer index operations:

    data_table.c.data[3]
  • Path index operations:

    data_table.c.data[('key_1', 'key_2', 5, ..., 'key_n')]

Additional operations are available from the dialect-specific versions of types.JSON, such as postgresql.JSON and postgresql.JSONB, each of which offer more operators than just the basic type.

Index operations return an expression object whose type defaults to JSON, so that further JSON-oriented instructions may be called upon the result type.
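
Continuing from the data_table defined above, a minimal sketch of building index expressions (the select() here is only illustrative):

from sqlalchemy import select

keyed = data_table.c.data['some key']           # keyed index; type remains JSON
path = data_table.c.data[('key_1', 'key_2')]    # path index
stmt = select([keyed, path])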

The JSON type, when used with the SQLAlchemy ORM, does not detect in-place mutations to the structure. In order to detect these, the sqlalchemy.ext.mutable extension must be used. This extension will allow “in-place” changes to the datastructure to produce events which will be detected by the unit of work. See the example at HSTORE for a simple example involving a dictionary.

When working with NULL values, the JSON type recommends the use of two specific constants in order to differentiate between a column that evaluates to SQL NULL, e.g. no value, vs. the JSON-encoded string of "null". To insert or select against a value that is SQL NULL, use the constant null():

from sqlalchemy import null
conn.execute(table.insert(), json_value=null())

To insert or select against a value that is JSON "null", use the constant JSON.NULL:

conn.execute(table.insert(), json_value=JSON.NULL)

The JSON type supports a flag JSON.none_as_null which when set to True will result in the Python constant None evaluating to the value of SQL NULL, and when set to False results in the Python constant None evaluating to the value of JSON "null". The Python value None may be used in conjunction with either JSON.NULL or null() in order to indicate NULL values, but care must be taken as to the value of the JSON.none_as_null flag in these cases.

New in version 1.1.

class Comparator(expr)

Bases: sqlalchemy.types.Indexable.Comparator, sqlalchemy.types.Concatenable.Comparator

Define comparison operations for types.JSON.

class JSON.JSONIndexType

Bases: sqlalchemy.types.TypeEngine

Placeholder for the datatype of a JSON index value.

This allows execution-time processing of JSON index values for special syntaxes.

class JSON.JSONPathType

Bases: sqlalchemy.types.TypeEngine

Placeholder type for JSON path operations.

This allows execution-time processing of a path-based index value into a specific SQL syntax.

JSON.NULL = symbol('JSON_NULL')

Describe the json value of NULL.

This value is used to force the JSON value of "null" to be used as the value. A value of Python None will be recognized either as SQL NULL or JSON "null", based on the setting of the JSON.none_as_null flag; the JSON.NULL constant can be used to always resolve to JSON "null" regardless of this setting. This is in contrast to the sql.null() construct, which always resolves to SQL NULL. E.g.:

from sqlalchemy import null
from sqlalchemy.dialects.postgresql import JSON

obj1 = MyObject(json_value=null())  # will *always* insert SQL NULL
obj2 = MyObject(json_value=JSON.NULL)  # will *always* insert JSON string "null"

session.add_all([obj1, obj2])
session.commit()
JSON.__init__(none_as_null=False)

Construct a types.JSON type.

Parameters: none_as_null=False

if True, persist the value None as a SQL NULL value, not the JSON encoding of null. Note that when this flag is False, the null() construct can still be used to persist a NULL value:

from sqlalchemy import null
conn.execute(table.insert(), data=null())

See also

types.JSON.NULL

JSON.comparator_factory

alias of Comparator

class sqlalchemy.types.INTEGER

Bases: sqlalchemy.types.Integer

The SQL INT or INTEGER type.

class sqlalchemy.types.NCHAR(length=None, **kwargs)

Bases: sqlalchemy.types.Unicode

The SQL NCHAR type.

class sqlalchemy.types.NVARCHAR(length=None, **kwargs)

Bases: sqlalchemy.types.Unicode

The SQL NVARCHAR type.

class sqlalchemy.types.NUMERIC(precision=None, scale=None, decimal_return_scale=None, asdecimal=True)

Bases: sqlalchemy.types.Numeric

The SQL NUMERIC type.

class sqlalchemy.types.REAL(precision=None, asdecimal=False, decimal_return_scale=None, **kwargs)

Bases: sqlalchemy.types.Float

The SQL REAL type.

class sqlalchemy.types.SMALLINT

Bases: sqlalchemy.types.SmallInteger

The SQL SMALLINT type.

class sqlalchemy.types.TEXT(length=None, collation=None, convert_unicode=False, unicode_error=None, _warn_on_bytestring=False)

Bases: sqlalchemy.types.Text

The SQL TEXT type.

class sqlalchemy.types.TIME(timezone=False)

Bases: sqlalchemy.types.Time

The SQL TIME type.

class sqlalchemy.types.TIMESTAMP(timezone=False)

Bases: sqlalchemy.types.DateTime

The SQL TIMESTAMP type.

class sqlalchemy.types.VARBINARY(length=None)

Bases: sqlalchemy.types._Binary

The SQL VARBINARY type.

class sqlalchemy.types.VARCHAR(length=None, collation=None, convert_unicode=False, unicode_error=None, _warn_on_bytestring=False)

Bases: sqlalchemy.types.String

The SQL VARCHAR type.

Vendor-Specific Types

Database-specific types are also available for import from each database’s dialect module. See the Dialects reference for the database you’re interested in.

For example, MySQL has a BIGINT type and PostgreSQL has an INET type. To use these, import them from the module explicitly:

from sqlalchemy.dialects import mysql

table = Table('foo', metadata,
    Column('id', mysql.BIGINT),
    Column('enumerates', mysql.ENUM('a', 'b', 'c'))
)

Or some PostgreSQL types:

from sqlalchemy.dialects import postgresql

table = Table('foo', metadata,
    Column('ipaddress', postgresql.INET),
    Column('elements', postgresql.ARRAY(String))
)

Each dialect provides the full set of typenames supported by that backend within its __all__ collection, so that a simple import * or similar will import all supported types as implemented for that backend:

from sqlalchemy.dialects.postgresql import *

t = Table('mytable', metadata,
           Column('id', INTEGER, primary_key=True),
           Column('name', VARCHAR(300)),
           Column('inetaddr', INET)
)

Where above, the INTEGER and VARCHAR types are ultimately from sqlalchemy.types, and INET is specific to the Postgresql dialect.

Some dialect level types have the same name as the SQL standard type, but also provide additional arguments. For example, MySQL implements the full range of character and string types including additional arguments such as collation and charset:

from sqlalchemy.dialects.mysql import VARCHAR, TEXT

table = Table('foo', meta,
    Column('col1', VARCHAR(200, collation='binary')),
    Column('col2', TEXT(charset='latin1'))
)