Modules and Classes
Version: 0.1.7 Last Updated: 05/05/06 13:53:27
Module sqlalchemy.schema

the schema module provides the building blocks for database metadata. This means all the entities within a SQL database that we might want to look at, modify, or create and delete are described by these objects, in a database-agnostic way.

A structure of SchemaItems also provides a "visitor" interface which is the primary method by which other methods operate upon the schema. The SQL package extends this structure with its own clause-specific objects as well as the visitor interface, so that the schema package "plugs in" to the SQL package.
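The "visitor" traversal described above can be sketched in plain Python. The classes below are illustrative stand-ins, not the actual sqlalchemy.schema implementation: each schema item accepts a visitor, traverses its children first, then calls the appropriate visit_* method for itself.

```python
# A minimal sketch of the schema visitor pattern described above.
# These classes are illustrative stand-ins, not SQLAlchemy's real API.

class SchemaVisitor:
    def visit_table(self, table):
        pass

    def visit_column(self, column):
        pass

class Column:
    def __init__(self, name):
        self.name = name

    def accept_schema_visitor(self, visitor):
        visitor.visit_column(self)

class Table:
    def __init__(self, name, *columns):
        self.name = name
        self.columns = list(columns)

    def accept_schema_visitor(self, visitor):
        # traverse child items first, then visit the table itself
        for c in self.columns:
            c.accept_schema_visitor(visitor)
        visitor.visit_table(self)

class NameCollector(SchemaVisitor):
    """A concrete visitor that records what it sees, in visit order."""
    def __init__(self):
        self.names = []

    def visit_table(self, table):
        self.names.append('table:' + table.name)

    def visit_column(self, column):
        self.names.append('column:' + column.name)

t = Table('users', Column('id'), Column('name'))
v = NameCollector()
t.accept_schema_visitor(v)
print(v.names)  # columns are visited before the table itself
```

This is the shape the SQL package plugs into: a new visitor subclass gains behavior over every schema item without the items themselves changing.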

Class Column(ColumnClause)

represents a column in a database table. this is a subclass of sql.ColumnClause and represents an actual existing column in a database table, in a similar fashion as TableClause/Table.

def __init__(self, name, type, *args, **kwargs)

constructs a new Column object. Arguments are:

name : the name of this column. this should be the identical name as it appears, or will appear, in the database.

type : the type of this column. This can be any subclass of types.TypeEngine, including the database-agnostic types defined in the types module, database-specific types defined within specific database modules, or user-defined types.

*args : ForeignKey and Sequence objects should be added as list values.

**kwargs : keyword arguments include:

key=None : an optional "alias name" for this column. The column will then be identified everywhere in an application, including the column list on its Table, by this key, and not the given name. Generated SQL, however, will still reference the column by its actual name.

primary_key=False : True if this column is a primary key column. Multiple columns can have this flag set to specify composite primary keys.

nullable=True : True if this column should allow nulls. Defaults to True unless this column is a primary key column.

default=None : a scalar, python callable, or ClauseElement representing the "default value" for this column, which will be invoked upon insert if this column is not present in the insert list or is given a value of None.

hidden=False : indicates this column should not be listed in the table's list of columns. Used for the "oid" column, which generally isn't in column lists.

index=None : True or index name. Indicates that this column is indexed. Pass True to autogenerate the index name. Pass a string to specify the index name. Multiple columns that specify the same index name will all be included in the index, in the order of their creation.

unique=None : True or index name. Indicates that this column is indexed in a unique index. Pass True to autogenerate the index name. Pass a string to specify the index name. Multiple columns that specify the same index name will all be included in the index, in the order of their creation.

def accept_schema_visitor(self, visitor)

traverses the given visitor across this Column's default and foreign key objects, then calls visit_column on the visitor.

def append_item(self, item)

columns = property()

def copy(self)

creates a copy of this Column, uninitialized

engine = property()

original = property()

parent = property()

Class ColumnDefault(DefaultGenerator)

A plain default value on a column. this could correspond to a constant, a callable function, or a SQL clause.

def __init__(self, arg, **kwargs)

def accept_schema_visitor(self, visitor)

calls the visit_column_default method on the given visitor.

Class ForeignKey(SchemaItem)

defines a ForeignKey constraint between two columns. ForeignKey is specified as an argument to a Column object.

def __init__(self, column)

Constructs a new ForeignKey object. "column" can be a schema.Column object representing the relationship, or just its string name given as "tablename.columnname". A schema can be specified as "schemaname.tablename.columnname".
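The string forms accepted by the constructor can be illustrated with a small parsing sketch. This is illustrative only, not ForeignKey's internal code:

```python
# Sketch of how a "tablename.columnname" foreign key specification
# might be split apart (illustrative only; not SQLAlchemy internals).

def parse_fk_spec(spec):
    """Split a foreign key string into (schema, table, column)."""
    parts = spec.split('.')
    if len(parts) == 2:
        table, column = parts
        return (None, table, column)
    elif len(parts) == 3:
        schema, table, column = parts
        return (schema, table, column)
    raise ValueError("expected 'table.column' or 'schema.table.column': %r" % spec)

print(parse_fk_spec('users.id'))
print(parse_fk_spec('accounting.invoices.id'))
```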

def accept_schema_visitor(self, visitor)

calls the visit_foreign_key method on the given visitor.

column = property()

def copy(self)

produces a copy of this ForeignKey object.

def references(self, table)

returns True if the given table is referenced by this ForeignKey.

Class Index(SchemaItem)

Represents an index of columns from a database table

def __init__(self, name, *columns, **kw)

Constructs an index object. Arguments are:

name : the name of the index

*columns : columns to include in the index. All columns must belong to the same table, and no column may appear more than once.

**kw : keyword arguments include:

unique=True : create a unique index

def accept_schema_visitor(self, visitor)

def append_column(self, column)

def create(self)

def drop(self)

def execute(self)

Class PassiveDefault(DefaultGenerator)

a default that takes effect on the database side

def __init__(self, arg, **kwargs)

def accept_schema_visitor(self, visitor)

Class SchemaEngine(AbstractEngine)

a factory object used to create implementations for schema objects. This object is the ultimate base class for the engine.SQLEngine class.

def __init__(self)

def reflecttable(self, table)

given a table, will query the database and populate its Column and ForeignKey objects.

def schemadropper(self, **params)

def schemagenerator(self, **params)

Class SchemaItem(object)

base class for items that define a database schema.

Class SchemaVisitor(ClauseVisitor)

defines the visiting for SchemaItem objects

def visit_column(self, column)

visit a Column.

def visit_column_default(self, default)

visit a ColumnDefault.

def visit_column_onupdate(self, onupdate)

visit a ColumnDefault with the "for_update" flag set.

def visit_foreign_key(self, join)

visit a ForeignKey.

def visit_index(self, index)

visit an Index.

def visit_passive_default(self, default)

visit a passive default

def visit_schema(self, schema)

visit a generic SchemaItem

def visit_sequence(self, sequence)

visit a Sequence.

def visit_table(self, table)

visit a Table.

Class Sequence(DefaultGenerator)

represents a sequence, which applies to Oracle and Postgres databases.

def __init__(self, name, start=None, increment=None, optional=False, **kwargs)

def accept_schema_visitor(self, visitor)

calls the visit_sequence method on the given visitor.

def create(self)

def drop(self)

Class Table(TableClause)

represents a relational database table. This subclasses sql.TableClause to provide a table that is "wired" to an engine. Whereas TableClause represents a table as it is used in a SQL expression, Table represents a table as it is created in the database. Be sure to look at sqlalchemy.sql.TableImpl for additional methods defined on a Table.

def __init__(self, name, engine, **kwargs)

Table objects can be constructed directly. The __init__ method is actually called via the TableSingleton metaclass. Arguments are:

name : the name of this table, exactly as it appears, or will appear, in the database. This property, along with the "schema", indicates the "singleton identity" of this table. Further tables constructed with the same name/schema combination will return the same Table instance.

engine : a SchemaEngine instance to provide services to this table. Usually a subclass of sql.SQLEngine.

*args : should contain a listing of the Column objects for this table.

**kwargs : options include:

schema=None : the "schema name" for this table, which is required if the table resides in a schema other than the default selected schema for the engine's database connection.

autoload=False : if True, the Columns for this table are reflected from the database. Usually there will be no Column objects in the constructor if this property is set.

redefine=False : if this Table has already been defined in the application, clear out its columns and redefine with new arguments.

mustexist=False : indicates that this Table must already have been defined elsewhere in the application, else an exception is raised.

useexisting=False : indicates that if this Table was already defined elsewhere in the application, disregard the rest of the constructor arguments. If this flag and the "redefine" flag are not set, constructing the same table twice will result in an exception.
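The "singleton identity" behavior can be sketched with a small metaclass, roughly in the spirit of TableSingleton. This is an illustrative stand-in, not the actual implementation (the real metaclass also handles the engine, redefine/mustexist/useexisting flags, etc.):

```python
# Sketch of the "singleton identity" behavior described above: constructing
# a Table with the same name/schema combination returns the same instance.
# Illustrative stand-in, not the actual TableSingleton metaclass.

class TableSingleton(type):
    def __call__(cls, name, schema=None, **kwargs):
        key = (schema, name)
        try:
            return cls._registry[key]
        except KeyError:
            table = super().__call__(name, schema=schema, **kwargs)
            cls._registry[key] = table
            return table

class Table(metaclass=TableSingleton):
    _registry = {}

    def __init__(self, name, schema=None):
        self.name = name
        self.schema = schema

t1 = Table('users')
t2 = Table('users')
t3 = Table('users', schema='audit')
print(t1 is t2)   # same name/schema yields the same instance
print(t1 is t3)   # a different schema yields a distinct Table
```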

def accept_schema_visitor(self, visitor)

traverses the given visitor across the Column objects inside this Table, then calls the visit_table method on the visitor.

def append_column(self, column)

def append_index(self, index)

def append_index_column(self, column, index=None, unique=None)

Add an index or a column to an existing index of the same name.

def append_item(self, item)

appends a Column item or other schema item to this Table.

def create(self, **params)

def deregister(self)

removes this table from its engine's table registry. this does not issue a SQL DROP statement.

def drop(self, **params)

def reload_values(self, *args)

clears out the columns and other properties of this Table, and reloads them from the given argument list. This is used with the "redefine" keyword argument sent to the metaclass constructor.

def toengine(self, engine, schema=None)

returns a singleton instance of this Table with a different engine

Module sqlalchemy.engine

Defines the SQLEngine class, which serves as the primary "database" object used throughout the sql construction and object-relational mapper packages. A SQLEngine is a facade around a single connection pool corresponding to a particular set of connection parameters, and provides thread-local transactional methods and statement execution methods for Connection objects. It also provides a facade around a Cursor object to allow richer column selection for result rows as well as type conversion operations, known as a ResultProxy.

A SQLEngine is provided to an application as a subclass that is specific to a particular type of DBAPI, and is the central switching point for abstracting different kinds of database behavior into a consistent set of behaviors. It provides a variety of factory methods to produce everything specific to a certain kind of database, including a Compiler as well as schema creation/dropping objects.

The term "database-specific" will be used to describe any object or function that has behavior corresponding to a particular vendor, such as mysql-specific, sqlite-specific, etc.

Module Functions
def create_engine(name, opts=None, **kwargs)

creates a new SQLEngine instance. There are two forms of calling this method.

In the first, the "name" argument is the type of engine to load, i.e. 'sqlite', 'postgres', 'oracle', 'mysql', and "opts" is a dictionary of options to be sent to the underlying DBAPI module to create a connection, usually including a hostname, username, password, etc.

In the second, the "name" argument is a URL in the form <enginename>://opt1=val1&opt2=val2, where <enginename> is the name as above and the contents of the option dictionary are spelled out as a URL-encoded string. The "opts" argument is not used.

In both cases, **kwargs represents options to be sent to the SQLEngine itself. A possibly partial listing of those options is as follows:

pool=None : an instance of sqlalchemy.pool.DBProxy or sqlalchemy.pool.Pool to be used as the underlying source for connections (DBProxy/Pool is described in the previous section). If None, a default DBProxy will be created using the engine's own database module with the given arguments.

echo=False : if True, the SQLEngine will log all statements as well as a repr() of their parameter lists to the engine's logger, which defaults to sys.stdout. A SQLEngine instance's "echo" data member can be modified at any time to turn logging on and off. If set to the string 'debug', result rows will be printed to the standard output as well.

logger=None : a file-like object where logging output can be sent, if echo is set to True. This defaults to sys.stdout.

module=None : used by Oracle and Postgres, this is a reference to a DBAPI2 module to be used instead of the engine's default module. For Postgres, the default is psycopg2, or psycopg1 if 2 cannot be found. For Oracle, its cx_Oracle. For mysql, MySQLdb.

use_ansi=True : used only by Oracle; when False, the Oracle driver attempts to support a particular "quirk" of some Oracle databases: the LEFT OUTER JOIN SQL syntax is not supported, and the "Oracle join" syntax of using <column1>(+)=<column2> must be used in order to achieve a LEFT OUTER JOIN. It's advised that the Oracle database be configured to have full ANSI support instead of using this feature.
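The URL form of the "name" argument can be illustrated with a small parsing sketch (illustrative only; this is not create_engine's internal code, and the option names are hypothetical):

```python
# Sketch of parsing the <enginename>://opt1=val1&opt2=val2 URL form
# described above (illustrative only; not SQLAlchemy internals).

def parse_engine_url(url):
    """Split an engine URL into (engine name, options dictionary)."""
    name, _, query = url.partition('://')
    opts = {}
    if query:
        for pair in query.split('&'):
            k, _, v = pair.partition('=')
            opts[k] = v
    return name, opts

# the option names here are hypothetical examples
print(parse_engine_url('postgres://host=localhost&user=scott'))
```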

def engine_descriptors()

provides a listing of all the database implementations supported. this data is provided as a list of dictionaries, where each dictionary contains the following key/value pairs:

name : the name of the engine, suitable for use in the create_engine function

description: a plain description of the engine.

arguments : a dictionary describing the name and description of each parameter used to connect to this engine's underlying DBAPI. This function is meant for usage in automated configuration tools that wish to query the user for database and connection information.

Class SQLSession(object)

represents a handle to the SQLEngine's connection pool. the default SQLSession maintains a distinct connection during transactions, and otherwise returns connections newly retrieved from the pool each time. the Pool is usually configured with use_threadlocal=True, so if a particular connection is already checked out, you'll get that same connection in the same thread. There can also be a "unique" SQLSession pushed onto the engine, which returns a connection via the unique_connection() method on Pool; this allows nested transactions to take place, or other operations upon more than one connection at a time.

def __init__(self, engine, parent=None)

def begin(self)

begins a transaction on this SQLSession's connection. repeated calls to begin() will increment a counter that must be decreased by corresponding commit() statements before an actual commit occurs. this is to provide "nested" behavior of transactions so that different functions in a particular call stack can call begin()/commit() independently of each other without knowledge of an existing transaction.

def commit(self)

commits the transaction started by begin(). If begin() was called multiple times, a counter will be decreased for each call to commit(), with the actual commit operation occurring when the counter reaches zero. this is to provide "nested" behavior of transactions so that different functions in a particular call stack can call begin()/commit() independently of each other without knowledge of an existing transaction.
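The begin()/commit() counting behavior can be sketched in a few lines of plain Python. This is an illustrative stand-in, not the actual SQLSession implementation:

```python
# Sketch of the nested begin()/commit() counter described above
# (illustrative; not the actual SQLSession implementation).

class CountingSession:
    def __init__(self):
        self.depth = 0
        self.committed = False

    def begin(self):
        # each begin() increments the counter
        self.depth += 1

    def commit(self):
        # each commit() decrements; the real commit fires at zero
        if self.depth > 0:
            self.depth -= 1
        if self.depth == 0:
            self.committed = True

    def rollback(self):
        # can be called at any depth; clears the counter entirely
        self.depth = 0

s = CountingSession()
s.begin()            # outer function starts a transaction
s.begin()            # inner function starts a "nested" one
s.commit()           # inner commit: counter drops to 1, no real commit yet
print(s.committed)   # False
s.commit()           # outer commit: counter reaches 0, real commit occurs
print(s.committed)   # True
```

This is why functions in a call stack can pair begin()/commit() without knowing whether an enclosing transaction already exists.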

connection = property()

the connection represented by this SQLSession. The connection is late-connecting, meaning the call to the connection pool only occurs when it is first called (and the pool will typically only connect the first time it is called as well)

def is_begun(self)

def pop(self)

def rollback(self)

rolls back the transaction on this SQLSession's connection. this can be called regardless of the "begin" counter value, i.e. can be called from anywhere inside a callstack. the "begin" counter is cleared.

Class SQLEngine(SchemaEngine)

The central "database" object used by an application. Subclasses of this object is used by the schema and SQL construction packages to provide database-specific behaviors, as well as an execution and thread-local transaction context. SQLEngines are constructed via the create_engine() function inside this package.

def __init__(self, pool=None, echo=False, logger=None, default_ordering=False, echo_pool=False, echo_uow=False, convert_unicode=False, encoding='utf-8', **params)

constructs a new SQLEngine. SQLEngines should be constructed via the create_engine() function which will construct the appropriate subclass of SQLEngine.

def begin(self)

begins a transaction on the current thread SQLSession.

def commit(self)

def compile(self, statement, parameters, **kwargs)

given a sql.ClauseElement statement plus optional bind parameters, creates a new instance of this engine's SQLCompiler, compiles the ClauseElement, and returns the newly compiled object.

def compiler(self, statement, parameters)

returns a sql.ClauseVisitor which will produce a string representation of the given ClauseElement and parameter dictionary. This object is usually a subclass of ansisql.ANSICompiler. compiler is called within the context of the compile() method.

def connect_args(self)

subclasses override this method to provide a two-item tuple containing the *args and **kwargs used to establish a connection.

def connection(self)

returns a managed DBAPI connection from this SQLEngine's connection pool.

def create(self, entity, **params)

creates a table or index within this engine's database connection given a schema.Table object.

def dbapi(self)

subclasses override this method to provide the DBAPI module used to establish connections.

def defaultrunner(self, proxy=None)

Returns a schema.SchemaVisitor instance that can execute the default values on a column. The base class for this visitor is the DefaultRunner class inside this module. This visitor will typically only receive schema.DefaultGenerator schema objects. The given proxy is a callable that takes a string statement and a dictionary of bind parameters to be executed. For engines that require positional arguments, the dictionary should be an instance of OrderedDict which returns its bind parameters in the proper order. defaultrunner is called within the context of the execute_compiled() method.

def dispose(self)

disposes of the underlying pool manager for this SQLEngine.

def do_begin(self, connection)

implementations might want to put logic here for turning autocommit on/off, etc.

def do_commit(self, connection)

implementations might want to put logic here for turning autocommit on/off, etc.

def do_rollback(self, connection)

implementations might want to put logic here for turning autocommit on/off, etc.

def drop(self, entity, **params)

drops a table or index within this engine's database connection given a schema.Table object.

def execute(self, statement, parameters=None, connection=None, cursor=None, echo=None, typemap=None, commit=False, return_raw=False, **kwargs)

executes the given string-based SQL statement with the given parameters.

The parameters can be a dictionary or a list, or a list of dictionaries or lists, depending on the paramstyle of the DBAPI. If the current thread has specified a transaction begin() for this engine, the statement will be executed in the context of the current transactional connection. Otherwise, a commit() will be performed immediately after execution, since the local pooled connection is returned to the pool after execution without a transaction set up.

In all error cases, a rollback() is immediately performed on the connection before propagating the exception outwards.

Other options include:

connection - a DBAPI connection to use for the execute. If None, a connection is pulled from this engine's connection pool.

echo - enables echo for this execution, which causes all SQL and parameters to be dumped to the engine's logging output before execution.

typemap - a map of column names mapped to sqlalchemy.types.TypeEngine objects. These will be passed to the created ResultProxy to perform post-processing on result-set values.

commit - if True, will automatically commit the statement after completion.

def execute_compiled(self, compiled, parameters, connection=None, cursor=None, echo=None, **kwargs)

executes the given compiled statement object with the given parameters.

The parameters can be a dictionary of key/value pairs, or a list of dictionaries for an executemany() style of execution. Engines that use positional parameters will convert the parameters to a list before execution.

If the current thread has specified a transaction begin() for this engine, the statement will be executed in the context of the current transactional connection. Otherwise, a commit() will be performed immediately after execution, since the local pooled connection is returned to the pool after execution without a transaction set up.

In all error cases, a rollback() is immediately performed on the connection before propagating the exception outwards.

Other options include:

connection - a DBAPI connection to use for the execute. If None, a connection is pulled from this engine's connection pool.

echo - enables echo for this execution, which causes all SQL and parameters to be dumped to the engine's logging output before execution.

typemap - a map of column names mapped to sqlalchemy.types.TypeEngine objects. These will be passed to the created ResultProxy to perform post-processing on result-set values.

commit - if True, will automatically commit the statement after completion.

func = property()

def get_default_schema_name(self)

returns the currently selected schema in the current connection.

def hash_key(self)

ischema = property()

returns an ISchema object for this engine, which allows access to information_schema tables (if supported)

def last_inserted_ids(self)

returns a thread-local list of the primary key values for the last insert statement executed. This does not apply to straight textual clauses; only to sql.Insert objects compiled against a schema.Table object, which are executed via statement.execute(). The order of items in the list is the same as that of the Table's 'primary_key' attribute. In some cases, this method may invoke a query back to the database to retrieve the data, based on the "lastrowid" value in the cursor.

def last_inserted_params(self)

returns a dictionary of the full parameter dictionary for the last compiled INSERT statement, including any ColumnDefaults or Sequences that were pre-executed. this value is thread-local.

def last_updated_params(self)

returns a dictionary of the full parameter dictionary for the last compiled UPDATE statement, including any ColumnDefaults that were pre-executed. this value is thread-local.

def lastrow_has_defaults(self)

returns True if the last row INSERTED via a compiled insert statement contained PassiveDefaults, indicating that the database inserted data beyond that which we gave it. this value is thread-local.

def log(self, msg)

logs a message using this SQLEngine's logger stream.

def multi_transaction(self, tables, func)

provides a transaction boundary across tables which may be in multiple databases. Given a list of tables and a function that operates upon them, a begin()/commit() pair is invoked for each distinct engine represented within those tables, and the function is executed within the context of those transactions. any exceptions will result in a rollback(). clearly, this approach only goes so far: if database A commits, then database B commits and fails, A is already committed. Any failure conditions have to be raised before anyone commits for this to be useful.
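The per-engine begin/commit/rollback choreography described above can be sketched with stand-in objects (illustrative only; FakeEngine and FakeTable are hypothetical, not the real API):

```python
# Sketch of the multi-engine transaction boundary described above.
# FakeEngine/FakeTable are illustrative stand-ins, not the real API.

class FakeEngine:
    def __init__(self, name):
        self.name = name
        self.log = []
    def begin(self):
        self.log.append('begin')
    def commit(self):
        self.log.append('commit')
    def rollback(self):
        self.log.append('rollback')

class FakeTable:
    def __init__(self, engine):
        self.engine = engine

def multi_transaction(tables, func):
    # begin() once per distinct engine, run func, then commit each;
    # roll every engine back if the function raises.
    engines = []
    for t in tables:
        if t.engine not in engines:
            engines.append(t.engine)
    for e in engines:
        e.begin()
    try:
        func()
    except Exception:
        for e in engines:
            e.rollback()
        raise
    for e in engines:
        e.commit()

db_a, db_b = FakeEngine('a'), FakeEngine('b')
tables = [FakeTable(db_a), FakeTable(db_b), FakeTable(db_a)]
multi_transaction(tables, lambda: None)
print(db_a.log, db_b.log)  # each distinct engine sees one begin/commit pair
```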

name = property()

def oid_column_name(self)

returns the oid column name for this engine, or None if the engine can't/won't support OID/ROWID.

paramstyle = property()

def pop_session(self, s=None)

restores the current thread's SQLSession to that before the last push_session. Returns the restored SQLSession object. Raises an exception if there is no SQLSession pushed onto the stack.

def post_exec(self, proxy, compiled, parameters, **kwargs)

called by execute_compiled after the compiled statement is executed.

def pre_exec(self, proxy, compiled, parameters, **kwargs)

called by execute_compiled before the compiled statement is executed.

def proxy(self, statement=None, parameters=None)

returns a callable which will execute the given statement string and parameter object. the parameter object is expected to be the result of a call to compiled.get_params(). This callable is a generic version of a connection/cursor-specific callable that is produced within the execute_compiled method, and is used for objects that require this style of proxy when outside of an execute_compiled method, primarily the DefaultRunner.

def push_session(self)

pushes a new SQLSession onto this engine, temporarily replacing the previous one for the current thread. The previous session can be restored by calling pop_session(). this allows the usage of a new connection and possibly transaction within a particular block, superseding the existing one, including any transactions that are in progress. Returns the new SQLSession object.

def reflecttable(self, table)

given a Table object, reflects its columns and properties from the database.

def rollback(self)

rolls back the transaction on the current thread's SQLSession.

def schemadropper(self, **params)

returns a schema.SchemaVisitor instance that can drop schemas, when it is invoked to traverse a set of schema objects. schemadropper is called via the drop() method.

def schemagenerator(self, **params)

returns a schema.SchemaVisitor instance that can generate schemas, when it is invoked to traverse a set of schema objects. schemagenerator is called via the create() method.

session = property()

returns the current thread's SQLSession

def supports_sane_rowcount(self)

Provided to indicate when MySQL is being used, which does not have standard behavior for the "rowcount" function on a statement handle.

def text(self, text, *args, **kwargs)

returns a sql.text() object for performing literal queries.

def transaction(self, func, *args, **kwargs)

executes the given function within a transaction boundary. this is a shortcut for explicitly calling begin(), commit(), and optionally rollback() when exceptions are raised. The given *args and **kwargs will be passed to the function as well, which could be handy in constructing decorators.

def type_descriptor(self, typeobj)

provides a database-specific TypeEngine object, given the generic object which comes from the types module. Subclasses will usually use the adapt_type() method in the types module to make this job easy.

def unique_connection(self)

returns a DBAPI connection from this SQLEngine's connection pool that is distinct from the current thread's connection.

Class ResultProxy

wraps a DBAPI cursor object to provide access to row columns based on integer position, case-insensitive column name, or by schema.Column object. e.g.:

row = fetchone()

col1 = row[0] # access via integer position

col2 = row['col2'] # access via name

col3 = row[mytable.c.mycol] # access via Column object

ResultProxy also contains a map of TypeEngine objects and will invoke the appropriate convert_result_value() method before returning columns.
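The three row-access styles can be demonstrated with a small stand-in row class (illustrative only; FakeColumn/FakeRow are hypothetical, not the actual RowProxy implementation):

```python
# Sketch of the three row-access styles described above, using a small
# stand-in row class (not the actual ResultProxy/RowProxy implementation).

class FakeColumn:
    def __init__(self, name):
        self.name = name

class FakeRow:
    def __init__(self, names, values):
        self._values = list(values)
        # lowercase the keys once, for case-insensitive name lookup
        self._by_name = {n.lower(): v for n, v in zip(names, values)}

    def __getitem__(self, key):
        if isinstance(key, int):
            return self._values[key]                 # integer position
        if isinstance(key, FakeColumn):
            return self._by_name[key.name.lower()]   # Column object
        return self._by_name[key.lower()]            # case-insensitive name

mycol = FakeColumn('MyCol')
row = FakeRow(['id', 'col2', 'MyCol'], [1, 'x', 3.5])
print(row[0], row['COL2'], row[mycol])
```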

def __init__(self, cursor, engine, typemap=None)

ResultProxy objects are constructed via the execute() method on SQLEngine.

def fetchall(self)

fetches all rows, just like DBAPI cursor.fetchall().

def fetchone(self)

fetches one row, just like DBAPI cursor.fetchone().

def last_inserted_ids(self)

def last_inserted_params(self)

def last_updated_params(self)

def lastrow_has_defaults(self)

def supports_sane_rowcount(self)

Class RowProxy

proxies a single cursor row for a parent ResultProxy.

def __init__(self, parent, row)

RowProxy objects are constructed by ResultProxy objects.

def items(self)

def keys(self)

def values(self)

Module sqlalchemy.sql

defines the base components of SQL expression trees.

Module Functions
def alias(*args, **params)

def and_(*clauses)

joins a list of clauses together by the AND operator. the & operator can be used as well.

def asc(column)

returns an ascending ORDER BY clause element, e.g.: order_by = [asc(table1.mycol)]

def between_(ctest, cleft, cright)

returns BETWEEN predicate clause (clausetest BETWEEN clauseleft AND clauseright)

def bindparam(key, value=None, type=None)

creates a bind parameter clause with the given key. An optional default value can be specified by the value parameter, and the optional type parameter is a sqlalchemy.types.TypeEngine object which indicates bind-parameter and result-set translation for this bind parameter.

def cast(clause, totype, **kwargs)

returns CAST function CAST(clause AS totype). Use with a sqlalchemy.types.TypeEngine object, e.g. cast(table.c.unit_price * table.c.qty, Numeric(10,4)) or cast(table.c.timestamp, DATE)

def column(text, table=None, type=None)

returns a textual column clause, relative to a table. this is the primitive version of schema.Column, which is a subclass of this object.

def delete(table, whereclause=None, **kwargs)

returns a DELETE clause element. This can also be called from a table directly via the table's delete() method. 'table' is the table to be deleted from. 'whereclause' is a ClauseElement describing the WHERE condition of the DELETE statement.

def desc(column)

returns a descending ORDER BY clause element, e.g.: order_by = [desc(table1.mycol)]

def exists(*args, **params)

def insert(table, values=None, **kwargs)

returns an INSERT clause element. This can also be called from a table directly via the table's insert() method. 'table' is the table to be inserted into. 'values' is a dictionary which specifies the column specifications of the INSERT, and is optional. If left as None, the column specifications are determined from the bind parameters used during the compile phase of the INSERT statement. If the bind parameters also are None during the compile phase, then the column specifications will be generated from the full list of table columns.

If both 'values' and compile-time bind parameters are present, the compile-time bind parameters override the information specified within 'values' on a per-key basis.

The keys within 'values' can be either Column objects or their string identifiers. Each key may reference one of: a literal data value (i.e. string, number, etc.), a Column object, or a SELECT statement. If a SELECT statement is specified which references this INSERT statement's table, the statement will be correlated against the INSERT statement.
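The per-key override between 'values' and compile-time bind parameters amounts to a dictionary merge, which can be sketched directly (the column names here are hypothetical examples):

```python
# Sketch of the per-key override described above: compile-time bind
# parameters take precedence over the 'values' dictionary (illustrative;
# the column names are hypothetical examples).

values = {'name': 'ed', 'fullname': 'Ed Jones'}
compile_params = {'fullname': 'Edward Jones'}

merged = dict(values)
merged.update(compile_params)  # compile-time parameters win per key
print(merged)
```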

def join(left, right, onclause=None, **kwargs)

returns a JOIN clause element (regular inner join), given the left and right hand expressions, as well as the ON condition's expression. To chain joins together, use the resulting Join object's "join()" or "outerjoin()" methods.

def literal(value, type=None)

returns a literal clause, bound to a bind parameter. literal clauses are created automatically when used as the right-hand side of a boolean or math operation against a column object. use this function when a literal is needed on the left-hand side (and optionally on the right as well). the optional type parameter is a sqlalchemy.types.TypeEngine object which indicates bind-parameter and result-set translation for this literal.

def not_(clause)

returns a negation of the given clause, i.e. NOT(clause). the ~ operator can be used as well.

def null()

returns a Null object, which compiles to NULL in a sql statement.

def or_(*clauses)

joins a list of clauses together by the OR operator. the | operator can be used as well.

def outerjoin(left, right, onclause=None, **kwargs)

returns an OUTER JOIN clause element, given the left and right hand expressions, as well as the ON condition's expression. To chain joins together, use the resulting Join object's "join()" or "outerjoin()" methods.

def select(columns=None, whereclause=None, from_obj=[], **kwargs)

returns a SELECT clause element. this can also be called via the table's select() method. Arguments are:

'columns' : a list of columns and/or selectable items to select columns from.

'whereclause' : a text or ClauseElement expression which will form the WHERE clause.

'from_obj' : a list of additional "FROM" objects, such as Join objects, which will extend or override the default "from" objects created from the column list and the whereclause.

**kwargs : additional parameters for the Select object.

def subquery(alias, *args, **kwargs)

def table(name, *columns)

returns a table clause. this is a primitive version of the schema.Table object, which is a subclass of this object.

def text(text, engine=None, *args, **kwargs)

creates literal text to be inserted into a query. When constructing a query from a select(), update(), insert() or delete(), using plain strings for argument values will usually result in text objects being created automatically. Use this function when creating textual clauses outside of other ClauseElement objects, or optionally wherever plain text is to be used. Arguments include:

text - the text of the SQL statement to be created. use :<param> to specify bind parameters; they will be compiled to their engine-specific format.

engine - an optional engine to be used for this text query. Alternatively, call the text() method off the engine directly.

bindparams - a list of bindparam() instances which can be used to define the types and/or initial values for the bind parameters within the textual statement; the keynames of the bindparams must match those within the text of the statement. The types will be used for pre-processing on bind values.

typemap - a dictionary mapping the names of columns represented in the SELECT clause of the textual statement to type objects, which will be used to perform post-processing on columns within the result set (for textual statements that produce result sets).
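The :&lt;param&gt; syntax above is compiled to the engine's specific bind-parameter format. As a rough illustration, the sketch below translates named parameters to the 'pyformat' style used by some DBAPI modules; compile_named_params is a hypothetical helper, not part of the sqlalchemy API, and real compilation is engine-specific:

```python
import re

def compile_named_params(statement):
    # translate :name bind parameters into pyformat (%(name)s) style,
    # as an engine targeting a 'pyformat' DBAPI might do.
    # a simplified sketch; the real compiler is engine-specific.
    return re.sub(r':(\w+)', r'%(\1)s', statement)

sql = compile_named_params("SELECT * FROM users WHERE name = :name")
# sql == "SELECT * FROM users WHERE name = %(name)s"
```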

def union(*selects, **params)

def union_all(*selects, **params)

def update(table, whereclause=None, values=None, **kwargs)

returns an UPDATE clause element. This can also be called from a table directly via the table's update() method. 'table' is the table to be updated. 'whereclause' is a ClauseElement describing the WHERE condition of the UPDATE statement. 'values' is a dictionary which specifies the SET conditions of the UPDATE, and is optional. If left as None, the SET conditions are determined from the bind parameters used during the compile phase of the UPDATE statement. If the bind parameters also are None during the compile phase, then the SET conditions will be generated from the full list of table columns.

If both 'values' and compile-time bind parameters are present, the compile-time bind parameters override the information specified within 'values' on a per-key basis.

The keys within 'values' can be either Column objects or their string identifiers. Each value may be one of: a literal data value (e.g. a string or number), a Column object, or a SELECT statement. If a SELECT statement is specified which references this UPDATE statement's table, the statement will be correlated against the UPDATE statement.
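The per-key override rule above amounts to a dictionary merge. A plain-Python model of the precedence (not the actual compiler logic):

```python
def effective_set_clause(values, compile_params):
    # compile-time bind parameters override 'values' on a per-key basis;
    # keys present only in 'values' are kept unchanged.
    # a plain-dict model of the described precedence, not compiler code.
    merged = dict(values or {})
    merged.update(compile_params or {})
    return merged

params = effective_set_clause({'name': 'ed', 'age': 30}, {'age': 31})
# params == {'name': 'ed', 'age': 31}
```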

back to section top
Class ClauseParameters(OrderedDict)

represents a dictionary/iterator of bind parameter key names/values. Includes parameters compiled with a Compiled object as well as additional arguments passed to the Compiled object's get_params() method. Parameter values will be converted as per the TypeEngine objects present in the bind parameter objects. The non-converted value can be retrieved via the get_original method. For Compiled objects that compile positional parameters, the values() iteration of the object will return the parameter values in the correct order.

def __init__(self, engine=None)

def get_original(self, key)

def get_original_dict(self)

def get_raw_dict(self)

def set_parameter(self, key, value, bindparam)

def values(self)

back to section top
Class Compiled(ClauseVisitor)

represents a compiled SQL expression. the __str__ method of the Compiled object should produce the actual text of the statement. Compiled objects are specific to the database library that created them, and also may or may not be specific to the columns referenced within a particular set of bind parameters. In no case should the Compiled object be dependent on the actual values of those bind parameters, even though it may reference those values as defaults.

def __init__(self, statement, parameters, engine=None)

constructs a new Compiled object.

statement - ClauseElement to be compiled

parameters - optional dictionary indicating a set of bind parameters specified with this Compiled object. These parameters are the "default" values corresponding to the ClauseElement's BindParamClauses when the Compiled is executed. In the case of an INSERT or UPDATE statement, these parameters will also result in the creation of new BindParamClause objects for each key and will also affect the generated column list in an INSERT statement and the SET clauses of an UPDATE statement. The keys of the parameter dictionary can either be the string names of columns or ColumnClause objects.

engine - optional SQLEngine to compile this statement against

def compile(self)

def execute(self, *multiparams, **params)

executes this compiled object using the AbstractEngine it is bound to.

def get_params(self, **params)

returns the bind params for this compiled object. Will start with the default parameters specified when this Compiled object was first constructed, and will override those values with those sent via **params, which are key/value pairs. Each key should match one of the BindParamClause objects compiled into this object; either the "key" or "shortname" property of the BindParamClause.

def scalar(self, *multiparams, **params)

executes this compiled object via the execute() method, then returns the first column of the first row. Useful for executing functions, sequences, rowcounts, etc.

back to section top
Class ClauseElement(object)

base class for elements of a programmatically constructed SQL expression.

def accept_visitor(self, visitor)

accepts a ClauseVisitor and calls the appropriate visit_xxx method.

def compare(self, other)

compares this ClauseElement to the given ClauseElement. Subclasses should override the default behavior, which is a straight identity comparison.

def compile(self, engine=None, parameters=None, typemap=None, compiler=None)

compiles this SQL expression using its underlying SQLEngine to produce a Compiled object. If no engine can be found, an ANSICompiler is used with no engine. 'parameters' is a dictionary representing the default bind parameters to be used with the statement.

def copy_container(self)

should return a copy of this ClauseElement, iff this ClauseElement contains other ClauseElements. Otherwise, it should be left alone to return self. This is used to create copies of expression trees that still reference the same "leaf nodes". The new structure can then be restructured without affecting the original.
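A toy illustration of the copy_container contract — the tree structure is copied, but the leaf nodes are shared between the original and the copy. The Leaf/Container classes here are illustrative stand-ins, not the sql module's classes:

```python
class Leaf:
    # a node containing no child clauses: copy_container returns self
    def __init__(self, name):
        self.name = name
    def copy_container(self):
        return self

class Container:
    # a node containing other clauses: copy the structure, but keep
    # references to the same leaf nodes, so the copied tree can be
    # restructured without affecting the original
    def __init__(self, *children):
        self.children = list(children)
    def copy_container(self):
        return Container(*[c.copy_container() for c in self.children])

a, b = Leaf('a'), Leaf('b')
tree = Container(Container(a), b)
clone = tree.copy_container()
# clone is a new structure, but clone.children[1] is still the leaf b
```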

engine = property()

attempts to locate a SQLEngine within this ClauseElement structure, or returns None if none found.

def execute(self, *multiparams, **params)

def is_selectable(self)

returns True if this ClauseElement is Selectable, i.e. it contains a list of Column objects and can be used as the target of a select statement.

def scalar(self, *multiparams, **params)

def using(self, abstractengine)

back to section top
Class TableClause(FromClause)

def __init__(self, name, *columns)

def accept_visitor(self, visitor)

def alias(self, name=None)

def append_column(self, c)

c = property()

columns = property()

def count(self, whereclause=None, **params)

def delete(self, whereclause=None)

foreign_keys = property()

indexes = property()

def insert(self, values=None)

def join(self, right, *args, **kwargs)

original_columns = property()

def outerjoin(self, right, *args, **kwargs)

primary_key = property()

def select(self, whereclause=None, **params)

def update(self, whereclause=None, values=None)

back to section top
Class ColumnClause(ColumnElement)

represents a textual column clause in a SQL statement. May or may not be bound to an underlying Selectable.

def __init__(self, text, selectable=None, type=None)

def accept_visitor(self, visitor)

def to_selectable(self, selectable)

given a Selectable, returns this column's equivalent in that Selectable, if any. for example, this could translate the column "name" from a Table object to an Alias of a Select off of that Table object.

back to section top
Module sqlalchemy.pool

provides a connection pool implementation, which optionally manages connections on a thread local basis. Also provides a DBAPI2 transparency layer so that pools can be managed automatically, based on module type and connect arguments, simply by calling regular DBAPI connect() methods.

Module Functions
def clear_managers()

removes all current DBAPI2 managers. all pools and connections are disposed.

def manage(module, **params)

given a DBAPI2 module and pool management parameters, returns a proxy for the module that will automatically pool connections. Options are delivered to an underlying DBProxy object.

Arguments: module : a DBAPI2 database module. Options: echo=False : if set to True, connections checked out from and returned to the pool will be logged to the standard output, as well as pool sizing information.

use_threadlocal=True : if set to True, repeated calls to connect() within the same application thread will be guaranteed to return the same connection object, if one has already been retrieved from the pool and has not been returned yet. This allows code to retrieve a connection from the pool, and then while still holding on to that connection, to call other functions which also ask the pool for a connection of the same arguments; those functions will act upon the same connection that the calling method is using.

poolclass=QueuePool : the default class used by the pool module to provide pooling. QueuePool uses the Python Queue.Queue class to maintain a list of available connections.

pool_size=5 : used by QueuePool - the size of the pool to be maintained. This is the largest number of connections that will be kept persistently in the pool. Note that the pool begins with no connections; once this number of connections is requested, that number of connections will remain.

max_overflow=10 : the maximum overflow size of the pool. When the number of checked-out connections reaches the size set in pool_size, additional connections will be returned up to this limit. When those additional connections are returned to the pool, they are disconnected and discarded. It follows then that the total number of simultaneous connections the pool will allow is pool_size + max_overflow, and the total number of "sleeping" connections the pool will allow is pool_size. max_overflow can be set to -1 to indicate no overflow limit; no limit will be placed on the total number of concurrent connections.
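The sizing rules above can be sketched with a toy pool built on the same Queue.Queue idea QueuePool uses: at most pool_size connections are kept persistently, up to max_overflow extra connections may be checked out, and overflow connections are discarded on return. This is an illustrative model, not the pool module's implementation:

```python
import queue

class MiniQueuePool:
    # toy model of the QueuePool sizing rules described above
    def __init__(self, creator, pool_size=5, max_overflow=10):
        self._creator = creator
        self._pool = queue.Queue(pool_size)   # holds "sleeping" connections
        self._max = pool_size + max_overflow  # total simultaneous limit
        self._checked_out = 0

    def get(self):
        if self._checked_out >= self._max:
            raise RuntimeError("connection limit reached")
        self._checked_out += 1
        try:
            return self._pool.get_nowait()    # reuse a pooled connection
        except queue.Empty:
            return self._creator()            # or open a new one

    def return_conn(self, conn):
        self._checked_out -= 1
        try:
            self._pool.put_nowait(conn)       # keep up to pool_size
        except queue.Full:
            pass                              # overflow connection: discard

pool = MiniQueuePool(object, pool_size=1, max_overflow=1)
c1, c2 = pool.get(), pool.get()   # one pooled + one overflow connection
pool.return_conn(c1)
pool.return_conn(c2)              # discarded: the queue is already full
```

A third simultaneous get() on this pool would raise, since pool_size + max_overflow = 2.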

back to section top
Class DBProxy(object)

proxies a DBAPI2 connect() call to a pooled connection keyed to the specific connect parameters.

def __init__(self, module, poolclass=QueuePool, **params)

'module' is a DBAPI2 module. 'poolclass' is a Pool class, defaulting to QueuePool. other parameters are sent to the Pool object's constructor.

def close(self)

def connect(self, *args, **params)

connects to a database using this DBProxy's module and the given connect arguments. if the arguments match an existing pool, the connection will be returned from the pool's current thread-local connection instance, or if there is no thread-local connection instance it will be checked out from the set of pooled connections. If the pool has no available connections and allows new connections to be created, a new database connection will be made.
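The keying scheme described above — one pool per distinct set of connect arguments — can be modeled with a dictionary. This is an illustrative sketch, not the DBProxy implementation; in particular, the real proxy returns connections to the pool automatically when they are closed, whereas this model uses an explicit return_conn for clarity:

```python
class MiniDBProxy:
    # toy model: one pool of connections per distinct connect() arguments
    def __init__(self, connect_func):
        self._connect = connect_func   # stands in for module.connect
        self._pools = {}

    def _key(self, args, kw):
        return (args, tuple(sorted(kw.items())))

    def connect(self, *args, **kw):
        pool = self._pools.setdefault(self._key(args, kw), [])
        if pool:
            return pool.pop()              # reuse a pooled connection
        return self._connect(*args, **kw)  # otherwise open a new one

    def return_conn(self, conn, *args, **kw):
        self._pools.setdefault(self._key(args, kw), []).append(conn)

proxy = MiniDBProxy(lambda *a, **kw: object())
conn = proxy.connect("db1", user="ed")
proxy.return_conn(conn, "db1", user="ed")
# a later connect() with the same arguments reuses conn;
# different arguments go to a separate pool
```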

def dispose(self, *args, **params)

disposes the connection pool referenced by the given connect arguments.

def get_pool(self, *args, **params)

back to section top
Class Pool(object)

def __init__(self, echo=False, use_threadlocal=True, logger=None)

def connect(self)

def do_get(self)

def do_return_conn(self, conn)

def do_return_invalid(self)

def get(self)

def log(self, msg)

def return_conn(self, conn)

def return_invalid(self)

def status(self)

def unique_connection(self)

back to section top
Class QueuePool(Pool)

uses Queue.Queue to maintain a fixed-size list of connections.

def __init__(self, creator, pool_size=5, max_overflow=10, **params)

def checkedin(self)

def checkedout(self)

def do_get(self)

def do_return_conn(self, conn)

def do_return_invalid(self)

def overflow(self)

def size(self)

def status(self)

back to section top
Class SingletonThreadPool(Pool)

Maintains one connection per thread, never moving a connection to another thread. this is used for SQLite and other databases with a similar restriction.
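The per-thread policy can be sketched with Python's threading.local: each thread that asks for a connection gets its own, created on demand, and never sees another thread's. An illustrative model, not the SingletonThreadPool implementation:

```python
import threading

class MiniSingletonThreadPool:
    # toy model: one dedicated connection per thread
    def __init__(self, creator):
        self._creator = creator
        self._local = threading.local()   # per-thread storage

    def connect(self):
        if not hasattr(self._local, "conn"):
            self._local.conn = self._creator()
        return self._local.conn

pool = MiniSingletonThreadPool(object)
conn = pool.connect()   # every call in this thread returns the same object
```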

def __init__(self, creator, **params)

def do_get(self)

def do_return_conn(self, conn)

def do_return_invalid(self)

def status(self)

back to section top
Module sqlalchemy.mapping

the mapper package provides object-relational functionality, building upon the schema and sql packages and tying operations to class properties and constructors.

Module Functions
def assign_mapper(class_, *args, **params)

def backref(name, **kwargs)

def cascade_mappers(*classes_or_mappers)

given a list of classes and/or mappers, identifies the foreign key relationships between the given mappers or corresponding class mappers, and creates relation() objects representing those relationships, including a backreference. Attempts to find the "secondary" table in a many-to-many relationship as well. The names of the relations will be a lowercase version of the related class. In the case of one-to-many or many-to-many, the name will be "pluralized", which currently is based on the English language (i.e. an 's' or 'es' added to it).
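The "pluralized" relation naming described above can be approximated in a few lines; this is a naive sketch in the same spirit (add 'es' after sibilant endings, otherwise 's'), not the exact rule cascade_mappers applies:

```python
def pluralize(name):
    # naive English pluralization, as an approximation of the
    # "'s' or 'es' added" behavior described above
    if name.endswith(('s', 'x', 'z', 'ch', 'sh')):
        return name + 'es'
    return name + 's'

# a one-to-many relation from User to Address might be named:
relation_name = pluralize('address')
# relation_name == 'addresses'
```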

def class_mapper(class_, entity_name=None)

given a class and optional entity_name, returns the primary Mapper associated with them.

def clear_mappers()

removes all mappers that have been created thus far. when new mappers are created, they will be assigned to their classes as their primary mapper.

def defer(name, **kwargs)

returns a MapperOption that will convert the column property of the given name into a deferred load. Used with mapper.options()

def deferred(*columns, **kwargs)

returns a DeferredColumnProperty, which indicates this object attribute should only be loaded from its corresponding table column when first accessed.

def eagerload(name, **kwargs)

returns a MapperOption that will convert the property of the given name into an eager load. Used with mapper.options()

def extension(ext)

returns a MapperOption that will add the given MapperExtension to the mapper returned by mapper.options().

def lazyload(name, **kwargs)

returns a MapperOption that will convert the property of the given name into a lazy load. Used with mapper.options()

def mapper(class_, table=None, *args, **params)

returns a new or already cached Mapper object.

def noload(name, **kwargs)

returns a MapperOption that will convert the property of the given name into a non-load. Used with mapper.options()

def object_mapper(object)

given an object, returns the primary Mapper associated with the object or the object's class.

def relation(*args, **kwargs)

provides a relationship of a primary Mapper to a secondary Mapper, which corresponds to a parent-child or associative table relationship.

def undefer(name, **kwargs)

returns a MapperOption that will convert the column property of the given name into a non-deferred (regular column) load. Used with mapper.options.

back to section top
Class Mapper(object)

Persists object instances to and from schema.Table objects via the sql package. Instances of this class should be constructed through this package's mapper() or relation() function.

def __init__(self, class_, table, primarytable=None, properties=None, primary_key=None, is_primary=False, inherits=None, inherit_condition=None, extension=None, order_by=False, allow_column_override=False, entity_name=None, always_refresh=False, version_id_col=None, construct_new=False, **kwargs)

def add_property(self, key, prop)

adds an additional property to this mapper. this is the same as if it were specified within the 'properties' argument to the constructor. if the named property already exists, this will replace it. Useful for circular relationships, or overriding the parameters of auto-generated properties such as backreferences.

def compile(self, whereclause=None, **options)

works like select(), except returns the SQL statement object without compiling or executing it.

def copy(self, **kwargs)

def count(self, whereclause=None, params=None, **kwargs)

calls count() on this mapper's default Query object.

def count_by(self, *args, **params)

calls count_by() on this mapper's default Query object.

def delete_obj(self, objects, uow)

called by a UnitOfWork object to delete objects, which involves a DELETE statement for each table used by this mapper, for each object in the list.

def get(self, *ident, **kwargs)

calls get() on this mapper's default Query object.

def get_by(self, *args, **params)

calls get_by() on this mapper's default Query object.

def has_eager(self)

returns True if one of the properties attached to this Mapper is eager loading

def identity(self, instance)

returns the identity (list of primary key values) for the given instance. The list of values can be fed directly into the get() method as mapper.get(*key).

def identity_key(self, *primary_key)

returns the instance key for the given identity value. this is a global tracking object used by the objectstore, and is usually available off a mapped object as instance._instance_key.

def instance_key(self, instance)

returns the instance key for the given instance. this is a global tracking object used by the objectstore, and is usually available off a mapped object as instance._instance_key.

def instances(self, cursor, *mappers, **kwargs)

given a cursor (ResultProxy) from an SQLEngine, returns a list of object instances corresponding to the rows in the cursor.

def is_assigned(self, instance)

returns True if this mapper is the primary mapper for the given instance. this is dependent not only on class assignment but the optional "entity_name" parameter as well.

def options(self, *options, **kwargs)

uses this mapper as a prototype for a new mapper with different behavior. *options is a list of option directives, which include eagerload(), lazyload(), and noload()

def populate_instance(self, session, instance, row, identitykey, imap, isnew, frommapper=None)

query = property()

returns an instance of sqlalchemy.mapping.query.Query, which implements all the query-constructing methods such as get(), select(), select_by(), etc. The default Query object uses the global thread-local Session from the objectstore package. To get a Query object for a specific Session, call the using(session) method.

def register_deleted(self, obj, uow)

def register_dependencies(self, uowcommit, *args, **kwargs)

called by an instance of objectstore.UOWTransaction to register which mappers are dependent on which, as well as DependencyProcessor objects which will process lists of objects in between saves and deletes.

def save_obj(self, objects, uow, postupdate=False)

called by a UnitOfWork object to save objects, which involves either an INSERT or an UPDATE statement for each table used by this mapper, for each element of the list.

def select(self, arg=None, **kwargs)

calls select() on this mapper's default Query object.

def select_by(self, *args, **params)

calls select_by() on this mapper's default Query object.

def select_statement(self, statement, **params)

calls select_statement() on this mapper's default Query object.

def select_text(self, text, **params)

def select_whereclause(self, whereclause=None, params=None, **kwargs)

calls select_whereclause() on this mapper's default Query object.

def selectfirst(self, *args, **params)

calls selectfirst() on this mapper's default Query object.

def selectfirst_by(self, *args, **params)

calls selectfirst_by() on this mapper's default Query object.

def selectone(self, *args, **params)

calls selectone() on this mapper's default Query object.

def selectone_by(self, *args, **params)

calls selectone_by() on this mapper's default Query object.

def set_property(self, key, prop)

def translate_row(self, tomapper, row)

attempts to take a row and translate its values to a row that can be understood by another mapper. breaks the column references down to their bare keynames to accomplish this. So far this works for the various polymorphic examples.

def using(self, session)

returns a new Query object with the given Session.

back to section top
Class MapperExtension(object)

def __init__(self)

def after_insert(self, mapper, instance)

called after an object instance has been INSERTed

def after_update(self, mapper, instance)

called after an object instance is UPDATEd

def append_result(self, mapper, row, imap, result, instance, isnew, populate_existing=False)

called when an object instance is being appended to a result list. If this method returns True, it is assumed that the mapper should do the appending, else if this method returns False, it is assumed that the append was handled by this method.

mapper - the mapper doing the operation

row - the result row from the database

imap - a dictionary that is storing the running set of objects collected from the current result set

result - an instance of util.HistoryArraySet(), which may be an attribute on an object if this is a related object load (lazy or eager). use result.append_nohistory(value) to append objects to this list.

instance - the object instance to be appended to the result

isnew - indicates if this is the first time we have seen this object instance in the current result set. if you are selecting from a join, such as an eager load, you might see the same object instance many times in the same result set.

populate_existing - usually False, indicates if object instances that were already in the main identity map, i.e. were loaded by a previous select(), get their attributes overwritten

def before_delete(self, mapper, instance)

called before an object instance is DELETEd

def before_insert(self, mapper, instance)

called before an object instance is INSERTed into its table. this is a good place to set up primary key values and similar values that aren't handled otherwise.

def before_update(self, mapper, instance)

called before an object instance is UPDATEd

def chain(self, ext)

def create_instance(self, mapper, row, imap, class_)

called when a new object instance is about to be created from a row. the method can choose to create the instance itself, or it can return None to indicate normal object creation should take place.

mapper - the mapper doing the operation

row - the result row from the database

imap - a dictionary that is storing the running set of objects collected from the current result set

class_ - the class we are mapping.

def populate_instance(self, mapper, session, instance, row, identitykey, imap, isnew)

called right before the mapper, after creating an instance from a row, passes the row to its MapperProperty objects which are responsible for populating the object's attributes. If this method returns True, it is assumed that the mapper should do the population itself; if it returns False, it is assumed that the population was handled by this method. Essentially, this method is used to have a different mapper populate the object:

    def populate_instance(self, mapper, session, instance, row, identitykey, imap, isnew):
        othermapper.populate_instance(session, instance, row, identitykey, imap, isnew, frommapper=mapper)
        return True

def select(self, query, *args, **kwargs)

overrides the select method of the Query object

def select_by(self, query, *args, **kwargs)

overrides the select_by method of the Query object

back to section top
Module sqlalchemy.mapping.query

Class Query(object)

encapsulates the object-fetching operations provided by Mappers.

def __init__(self, mapper, **kwargs)

def count(self, whereclause=None, params=None, **kwargs)

def count_by(self, *args, **params)

returns the count of instances based on the given clauses and key/value criterion. The criterion is constructed in the same way as the select_by() method.

def get(self, *ident, **kwargs)

returns an instance of the object based on the given identifier, or None if not found. The *ident argument is a list of primary key columns in the order of the table def's primary key columns.

def get_by(self, *args, **params)

returns a single object instance based on the given key/value criterion. this is either the first value in the result list, or None if the list is empty.

the keys are mapped to property or column names mapped by this mapper's Table, and the values are coerced into a WHERE clause separated by AND operators. If the local property/column names don't contain the key, a search will be performed against this mapper's immediate list of relations as well, forming the appropriate join conditions if a matching property is located.

e.g. u = usermapper.get_by(user_name = 'fred')
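The keyword-to-criterion conversion above can be pictured as building an AND-joined WHERE clause from the keyword names; the real code builds ClauseElements and bind parameters rather than formatting text, so this is only a rough textual model:

```python
def kwargs_to_where(**criterion):
    # rough model of how key/value criterion becomes a WHERE clause
    # joined by AND; the real mapper builds ClauseElements, not strings.
    # keys are sorted here only to make the output deterministic.
    return " AND ".join("%s = :%s" % (k, k) for k in sorted(criterion))

clause = kwargs_to_where(user_name='fred', status='active')
# clause == "status = :status AND user_name = :user_name"
```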

def instances(self, *args, **kwargs)

props = property()

def select(self, arg=None, **kwargs)

selects instances of the object from the database.

arg can be any ClauseElement, which will form the criterion with which to load the objects.

For more advanced usage, arg can also be a Select statement object, which will be executed and its resulting rowset used to build new object instances. in this case, the developer must ensure that an adequate set of columns exists in the rowset with which to build new object instances.

def select_by(self, *args, **params)

returns an array of object instances based on the given clauses and key/value criterion.

*args is a list of zero or more ClauseElements which will be connected by AND operators. **params is a set of zero or more key/value parameters which are converted into ClauseElements. the keys are mapped to property or column names mapped by this mapper's Table, and the values are coerced into a WHERE clause separated by AND operators. If the local property/column names don't contain the key, a search will be performed against this mapper's immediate list of relations as well, forming the appropriate join conditions if a matching property is located.

e.g. result = usermapper.select_by(user_name = 'fred')

def select_statement(self, statement, **params)

def select_text(self, text, **params)

def select_whereclause(self, whereclause=None, params=None, **kwargs)

def selectfirst(self, *args, **params)

works like select(), but only returns the first result by itself, or None if no objects returned.

def selectfirst_by(self, *args, **params)

works like select_by(), but only returns the first result by itself, or None if no objects returned. Synonymous with get_by()

def selectone(self, *args, **params)

works like selectfirst(), but throws an error if not exactly one result was returned.

def selectone_by(self, *args, **params)

works like selectfirst_by(), but throws an error if not exactly one result was returned.

session = property()

table = property()

back to section top
Module sqlalchemy.mapping.objectstore

provides the Session object and a function-oriented convenience interface. This is the "front-end" to the Unit of Work system in unitofwork.py. Issues of "scope" are dealt with here, primarily through the important function get_session(), which is where mappers and units of work go to get a handle on the current thread-local context.
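The thread-local scoping get_session() provides can be modeled with threading.local: each thread gets its own current session, created on demand. A toy sketch of the scoping idea only, with a plain object standing in for a Session; this is not the objectstore implementation:

```python
import threading

_registry = threading.local()   # per-thread storage for the current session

def current_session():
    # toy model of thread-local session scoping: create a session the
    # first time a thread asks, then keep handing back the same one.
    # the Session stand-in here is a plain object, purely illustrative.
    if not hasattr(_registry, "current"):
        _registry.current = object()
    return _registry.current

sess = current_session()   # stable within a thread across repeated calls
```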

Module Functions
def begin()

deprecated. use s = Session(new_imap=False).

def class_mapper(class_)

def clear()

removes all current UnitOfWorks and IdentityMaps for this thread and establishes a new one. It is probably a good idea to discard all current mapped object instances, as they are no longer in the Identity Map.

def commit(*obj)

deprecated; use flush(*obj)

def delete(*obj)

registers the given objects as to be deleted upon the next commit

def expire(*obj)

invalidates the data in the given objects and sets them to refresh themselves the next time they are requested.

def expunge(*obj)

def flush(*obj)

flushes the current UnitOfWork transaction. if a transaction was begun via begin(), flushes only those objects that were created, modified, or deleted since that begin statement. otherwise flushes all objects that have been changed.

if individual objects are submitted, then only those objects are committed, and the begin/commit cycle is not affected.

def get_id_key(ident, class_, entity_name=None)

def get_row_key(row, class_, primary_key, entity_name=None)

def get_session(obj=None)

def has_instance(instance)

returns True if the current thread-local IdentityMap contains the given instance

def has_key(key)

returns True if the current thread-local IdentityMap contains the given instance key

def import_instance(instance)

def instance_key(instance)

returns the IdentityMap key for the given instance

def is_dirty(obj)

returns True if the given object is in the current UnitOfWork's new or dirty list, or if it's a modified list attribute on an object.

def mapper(*args, **params)

def object_mapper(obj)

def pop_session()

def push_session(sess)

def refresh(*obj)

reloads the state of this object from the database, and cancels any in-memory changes.

def using_session(sess, func)

back to section top
Class LegacySession(Session)

def __init__(self, nest_on=None, hash_key=None, **kwargs)

def begin(self)

begins a new UnitOfWork transaction and returns a transaction-holding object. commit() or rollback() should be called on the returned object. commit() on the Session will do nothing while a transaction is pending, and further calls to begin() will return no-op transactional objects.
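The protocol above — the first begin() returns the "active" transaction marker, nested begin() calls return no-op markers, and only the active marker's commit() actually commits — can be sketched as follows. MiniSession/MiniTrans are illustrative stand-ins, not the Session/SessionTrans classes:

```python
class MiniTrans:
    # toy model of a SessionTrans marker: only the 'active' one commits
    def __init__(self, session, isactive):
        self.session, self.isactive = session, isactive

    def commit(self):
        if self.isactive:                  # no-op markers do nothing
            self.session.committed += 1
            self.session._in_txn = False

class MiniSession:
    # toy model of the begin()/commit() protocol described above
    def __init__(self):
        self._in_txn = False
        self.committed = 0

    def begin(self):
        active = not self._in_txn          # only the outermost begin is active
        self._in_txn = True
        return MiniTrans(self, active)

s = MiniSession()
outer = s.begin()     # the active transaction marker
inner = s.begin()     # a no-op marker
inner.commit()        # does nothing
outer.commit()        # actually commits
```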

def commit(self, *objects)

commits the current UnitOfWork transaction. called with no arguments, this is only used for "implicit" transactions when there was no begin(). if individual objects are submitted, then only those objects are committed, and the begin/commit cycle is not affected.

def was_popped(self)

def was_pushed(self)

back to section top
Class SessionTrans(object)

returned by Session.begin(), denotes a transactionalized UnitOfWork instance. call commit() on this to commit the transaction.

def __init__(self, parent, uow, isactive)

def begin(self)

calls begin() on the underlying Session object, returning a new no-op SessionTrans object.

def commit(self)

commits the transaction noted by this SessionTrans object.

isactive = property()

True if this SessionTrans is the 'active' transaction marker, else it's a no-op.

parent = property()

returns the parent Session of this SessionTrans object.

def rollback(self)

rolls back the current UnitOfWork transaction, in the case that begin() has been called. The changes logged since the begin() call are discarded.

uow = property()

returns the parent UnitOfWork corresponding to this transaction.

back to section top
Module sqlalchemy.exceptions

Class ArgumentError

raised for all those conditions where invalid arguments are sent to constructed objects. This error generally corresponds to construction time state errors.

back to section top
Class AssertionError

raised when internal state is detected to be invalid

back to section top
Class CommitError

raised when an invalid condition is detected upon a commit()

back to section top
Class DBAPIError

something weird happened with a particular DBAPI version

back to section top
Class InvalidRequestError

sqlalchemy was asked to do something it can't do, such as return nonexistent data. This error generally corresponds to runtime state errors.

back to section top
Class SQLAlchemyError

generic error class

back to section top
Class SQLError

raised when the execution of a SQL statement fails. includes accessors for the underlying exception, as well as the SQL and bind parameters

def __init__(self, statement, params, orig)

back to section top
Module sqlalchemy.ext.proxy

Module Functions
def create_engine(name, opts=None, **kwargs)

creates a new SQLEngine instance. There are two forms of calling this method. In the first, the "name" argument is the type of engine to load, i.e. 'sqlite', 'postgres', 'oracle', 'mysql', and "opts" is a dictionary of options to be sent to the underlying DBAPI module to create a connection, usually including a hostname, username, password, etc. In the second, the "name" argument is a URL in the form <enginename>://opt1=val1&opt2=val2, where <enginename> is the name as above and the contents of the option dictionary are spelled out as a URL-encoded string; the "opts" argument is not used. In both cases, **kwargs represents options to be sent to the SQLEngine itself. A possibly partial listing of those options is as follows:

pool=None : an instance of sqlalchemy.pool.DBProxy or sqlalchemy.pool.Pool to be used as the underlying source for connections (DBProxy/Pool is described in the previous section). If None, a default DBProxy will be created using the engine's own database module with the given arguments.

echo=False : if True, the SQLEngine will log all statements as well as a repr() of their parameter lists to the engine's logger, which defaults to sys.stdout. A SQLEngine instance's "echo" data member can be modified at any time to turn logging on and off. If set to the string 'debug', result rows will be printed to the standard output as well.

logger=None : a file-like object where logging output can be sent, if echo is set to True. This defaults to sys.stdout.

module=None : used by Oracle and Postgres, this is a reference to a DBAPI2 module to be used instead of the engine's default module. For Postgres, the default is psycopg2, or psycopg1 if 2 cannot be found. For Oracle, its cx_Oracle. For mysql, MySQLdb.

use_ansi=True : used only by Oracle; when False, the Oracle driver attempts to support a particular "quirk" of some Oracle databases, wherein the LEFT OUTER JOIN SQL syntax is not supported and the "Oracle join" syntax of <column1>(+)=<column2> must be used instead to achieve a LEFT OUTER JOIN. It's advised that the Oracle database be configured with full ANSI support instead of relying on this feature.
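The second calling form's URL scheme, <enginename>://opt1=val1&opt2=val2, can be parsed with a few lines of stdlib Python. This is a sketch of the described URL shape only, not the actual parsing code, and the option names in the example are hypothetical:

```python
def parse_engine_url(url):
    # split "<enginename>://opt1=val1&opt2=val2" into the engine name
    # and an options dict, matching the URL form described above.
    # a sketch, not the library's parser.
    name, _, query = url.partition("://")
    opts = dict(pair.split("=", 1) for pair in query.split("&") if pair)
    return name, opts

# hypothetical option names, for illustration:
name, opts = parse_engine_url("postgres://database=test&host=localhost")
# name == "postgres", opts == {"database": "test", "host": "localhost"}
```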

back to section top
Class AutoConnectEngine(BaseProxyEngine)

An SQLEngine proxy that automatically connects when necessary.

def __init__(self, dburi, opts=None, **kwargs)

def get_engine(self)

back to section top
Class BaseProxyEngine(SchemaEngine)

Basis for all proxy engines

def compiler(self, *args, **kwargs)

engine = property()

def execute_compiled(self, *args, **kwargs)

def get_engine(self)

def hash_key(self)

def oid_column_name(self)

def reflecttable(self, table)

def schemadropper(self, *args, **kwargs)

def schemagenerator(self, *args, **kwargs)

def set_engine(self, engine)

back to section top
Class ProxyEngine(BaseProxyEngine)

SQLEngine proxy. Supports lazy and late initialization by delegating to a real engine (set with connect()), and using proxy classes for TypeEngine.

def __init__(self, **kwargs)

def connect(self, uri, opts=None, **kwargs)

Establish connection to a real engine.

def get_engine(self)

def set_engine(self, engine)

back to section top
Class TypeEngine(AbstractType)

def __init__(self, *args, **params)

def adapt(self, cls)

def convert_bind_param(self, value, engine)

def convert_result_value(self, value, engine)

def engine_impl(self, engine)

def get_col_spec(self)

back to section top