Google App Engine

Functions

The google.appengine.ext.db package provides the following functions:

allocate_ids(model, count)

Allocates a batch of IDs in the datastore for a datastore kind and parent combination.

IDs allocated in this manner will not be used by the datastore's automatic ID sequence generator and may be used in Keys without conflict.

Arguments:

model
The model key for which to allocate an ID batch. This is a regular Key but only the parent and kind of the key are necessary to determine which ID sequence to use.
count
The number of IDs to allocate.

Returns a tuple of the first and last IDs that it allocates. For example, if you allocated 10 IDs using this function you would get a return value in the format (1, 10), not a full list of created IDs.

Example of allocating and using IDs:

# allocate for MyModel without an instance
handmade_key = db.Key.from_path('MyModel', 1)
first_batch = db.allocate_ids(handmade_key, 10)
first_range = range(first_batch[0], first_batch[1] + 1)

# or allocate using an existing key
model_instance = MyModel.all().get()
second_batch = db.allocate_ids(model_instance.key(), 10)
second_range = range(second_batch[0], second_batch[1] + 1)

# and then use them! woo!
my_id = second_range.pop(0)
new_key = db.Key.from_path('MyModel', my_id)
new_instance = MyModel(key=new_key)
new_instance.put()
assert new_instance.key().id() == my_id

# the datastore will not assign ids in first_batch or second_batch
another_instance = MyModel()
another_instance.put()
assert another_instance.key().id() not in first_range
assert another_instance.key().id() not in second_range
    
allocate_ids_async(model, count)

Asynchronously allocates a batch of IDs in the datastore for a datastore kind and parent combination. Identical to allocate_ids(), but returns an asynchronous object. You can call get_result() on the return value to block on the call and retrieve the result.

Arguments:

model
A db.Model instance, Key, or string to serve as a template specifying the ID sequence in which to allocate IDs. Returned IDs should only be used in entities with the same parent (if any) and kind as this key.
count
The number of IDs to allocate.

Returns a tuple of the first and last IDs that it allocates. For example, if you allocated 10 IDs using this function you would get a return value in the format (1, 10), not a full list of created IDs.
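
A minimal sketch of the asynchronous pattern, assuming a MyModel kind as in the allocate_ids() example above:

from google.appengine.ext import db

# Issue the allocation RPC without blocking.
handmade_key = db.Key.from_path('MyModel', 1)
rpc = db.allocate_ids_async(handmade_key, 10)

# ... do other work while the allocation is in flight ...

first, last = rpc.get_result()        # blocks until the RPC completes
allocated_ids = range(first, last + 1)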

allocate_id_range(model, start, end, **kwargs)

Allocates a range of IDs with specific endpoints. Once these IDs have been allocated, you can manually assign them to newly created entities.

The datastore's automatic ID allocator never assigns a key belonging to an existing entity to a new entity. As a result, entities written to the given key range will never be overwritten. However, writing entities with manually assigned keys in this range may overwrite existing entities (or new entities written by a separate request), depending on the key range state returned.

Use this method only if you have an existing numeric id range that you want to reserve (for example, bulk loading entities that already have IDs). If you don't care about which IDs you receive, use allocate_ids() instead.

Arguments:

model
A db.Model instance, Key, or string to serve as a template specifying the ID sequence in which to allocate IDs. Returned IDs should only be used in entities with the same parent (if any) and kind as this key.
start
The first ID to allocate, a number.
end
The last ID to allocate, a number.

Returns one of (KEY_RANGE_EMPTY, KEY_RANGE_CONTENTION, KEY_RANGE_COLLISION). If not KEY_RANGE_EMPTY, this represents a potential issue with using the allocated key range.
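
A minimal sketch of reserving a fixed range before a bulk load, assuming a MyModel kind and that the KEY_RANGE_* constants are exposed on the db module:

import logging

from google.appengine.ext import db

# Reserve IDs 1-1000 for entities that already carry those IDs.
template_key = db.Key.from_path('MyModel', 1)
state = db.allocate_id_range(template_key, 1, 1000)

if state != db.KEY_RANGE_EMPTY:
    # KEY_RANGE_CONTENTION or KEY_RANGE_COLLISION: manual writes into this
    # range may clash with existing entities or concurrent writers.
    logging.warning('ID range 1-1000 may conflict: %r', state)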

create_config(deadline=None, on_completion=None, read_policy=STRONG_CONSISTENCY)

Creates a configuration object for setting the read policy and datastore call deadline for API calls. You can pass this configuration to most datastore calls using the config=... argument, for example:

# Faster get but may retrieve stale data.
entity = db.get(key, config=db.create_config(read_policy=db.EVENTUAL_CONSISTENCY))

The datastore call deadline specifies an amount of time the application runtime environment will wait for the datastore to return a result before aborting with an error. By default, the runtime environment waits until the request handler deadline has elapsed. You can specify a shorter time to wait so your app can return a faster response to the user, retry the operation, try a different operation, or add the operation to a task queue.

The read policy determines whether read operations use strong consistency (the default) or eventual consistency. Note that the high replication datastore (HRD) cannot deliver strong consistency for queries across entity groups. For example, queries across entity groups may return stale results. In order to return strongly consistent query results in the HRD, use ancestor queries.

A read operation with eventual consistency may return sooner than one with strong consistency in the case of failure, and may be appropriate for some uses. However, if you use eventual consistency, recent writes may not immediately appear in query results.

See the function descriptions in this API reference, as well as methods of The Model Class and The Query Class, for information on which functions accept the config argument. See also Queries and Indexes: Setting the Read Policy and Datastore Call Deadline. Note that, in most cases, the config argument is accepted as a keyword argument only, and does not appear explicitly in the function signature.

Arguments:

deadline=None
An optional datastore call deadline, specified as a number of seconds. Accepts a float. If None, the call uses no deadline, and is only interrupted by the request handler deadline.
on_completion=None
An optional callback function. Defaults to None; if specified, the callback is called with a UserRPC object as an argument when an RPC completes.
read_policy
The read policy, either db.STRONG_CONSISTENCY or db.EVENTUAL_CONSISTENCY. The default is db.STRONG_CONSISTENCY, and strong consistency is used if a read operation is not passed an RPC object.
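
A minimal sketch combining a short deadline with a completion callback; the log_completion helper and the MyModel kind are illustrative, not part of the API:

import logging

from google.appengine.ext import db

def log_completion(rpc):
    # Called with the UserRPC object once the datastore call finishes.
    logging.info('datastore call completed')

keys = [db.Key.from_path('MyModel', 'alpha'), db.Key.from_path('MyModel', 'beta')]
config = db.create_config(deadline=0.5,
                          on_completion=log_completion,
                          read_policy=db.EVENTUAL_CONSISTENCY)
entities = db.get(keys, config=config)
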
create_transaction_options(xg=False, retries=None)

Creates a TransactionOptions instance for setting options on transaction execution. Note that for a cross-group (XG) transaction, the xg parameter must be set to True. You supply the object returned by this function as the first argument to the run_in_transaction_options() function.

Arguments:

xg
Boolean specifying whether XG transactions are allowed. Must be True to run XG transactions. (Note that XG transactions are supported only for apps using HRD.) Raises a BadArgumentError if a non-Boolean value is passed.
retries
Integer specifying the number of times to retry a transaction whose commit fails. If not specified, the datastore's default number of retries is used; currently this is 3 retries.

Returns a TransactionOptions instance.

Example of creating the options for a subsequent cross-group transaction:

from google.appengine.ext import db

xg_on = db.create_transaction_options(xg=True)

def my_txn():
    x = MyModel(a=3)
    x.put()
    y = MyModel(a=7)
    y.put()

db.run_in_transaction_options(xg_on, my_txn)
    
delete(models)

Deletes one or more model instances from the datastore.

Arguments:

models
A model instance, a Key for an entity, or a list (or other iterable) of model instances or keys of entities to delete.
config
datastore_rpc.Configuration to use for this request, specified as a keyword argument.

As with Model.put(), if multiple keys are given, they may be in more than one entity group. See Keys and Entity Groups.

An exception will always be raised if any error occurs during the operation, even if some of the entities actually were deleted. If the call returns without raising an exception, then all of the entities were deleted successfully.

Note: Even entities belonging to a single entity group are not deleted in a single transaction unless the delete is performed inside a datastore transaction.
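
A minimal sketch of deleting by instance and by key, assuming a MyModel model class:

from google.appengine.ext import db

# Delete a single entity via its model instance.
entity = MyModel.all().get()
if entity is not None:
    db.delete(entity)

# Delete a batch of entities by key; a keys-only query avoids
# fetching the full entities first.
keys = MyModel.all(keys_only=True).fetch(100)
db.delete(keys)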

delete_async(models)

Asynchronously deletes one or more model instances from the datastore. This method is identical to db.delete(), except it returns an asynchronous object. Call get_result() on the return value to block on the call.

Arguments:

models
A model instance, a Key for an entity, or a list (or other iterable) of model instances or keys of entities to delete.
config
datastore_rpc.Configuration to use for this request, specified as a keyword argument.

As with Model.put(), if multiple keys are given, they may be in more than one entity group. See Keys and Entity Groups.

This method returns an object that lets you block on the result of the call.

An exception will always be raised if any error occurs during the operation, even if some of the entities actually were deleted. If the call returns without raising an exception, then all of the entities were deleted successfully.

Note: Even entities belonging to a single entity group are not deleted in a single transaction unless the delete_async is performed inside a datastore transaction.

get(keys)

Fetches the Model instance or instances with the given key or keys from the datastore. Both Key objects and string-encoded keys are supported; string keys are converted to Key objects automatically.

Arguments:

keys
The Key of the entity to fetch from the datastore; or a string-encoded key; or a list of Keys or string-encoded keys.
config
The datastore_rpc.Configuration to use for this request.

If a single key was given, this method returns a Model instance associated with the key if the key exists in the datastore, otherwise None. If a list of keys was given, this method returns a list whose items are either a Model instance or None.

See also Model.get().
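
A minimal sketch of single and batch fetches, assuming a MyModel kind with illustrative key names:

from google.appengine.ext import db

# Single fetch: returns the entity, or None if no entity has that key.
entity = db.get(db.Key.from_path('MyModel', 'alpha'))

# Batch fetch: the result list lines up with the list of keys passed in.
keys = [db.Key.from_path('MyModel', name) for name in ('alpha', 'beta')]
entities = db.get(keys)
missing = [k for k, e in zip(keys, entities) if e is None]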

get_async(keys)

Asynchronously fetches the specified Model instance(s) from the datastore. Identical to db.get(), except it returns an asynchronous object. You can call get_result() on the return value to block on the call and get the results.

Arguments:

keys
A Key object or a list of Key objects.
config
The datastore_rpc.Configuration to use for this request.

If one Key is provided, the return value is an instance of the appropriate Model class, or None if no entity exists with the given Key. If a list of Keys is provided, the return value is a corresponding list of model instances, with None values when no entity exists for a corresponding Key.

See also Model.get().
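
A minimal sketch of the asynchronous pattern, assuming a MyModel kind with illustrative numeric IDs:

from google.appengine.ext import db

keys = [db.Key.from_path('MyModel', 1), db.Key.from_path('MyModel', 2)]
rpc = db.get_async(keys)       # issue the fetch without blocking

# ... do unrelated work while the RPC is in flight ...

entities = rpc.get_result()    # list of model instances (or None values)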

get_indexes()

Returns a list of composite indexes belonging to the calling application.

Example of getting and using the indexes:

def get_index_state_as_string(index_state):
    return {db.Index.BUILDING:'BUILDING', db.Index.SERVING:'SERVING',
            db.Index.DELETING:'DELETING', db.Index.ERROR:'ERROR'}[index_state]

def get_sort_direction_as_string(sort_direction):
    return {db.Index.ASCENDING:'ASCENDING',
            db.Index.DESCENDING:'DESCENDING'}[sort_direction]


def dump_indexes():
    for index, state in db.get_indexes():
        print "Kind: %s" % index.kind()
        print "State: %s" % get_index_state_as_string(state)
        print "Is ancestor: %s" % index.has_ancestor()
        for property_name, sort_direction in index.properties():
            print "  %s:%s" % (property_name, get_sort_direction_as_string(sort_direction)) 
get_indexes_async()

Asynchronously returns a list of composite indexes belonging to the calling application.

is_in_transaction()

Returns a boolean indicating whether the current scope is executing in a transaction.
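
A minimal sketch using the flag as a guard, assuming the Counter model from the run_in_transaction() example below:

from google.appengine.ext import db

def increment(key, amount=1):
    # This helper is only meant to run inside a transaction.
    assert db.is_in_transaction()
    counter = db.get(key)
    counter.count += amount
    counter.put()

counter = db.GqlQuery("SELECT * FROM Counter WHERE name = :1", "foo").get()
db.run_in_transaction(increment, counter.key())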

model_to_protobuf(model_instance)

Creates the "protocol buffer" serialization of a Model instance. A protocol buffer is Google's serialization format used for remote procedure calls, and can be useful for serializing datastore objects for backup and restore purposes.

Note: This method uses a different (older) format for protocol buffers than the open-source protocol buffer format. It is not compatible with the open-source implementation.

Arguments:

model_instance
The instance of the Model class (or a subclass) to serialize.

Returns the protocol buffer serialization of the object, as a byte string.

model_from_protobuf(pb)

Creates a Model instance based on a "protocol buffer" serialization, as returned by model_to_protobuf(). See that method for more information about protocol buffers.

Arguments:

pb
The protocol buffer serialization, as returned by model_to_protobuf().

Returns an object of the appropriate kind class. If the kind class does not exist, raises a db.KindError. If the object is not valid according to the model, raises a db.BadValueError.

You can save the new object to the datastore just like any other Model instance, such as by calling its put() method. The object retains the key it had when the protocol buffer was created. If an object with that key already exists in the datastore, saving the deserialized object overwrites the existing object.

Note: If the object's key uses a system-assigned ID and that ID has not already been allocated for the given path and kind, the save will succeed, but the ID is not reserved. An object created in the future may be assigned that ID, and would overwrite the earlier object. For safety, only restore objects in the same application where they existed when they were serialized.
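
A minimal round-trip sketch using both functions, assuming a MyModel kind; the serialized form could equally be written to a backup record:

from google.appengine.ext import db

# Serialize an existing entity, then rebuild and re-save it.
original = MyModel.all().get()
pb = db.model_to_protobuf(original)

restored = db.model_from_protobuf(pb)
restored.put()   # keeps the original key, so this overwrites the stored entity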

put(models)

Puts one or more model instances into the datastore.

Arguments:

models
A model instance or a list of model instances to store.
config
The datastore_rpc.Configuration to use for this request, specified as a keyword argument.

If multiple model instances are given, they may be in more than one entity group. See Keys and Entity Groups for more information.

An exception will always be raised if any error occurs during the operation, even if some of the entities actually were written. If the call returns without raising an exception, then all of the entities were written successfully.

Returns the Key object (if one model instance is given) or a list of Key objects (if a list of instances is given) that correspond with the stored model instances.

Note: Even entities belonging to a single entity group are not written in a single transaction unless the put is performed inside a datastore transaction.
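
A minimal sketch of single and batch puts, assuming a MyModel kind with a name string property:

from google.appengine.ext import db

entity = MyModel(name='fido')
key = db.put(entity)                                    # single instance: one Key

entities = [MyModel(name='rex'), MyModel(name='spot')]
keys = db.put(entities)                                 # list in, list of Keys out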

put_async(models)

Puts one or more model instances into the datastore. Identical to db.put(), except it returns an asynchronous object. You can call get_result() on the return value to block on the call and get the results.

Arguments:

models
A model instance or a list of model instances to store.
config
The datastore_rpc.Configuration to use for this request, specified as a keyword argument.

If multiple model instances are given, they may be in more than one entity group. See Keys and Entity Groups.

An exception will always be raised if any error occurs during the operation, even if some of the entities actually were written. If the call returns without raising an exception, then all of the entities were written successfully.

This method returns an asynchronous object; call get_result() on it to block until the operation completes and retrieve the result.

Note: Even entities belonging to a single entity group are not written in a single transaction unless the put_async is performed inside a datastore transaction.

query_descendants(model_instance)

Returns a query for all the descendants of a model instance.

Arguments:

model_instance
The model instance whose descendants you want to find.
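
A minimal sketch, assuming a MyModel kind whose entities form a parent/child hierarchy:

from google.appengine.ext import db

parent = MyModel.all().get()

# Iterate over every entity whose key has `parent` as an ancestor.
for descendant in db.query_descendants(parent).run():
    print descendant.key()
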
run_in_transaction(function, *args, **kwargs)

Runs a function containing datastore updates in a single transaction. If any code raises an exception during the transaction, all datastore updates made in the transaction are rolled back.

Arguments:

function
The function to run in a datastore transaction.
*args
Positional arguments to pass to the function.
**kwargs
Keyword arguments to pass to the function.

If the function returns a value, run_in_transaction() returns the value to the caller.

If the function raises an exception, the transaction is rolled back. If the function raises a Rollback exception, the exception is not re-raised. For any other exception, the exception is re-raised to the caller.

The datastore uses optimistic locking and retries for transactions. If the transaction prepared by the function cannot be committed, run_in_transaction() calls the function again, retrying the transaction up to 3 times. (To use a different number of retries, use db.run_in_transaction_custom_retries().) Because the transaction function may be called more than once for a single transaction, the function should not have side effects, including modifications to arguments.

If the transaction cannot be committed, such as due to a high rate of contention, a TransactionFailedError is raised.

For more information about transactions, see Transactions.

Example of running a function in a transaction:

from google.appengine.ext import db

class Counter(db.Model):
    name = db.StringProperty()
    count = db.IntegerProperty(default=0)

def decrement(key, amount=1):
    counter = db.get(key)
    counter.count -= amount
    if counter.count < 0:    # don't let the counter go negative
        raise db.Rollback()
    db.put(counter)

q = db.GqlQuery("SELECT * FROM Counter WHERE name = :1", "foo")
counter = q.get()
db.run_in_transaction(decrement, counter.key(), amount=5)

run_in_transaction_custom_retries(retries, function, *args, **kwargs)

Runs a function containing datastore updates in a single transaction, retrying the transaction the given number of times in the event of contention. If any code raises an exception during the transaction, all datastore updates made in the transaction are rolled back.

Arguments:

retries
The maximum number of times to call the transaction function in the event of contention in the entity group (more than one user attempting to modify the group simultaneously).
function
The function to run in a datastore transaction.
*args
Positional arguments to pass to the function.
**kwargs
Keyword arguments to pass to the function.

Other than the ability to specify the number of retries, this function behaves identically to db.run_in_transaction().
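
For example, reusing the decrement() function and counter from the run_in_transaction() example above, a sketch that allows more attempts under heavy contention might look like this:

# Allow up to 10 attempts instead of the default 3.
db.run_in_transaction_custom_retries(10, decrement, counter.key(), amount=5)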

run_in_transaction_options(options, function, *args, **kwargs)

Runs a function containing datastore updates in a single transaction using the passed-in TransactionOptions object. For cross-group (XG) transactions you must specify a TransactionOptions object with its XG parameter set to true, and your app must use the High Replication Datastore (HRD). If any code raises an exception during the transaction, all datastore updates made in the transaction are rolled back.

Arguments:

options
The TransactionOptions object containing the settings used by this transaction. To enable XG transactions, its xg parameter must be set to true.
function
The function to run in a datastore transaction.
*args
Positional arguments to pass to the function.
**kwargs
Keyword arguments to pass to the function.

If the function returns a value, run_in_transaction_options() returns the value to the caller.

If the function raises an exception, the transaction is rolled back. If the function raises a Rollback exception, the exception is not re-raised. For any other exception, the exception is re-raised to the caller.

The datastore uses optimistic locking and retries for transactions. If the transaction prepared by the function cannot be committed, run_in_transaction_options() calls the function again, retrying the transaction up to the number of retries specified by the TransactionOptions object. Because the transaction function may be called more than once for a single transaction, the function should not have side effects, including modifications to arguments. If the transaction cannot be committed, such as due to a high rate of contention, a TransactionFailedError is raised.

For more information about transactions, see Transactions.

This snippet shows how to use this function to run a cross-group transaction.

from google.appengine.ext import db

xg_on = db.create_transaction_options(xg=True)

def my_txn():
    x = MyModel(a=3)
    x.put()
    y = MyModel(a=7)
    y.put()

db.run_in_transaction_options(xg_on, my_txn)
    
to_dict(model_instance, dictionary=None)

Creates and returns a dict representation of the passed Model instance.

Arguments:

model_instance
The model instance to copy.
dictionary
If passed, the model's data is merged into this dictionary: model values clobber corresponding values already in the dictionary, but dictionary entries that have no corresponding fields in the Model instance are preserved.
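
A minimal sketch, assuming a MyModel kind with name and count properties:

from google.appengine.ext import db

entity = MyModel(name='fido', count=3)

data = db.to_dict(entity)
# data == {'name': 'fido', 'count': 3}

defaults = {'owner': 'unknown', 'count': 0}
merged = db.to_dict(entity, defaults)
# merged == {'owner': 'unknown', 'name': 'fido', 'count': 3}
# 'owner' is preserved; 'count' is clobbered by the model value.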