JBoss.org Community Documentation
Abstract
The Narayana Project Documentation contains information on how to use Narayana to develop applications that use transaction technology to manage business processes.
save_state and restore_state methods
This manual uses several conventions to highlight certain words and phrases and draw attention to specific pieces of information.
In PDF and paper editions, this manual uses typefaces drawn from the Liberation Fonts set. The Liberation Fonts set is also used in HTML editions if the set is installed on your system. If not, alternative but equivalent typefaces are displayed. Note: Red Hat Enterprise Linux 5 and later includes the Liberation Fonts set by default.
Four typographic conventions are used to call attention to specific words and phrases. These conventions, and the circumstances they apply to, are as follows.
Mono-spaced Bold
Used to highlight system input, including shell commands, file names and paths. Also used to highlight keycaps and key combinations. For example:
To see the contents of the file my_next_bestselling_novel in your current working directory, enter the cat my_next_bestselling_novel command at the shell prompt and press Enter to execute the command.
The above includes a file name, a shell command and a keycap, all presented in mono-spaced bold and all distinguishable thanks to context.
Key combinations can be distinguished from keycaps by the plus sign connecting each part of a key combination. For example:
Press Enter to execute the command.
Press Ctrl+Alt+F2 to switch to the first virtual terminal. Press Ctrl+Alt+F1 to return to your X-Windows session.
The first paragraph highlights the particular keycap to press. The second highlights two key combinations (each a set of three keycaps with each set pressed simultaneously).
If source code is discussed, class names, methods, functions, variable names and returned values mentioned within a paragraph will be presented as above, in mono-spaced bold. For example:
File-related classes include filesystem for file systems, file for files, and dir for directories. Each class has its own associated set of permissions.
Proportional Bold
This denotes words or phrases encountered on a system, including application names; dialog box text; labeled buttons; check-box and radio button labels; menu titles and sub-menu titles. For example:
Choose System → Preferences → Mouse from the main menu bar to launch Mouse Preferences. In the Buttons tab, click the Left-handed mouse check box and click Close to switch the primary mouse button from the left to the right (making the mouse suitable for use in the left hand).
To insert a special character into a gedit file, choose Applications → Accessories → Character Map from the main menu bar. Next, choose Search → Find… from the Character Map menu bar, type the name of the character in the Search field and click Next. The character you sought will be highlighted in the Character Table. Double-click this highlighted character to place it in the Text to copy field and then click the Copy button. Now switch back to your document and choose Edit → Paste from the gedit menu bar.
The above text includes application names; system-wide menu names and items; application-specific menu names; and buttons and text found within a GUI interface, all presented in proportional bold and all distinguishable by context.
Mono-spaced Bold Italic or Proportional Bold Italic
Whether mono-spaced bold or proportional bold, the addition of italics indicates replaceable or variable text. Italics denotes text you do not input literally or displayed text that changes depending on circumstance. For example:
To connect to a remote machine using ssh, type ssh username@domain.name at a shell prompt. If the remote machine is example.com and your username on that machine is john, type ssh john@example.com.
The mount -o remount file-system command remounts the named file system. For example, to remount the /home file system, the command is mount -o remount /home.
To see the version of a currently installed package, use the rpm -q package command. It will return a result as follows: package-version-release.
Note the words in bold italics above — username, domain.name, file-system, package, version and release. Each word is a placeholder, either for text you enter when issuing a command or for text displayed by the system.
Aside from standard usage for presenting the title of a work, italics denotes the first use of a new and important term. For example:
Publican is a DocBook publishing system.
Terminal output and source code listings are set off visually from the surrounding text.
Output sent to a terminal is set in mono-spaced roman and presented thus:
books Desktop documentation drafts mss photos stuff svn
books_tests Desktop1 downloads images notes scripts svgs
Source-code listings are also set in mono-spaced roman but add syntax highlighting as follows:
package org.jboss.book.jca.ex1;

import javax.naming.InitialContext;

public class ExClient
{
    public static void main(String args[])
        throws Exception
    {
        InitialContext iniCtx = new InitialContext();
        Object ref = iniCtx.lookup("EchoBean");
        EchoHome home = (EchoHome) ref;
        Echo echo = home.create();

        System.out.println("Created Echo");
        System.out.println("Echo.echo('Hello') = " + echo.echo("Hello"));
    }
}
Finally, we use three visual styles to draw attention to information that might otherwise be overlooked.
Notes are tips, shortcuts or alternative approaches to the task at hand. Ignoring a note should have no negative consequences, but you might miss out on a trick that makes your life easier.
Important boxes detail things that are easily missed: configuration changes that only apply to the current session, or services that need restarting before an update will apply. Ignoring a box labeled 'Important' will not cause data loss but may cause irritation and frustration.
Warnings should not be ignored. Ignoring warnings will most likely cause data loss.
Please feel free to raise any issues you find with this document in our issue tracking system.
A transaction is a unit of work that encapsulates multiple database actions such that either all the encapsulated actions fail or all succeed.
Transactions ensure data integrity when an application interacts with multiple datasources.
This chapter contains a description of the use of the ArjunaCore transaction engine and the Transactional Objects for Java (TXOJ) classes and facilities. The classes mentioned in this chapter are the key to writing fault-tolerant applications using transactions. Thus, they are described and then applied in the construction of a simple application. The classes to be described in this chapter can be found in the com.arjuna.ats.txoj and com.arjuna.ats.arjuna packages.
Although Narayana can be embedded in various containers, such as WildFly Application Server, it remains a stand-alone transaction manager as well. There are no dependencies between the core Narayana and any container implementations.
In keeping with the object-oriented view, the mechanisms needed to construct reliable distributed applications are presented to programmers in an object-oriented manner. Some mechanisms need to be inherited, for example, concurrency control and state management. Other mechanisms, such as object storage and transactions, are implemented as ArjunaCore objects that are created and manipulated like any other object.
When the manual talks about using persistence and concurrency control facilities it assumes that the Transactional Objects for Java (TXOJ) classes are being used. If this is not the case then the programmer is responsible for all of these issues.
ArjunaCore exploits object-oriented techniques to present programmers with a toolkit of Java classes from which application classes can inherit to obtain desired properties, such as persistence and concurrency control. These classes form a hierarchy, part of which is shown in Figure 1.1, “ArjunaCore Class Hierarchy” and which will be described later in this document.
Apart from specifying the scopes of transactions, and setting appropriate locks within objects, the application programmer does not have any other responsibilities: ArjunaCore and TXOJ guarantee that transactional objects will be registered with, and be driven by, the appropriate transactions, and crash recovery mechanisms are invoked automatically in the event of failures.
ArjunaCore needs to be able to remember the state of an object for several purposes.
Recovery: the state represents some past state of the object.
Persistence: the state represents the final state of an object at application termination.
Since these requirements have common functionality, they are all implemented using the same mechanism: the classes InputObjectState and OutputObjectState. The classes maintain an internal array into which instances of the standard types can be contiguously packed or unpacked using appropriate pack or unpack operations. This buffer is automatically resized as required should it have insufficient space. The instances are all stored in the buffer in a standard form called network byte order, making them machine independent. Any other architecture-independent format, such as XDR or ASN.1, can be implemented simply by replacing the operations with ones appropriate to the encoding required.
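The effect of network byte order can be illustrated with the standard java.nio.ByteBuffer, which uses big-endian (network) byte order by default. This sketch demonstrates only the encoding itself; it does not use Narayana's own buffer classes:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

class NetworkByteOrderDemo
{
    // Pack an int into four bytes in network (big-endian) byte order,
    // most significant byte first, as ByteBuffer does by default.
    static byte[] packInt(int value)
    {
        ByteBuffer buf = ByteBuffer.allocate(4);
        buf.order(ByteOrder.BIG_ENDIAN); // explicit, though it is the default
        buf.putInt(value);
        return buf.array();
    }

    // Unpack the int again; any machine reading these four bytes in the
    // same order recovers the same value, regardless of its native endianness.
    static int unpackInt(byte[] bytes)
    {
        return ByteBuffer.wrap(bytes).getInt();
    }

    public static void main(String[] args)
    {
        byte[] b = packInt(0x12345678);
        System.out.printf("%02x %02x %02x %02x%n", b[0], b[1], b[2], b[3]); // prints: 12 34 56 78
        System.out.println(unpackInt(b) == 0x12345678);
    }
}
```

A replacement encoding such as XDR or ASN.1 would change only the body of the pack and unpack operations, exactly as the text describes.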
Implementations of persistence can be affected by restrictions imposed by the Java SecurityManager. Therefore, the object store provided with ArjunaCore is implemented using the techniques of interface and implementation. The current distribution includes implementations which write object states to the local file system or database, and remote implementations, where the interface uses a client stub (proxy) to remote services.
Persistent objects are assigned unique identifiers, which are instances of the Uid class, when they are created. These identifiers are used to identify them within the object store. States are read using the read_committed operation and written by the write_committed and write_uncommitted operations.
At the root of the class hierarchy is the class StateManager. StateManager is responsible for object activation and deactivation, as well as object recovery. Refer to Example 1.1, “StateManager” for the simplified signature of the class.
Example 1.1. StateManager
public abstract class StateManager
{
    public boolean activate ();
    public boolean deactivate (boolean commit);

    public Uid get_uid (); // object’s identifier.

    // methods to be provided by a derived class

    public boolean restore_state (InputObjectState os);
    public boolean save_state (OutputObjectState os);

    protected StateManager ();
    protected StateManager (Uid id);
};
Objects are assumed to be of three possible flavors.
Three Flavors of Objects
Recoverable
StateManager attempts to generate and maintain appropriate recovery information for the object. Such objects have lifetimes that do not exceed the application program that creates them.
Recoverable and Persistent
The lifetime of the object is assumed to be greater than that of the creating or accessing application, so that in addition to maintaining recovery information, StateManager attempts to automatically load or unload any existing persistent state for the object by calling the activate or deactivate operation at appropriate times.
Neither
No recovery information is ever kept, nor is object activation or deactivation ever automatically attempted.
If an object is recoverable, or recoverable and persistent, then StateManager invokes the operations save_state (while performing deactivate) and restore_state (while performing activate) at various points during the execution of the application. These operations must be implemented by the programmer since StateManager cannot detect user-level state changes.
This gives the programmer the ability to decide which parts of an object’s state should be made persistent. For example, for a spreadsheet it may not be necessary to save all entries if some values can simply be recomputed. The save_state implementation for a class Example that has integer member variables called A, B and C might be implemented as in Example 1.2, “save_state Implementation”.
Example 1.2. save_state Implementation
public boolean save_state(OutputObjectState o)
{
    if (!super.save_state(o))
        return false;

    try
    {
        o.packInt(A);
        o.packInt(B);
        o.packInt(C);
    }
    catch (Exception e)
    {
        return false;
    }

    return true;
}
It is necessary for all save_state and restore_state methods to call super.save_state and super.restore_state. This is to cater for improvements in the crash recovery mechanisms.
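The matching restore_state must unpack fields in exactly the order save_state packed them. The following self-contained sketch demonstrates that discipline using stand-in buffer classes built on the standard DataOutputStream/DataInputStream (which also write network byte order); the OutState, InState, and ExampleState names are stand-ins for this illustration, not Narayana's real OutputObjectState/InputObjectState API:

```java
import java.io.*;

// Stand-ins for Narayana's OutputObjectState/InputObjectState, used here only
// so the pack/unpack discipline is runnable in isolation; the real classes
// offer analogous packInt/unpackInt operations.
class OutState
{
    private final ByteArrayOutputStream bytes = new ByteArrayOutputStream();
    private final DataOutputStream out = new DataOutputStream(bytes);

    void packInt(int i) throws IOException { out.writeInt(i); }

    byte[] buffer() { return bytes.toByteArray(); }
}

class InState
{
    private final DataInputStream in;

    InState(byte[] b) { in = new DataInputStream(new ByteArrayInputStream(b)); }

    int unpackInt() throws IOException { return in.readInt(); }
}

// Hypothetical class with integer fields A, B and C, mirroring Example 1.2.
class ExampleState
{
    int A, B, C;

    // Fields are packed in a fixed order...
    boolean save_state(OutState os)
    {
        try { os.packInt(A); os.packInt(B); os.packInt(C); return true; }
        catch (IOException e) { return false; }
    }

    // ...and must be unpacked in exactly the same order.
    boolean restore_state(InState os)
    {
        try { A = os.unpackInt(); B = os.unpackInt(); C = os.unpackInt(); return true; }
        catch (IOException e) { return false; }
    }
}
```

In real TXOJ classes, both methods must additionally call super.save_state and super.restore_state first, as the note above requires.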
A persistent object not in use is assumed to be held in a passive state, with its state residing in an object store and activated on demand. The fundamental life cycle of a persistent object in TXOJ is shown in Figure 1.2, “Life cycle of a persistent Object in TXOJ”. During its lifetime, a persistent object may be made active then passive many times.
The concurrency controller is implemented by the class LockManager, which provides sensible default behavior while allowing the programmer to override it if deemed necessary by the particular semantics of the class being programmed. As with StateManager and persistence, concurrency control implementations are accessed through interfaces. As well as providing access to remote services, the current implementations of concurrency control available to interfaces include:
Locks are made persistent by being written to the local file system or database.
Locks are maintained within the memory of the virtual machine which created them. This implementation has better performance than when writing locks to the local disk, but objects cannot be shared between virtual machines. Importantly, it is a basic Java object with no requirements which can be affected by the SecurityManager.
The primary programmer interface to the concurrency controller is via the setlock operation. By default, the runtime system enforces strict two-phase locking following a multiple reader, single writer policy on a per object basis. However, as shown in Figure 1.1, “ArjunaCore Class Hierarchy”, by inheriting from the Lock class, you can provide your own lock implementations with different lock conflict rules to enable type specific concurrency control.
Lock acquisition is, of necessity, under programmer control, since just as StateManager cannot determine if an operation modifies an object, LockManager cannot determine if an operation requires a read or write lock. Lock release, however, is under control of the system and requires no further intervention by the programmer. This ensures that the two-phase property can be correctly maintained.
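The multiple reader, single writer policy reduces to a small compatibility rule: two lock requests are compatible only when both are reads. A minimal sketch follows (the LockMode names mirror those used in the examples in this chapter; the rule shown is the textbook policy, not Narayana's actual LockManager implementation):

```java
// The lock modes used in the examples in this chapter.
enum LockMode { READ, WRITE }

// Sketch of the multiple-reader, single-writer compatibility rule.
class LockCompat
{
    // Two locks are compatible only when both are read locks: any number of
    // readers may share an object, but a writer excludes all other lockers.
    static boolean compatible(LockMode held, LockMode requested)
    {
        return held == LockMode.READ && requested == LockMode.READ;
    }
}
```

A type-specific Lock subclass would replace this rule with one suited to the semantics of the class being protected.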
public class LockResult
{
    public static final int GRANTED;
    public static final int REFUSED;
    public static final int RELEASED;
};

public class ConflictType
{
    public static final int CONFLICT;
    public static final int COMPATIBLE;
    public static final int PRESENT;
};

public abstract class LockManager extends StateManager
{
    public static final int defaultRetry;
    public static final int defaultTimeout;
    public static final int waitTotalTimeout;

    public final synchronized boolean releaselock (Uid lockUid);
    public final synchronized int setlock (Lock toSet);
    public final synchronized int setlock (Lock toSet, int retry);
    public final synchronized int setlock (Lock toSet, int retry, int sleepTime);
    public void print (PrintStream strm);
    public String type ();
    public boolean save_state (OutputObjectState os, int ObjectType);
    public boolean restore_state (InputObjectState os, int ObjectType);

    protected LockManager ();
    protected LockManager (int ot);
    protected LockManager (int ot, int objectModel);
    protected LockManager (Uid storeUid);
    protected LockManager (Uid storeUid, int ot);
    protected LockManager (Uid storeUid, int ot, int objectModel);

    protected void terminate ();
};
The LockManager class is primarily responsible for managing requests to set a lock on an object or to release a lock as appropriate. However, since it is derived from StateManager, it can also control when some of the inherited facilities are invoked. For example, LockManager assumes that the setting of a write lock implies that the invoking operation must be about to modify the object. This may in turn cause recovery information to be saved if the object is recoverable. In a similar fashion, successful lock acquisition causes activate to be invoked.
Example 1.3, “Example Class” shows how to try to obtain a write lock on an object.
Example 1.3. Example Class
public class Example extends LockManager
{
    public boolean foobar ()
    {
        AtomicAction A = new AtomicAction();
        boolean result = false;

        A.begin();

        if (setlock(new Lock(LockMode.WRITE), 0) == LockResult.GRANTED)
        {
            /*
             * Do some work, and TXOJ will
             * guarantee ACID properties.
             */

            // automatically aborts if fails

            if (A.commit() == AtomicAction.COMMITTED)
            {
                result = true;
            }
        }
        else
            A.rollback();

        return result;
    }
}
The transaction protocol engine is represented by the AtomicAction class, which uses StateManager to record sufficient information for crash recovery mechanisms to complete the transaction in the event of failures. It has methods for starting and terminating the transaction and, for those situations where programmers need to implement their own resources, methods for registering them with the current transaction. Because ArjunaCore supports sub-transactions, if a transaction is begun within the scope of an already executing transaction it will automatically be nested.
You can use ArjunaCore with multi-threaded applications. Each thread within an application can share a transaction or execute within its own transaction; accordingly, all ArjunaCore classes are thread-safe.
Example 1.4. Relationships Between Activation, Termination, and Commitment
{
    . . .
    O1 objct1 = new objct1(Name-A); /* (i) bind to "old" persistent object A */
    O2 objct2 = new objct2();       /* create a "new" persistent object */

    OTS.current().begin();          /* (ii) start of atomic action */

    objct1.op(...);                 /* (iii) object activation and invocations */
    objct2.op(...);
    . . .

    OTS.current().commit(true);     /* (iv) tx commits & objects deactivated */
}                                   /* (v) */
This could involve the creation of stub objects and a call to remote objects. Here, we re-bind to an existing persistent object identified by Name-A, and create a new persistent object. A naming system for remote objects maintains the mapping between object names and locations and is described in a later chapter.
As a part of a given invocation, the object implementation is responsible for ensuring that the object is locked in read or write mode (assuming no lock conflict) and initialized, if necessary, with the latest committed state from the object store. The first time a lock is acquired on an object within a transaction, the object’s state is acquired, if possible, from the object store.
Committing the transaction includes updating the state of any modified objects in the object store.
The principal classes which make up the class hierarchy of ArjunaCore are depicted below.
StateManager
    LockManager
        User-Defined Classes
    Lock
        User-Defined Classes
AbstractRecord
    RecoveryRecord
    LockRecord
    Other management record types
RecordList
AtomicAction
    TopLevelTransaction
Input/OutputBuffer
    Input/OutputObjectState
ObjectStore
Programmers of fault-tolerant applications will be primarily concerned with the classes LockManager, Lock, and AtomicAction. Other classes important to a programmer are Uid and ObjectState.
Most ArjunaCore classes are derived from the base class StateManager, which provides primitive facilities necessary for managing persistent and recoverable objects. These facilities include support for the activation and de-activation of objects, and state-based object recovery.
The class LockManager uses the facilities of StateManager and Lock to provide the concurrency control required for implementing the serializability property of atomic actions. The concurrency control consists of two-phase locking in the current implementation. The implementation of atomic action facilities is supported by AtomicAction and TopLevelTransaction.
Consider a simple example. Assume that Example is a user-defined persistent class suitably derived from LockManager. An application containing an atomic transaction Trans accesses an object called O of type Example, by invoking the operation op1, which involves state changes to O. The serializability property requires that a write lock must be acquired on O before it is modified. Therefore, the body of op1 should contain a call to the setlock operation of the concurrency controller.
Example 1.5. Simple Concurrency Control
public boolean op1 (...)
{
    if (setlock(new Lock(LockMode.WRITE)) == LockResult.GRANTED)
    {
        // actual state change operations follow
        ...
    }
}
Procedure 1.1. Steps followed by the operation setlock
The operation setlock, provided by the LockManager class, performs the following functions in Example 1.5, “Simple Concurrency Control”.
1. Check write lock compatibility with the currently held locks, and if allowed, continue.
2. Call the StateManager operation activate. activate will load, if not done already, the latest persistent state of O from the object store, then call the StateManager operation modified, which has the effect of creating an instance of either RecoveryRecord or PersistenceRecord for O, depending upon whether O was persistent or not. The Lock is a WRITE lock, so the old state of the object must be retained prior to modification. The record is then inserted into the RecordList of Trans.
3. Create and insert a LockRecord instance in the RecordList of Trans.
Now suppose that action Trans is aborted sometime after the lock has been acquired. Then the rollback operation of AtomicAction will process the RecordList instance associated with Trans by invoking an appropriate Abort operation on the various records. The implementation of this operation by the LockRecord class will release the WRITE lock, while that of RecoveryRecord or PersistenceRecord will restore the prior state of O.
It is important to realize that all of the above work is automatically being performed by ArjunaCore on behalf of the application programmer. The programmer need only start the transaction and set an appropriate lock; ArjunaCore and TXOJ take care of participant registration, persistence, concurrency control and recovery.
This section describes ArjunaCore and Transactional Objects for Java (TXOJ) in more detail, and shows how to use ArjunaCore to construct transactional applications.
Note: in previous releases ArjunaCore was often referred to as TxCore.
ArjunaCore needs to be able to remember the state of an object for several purposes, including recovery (the state represents some past state of the object) and persistence (the state represents the final state of an object at application termination). Since all of these requirements share common functionality, they are all implemented using the same mechanism: the classes Input/OutputObjectState and Input/OutputBuffer.
Example 1.6. OutputBuffer and InputBuffer
public class OutputBuffer
{
    public OutputBuffer ();

    public final synchronized boolean valid ();
    public synchronized byte[] buffer();
    public synchronized int length ();

    /* pack operations for standard Java types */

    public synchronized void packByte (byte b) throws IOException;
    public synchronized void packBytes (byte[] b) throws IOException;
    public synchronized void packBoolean (boolean b) throws IOException;
    public synchronized void packChar (char c) throws IOException;
    public synchronized void packShort (short s) throws IOException;
    public synchronized void packInt (int i) throws IOException;
    public synchronized void packLong (long l) throws IOException;
    public synchronized void packFloat (float f) throws IOException;
    public synchronized void packDouble (double d) throws IOException;
    public synchronized void packString (String s) throws IOException;
};

public class InputBuffer
{
    public InputBuffer ();

    public final synchronized boolean valid ();
    public synchronized byte[] buffer();
    public synchronized int length ();

    /* unpack operations for standard Java types */

    public synchronized byte unpackByte () throws IOException;
    public synchronized byte[] unpackBytes () throws IOException;
    public synchronized boolean unpackBoolean () throws IOException;
    public synchronized char unpackChar () throws IOException;
    public synchronized short unpackShort () throws IOException;
    public synchronized int unpackInt () throws IOException;
    public synchronized long unpackLong () throws IOException;
    public synchronized float unpackFloat () throws IOException;
    public synchronized double unpackDouble () throws IOException;
    public synchronized String unpackString () throws IOException;
};
The InputBuffer and OutputBuffer classes maintain an internal array into which instances of the standard Java types can be contiguously packed or unpacked, using the pack or unpack operations. This buffer is automatically resized as required should it have insufficient space. The instances are all stored in the buffer in a standard form called network byte order to make them machine independent.
Example 1.7. OutputObjectState and InputObjectState
class OutputObjectState extends OutputBuffer
{
    public OutputObjectState (Uid newUid, String typeName);

    public boolean notempty ();
    public int size ();
    public Uid stateUid ();
    public String type ();
};

class InputObjectState extends InputBuffer
{
    public InputObjectState (Uid newUid, String typeName, byte[] b);

    public boolean notempty ();
    public int size ();
    public Uid stateUid ();
    public String type ();
};
The InputObjectState and OutputObjectState classes provide all the functionality of InputBuffer and OutputBuffer, through inheritance, and add two additional instance variables that signify the Uid and type of the object for which the InputObjectState or OutputObjectState instance is a compressed image. These are used when accessing the object store during storage and retrieval of the object state.
The object store provided with ArjunaCore deliberately has a fairly restricted interface so that it can be implemented in a variety of ways. For example, object stores are implemented in shared memory, on the Unix file system (in several different forms), and as a remotely accessible store. More complete information about the object stores available in ArjunaCore can be found in the Appendix.
As with all ArjunaCore classes, the default object stores are pure Java implementations. To access the shared memory and other more complex object store implementations, you need to use native methods.
All of the object stores hold and retrieve instances of the class InputObjectState or OutputObjectState. These instances are named by the Uid and type of the object that they represent. States are read using the read_committed operation and written by the system using the write_uncommitted operation. Under normal operation, new object states do not overwrite old object states but are written to the store as shadow copies. These shadows replace the original only when the commit_state operation is invoked. Normally all interaction with the object store is performed by ArjunaCore system components as appropriate, so the existence of any shadow versions of objects in the store is hidden from the programmer.
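The shadow-copy discipline can be sketched with plain file operations: uncommitted states go to a shadow file, which replaces the committed state only when commit_state is invoked. The class below is an illustration of the idea only; the file naming and layout are invented for this sketch and are not Narayana's actual on-disk format:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Illustrative shadow-copy store: write_uncommitted never touches the
// committed state; commit_state promotes the shadow to become the
// committed state.
class ShadowStore
{
    private final Path root;

    ShadowStore(Path root) { this.root = root; }

    // Uncommitted states are written as shadow copies alongside the original.
    void write_uncommitted(String uid, byte[] state) throws IOException
    {
        Files.write(root.resolve(uid + ".shadow"), state);
    }

    byte[] read_committed(String uid) throws IOException
    {
        return Files.readAllBytes(root.resolve(uid));
    }

    // Only now does the shadow replace the committed state; a production
    // store would additionally need this promotion to be crash-atomic.
    void commit_state(String uid) throws IOException
    {
        Files.move(root.resolve(uid + ".shadow"), root.resolve(uid),
                   StandardCopyOption.REPLACE_EXISTING);
    }
}
```

A reader that only ever calls read_committed thus never observes a partially written state, which is the property the shadowing scheme exists to provide.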
Example 1.8. StateStatus
public class StateStatus
{
    public static final int OS_COMMITTED;
    public static final int OS_UNCOMMITTED;
    public static final int OS_COMMITTED_HIDDEN;
    public static final int OS_UNCOMMITTED_HIDDEN;
    public static final int OS_UNKNOWN;
}
Example 1.9. ObjectStore
public abstract class ObjectStore
{
    /* The abstract interface */

    public abstract boolean commit_state (Uid u, String name)
        throws ObjectStoreException;

    public abstract InputObjectState read_committed (Uid u, String name)
        throws ObjectStoreException;

    public abstract boolean write_uncommitted (Uid u, String name, OutputObjectState os)
        throws ObjectStoreException;

    . . .
};
When a transactional object is committing, it must make certain state changes persistent so that it can recover in the event of a failure and either continue to commit or roll back. When using TXOJ, ArjunaCore will take care of this automatically. To guarantee ACID properties, these state changes must be flushed to the persistence store implementation before the transaction can proceed to commit. Otherwise, the application may assume that the transaction has committed when in fact the state changes may still reside within an operating system cache, and may be lost by a subsequent machine failure. By default, ArjunaCore ensures that such state changes are flushed. However, doing so can impose a significant performance penalty on the application.
To prevent transactional object state flushes, set the ObjectStoreEnvironmentBean.objectStoreSync variable to OFF.
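Assuming the common Narayana convention of overriding EnvironmentBean properties via system properties of the same name, the setting could be passed on the command line; the application class name here is a placeholder:

```shell
java -DObjectStoreEnvironmentBean.objectStoreSync=OFF MyTransactionalApp
```

Only disable the sync in deployments where losing the most recent state changes on a machine failure is acceptable.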
ArjunaCore comes with support for several different object store implementations. The Appendix describes these implementations, how to select and configure a given implementation on a per-object basis using the ObjectStoreEnvironmentBean.objectStoreType property variable, and indicates how additional implementations can be provided.
The ArjunaCore class StateManager manages the state of an object and provides all of the basic support mechanisms required by an object for state management purposes. StateManager is responsible for creating and registering appropriate resources concerned with the persistence and recovery of the transactional object. If a transaction is nested, then StateManager will also propagate these resources between child transactions and their parents at commit time.
Objects are assumed to be of three possible flavors.
Three Flavors of Objects
Recoverable
StateManager attempts to generate and maintain appropriate recovery information for the object. Such objects have lifetimes that do not exceed the application program that creates them.
Recoverable and Persistent
The lifetime of the object is assumed to be greater than that of the creating or accessing application, so that in addition to maintaining recovery information, StateManager attempts to automatically load or unload any existing persistent state for the object by calling the activate or deactivate operation at appropriate times.
Neither
No recovery information is ever kept, nor is object activation or deactivation ever automatically attempted.
This object property is selected at object construction time and cannot be changed thereafter. Thus an object cannot gain (or lose) recovery capabilities at some arbitrary point during its lifetime.
Example 1.10. Object Store Implementation Using StateManager
public class ObjectStatus
{
    public static final int PASSIVE;
    public static final int PASSIVE_NEW;
    public static final int ACTIVE;
    public static final int ACTIVE_NEW;
    public static final int UNKNOWN_STATUS;
};

public class ObjectType
{
    public static final int RECOVERABLE;
    public static final int ANDPERSISTENT;
    public static final int NEITHER;
};

public abstract class StateManager
{
    public synchronized boolean activate ();
    public synchronized boolean activate (String storeRoot);
    public synchronized boolean deactivate ();
    public synchronized boolean deactivate (String storeRoot, boolean commit);

    public synchronized void destroy ();

    public final Uid get_uid ();

    public boolean restore_state (InputObjectState os, int ObjectType);
    public boolean save_state (OutputObjectState os, int ObjectType);
    public String type ();

    . . .

    protected StateManager ();
    protected StateManager (int ObjectType, int objectModel);
    protected StateManager (Uid uid);
    protected StateManager (Uid uid, int objectModel);

    . . .

    protected final void modified ();

    . . .
};

public class ObjectModel
{
    public static final int SINGLE;
    public static final int MULTIPLE;
};
If an object is recoverable or persistent,
StateManager
will invoke the operations
save_state
(while performing deactivation),
restore_state
(while performing activation), and
type
at various points during the execution of the
application. These operations must be implemented by the programmer since
StateManager
does not have access to a runtime description of the layout of an arbitrary Java object in memory
and thus
cannot implement a default policy for converting the in-memory version of the object to its passive
form. However, the capabilities provided by
InputObjectState
and
OutputObjectState
make the writing of these routines fairly simple. For example, the
save_state
implementation for a class
Example
that had member
variables called
A
,
B
, and
C
could simply be that shown in
Example 1.11, “
Example Implementation of Methods for
StateManager
”
.
Example 1.11.
Example Implementation of Methods for
StateManager
public boolean save_state (OutputObjectState os, int ObjectType)
{
    if (!super.save_state(os, ObjectType))
        return false;

    try
    {
        os.packInt(A);
        os.packString(B);
        os.packFloat(C);

        return true;
    }
    catch (IOException e)
    {
        return false;
    }
}
In order to support crash recovery for persistent objects, all
save_state
and
restore_state
methods of user objects must call
super.save_state
and
super.restore_state
.
The
type
method is used to determine the location in the object store where the
state of instances of that class will be saved and ultimately restored. This location can actually be any
valid string. However, you should avoid using the hash character (#) as this is reserved for special
directories that ArjunaCore requires.
The
get_uid
operation of
StateManager
provides read-only
access to an object’s internal system name for whatever purpose the programmer requires, such as registration
of the name in a name server. The value of the internal system name can only be set when an object is
initially constructed, either by the provision of an explicit parameter or by generating a new identifier when
the object is created.
The
destroy
method can be used to remove the object’s state from the object
store. This is an atomic operation, and therefore will only remove the state if the top-level transaction
within which it is invoked eventually commits. The programmer must obtain exclusive access to the object prior
to invoking this operation.
Since object recovery and persistence essentially have complementary requirements (the only
difference being
where state information is stored and for what purpose),
StateManager
effectively
combines the management of these two properties into a single mechanism. It uses instances of the classes
InputObjectState
and
OutputObjectState
both for recovery and
persistence purposes. An additional argument passed to the
save_state
and
restore_state
operations allows the programmer to determine the purpose for which any
given invocation is being made. This allows different information to be saved for recovery and persistence
purposes.
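A hedged sketch of this pattern follows. The class, field names, and byte layout are invented for illustration, and `DataOutputStream` merely stands in for `OutputObjectState`; only the idea of branching on the extra argument comes from the text above.

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Hypothetical sketch: DataOutputStream stands in for OutputObjectState and
// the constant names mirror the ObjectType values shown earlier; none of
// this is the real ArjunaCore API.
public class PurposeAwareState {
    public static final int RECOVERABLE = 0;    // saving for recovery only
    public static final int ANDPERSISTENT = 1;  // saving for persistence

    private final int value = 7;       // ordinary member state
    private final long peerUid = 42L;  // imagine the Uid of a referenced object

    // Save a different representation depending on why we were called.
    public byte[] saveState(int objectType) {
        try {
            ByteArrayOutputStream buffer = new ByteArrayOutputStream();
            DataOutputStream os = new DataOutputStream(buffer);

            os.writeInt(value);

            if (objectType == ANDPERSISTENT) {
                // For persistence, references to other objects must be
                // written as Uids so they survive passivation.
                os.writeLong(peerUid);
            }
            // For recovery alone, an in-memory reference would do, so the
            // Uid is omitted.

            os.flush();
            return buffer.toByteArray();
        } catch (IOException e) {
            return new byte[0]; // mirrors save_state returning false
        }
    }
}
```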
ArjunaCore supports two models for objects, which affect how an object's state and concurrency control are implemented.
ArjunaCore Object Models
SINGLE: Only a single copy of the object exists within the application. This copy resides within a single JVM, and all clients must address their invocations to this server. This model provides better performance, but represents a single point of failure, and in a multi-threaded environment may not protect the object from corruption if a single thread fails.
MULTIPLE: Logically, a single instance of the object exists, but copies of it are distributed across different JVMs. The performance of this model is worse than the SINGLE model, but it provides better failure isolation.
The default model is SINGLE. The programmer can override this on a per-object basis by using the appropriate constructor.
In summary, the ArjunaCore class
StateManager
manages the state of an object and provides
all of the basic support mechanisms required by an object for state management purposes. Some operations must
be defined by the class developer. These operations are:
save_state
,
restore_state
, and
type
.
save_state (OutputObjectState state, int objectType)
Invoked whenever the state of an object might need to be saved for future use, primarily
for recovery
or persistence purposes. The
objectType
parameter indicates the reason that
save_state
was invoked by ArjunaCore. This enables the programmer to save different
pieces of information into the
OutputObjectState
supplied as the first parameter
depending upon whether the state is needed for recovery or persistence purposes. For example, pointers
to other ArjunaCore objects might be saved simply as pointers for recovery purposes but as
Uid
s
for persistence purposes. As shown earlier, the
OutputObjectState
class provides
convenient operations to allow the saving of instances of all of the basic types in Java. In order to
support crash recovery for persistent objects it is necessary for all
save_state
methods to call
super.save_state
.
save_state
assumes that an object is internally consistent and that all
variables saved have valid values. It is the programmer's responsibility to ensure that this is the
case.
restore_state (InputObjectState state, int objectType)
Invoked whenever the state of an object needs to be restored to the one supplied. Once
again the second
parameter allows different interpretations of the supplied state. In order to support crash recovery for
persistent objects it is necessary for all
restore_state
methods to call
super.restore_state
.
type ()
The ArjunaCore persistence mechanism requires a means of determining the type of an object
as a string so
that it can save or restore the state of the object into or from the object store. By convention this
information indicates the position of the class in the hierarchy. For example,
/StateManager/LockManager/Object
.
The
type
method is used to determine the location in the object store where the
state of instances of that class will be saved and ultimately restored. This can actually be any valid
string. However, you should avoid using the hash character (#) as this is reserved for special
directories that ArjunaCore requires.
Consider the following basic
Array
class derived from the
StateManager
class. In this example, to illustrate saving and restoring of an object’s
state, the
highestIndex
variable is used to keep track of the highest element of the array
that has a non-zero value.
Example 1.12.
Array
Class
public class Array extends StateManager
{
    public Array ();
    public Array (Uid objUid);

    public void finalize () throws Throwable
    {
        super.terminate();
        super.finalize();
    }

    /* Class specific operations. */

    public boolean set (int index, int value);
    public int get (int index);

    /* State management specific operations. */

    public boolean save_state (OutputObjectState os, int ObjectType);
    public boolean restore_state (InputObjectState os, int ObjectType);
    public String type ();

    public static final int ARRAY_SIZE = 10;

    private int[] elements = new int[ARRAY_SIZE];
    private int highestIndex;
};
The save_state, restore_state and type operations can be defined as follows:
/* Ignore ObjectType parameter for simplicity */

public boolean save_state (OutputObjectState os, int ObjectType)
{
    if (!super.save_state(os, ObjectType))
        return false;

    try
    {
        os.packInt(highestIndex);

        /*
         * Traverse array state that we wish to save. Only save active elements.
         */

        for (int i = 0; i <= highestIndex; i++)
            os.packInt(elements[i]);

        return true;
    }
    catch (IOException e)
    {
        return false;
    }
}

public boolean restore_state (InputObjectState os, int ObjectType)
{
    if (!super.restore_state(os, ObjectType))
        return false;

    try
    {
        int i = 0;

        highestIndex = os.unpackInt();

        while (i < ARRAY_SIZE)
        {
            if (i <= highestIndex)
                elements[i] = os.unpackInt();
            else
                elements[i] = 0;

            i++;
        }

        return true;
    }
    catch (IOException e)
    {
        return false;
    }
}

public String type ()
{
    return "/StateManager/Array";
}
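The pack/unpack round trip above can be exercised in isolation. In this sketch `DataOutputStream` and `DataInputStream` stand in for `OutputObjectState` and `InputObjectState`; the byte layout is illustrative, not the real ArjunaCore object store format.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Sketch of the Array state round trip, with Data{Output,Input}Stream
// standing in for OutputObjectState/InputObjectState.
public class ArrayStateDemo {
    public static final int ARRAY_SIZE = 10;

    // Pack highestIndex followed by the active elements, as save_state does.
    public static byte[] pack(int[] elements, int highestIndex) {
        try {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            DataOutputStream os = new DataOutputStream(buf);

            os.writeInt(highestIndex);
            for (int i = 0; i <= highestIndex; i++)
                os.writeInt(elements[i]);

            return buf.toByteArray();
        } catch (IOException e) {
            return new byte[0];
        }
    }

    // Unpack in the same order, zero-filling the inactive tail, as
    // restore_state does.
    public static int[] unpack(byte[] state) {
        try {
            DataInputStream is = new DataInputStream(new ByteArrayInputStream(state));
            int[] elements = new int[ARRAY_SIZE];
            int highestIndex = is.readInt();

            for (int i = 0; i < ARRAY_SIZE; i++)
                elements[i] = (i <= highestIndex) ? is.readInt() : 0;

            return elements;
        } catch (IOException e) {
            return new int[ARRAY_SIZE];
        }
    }
}
```

Note that the state must be unpacked in exactly the order it was packed, which is why both methods write and read `highestIndex` first.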
Concurrency control information within ArjunaCore is maintained by locks. Locks which are required to be shared between objects in different processes may be held within a lock store, similar to the object store facility presented previously. The lock store provided with ArjunaCore deliberately has a fairly restricted interface so that it can be implemented in a variety of ways. For example, lock stores are implemented in shared memory, on the Unix file system (in several different forms), and as a remotely accessible store. More information about the object stores available in ArjunaCore can be found in the Appendix.
As with all ArjunaCore classes, the default lock stores are pure Java implementations. To access the shared memory and other more complex lock store implementations it is necessary to use native methods.
Example 1.13.
LockStore
public abstract class LockStore
{
    public abstract InputObjectState read_state (Uid u, String tName)
        throws LockStoreException;

    public abstract boolean remove_state (Uid u, String tName);
    public abstract boolean write_committed (Uid u, String tName,
                                             OutputObjectState state);
};
ArjunaCore comes with support for several different object store implementations. If the object model being
used is
SINGLE, then no lock store is required for maintaining locks, since the information about the object is not
exported from it. However, if the MULTIPLE model is used, then different run-time environments (processes, Java
virtual machines) may need to share concurrency control information. The implementation type of the lock store
to use can be specified for all objects within a given execution environment using the
TxojEnvironmentBean.lockStoreType
property variable. Currently this can have one of the
following values:
BasicLockStore: This is an in-memory implementation which does not, by default, allow sharing of stored information between execution environments. The application programmer is responsible for sharing the store information.
BasicPersistentLockStore: This is the default implementation, and stores locking information within the local file
system. Therefore
execution environments that share the same file store can share concurrency control information. The root
of the file system into which locking information is written is the
LockStore
directory within the ArjunaCore installation directory. You can override this at runtime by
setting the
TxojEnvironmentBean.lockStoreDir
property variable accordingly, or placing the location
within the
CLASSPATH
.
java -DTxojEnvironmentBean.lockStoreDir=/var/tmp/LockStore myprogram
java -classpath $CLASSPATH:/var/tmp/LockStore myprogram
If neither of these approaches is taken, then the default location will be at the same level
as the
etc
directory of the installation.
The concurrency controller is implemented by the class
LockManager
, which provides
sensible default behavior, while allowing the programmer to override it if deemed necessary by the particular
semantics of the class being programmed. The primary programmer interface to the concurrency controller is via
the
setlock
operation. By default, the ArjunaCore runtime system enforces strict two-phase
locking following a multiple reader, single writer policy on a per object basis. Lock acquisition is under
programmer control, since just as
StateManager
cannot determine if an operation modifies
an object,
LockManager
cannot determine if an operation requires a read or write
lock. Lock release, however, is normally under control of the system and requires no further intervention by the
programmer. This ensures that the two-phase property can be correctly maintained.
The
LockManager
class is primarily responsible for managing requests to set a lock on an
object or to release a lock as appropriate. However, since it is derived from
StateManager
, it can also control when some of the inherited facilities are invoked. For
example, if a request to set a write lock is granted, then
LockManager
invokes modified
directly assuming that the setting of a write lock implies that the invoking operation must be about to modify
the object. This may in turn cause recovery information to be saved if the object is recoverable. In a similar
fashion, successful lock acquisition causes activate to be invoked.
Therefore,
LockManager
is directly responsible for activating and deactivating persistent
objects, as well as registering
Resources
for managing concurrency control. By driving
the
StateManager
class, it is also responsible for registering
Resources
for persistent or recoverable state manipulation and object recovery. The
application programmer simply sets appropriate locks, starts and ends transactions, and extends the
save_state
and
restore_state
methods of
StateManager
.
Example 1.14.
LockResult
public class LockResult
{
    public static final int GRANTED;
    public static final int REFUSED;
    public static final int RELEASED;
};

public class ConflictType
{
    public static final int CONFLICT;
    public static final int COMPATIBLE;
    public static final int PRESENT;
};

public abstract class LockManager extends StateManager
{
    public static final int defaultTimeout;
    public static final int defaultRetry;
    public static final int waitTotalTimeout;

    public synchronized int setlock (Lock l);
    public synchronized int setlock (Lock l, int retry);
    public synchronized int setlock (Lock l, int retry, int sleepTime);
    public synchronized boolean releaselock (Uid uid);

    /* abstract methods inherited from StateManager */

    public boolean restore_state (InputObjectState os, int ObjectType);
    public boolean save_state (OutputObjectState os, int ObjectType);
    public String type ();

    protected LockManager ();
    protected LockManager (int ObjectType, int objectModel);
    protected LockManager (Uid storeUid);
    protected LockManager (Uid storeUid, int ObjectType, int objectModel);

    . . .
};
The
setlock
operation must be parametrized with the type of lock required (READ or
WRITE), and the number of retries to acquire the lock before giving up. If a lock conflict occurs, one of the
following scenarios will take place:
If the retry value is equal to
LockManager.waitTotalTimeout
, then the thread which called
setlock
will be blocked until the lock is released, or the total timeout specified
has elapsed, in which case
REFUSED
will be returned.
If the lock cannot be obtained initially then
LockManager
will try for the specified
number of retries, waiting for the specified timeout value between each failed attempt. The default is 100
attempts, each attempt separated by a 0.25 second delay. The time between retries is specified in
milliseconds.
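The retry behaviour just described can be sketched as a standalone loop. The class and method names below are invented, and the real `LockManager` implementation differs; the sketch only illustrates the retry-then-sleep contract: `retry + 1` attempts in total, with a pause between failed attempts.

```java
import java.util.function.IntPredicate;

// Standalone sketch of the setlock retry behaviour described above; the
// names here are hypothetical, not the real ArjunaCore API.
public class RetrySketch {
    public static final int GRANTED = 0;
    public static final int REFUSED = 1;

    // tryAcquire is consulted once per attempt, receiving the attempt number:
    // retry + 1 attempts in total, sleeping sleepTimeMs between failures.
    public static int setlock(IntPredicate tryAcquire, int retry, long sleepTimeMs) {
        for (int attempt = 0; attempt <= retry; attempt++) {
            if (tryAcquire.test(attempt))
                return GRANTED;

            if (attempt < retry) {
                try {
                    Thread.sleep(sleepTimeMs);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return REFUSED;
                }
            }
        }
        // Requests simply time out rather than being deadlock-detected.
        return REFUSED;
    }
}
```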
If a lock conflict occurs the current implementation simply times out lock requests, thereby
preventing
deadlocks, rather than providing a full deadlock detection scheme. If the requested lock is obtained, the
setlock
operation will return the value GRANTED, otherwise the value
REFUSED
is returned. It is the responsibility of the programmer to ensure that the
remainder of the code for an operation is only executed if a lock request is granted. Below are examples of
the use of the setlock operation.
Example 1.15.
setlock
Method Usage
res = setlock(new Lock(LockMode.WRITE), 10); // Will attempt to set a
                                             // write lock 11 times (10
                                             // retries) on the object
                                             // before giving up.

res = setlock(new Lock(LockMode.READ), 0);   // Will attempt to set a read
                                             // lock 1 time (no retries) on
                                             // the object before giving up.

res = setlock(new Lock(LockMode.WRITE));     // Will attempt to set a write
                                             // lock 101 times (default of
                                             // 100 retries) on the object
                                             // before giving up.
The concurrency control mechanism is integrated into the atomic action mechanism, thus ensuring that as
locks
are granted on an object appropriate information is registered with the currently running atomic action to
ensure that the locks are released at the correct time. This frees the programmer from the burden of explicitly
freeing any acquired locks if they were acquired within atomic actions. However, if locks are acquired on an
object outside of the scope of an atomic action, it is the programmer's responsibility to release the locks when
required, using the corresponding
releaselock
operation.
Unlike many other systems, locks in ArjunaCore are not special system types. Instead they are simply
instances of
other ArjunaCore objects (the class
Lock
which is also derived from
StateManager
so that locks may be made persistent if required and can also be named in a
simple fashion). Furthermore,
LockManager
deliberately has no knowledge of the semantics
of the actual policy by which lock requests are granted. Such information is maintained by the actual
Lock
class instances which provide operations (the
conflictsWith
operation) by which
LockManager
can determine if two locks conflict or not. This
separation is important in that it allows the programmer to derive new lock types from the basic
Lock
class and by providing appropriate definitions of the conflict operations enhanced
levels of concurrency may be possible.
Example 1.16.
LockMode
Class
public class LockMode
{
    public static final int READ;
    public static final int WRITE;
};

public class LockStatus
{
    public static final int LOCKFREE;
    public static final int LOCKHELD;
    public static final int LOCKRETAINED;
};

public class Lock extends StateManager
{
    public Lock (int lockMode);

    public boolean conflictsWith (Lock otherLock);
    public boolean modifiesObject ();

    public boolean restore_state (InputObjectState os, int ObjectType);
    public boolean save_state (OutputObjectState os, int ObjectType);
    public String type ();

    . . .
};
The
Lock
class provides a
modifiesObject
operation which
LockManager
uses to determine if granting this locking request requires a call on
modified. This operation is provided so that locking modes other than simple read and write can be
supported. The supplied
Lock
class supports the traditional multiple reader/single writer
policy.
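The rules the supplied Lock class encodes can be modelled in a few lines. This is a self-contained sketch of the multiple-reader/single-writer policy only, not the real ArjunaCore `Lock` class; a derived lock type would change `conflictsWith` to permit more concurrency.

```java
// Self-contained model of the compatibility rule that the supplied Lock
// class's conflictsWith operation encodes; not the real ArjunaCore class.
public class LockModel {
    public static final int READ = 0;
    public static final int WRITE = 1;

    private final int mode;

    public LockModel(int mode) {
        this.mode = mode;
    }

    // Two locks conflict unless both are read locks.
    public boolean conflictsWith(LockModel other) {
        return this.mode == WRITE || other.mode == WRITE;
    }

    // A write lock implies the holder intends to modify the object, which
    // is what prompts LockManager to invoke modified.
    public boolean modifiesObject() {
        return mode == WRITE;
    }
}
```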
Recall that ArjunaCore objects can be recoverable, recoverable and persistent, or neither. Additionally each
object
possesses a unique internal name. These attributes can only be set when that object is constructed. Thus
LockManager
provides two protected constructors for use by derived classes, each of which
fulfills a distinct purpose:
Protected Constructors Provided by
LockManager
LockManager ()
This constructor allows the creation of new objects, having no prior state.
LockManager
(
int
objectType
,
int
objectModel)
As above, this constructor allows the creation of new objects having no prior state.
The
objectType
parameter determines whether an object is simply recoverable (indicated by
RECOVERABLE
), recoverable and persistent (indicated by
ANDPERSISTENT
), or neither (indicated by
NEITHER
). If an object is
marked as being persistent then the state of the object will be stored in one of the object stores. The
objectModel parameter only has meaning if the object is
RECOVERABLE
. If the object model is
SINGLE
(the default behavior) then the recoverable state of the object is maintained
within the object itself, and has no external representation. Otherwise an in-memory (volatile) object
store is used to store the state of the object between atomic actions.
Constructors for new persistent objects should make use of atomic actions within themselves. This will ensure that the state of the object is automatically written to the object store either when the action in the constructor commits or, if an enclosing action exists, when the appropriate top-level action commits. Later examples in this chapter illustrate this point further.
LockManager
(
Uid
objUid
)
This constructor allows access to an existing persistent object, whose internal name is
given by the
objUid
parameter. Objects constructed using this operation will normally have their
prior state (identified by
objUid
) loaded from an object store automatically by the
system.
LockManager
(
Uid
objUid
,
int
objectModel
)
As above, this constructor allows access to an existing persistent object, whose internal
name is given by
the
objUid
parameter. Objects constructed using this operation will normally have their
prior state (identified by
objUid
) loaded from an object store automatically by the
system. If the object model is
SINGLE
(the default behavior), then the object will not
be reactivated at the start of each top-level transaction.
The finalizer of a programmer-defined class must invoke the inherited operation
terminate
to inform the state management mechanism that the object is about to be
destroyed. Otherwise, unpredictable results may occur.
Atomic actions (transactions) can be used by both application programmers and class developers. Thus entire operations (or parts of operations) can be made atomic as required by the semantics of a particular operation. This chapter will describe some of the more subtle issues involved with using transactions in general and ArjunaCore in particular.
In some cases it may be necessary to enlist participants that are not two-phase commit aware into a two-phase commit transaction. If there is only a single resource then there is no need for two-phase commit. However, if there are multiple resources in the transaction, the Last Resource Commit Optimization (LRCO) comes into play. It is possible for a single resource that is one-phase aware (i.e., can only commit or roll back, with no prepare), to be enlisted in a transaction with two-phase commit aware resources. This feature is implemented by logging the decision to commit after committing the one-phase aware participant: The coordinator asks each two-phase aware participant if they are able to prepare and if they all vote yes then the one-phase aware participant is asked to commit. If the one-phase aware participant commits successfully then the decision to commit is logged and then commit is called on each two-phase aware participant. A heuristic outcome will occur if the coordinator fails before logging its commit decision but after the one-phase participant has committed since each two-phase aware participant will eventually rollback (as required under presumed abort semantics). This strategy delays the logging of the decision to commit so that in failure scenarios we have avoided a write operation. But this choice does mean that there is no record in the system of the fact that a heuristic outcome has occurred.
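The ordering the paragraph above describes can be walked through in a standalone simulation. The coordinator and participant classes here are illustrative only, not Narayana's; the point is the sequence of events and the heuristic window between committing the one-phase participant and logging the decision.

```java
import java.util.ArrayList;
import java.util.List;

// Standalone walk-through of the LRCO ordering described above; the
// structure is illustrative, not the Narayana implementation.
public class LrcoSketch {
    public static List<String> commit(int twoPhaseCount, boolean allPrepareOk) {
        List<String> log = new ArrayList<>();

        // 1. Ask each two-phase aware participant to prepare.
        for (int i = 0; i < twoPhaseCount; i++) {
            if (!allPrepareOk) {
                log.add("rollback"); // any "no" vote aborts the transaction
                return log;
            }
            log.add("prepare:" + i);
        }

        // 2. All voted yes: commit the one-phase aware participant.
        log.add("commit:one-phase");

        // 3. Only now is the decision to commit logged. A coordinator crash
        //    between steps 2 and 3 is the heuristic window the text describes:
        //    the one-phase participant has committed, but the two-phase
        //    participants will eventually roll back under presumed abort.
        log.add("log-decision");

        // 4. Finally, commit each two-phase aware participant.
        for (int i = 0; i < twoPhaseCount; i++)
            log.add("commit:" + i);

        return log;
    }
}
```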
In order to utilize the LRCO, your participant must implement the
com.arjuna.ats.arjuna.coordinator.OnePhase
interface and be registered with the
transaction through the
BasicAction.add
operation. Since this operation expects instances
of
AbstractRecord
, you must create an instance of
com.arjuna.ats.arjuna.LastResourceRecord
and give your participant as the constructor
parameter.
Example 1.17.
Class
com.arjuna.ats.arjuna.LastResourceRecord
try
{
    boolean success = false;
    AtomicAction A = new AtomicAction();
    OnePhase opRes = new OnePhase(); // a class implementing the OnePhase interface

    System.out.println("Starting top-level action.");

    A.begin();

    A.add(new LastResourceRecord(opRes));
    A.add( "other participants" );

    A.commit();
}
In some situations the application thread may not want to be informed of heuristics during completion. However, it is possible some other component in the application, thread or admin may still want to be informed. Therefore, special participants can be registered with the transaction which are triggered during the Synchronization phase and given the true outcome of the transaction. We do not dictate a specific implementation for what these participants do with the information (e.g., OTS allows for the CORBA Notification Service to be used).
To use this capability, create classes derived from the HeuristicNotification class and define the heuristicOutcome method to use whatever mechanism makes sense for your application. Instances of this class should be registered with the transaction as Synchronizations.
There are no special constructs for nesting of transactions. If an action is begun while another action is running then it is automatically nested. This allows for a modular structure to applications, whereby objects can be implemented using atomic actions within their operations without the application programmer having to worry about the applications which use them, and whether or not the applications will use atomic actions as well. Thus, in some applications actions may be top-level, whereas in others they may be nested. Objects written in this way can then be shared between application programmers, and ArjunaCore will guarantee their consistency.
If a nested action is aborted, all of its work will be undone, although strict two-phase locking means that any locks it may have obtained will be retained until the top-level action commits or aborts. If a nested action commits then the work it has performed will only be committed by the system if the top-level action commits. If the top-level action aborts then all of the work will be undone.
The committing or aborting of a nested action does not automatically affect the outcome of the action within which it is nested. This is application dependent, and allows a programmer to structure atomic actions to contain faults, undo work, etc.
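The nesting rules above can be captured in a small conceptual model. This is not the Narayana API, just a sketch of the outcome semantics: work committed by a nested action stays provisional until the top-level action decides.

```java
import java.util.ArrayList;
import java.util.List;

// Conceptual model (not the Narayana API) of the rule described above:
// a nested action's committed work only becomes permanent if the
// top-level action itself commits.
public class NestedOutcome {
    private final List<String> provisional = new ArrayList<>();
    private final List<String> durable = new ArrayList<>();

    // Work committed by a nested action is merely provisional.
    public void nestedCommit(String work) {
        provisional.add(work);
    }

    // A nested abort undoes just that action's work.
    public void nestedAbort(String work) {
        provisional.remove(work);
    }

    // Top-level commit makes all surviving provisional work permanent.
    public void topLevelCommit() {
        durable.addAll(provisional);
        provisional.clear();
    }

    // Top-level abort discards everything, including nested commits.
    public void topLevelAbort() {
        provisional.clear();
    }

    public List<String> durableWork() {
        return durable;
    }
}
```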
By default, the Transaction Service executes the
commit
protocol of a top-level
transaction in a synchronous manner. All registered resources will be told to prepare in order by a single thread,
and then they will be told to commit or rollback. A similar comment
applies to the volatile phase of the protocol which provides a
synchronization mechanism that allows an interested party to be notified
before and after the transaction completes. This has several possible
disadvantages:
In the case of many registered synchronizations, the
beforeSynchronization
operation can
logically be invoked in parallel on each non-interposed
synchronization (and similarly for the interposed synchronizations).
The disadvantage is that if an “early” synchronization in the list of
registered synchronizations forces a rollback by throwing an unchecked
exception, possibly many beforeCompletion operations will have been
made needlessly.
In the case of many registered resources, the
prepare
operation can logically be
invoked in parallel on each resource. The disadvantage is that if an “early” resource in the list of
registered resources forces a rollback during
prepare
, possibly many prepare
operations will have been made needlessly.
In the case where heuristic reporting is not required by the application, the second phase of the commit protocol (including any afterCompletion synchronizations) can be done asynchronously, since its success or failure is not important to the outcome of the transaction.
Therefore, Narayana
provides runtime options to enable possible threading optimizations. By setting the
CoordinatorEnvironmentBean.asyncBeforeSynchronization
environment variable to
YES
, during the
beforeSynchronization
phase a separate thread
will be created for each synchronization registered with the transaction.
By setting the
CoordinatorEnvironmentBean.asyncPrepare
environment variable to
YES
, during the
prepare
phase a separate thread will be created for
each registered participant within the transaction. By setting
CoordinatorEnvironmentBean.asyncCommit
to
YES
, a separate thread will be
created to complete the second phase of the transaction provided knowledge about heuristics outcomes is not required.
By setting the
CoordinatorEnvironmentBean.asyncAfterSynchronization
environment variable to
YES
, during the
afterSynchronization
phase a separate thread
will be created for each synchronization registered with the transaction
provided knowledge about heuristics outcomes is not required.
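Collected in one place, the four settings described above might look as follows. This is a sketch only: the property names come from the text above, but how they are supplied (a properties file or `-D` system properties) depends on your Narayana release and should be checked against its configuration documentation.

```
CoordinatorEnvironmentBean.asyncBeforeSynchronization=YES
CoordinatorEnvironmentBean.asyncPrepare=YES
CoordinatorEnvironmentBean.asyncCommit=YES
CoordinatorEnvironmentBean.asyncAfterSynchronization=YES
```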
In addition to normal top-level and nested atomic actions, ArjunaCore also supports independent top-level actions, which can be used to relax strict serializability in a controlled manner. An independent top-level action can be executed from anywhere within another atomic action and behaves exactly like a normal top-level action. Its results are made permanent when it commits and will not be undone if any of the actions within which it was originally nested abort.
Top-level actions can be used within an application by declaring and using instances of the class
TopLevelTransaction
. They are used in exactly the same way as other transactions.
Exercise caution when writing the
save_state
and
restore_state
operations to ensure that no atomic actions are started, either explicitly in the operation or implicitly
through
use of some other operation. This restriction arises due to the fact that ArjunaCore may invoke
restore_state
as part of its commit processing resulting in the attempt to execute an
atomic action during the commit or abort phase of another action. This might violate the atomicity properties of
the action being committed or aborted and is thus discouraged.
Example 1.18.
If we consider the
Example 1.12, “
Array
Class
”
given previously, the
set
and
get
operations could be implemented as shown below.
This is a simplification of the code, ignoring error conditions and exceptions.
public boolean set (int index, int value)
{
    boolean result = false;
    AtomicAction A = new AtomicAction();

    A.begin();

    // We need to set a WRITE lock as we want to modify the state.

    if (setlock(new Lock(LockMode.WRITE), 0) == LockResult.GRANTED)
    {
        elements[index] = value;

        if ((value > 0) && (index > highestIndex))
            highestIndex = index;

        A.commit(true);

        result = true;
    }
    else
        A.rollback();

    return result;
}

public int get (int index) // assume -1 means error
{
    AtomicAction A = new AtomicAction();

    A.begin();

    // We only need a READ lock as the state is unchanged.

    if (setlock(new Lock(LockMode.READ), 0) == LockResult.GRANTED)
    {
        A.commit(true);

        return elements[index];
    }
    else
        A.rollback();

    return -1;
}
Java objects are deleted when the garbage collector determines that they are no longer required. Deleting an object that is currently under the control of a transaction must be approached with caution since if the object is being manipulated within a transaction its fate is effectively determined by the transaction. Therefore, regardless of the references to a transactional object maintained by an application, ArjunaCore will always retain its own references to ensure that the object is not garbage collected until after any transaction has terminated.
By default, transactions live until they are terminated by the application that created them or a failure occurs. However, it is possible to set a timeout (in seconds) on a per-transaction basis such that if the transaction has not terminated before the timeout expires it will be automatically rolled back.
In ArjunaCore, the timeout value is provided as a parameter to the
AtomicAction
constructor. If a value of
AtomicAction.NO_TIMEOUT
is provided (the default) then the
transaction will not be automatically timed out. Any other positive value is assumed to be the timeout for the
transaction (in seconds). A value of zero is taken to be a global default timeout, which can be provided by the
property
CoordinatorEnvironmentBean.defaultTimeout
, which has a default value of 60 seconds.
Default timeout values for other Narayana components, such as JTS, may be different and you should consult the relevant documentation to be sure.
When a top-level transaction is created with a non-zero timeout, it is subject to being rolled back if it has not completed within the specified number of seconds. Narayana uses a separate reaper thread which monitors all locally created transactions, and forces them to roll back if their timeouts elapse. If the transaction cannot be rolled back at that point, the reaper will force it into a rollback-only state so that it will eventually be rolled back.
By default this thread is dynamically scheduled to awake according to the timeout values for any
transactions
created, ensuring the most timely termination of transactions. It may alternatively be configured to awake at a
fixed interval, which can reduce overhead at the cost of less accurate rollback timing. For periodic operation,
change the
CoordinatorEnvironmentBean.txReaperMode
property from its default value of
DYNAMIC
to
PERIODIC
and set the interval between runs, in milliseconds,
using the property
CoordinatorEnvironmentBean.txReaperTimeout
. The default interval in
PERIODIC
mode is 120000 milliseconds.
In earlier versions the
PERIODIC
mode was known as
NORMAL
and was the
default behavior. The use of the configuration value
NORMAL
is deprecated and
PERIODIC
should be used instead if the old scheduling behavior is still required.
If a value of
0
is specified for the timeout of a top-level transaction, or no timeout is
specified, then Narayana
will not impose any timeout on the transaction, and the transaction will
be allowed to run indefinitely. This default timeout can be overridden by setting the
CoordinatorEnvironmentBean.defaultTimeout
property to the required timeout value (in seconds) when using ArjunaCore, ArjunaJTA, or ArjunaJTS.
As of JBoss Transaction Service 4.5, transaction timeouts have been unified across all transaction components and are controlled by ArjunaCore.
If you want to be informed when a transaction is rolled back or forced into a rollback-only mode by the
reaper,
you can provide a class that implements the
com.arjuna.ats.arjuna.coordinator.listener.ReaperMonitor
interface, defining its
rolledBack
and
markedRollbackOnly
methods. When registered
with the reaper via the
TransactionReaper.addListener
method, the reaper will invoke
one of these methods depending upon how it tries to terminate the transaction.
The reaper will not inform you if the transaction is terminated (committed or rolled back) outside of its control, such as by the application.
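A minimal listener might look like the following sketch. It assumes the two callbacks receive the transaction's Uid and that the reaper singleton is obtained via TransactionReaper.transactionReaper(); consult the Javadoc for the exact signatures.

```java
import com.arjuna.ats.arjuna.common.Uid;
import com.arjuna.ats.arjuna.coordinator.TransactionReaper;
import com.arjuna.ats.arjuna.coordinator.listener.ReaperMonitor;

public class LoggingReaperMonitor implements ReaperMonitor
{
    public void rolledBack (Uid txId)
    {
        // the reaper succeeded in rolling the transaction back
        System.err.println("Reaper rolled back transaction " + txId);
    }

    public void markedRollbackOnly (Uid txId)
    {
        // the reaper could not roll back yet, so it marked the
        // transaction rollback-only for later termination
        System.err.println("Reaper marked transaction " + txId + " rollback-only");
    }
}

// registration, e.g. during application start-up:
// TransactionReaper.transactionReaper().addListener(new LoggingReaperMonitor());
```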
Examples throughout this manual use transactions in the implementation of constructors for new persistent objects. This is deliberate because it guarantees correct propagation of the state of the object to the object store. The state of a modified persistent object is only written to the object store when the top-level transaction commits. Thus, if the constructor transaction is top-level and it commits, the newly-created object is written to the store and becomes available immediately. If, however, the constructor transaction commits but is nested because another transaction that was started prior to object creation is running, the state is written only if all of the parent transactions commit.
On the other hand, if the constructor does not use transactions, inconsistencies in the system can arise. For example, if no transaction is active when the object is created, its state is not saved to the store until the next time the object is modified under the control of some transaction.
Example 1.19. Nested Transactions In Constructors
AtomicAction A = new AtomicAction();
Object obj1;
Object obj2;
obj1 = new Object(); // create new object
obj2 = new Object("old"); // existing object
A.begin(0);
obj2.remember(obj1.get_uid()); // obj2 now contains reference to obj1
A.commit(true); // obj2 saved but obj1 is not
The two objects are created outside of the control of the top-level action
A
.
obj1
is a new object.
obj2
is an
existing object. When the
remember
operation of
obj2
is
invoked, the object will be activated and the
Uid
of
obj1
remembered. Since this action commits, the persistent state of
obj2
may now contain the
Uid
of
obj1
. However, the state of
obj1
itself
has not been saved since it has not been manipulated under the control of any action. In fact, unless it is
modified under the control of an action later in the application, it will never be saved. If, however, the
constructor had used an atomic action, the state of
obj1
would have automatically been
saved at the time it was constructed and this inconsistency could not arise.
ArjunaCore may invoke the user-defined
save_state
operation of an object at any time during
the lifetime of an object, including during the execution of the body of the object’s constructor. This is
particularly likely if the constructor uses atomic actions. It is important, therefore, that all of the variables
saved by
save_state
are correctly initialized. Exercise caution when writing the
save_state
and
restore_state
operations, to ensure that no
transactions are started, either explicitly in the operation, or implicitly through use of some other
operation. The reason for this restriction is that ArjunaCore may invoke
restore_state
as
part of its commit processing. This would result in the attempt to execute an atomic transaction during the
commit or abort phase of another transaction. This might violate the atomicity properties of the transaction
being committed or aborted, and is thus discouraged. In order to support crash recovery for persistent objects,
all
save_state
and
restore_state
methods of user objects must
call
super.save_state
and
super.restore_state
.
All of the basic types of Java (
int
,
long
, etc.) can be saved and restored from an
InputObjectState
or
OutputObjectState
instance by using the
pack
and
unpack
routines provided by
InputObjectState
and
OutputObjectState
. However, packing and
unpacking objects must be handled differently, because packing objects introduces the additional
problem of aliasing: two different object references may point at the same item. For
example:
Example 1.20. Aliasing
public class Test
{
public Test (String s);
...
private String s1;
private String s2;
};
public Test (String s)
{
s1 = s;
s2 = s;
}
Here, both
s1
and
s2
point at the same string. A naive implementation of
save_state
might copy the string twice. From a
save_state
perspective this is simply inefficient. However,
restore_state
would unpack the two
strings into different areas of memory, destroying the original aliasing information. The current version of
ArjunaCore packs and unpacks separate object references.
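The aliasing in Example 1.20 can be observed directly with Java's reference equality. The following self-contained sketch (class and method names are illustrative only, not part of ArjunaCore) shows that both fields refer to the same object:

```java
public class AliasDemo
{
    private String s1;
    private String s2;

    public AliasDemo (String s)
    {
        s1 = s;
        s2 = s; // both fields now alias the same String instance
    }

    // true when s1 and s2 refer to the same object, not merely equal contents
    public boolean aliased ()
    {
        return s1 == s2;
    }

    public static void main (String[] args)
    {
        AliasDemo t = new AliasDemo(new String("shared"));
        System.out.println(t.aliased()); // prints "true"
    }
}
```

A naive save_state would pack the string twice, and the corresponding restore_state would rebuild two distinct objects, so aliased() would then return false.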
The examples throughout this manual derive user classes from
LockManager
. There are two
important reasons for this.
Firstly, and most importantly, the serializability constraints of atomic actions require it.
Secondly, it reduces the need for programmer intervention.
However, if you only require access to ArjunaCore's persistence and recovery mechanisms, direct derivation of a
user
class from
StateManager
is possible.
Classes derived directly from
StateManager
must make use of its state management mechanisms
explicitly. These interactions are normally undertaken by
LockManager
. From a programmer's
point of view this amounts to making appropriate use of the operations
activate
,
deactivate
, and
modified
, since
StateManager
's constructors are effectively identical to those of
LockManager
.
Example 1.21.
activate
boolean activate ()
boolean activate (String storeRoot)
Activate loads an object from the object store. The object’s UID must already have been set via the constructor and the object must exist in the store. If the object is successfully read then restore_state is called to build the object in memory. Activate is idempotent so that once an object has been activated further calls are ignored. The parameter represents the root name of the object store to search for the object. A value of null means use the default store.
Example 1.22.
deactivate
boolean deactivate ()
boolean deactivate (String storeRoot)
The inverse of activate. First calls save_state to build the compacted image of the object which is then saved in the object store. Objects are only saved if they have been modified since they were activated. The parameter represents the root name of the object store into which the object should be saved. A value of null means use the default store.
Example 1.23.
modified
void modified ()
Must be called prior to modifying the object in memory. If it is not called, the object will not be
saved in the
object store by
deactivate
.
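Put together, a method of a class derived directly from StateManager might use these three operations along the following lines. This is a sketch only (the field and method names are illustrative), with error handling omitted:

```java
// inside a class derived directly from StateManager
private int counter;

public boolean increment ()
{
    if (!activate())     // load the object's state from the object store
        return false;

    modified();          // must be called before the state is changed
    counter++;           // ... update instance state ...

    return deactivate(); // save_state builds the new image and stores it
}
```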
Development Phases of an ArjunaCore Application
First, develop new classes with characteristics like persistence, recoverability, and concurrency control.
Then develop the applications that make use of the new classes of objects.
Although these two phases may be performed in parallel and by a single person, this guide refers to the first
step
as the job of the class developer, and the second as the job of the applications developer.
The
class developer defines appropriate
save_state
and
restore_state
operations for the class, sets appropriate locks in operations, and invokes the appropriate ArjunaCore class
constructors. The applications developer defines the general structure of the application, particularly with
regard
to the use of atomic actions.
This chapter outlines a simple application: a FIFO queue of integer values, implemented as a single object that stores its elements in an array. This example is used throughout the rest of this manual to illustrate the various mechanisms provided by ArjunaCore. Although this is an unrealistic example application, it illustrates all of the ArjunaCore modifications without requiring in-depth knowledge of the application code.
The application is assumed not to be distributed. To allow for distribution, context information must be propagated either implicitly or explicitly.
The queue is a traditional FIFO queue, where elements are added to the front and removed from the back. The
operations provided by the queue class allow the values to be placed on to the queue (
enqueue
) and to be removed
from it (
dequeue
), and values of elements in the queue can also be changed or inspected. In this
example implementation, an array represents the queue. A limit of
QUEUE_SIZE
elements has been imposed
for this example.
Example 1.24.
Java interface definition of class
queue
public class TransactionalQueue extends LockManager
{
public TransactionalQueue (Uid uid);
public TransactionalQueue ();
public void finalize ();
public void enqueue (int v) throws OverFlow, UnderFlow,
QueueError, Conflict;
public int dequeue () throws OverFlow, UnderFlow,
QueueError, Conflict;
public int queueSize ();
public int inspectValue (int i) throws OverFlow,
UnderFlow, QueueError, Conflict;
public void setValue (int i, int v) throws OverFlow,
UnderFlow, QueueError, Conflict;
public boolean save_state (OutputObjectState os, int ObjectType);
public boolean restore_state (InputObjectState os, int ObjectType);
public String type ();
public static final int QUEUE_SIZE = 40; // maximum size of the queue
private int[] elements = new int[QUEUE_SIZE];
private int numberOfElements;
};
Using an existing persistent object requires the use of a special constructor
that takes the Uid of the persistent object, as shown in
Example 1.25, “
Class
TransactionalQueue
”
.
Example 1.25.
Class
TransactionalQueue
public TransactionalQueue (Uid u)
{
super(u);
numberOfElements = 0;
}
The constructor that creates a new persistent object is similar:
public TransactionalQueue ()
{
super(ObjectType.ANDPERSISTENT);
numberOfElements = 0;
try
{
AtomicAction A = new AtomicAction();
A.begin(0); // Try to start atomic action
// Try to set lock
if (setlock(new Lock(LockMode.WRITE), 0) == LockResult.GRANTED)
{
A.commit(true); // Commit
}
else // Lock refused so abort the atomic action
A.rollback();
}
catch (Exception e)
{
System.err.println("Object construction error: " + e);
System.exit(1);
}
}
The use of an atomic action within the constructor for a new object follows the guidelines outlined earlier and ensures that the object’s state will be written to the object store when the appropriate top level atomic action commits (which will either be the action A or some enclosing action active when the TransactionalQueue was constructed). The use of atomic actions in a constructor is simple: an action must first be declared and its begin operation invoked; the operation must then set an appropriate lock on the object (in this case a WRITE lock must be acquired), then the main body of the constructor is executed. If this is successful the atomic action can be committed, otherwise it is aborted.
The finalizer of the
queue
class is only required to call the
terminate
and
finalize
operations of
LockManager
.
public void finalize ()
{
super.terminate();
super.finalize();
}
Example 1.26.
Method
save_state
public boolean save_state (OutputObjectState os, int ObjectType)
{
if (!super.save_state(os, ObjectType))
return false;
try
{
os.packInt(numberOfElements);
if (numberOfElements > 0)
{
for (int i = 0; i < numberOfElements; i++)
os.packInt(elements[i]);
}
return true;
}
catch (IOException e)
{
return false;
}
}
Example 1.27.
Method
restore_state
public boolean restore_state (InputObjectState os, int ObjectType)
{
if (!super.restore_state(os, ObjectType))
return false;
try
{
numberOfElements = os.unpackInt();
if (numberOfElements > 0)
{
for (int i = 0; i < numberOfElements; i++)
elements[i] = os.unpackInt();
}
return true;
}
catch (IOException e)
{
return false;
}
}
Example 1.28.
Method
type
Because the TransactionalQueue class is derived from the LockManager class, the type operation should be:
public String type ()
{
return "/StateManager/LockManager/TransactionalQueue";
}
If the operations of the
queue
class are to be coded as atomic actions, then the enqueue
operation might have the structure given below. The
dequeue
operation is similarly
structured, but is not implemented here.
Example 1.29.
Method
enqueue
public void enqueue (int v) throws OverFlow, UnderFlow, QueueError, Conflict
{
AtomicAction A = new AtomicAction();
boolean res = false;
try
{
A.begin(0);
if (setlock(new Lock(LockMode.WRITE), 0) == LockResult.GRANTED)
{
if (numberOfElements < QUEUE_SIZE)
{
elements[numberOfElements] = v;
numberOfElements++;
res = true;
}
else
{
A.rollback();
throw new OverFlow();
}
}
if (res)
A.commit(true);
else
{
A.rollback();
throw new Conflict();
}
}
catch (Exception e1)
{
throw new QueueError();
}
}
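The dequeue operation, which the text leaves unimplemented, might be sketched along the same lines as enqueue: remove the element at the front of the array and shift the remainder down. This is not part of the original example, only a possible shape for it:

```java
public int dequeue () throws OverFlow, UnderFlow, QueueError, Conflict
{
    AtomicAction A = new AtomicAction();
    boolean res = false;
    int val = -1;

    try
    {
        A.begin(0);

        if (setlock(new Lock(LockMode.WRITE), 0) == LockResult.GRANTED)
        {
            if (numberOfElements > 0)
            {
                val = elements[0];

                // shift the remaining elements towards the front
                for (int i = 1; i < numberOfElements; i++)
                    elements[i - 1] = elements[i];

                numberOfElements--;
                res = true;
            }
            else
            {
                A.rollback();
                throw new UnderFlow();
            }
        }

        if (res)
            A.commit(true);
        else
        {
            A.rollback();
            throw new Conflict();
        }
    }
    catch (Exception e1)
    {
        throw new QueueError();
    }

    return val;
}
```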
Example 1.30.
Method
queueSize
public int queueSize () throws QueueError, Conflict
{
AtomicAction A = new AtomicAction();
int size = -1;
try
{
A.begin(0);
if (setlock(new Lock(LockMode.READ), 0) == LockResult.GRANTED)
size = numberOfElements;
if (size != -1)
A.commit(true);
else
{
A.rollback();
throw new Conflict();
}
}
catch (Exception e1)
{
throw new QueueError();
}
return size;
}
The
setValue
method is not implemented here, but is similar in structure to
Example 1.31, “
Method
inspectValue
”
.
Example 1.31.
Method
inspectValue
public int inspectValue (int index) throws UnderFlow,
OverFlow, Conflict, QueueError
{
AtomicAction A = new AtomicAction();
boolean res = false;
int val = -1;
try
{
A.begin();
if (setlock(new Lock(LockMode.READ), 0) == LockResult.GRANTED)
{
if (index < 0)
{
A.rollback();
throw new UnderFlow();
}
else
{
// array is 0 - numberOfElements -1
if (index > numberOfElements -1)
{
A.rollback();
throw new OverFlow();
}
else
{
val = elements[index];
res = true;
}
}
}
if (res)
A.commit(true);
else
{
A.rollback();
throw new Conflict();
}
}
catch (Exception e1)
{
throw new QueueError();
}
return val;
}
Rather than show all of the code for the client, this example concentrates on a representative portion. Before invoking operations on the object, the client must first bind to the object. In the local case this simply requires the client to create an instance of the object.
Example 1.32. Binding to the Object
public static void main (String[] args)
{
TransactionalQueue myQueue = new TransactionalQueue();
Before invoking one of the queue’s operations, the client starts a transaction. The queueSize operation is shown below:
AtomicAction A = new AtomicAction();
int size = 0;
try
{
A.begin(0);
try
{
size = myQueue.queueSize();
}
catch (Exception e)
{
}
if (size >= 0)
{
A.commit(true);
System.out.println("Size of queue: " + size);
}
else
A.rollback();
}
catch (Exception e)
{
System.err.println("Caught unexpected exception!");
}
}
Since the
queue
object is persistent, the state of the object survives any failures of
the node on which it is located. The state of the object that survives is the state produced by the last top-level
committed atomic action performed on the object. If an application intends to perform two
enqueue
operations atomically, for example, you can nest the
enqueue
operations in another enclosing atomic action. In addition, concurrent operations
on such a persistent object are serialized, preventing inconsistencies in the state of the object.
However, since the elements of the
queue
objects are not individually concurrency
controlled, certain combinations of concurrent operation invocations are executed serially, even though logically
they could be executed concurrently. An example of this is modifying the states of two different elements in the
queue. The platform Development Guide addresses some of these issues.
In this chapter we cover information on failure recovery that is specific to ArjunaCore, Transactional Objects for Java (TXOJ), or the use of Narayana outside the scope of a supported application server.
In some situations it may be required to embed the RecoveryManager in the same process as the transaction service. In this case you can create an instance of the RecoveryManager through the manager method on com.arjuna.ats.arjuna.recovery.RecoveryManager. A RecoveryManager can be created in one of two modes, selected via the parameter to the manager method:
i. INDIRECT_MANAGEMENT: the manager runs periodically but can also be instructed to run when desired via the scan operation or through the RecoveryDriver class to be described below.
ii. DIRECT_MANAGEMENT: the manager does not run periodically and must be driven directly via the scan operation or RecoveryDriver.
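An embedded manager might therefore be created and driven like this (a sketch; the manager method, the scan operation, and the two mode constants are as described above):

```java
import com.arjuna.ats.arjuna.recovery.RecoveryManager;

// create a recovery manager that only runs when asked
RecoveryManager manager = RecoveryManager.manager(RecoveryManager.DIRECT_MANAGEMENT);

manager.scan();      // perform one full recovery pass now

// ... later, on shutdown:
manager.terminate();
```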
By default, the recovery manager listens on the first available port on a given machine. If you wish to control the port number that it uses, you can specify this using the com.arjuna.ats.arjuna.recovery.recoveryPort attribute.
Narayana provides a set of recovery modules that are responsible for managing recovery according to the nature of the participant and its position in a transactional tree. The classes provided over and above the ones covered elsewhere (all of which implement the RecoveryModule interface) are:
com.arjuna.ats.internal.txoj.recovery.TORecoveryModule
Recovers Transactional Objects for Java.
The failure recovery subsystem of Narayana will ensure that results of a transaction are applied consistently to all resources affected by the transaction, even if any of the application processes or the machine hosting them crash or lose network connectivity. In the case of machine (system) crash or network failure, the recovery will not take place until the system or network are restored, but the original application does not need to be restarted – recovery responsibility is delegated to the Recovery Manager process (see below). Recovery after failure requires that information about the transaction and the resources involved survives the failure and is accessible afterward: this information is held in the ActionStore, which is part of the ObjectStore.
If the ObjectStore is destroyed or modified, recovery may not be possible.
Until the recovery procedures are complete, resources affected by a transaction that was in progress at the time of the failure may be inaccessible. For database resources, this may be reported as tables or rows held by “in-doubt transactions”. For TransactionalObjects for Java resources, an attempt to activate the Transactional Object (as when trying to get a lock) will fail.
The failure recovery subsystem of Narayana requires that the stand-alone Recovery Manager process be running for each ObjectStore (typically one for each node on the network that is running Narayana applications). The RecoveryManager file is located in the package com.arjuna.ats.arjuna.recovery.RecoveryManager. To start the Recovery Manager issue the following command:
java com.arjuna.ats.arjuna.recovery.RecoveryManager
If the -test flag is used with the Recovery Manager then it will display a “Ready” message when initialised, i.e.,
java com.arjuna.ats.arjuna.recovery.RecoveryManager -test
The RecoveryManager reads the properties defined in the arjuna.properties file and then also reads the property file RecoveryManager.properties, from the same directory as it found the arjuna properties file. An entry for a property in the RecoveryManager properties file will override an entry for the same property in the main TransactionService properties file. Most of the entries are specific to the Recovery Manager.
A default version of RecoveryManager.properties is supplied with the distribution – this can be used without modification, except possibly the debug tracing fields (see below, Output). The rest of this section discusses the issues relevant in setting the properties to other values (in the order of their appearance in the default version of the file).
The RecoveryManager scans the ObjectStore and other locations of information, looking for transactions and resources that require, or may require recovery. The scans and recovery processing are performed by recovery modules, (instances of classes that implement the com.arjuna.ats.arjuna.recovery.RecoveryModule interface), each with responsibility for a particular category of transaction or resource. The set of recovery modules used are dynamically loaded, using properties found in the RecoveryManager property file.
The interface has two methods: periodicWorkFirstPass and periodicWorkSecondPass. At an interval (defined by the property com.arjuna.ats.arjuna.recovery.periodicRecoveryPeriod), the RecoveryManager calls the first pass method on each module, waits for a brief period (defined by the property com.arjuna.ats.arjuna.recovery.recoveryBackoffPeriod), and then calls the second pass of each module. Typically, in the first pass, the module scans (e.g. the relevant part of the ObjectStore) to find transactions or resources that are in-doubt (i.e. are part way through the commitment process). On the second pass, if any of the same items are still in-doubt, it is possible the original application process has crashed and the item is a candidate for recovery.
An attempt, by the RecoveryManager, to recover a transaction that is still progressing in the original process(es) is likely to break the consistency. Accordingly, the recovery modules use a mechanism (implemented in the com.arjuna.ats.arjuna.recovery.TransactionStatusManager package) to check to see if the original process is still alive, and if the transaction is still in progress. The RecoveryManager only proceeds with recovery if the original process has gone, or, if still alive, the transaction is completed. (If a server process or machine crashes, but the transaction-initiating process survives, the transaction will complete, usually generating a warning. Recovery of such a transaction is the RecoveryManager’s responsibility).
It is clearly important to set the interval periods appropriately. The total iteration time will be the sum of the periodicRecoveryPeriod, the recoveryBackoffPeriod, and the length of time it takes to scan the stores and to attempt recovery of any in-doubt transactions found, for all the recovery modules. The recovery attempt time may include connection timeouts while trying to communicate with processes or machines that have crashed or are inaccessible (which is why there are mechanisms in the recovery system to avoid trying to recover the same transaction forever). The total iteration time will affect how long a resource remains inaccessible after a failure, so periodicRecoveryPeriod should be set accordingly (the default is 120 seconds). The recoveryBackoffPeriod can be comparatively short (the default is 10 seconds); its purpose is mainly to reduce the number of transactions that are candidates for recovery and which thus require a call to the original process to see if they are still in progress.
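With the defaults given above, a full iteration therefore takes at least 120 + 10 = 130 seconds, plus scan and recovery time. Assuming the RecoveryEnvironmentBean property naming used elsewhere in this chapter applies to these two settings as well, shorter periods might be configured as follows (values in seconds):

```xml
<entry key="RecoveryEnvironmentBean.periodicRecoveryPeriod">60</entry>
<entry key="RecoveryEnvironmentBean.recoveryBackoffPeriod">5</entry>
```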
In previous versions of Narayana there was no contact mechanism, and the backoff period had to be long enough to avoid catching transactions in flight at all. From 3.0, there is no such risk.
Two recovery modules (implementations of the com.arjuna.ats.arjuna.recovery.RecoveryModule interface) are supplied with Narayana, supporting various aspects of transaction recovery including JDBC recovery. It is possible for advanced users to create their own recovery modules and register them with the Recovery Manager. The recovery modules are registered with the RecoveryManager using RecoveryEnvironmentBean.recoveryExtensions. These will be invoked on each pass of the periodic recovery in the sort-order of the property names – it is thus possible to predict the ordering (but note that a failure in an application process might occur while a periodic recovery pass is in progress). The default Recovery Extension settings are:
Example 1.33. Recovery Environment Bean XML
<entry key="RecoveryEnvironmentBean.recoveryExtensions">
com.arjuna.ats.internal.arjuna.recovery.AtomicActionRecoveryModule
com.arjuna.ats.internal.txoj.recovery.TORecoveryModule
</entry>
The operation of the recovery subsystem will cause some entries to be made in the ObjectStore that will not be removed in normal progress. The RecoveryManager has a facility for scanning for these and removing items that are very old. Scans and removals are performed by implementations of the com.arjuna.ats.arjuna.recovery.ExpiryScanner interface. Implementations of this interface are loaded by giving the class names as the value of the property RecoveryEnvironmentBean.expiryScanners. The RecoveryManager calls the scan() method on each loaded ExpiryScanner implementation at an interval determined by the property RecoveryEnvironmentBean.expiryScanInterval. This value is given in hours; the default is 12. An expiryScanInterval value of zero will suppress any expiry scanning. If the value supplied is positive, the first scan is performed when the RecoveryManager starts; if the value is negative, the first scan is delayed until after the first interval (using the absolute value).
The kinds of item that are scanned for expiry are:
TransactionStatusManager items: one of these is created by every application process that uses Narayana – they contain the information that allows the RecoveryManager to determine if the process that initiated the transaction is still alive, and what the transaction status is. The expiry time for these is set by the property com.arjuna.ats.arjuna.recovery.transactionStatusManagerExpiryTime (in hours – default is 12, zero means never expire). The expiry time should be greater than the lifetime of any single Narayana-using process.
The Expiry Scanner properties for these are:
Example 1.34. Recovery Environment Bean XML
<entry key="RecoveryEnvironmentBean.expiryScanners">
com.arjuna.ats.internal.arjuna.recovery.ExpiredTransactionStatusManagerScanner
</entry>
To illustrate the behavior of a recovery module, the following pseudocode describes the basic algorithm used for AtomicAction transactions and Transactional Objects for Java.
Example 1.35. AtomicAction pseudo code
First Pass:
< create a collection containing all transactions currently in the log >
Second Pass:
while < there are transactions in the collection >
do
if < the intention list for the transaction still exists >
then
< create new transaction cached item >
< obtain the status of the transaction >
if < the transaction is not in progress (ie phase 2 has finished ) >
then
< replay phase two of the commit protocol >
endif.
endif.
end while.
Example 1.36. Transactional Object pseudo code
First Pass:
< Create a hash table for uncommitted transactional objects. >
< Read in all transactional objects within the object store. >
while < there are transactional objects >
do
if < the transactional object has an Uncommitted status in the object store >
then
< add the transactional object to the hash table for uncommitted transactional objects >
end if.
end while.
Second Pass:
while < there are transactions in the hash table for uncommitted transactional objects >
do
if < the transaction is still in the Uncommitted state >
then
if < the transaction is not in the Transaction Cache >
then
< check the status of the transaction with the original application process >
if < the status is Rolled Back or the application process is inactive >
< rollback the transaction by removing the Uncommitted status from the Object Store >
endif.
endif.
endif.
end while.
In order to recover from failure, we have seen that the Recovery Manager contacts recovery modules by periodically invoking the periodicWorkFirstPass and periodicWorkSecondPass methods. Each recovery module is then able to manage recovery according to the type of resources that need to be recovered. Narayana ships with a set of recovery modules (TORecoveryModule, XARecoveryModule, and so on), but it is also possible for users to define their own recovery modules to fit their applications. The following basic example illustrates the steps needed to build such a recovery module.
This basic example does not aim to present a complete process to recover from failure, but mainly to illustrate the way to implement a recovery module.
The application used here consists of creating an atomic transaction, registering a participant within it, and finally terminating it either by commit or abort. A set of arguments is provided:
whether to commit or abort the transaction,
whether to generate a crash during the commitment process.
The code of the main class that controls the application is given below.
Example 1.37. TestRecoveryModule.java
package com.arjuna.demo.recoverymodule;
import com.arjuna.ats.arjuna.AtomicAction;
import com.arjuna.ats.arjuna.coordinator.*;
public class TestRecoveryModule {
public static void main(String args[]) {
try {
AtomicAction tx = new AtomicAction();
tx.begin(); // Top level begin
// enlist the participant
tx.add(SimpleRecord.create());
System.out.println("About to complete the transaction ");
for (int i = 0; i < args.length; i++) {
if ((args[i].compareTo("-commit") == 0))
_commit = true;
if ((args[i].compareTo("-rollback") == 0))
_commit = false;
if ((args[i].compareTo("-crash") == 0))
_crash = true;
}
if (_commit)
tx.commit(); // Top level commit
else
tx.abort(); // Top level rollback
} catch (Exception e) {
e.printStackTrace();
}
}
protected static boolean _commit = true;
protected static boolean _crash = false;
}
The registered participant has the following behavior:
During the prepare phase, it writes a simple message, "I'm prepared", to a well-known file on disk.
During the commit phase, it writes another message, "I'm Committed", to the same file used during prepare.
If it receives an abort message, it removes the file used for prepare from the disk, if it exists.
If a crash has been requested for the test, it crashes during the commit phase, and the file remains with the message "I'm prepared".
The main portion of the code illustrating this behavior is shown below.
Note that the location of the file, given in the variable filename, can be changed.
Example 1.38. SimpleRecord.java
package com.arjuna.demo.recoverymodule;
import com.arjuna.ats.arjuna.coordinator.*;
import java.io.File;
public class SimpleRecord extends AbstractRecord {
public String filename = "c:/tmp/RecordState";
public SimpleRecord() {
System.out.println("Creating new resource");
}
public static AbstractRecord create() {
return new SimpleRecord();
}
public int topLevelAbort() {
try {
File fd = new File(filename);
if (fd.exists()) {
if (fd.delete())
System.out.println("File Deleted");
}
} catch (Exception ex) {
// …
}
return TwoPhaseOutcome.FINISH_OK;
}
public int topLevelCommit() {
if (TestRecoveryModule._crash)
System.exit(0);
try {
java.io.FileOutputStream file = new java.io.FileOutputStream(
filename);
java.io.PrintStream pfile = new java.io.PrintStream(
file);
pfile.println("I'm Committed");
file.close();
} catch (java.io.IOException ex) {
// ...
}
return TwoPhaseOutcome.FINISH_OK;
}
public int topLevelPrepare() {
try {
java.io.FileOutputStream file = new java.io.FileOutputStream(
filename);
java.io.PrintStream pfile = new java.io.PrintStream(
file);
pfile.println("I'm prepared");
file.close();
} catch (java.io.IOException ex) {
// ...
}
return TwoPhaseOutcome.PREPARE_OK;
}
// …
}
The role of the recovery module in such an application consists of reading the content of the file used to store the status of the participant, determining that status, and printing a message indicating whether or not a recovery action is needed.
Example 1.39. SimpleRecoveryModule.java
package com.arjuna.demo.recoverymodule;
import com.arjuna.ats.arjuna.recovery.RecoveryModule;
public class SimpleRecoveryModule implements RecoveryModule {
public String filename = "c:/tmp/RecordState";
public SimpleRecoveryModule() {
System.out
.println("The SimpleRecoveryModule is loaded");
}
public void periodicWorkFirstPass() {
try {
java.io.FileInputStream file = new java.io.FileInputStream(
filename);
java.io.InputStreamReader input = new java.io.InputStreamReader(
file);
java.io.BufferedReader reader = new java.io.BufferedReader(
input);
String stringState = reader.readLine();
if (stringState.compareTo("I'm prepared") == 0)
System.out
.println("The transaction is in the prepared state");
file.close();
} catch (java.io.IOException ex) {
System.out.println("Nothing found on the Disk");
}
}
public void periodicWorkSecondPass() {
try {
java.io.FileInputStream file = new java.io.FileInputStream(
filename);
java.io.InputStreamReader input = new java.io.InputStreamReader(
file);
java.io.BufferedReader reader = new java.io.BufferedReader(
input);
String stringState = reader.readLine();
if (stringState.compareTo("I'm prepared") == 0) {
System.out
.println("The record is still in the prepared state");
System.out.println("– Recovery is needed");
} else if (stringState
.compareTo("I'm Committed") == 0) {
System.out
.println("The transaction has completed and committed");
}
file.close();
} catch (java.io.IOException ex) {
System.out.println("Nothing found on the Disk");
System.out
.println("Either there was no transaction");
System.out.println("or it has been rolled back");
}
}
}
The recovery module should now be deployed so that it can be called by the Recovery Manager. To do so, we just need to add an entry in the config file for the extension:
Example 1.40. Recovery Environment Bean Recovery Extensions XML
<entry key="RecoveryEnvironmentBean.recoveryExtensions">
com.arjuna.demo.recoverymodule.SimpleRecoveryModule
</entry>
Once started, the Recovery Manager will automatically load the listed Recovery modules.
The source of the code can be retrieved under the trailmap directory of the Narayana installation.
As mentioned, the basic application presented above does not show the complete process of recovering from failure; it is presented only to describe how to build a recovery module. For the OTS protocol, let’s consider how a recovery module that manages recovery of OTS resources can be configured.
To manage recovery in case of failure, the OTS specification defines a recovery protocol. Transaction participants in an in-doubt status can use the RecoveryCoordinator to determine the status of the transaction. According to that transaction status, those participants can take the appropriate decision, either rolling back or committing. Asking the RecoveryCoordinator to determine the status consists of invoking the replay_completion operation on the RecoveryCoordinator.
For each OTS Resource in an in-doubt status, it is known which RecoveryCoordinator to invoke to determine the status of the transaction in which the Resource is involved: it is the RecoveryCoordinator returned during the Resource registration process. Retrieving such a RecoveryCoordinator per resource means that it has been stored in addition to the other information describing the resource.
A recovery module dedicated to recovering OTS Resources could behave as follows. When invoked by the Recovery Manager on the first pass, it retrieves from the disk the list of resources that are in an in-doubt status. During the second pass, if the resources retrieved in the first pass still remain on the disk, they are considered candidates for recovery. The Recovery Module then retrieves the RecoveryCoordinator associated with each candidate and invokes the replay_completion operation to determine the status of the transaction. According to the returned status, an appropriate action is taken (for instance, rolling back the resource if the status is aborted or inactive).
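The two-pass pattern just described can be sketched without any Narayana classes. Here the interface is a local stand-in for com.arjuna.ats.arjuna.recovery.RecoveryModule, and an in-memory set stands in for the ObjectStore scan; both are illustrative assumptions, not Narayana internals:

```java
import java.util.HashSet;
import java.util.Set;

// Sketch of the first-pass/second-pass protocol; an in-memory set stands in
// for the ObjectStore, and the interface below is a local stand-in for
// com.arjuna.ats.arjuna.recovery.RecoveryModule.
class TwoPassSketch implements RecoveryModuleShape {
    private final Set<String> store;
    private Set<String> firstPassCandidates = new HashSet<>();
    final Set<String> needRecovery = new HashSet<>();

    TwoPassSketch(Set<String> store) {
        this.store = store;
    }

    // First pass: remember every in-doubt item currently visible in the store.
    public void periodicWorkFirstPass() {
        firstPassCandidates = new HashSet<>(store);
    }

    // Second pass: anything still present after the back-off period is a
    // candidate for recovery; items that completed in between are skipped.
    public void periodicWorkSecondPass() {
        for (String id : firstPassCandidates) {
            if (store.contains(id)) {
                needRecovery.add(id);
            }
        }
    }

    public static void main(String[] args) {
        Set<String> store = new HashSet<>(Set.of("tx-1", "tx-2"));
        TwoPassSketch module = new TwoPassSketch(store);
        module.periodicWorkFirstPass();
        store.remove("tx-2"); // tx-2 completed between the two passes
        module.periodicWorkSecondPass();
        System.out.println(module.needRecovery); // prints [tx-1]
    }
}

interface RecoveryModuleShape {
    void periodicWorkFirstPass();

    void periodicWorkSecondPass();
}
```

Only items visible on both passes become recovery candidates, which is what shields the Recovery Manager from transactions that complete between the two passes.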
Apart from ensuring that the run-time system is executing normally, there is little continuous administration needed for the Narayana software. Refer to Important Points for Administrators for some specific concerns.
Important Points for Administrators
The present implementation of the Narayana system provides no security or protection for data. The objects stored in the Narayana object store are (typically) owned by the user who ran the application that created them. The Object Store and Object Manager facilities make no attempt to enforce even the limited form of protection that Unix/Windows provides. There is no checking of user or group IDs on access to objects for either reading or writing.
Persistent objects created in the Object Store never go away unless the StateManager.destroy method is invoked on the object or some application program explicitly deletes them. This means that the Object Store gradually accumulates garbage (especially during application development and testing phases). At present we have no automated garbage collection facility. Further, we have not addressed the problem of dangling references. That is, a persistent object, A, may have stored a Uid for another persistent object, B, in its passive representation on disk. There is nothing to prevent an application from deleting B even though A still contains a reference to it. When A is next activated and attempts to access B, a run-time error will occur.
There is presently no support for version control of objects or database reconfiguration in the event of class structure changes. This is a complex research area that we have not addressed. At present, if you change the definition of a class of persistent objects, you are entirely responsible for ensuring that existing instances of the object in the Object Store are converted to the new representation. The Narayana software can neither detect nor correct references to old object state by new operation versions or vice versa.
Object store management is critically important to the transaction service.
By default the transaction manager starts up in an active state such that new transactions can be created
immediately. If you wish to have more control over this, it is possible to set the
CoordinatorEnvironmentBean.startDisabled
configuration option to
YES
, in which case no transactions can be created until the transaction manager is enabled via a call to the method
TxControl.enable
.
It is possible to stop the creation of new transactions at any time by calling method
TxControl.disable
. Transactions that are currently executing will not be affected. By
default recovery will be allowed to continue and the transaction system will still be available to manage recovery
requests from other instances in a distributed environment. (See the Failure Recovery Guide for further
details). However, if you wish to disable recovery as well as remove any resources it maintains, then you can pass
true
to method
TxControl.disable
; the default is to use
false
.
If you wish to shut the system down completely then it may also be necessary to terminate the background
transaction
reaper (see the Programmers Guide for information about what the reaper does.) In order to do this you may want to
first prevent the creation of new transactions (if you are not creating transactions with timeouts then this step is
not necessary) using method
TxControl.disable
. Then you should call method
TransactionReaper.terminate
. This method takes a Boolean parameter: if
true
then the method will wait for the normal timeout periods associated with any transactions to
expire before terminating the transactions; if
false
then transactions will be forced to
terminate (rollback or have their outcome set such that they can only ever rollback) immediately.
If you intend to restart the recovery manager later, after having terminated it, then you MUST call the
TransactionReaper.terminate
method with the asynchronous behavior set to
false
.
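Putting the shutdown steps together, the sketch below uses local stand-ins that only record the call order. The real methods are TxControl.disable and TransactionReaper.terminate in com.arjuna.ats.arjuna.coordinator; the stand-in names and the log are illustrative assumptions:

```java
import java.util.ArrayList;
import java.util.List;

// Stand-ins that record the shutdown call order; the real API is
// com.arjuna.ats.arjuna.coordinator.TxControl.disable(boolean) and
// com.arjuna.ats.arjuna.coordinator.TransactionReaper.terminate(boolean).
class ShutdownOrderSketch {
    static final List<String> calls = new ArrayList<>();

    static void disable(boolean disableRecoveryToo) {
        calls.add("TxControl.disable(" + disableRecoveryToo + ")");
    }

    static void terminate(boolean waitForTransactionTimeouts) {
        // Pass false here if the recovery manager may be restarted later,
        // as the text above requires.
        calls.add("TransactionReaper.terminate(" + waitForTransactionTimeouts + ")");
    }

    public static void main(String[] args) {
        disable(true);    // 1. stop new transactions, and recovery as well
        terminate(false); // 2. then force the background reaper to terminate
        calls.forEach(System.out::println);
    }
}
```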
Within the transaction service installation, the object store is updated regularly whenever transactions are created, or when Transactional Objects for Java is used. In a failure-free environment, the only object states which should reside within the object store are those representing objects created with the Transactional Objects for Java API.
However, if failures occur, transaction logs may remain in the object store until crash recovery facilities have resolved the transactions they represent. As such it is very important that the contents of the object store are not deleted without due care and attention, as this will make it impossible to resolve in doubt transactions. In addition, if multiple users share the same object store it is important that they realize this and do not simply delete the contents of the object store assuming it is an exclusive resource.
Compile-time configuration information is available via the class
com.arjuna.common.util.ConfigurationInfo
. Runtime configuration is embodied in the various nameEnvironmentBean classes, where name refers to the
particular configuration category (see the configuration section of the user guide). These beans have
corresponding MBean interfaces and may be linked to JMX for remote
inspection of the configuration if desired.
The failure recovery subsystem of Narayana
will ensure that results of a transaction are applied consistently to
all resources affected by the transaction, even if any of the application processes or the machine hosting them
crash or lose network connectivity. In the case of machine (system) crash or network failure, the recovery will not
take place until the system or network are restored, but the original application does not need to be
restarted. Recovery responsibility is delegated to
Section 2.1.5.1, “The Recovery Manager”
. Recovery after failure
requires that information about the transaction and the resources involved survives the failure and is accessible
afterward: this information is held in the
ActionStore
, which is part of the
ObjectStore
.
If the
ObjectStore
is destroyed or modified, recovery may not be possible.
Until the recovery procedures are complete, resources affected by a transaction that was in progress at the time
of
the failure may be inaccessible. For database resources, this may be reported as tables or rows held by “in-doubt
transactions”. For
Transactional Objects for Java
resources, an attempt to activate the
Transactional Object
(as when trying to get a lock) will fail.
The failure recovery subsystem of Narayana
requires that the stand-alone Recovery Manager process be running for
each
ObjectStore
(typically one for each node on the network that is running Narayana
applications). The
RecoveryManager
class is located in the arjunacore JAR file, within the
package
com.arjuna.ats.arjuna.recovery.RecoveryManager
. To start the Recovery Manager issue the
following command:
java com.arjuna.ats.arjuna.recovery.RecoveryManager
If the
-test
flag is used with the Recovery Manager then it will display a
Ready
message when initialized, i.e.,
java com.arjuna.ats.arjuna.recovery.RecoveryManager -test
The RecoveryManager reads the properties defined in the
jbossts-properties.xml
file.
A default version of
jbossts-properties.xml
is supplied with the distribution. This can
be used without modification, except possibly the debug tracing fields, as shown in
Section 2.1.5.3, “Output”
.
It is likely that installations will want some form of output from the RecoveryManager, to provide a record of what recovery activity has taken place. The RecoveryManager uses the logging mechanism provided by JBoss Logging, which provides a high-level interface that hides differences between existing logging APIs such as Jakarta Log4j or the JDK logging API.
The configuration of JBoss Logging depends on the underlying logging framework that is used, which is determined by the availability and ordering of alternatives on the classpath. Please consult the JBoss Logging documentation for details. Each log message has an associated log Level that gives the importance and urgency of the message. The set of possible Log Levels, in order of increasing severity (and decreasing verbosity), is:
TRACE
DEBUG
INFO
WARN
ERROR
FATAL
Messages describing the startup and periodic behavior of the RecoveryManager are output using the
INFO
level. If other debug tracing is wanted, the finer debug or trace levels should be set
appropriately.
Setting the normal recovery messages to the
INFO
level allows the RecoveryManager to produce a
moderate level of reporting. If nothing is going on, it just reports the entry into each module for each periodic
pass. To disable
INFO
messages produced by the Recovery Manager, the logging level could be set
to the higher level of
ERROR
, which means that the RecoveryManager will only produce
ERROR
,
WARN
, or
FATAL
messages.
The RecoveryManager scans the ObjectStore and other locations of information, looking for transactions and
resources that require, or may require recovery. The scans and recovery processing are performed by recovery
modules. These recovery modules are instances of classes that implement the
com.arjuna.ats.arjuna.recovery.RecoveryModule interface
. Each module has
responsibility for a particular category of transaction or resource. The set of recovery modules used is
dynamically loaded, using properties found in the RecoveryManager property file.
The interface has two methods:
periodicWorkFirstPass
and
periodicWorkSecondPass
. At an interval defined by property
com.arjuna.ats.arjuna.recovery.periodicRecoveryPeriod
, the RecoveryManager calls the first
pass method on each module, then waits for a brief period, defined by property
com.arjuna.ats.arjuna.recovery.recoveryBackoffPeriod
. Next, it calls the second pass of each
module. Typically, in the first pass, the module scans the relevant part of the ObjectStore to find transactions
or resources that are in-doubt. An in-doubt transaction may be part of the way through the commitment process, for
instance. On the second pass, if any of the same items are still in-doubt, the original application process
may
have crashed, and the item is a candidate for recovery.
An attempt by the RecoveryManager to recover a transaction that is still progressing in the original process is likely to break consistency. Accordingly, the recovery modules use a mechanism, implemented in the com.arjuna.ats.arjuna.recovery.TransactionStatusManager package, to check whether the original process is still alive, and whether the transaction is still in progress. The RecoveryManager proceeds with recovery only if the original process has gone or, if it is still alive, the transaction has completed. If a server process or machine crashes but the transaction-initiating process survives, the transaction completes, usually generating a warning. Recovery of such a transaction is the responsibility of the RecoveryManager.
It is clearly important to set the interval periods appropriately. The total iteration time is the sum of the periodicRecoveryPeriod and recoveryBackoffPeriod properties, plus the length of time it takes to scan the stores and to attempt recovery of any in-doubt transactions found, for all the recovery modules. The recovery attempt time may include connection timeouts while trying to communicate with processes or machines that have crashed or are inaccessible. There are mechanisms in the recovery system to avoid trying to recover the same transaction indefinitely. The total iteration time affects how long a resource will remain inaccessible after a failure, so periodicRecoveryPeriod should be set accordingly. Its default is 120 seconds. The recoveryBackoffPeriod can be comparatively short; it defaults to 10 seconds. Its purpose is mainly to reduce the number of transactions that are candidates for recovery, and which thus require a call to the original process to see if they are still in progress.
In previous versions of Narayana, there was no contact mechanism, and the back-off period needed to be long enough to avoid catching transactions in flight at all. From version 3.0 onwards, there is no such risk.
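With the default values quoted above, the minimum time between recovery iterations works out as follows (a minimal arithmetic sketch; real iterations also include scan and recovery time):

```java
// Sum of the two default recovery periods quoted in the text above.
class RecoveryIterationSketch {
    public static void main(String[] args) {
        int periodicRecoveryPeriod = 120; // seconds (default)
        int recoveryBackoffPeriod = 10;   // seconds (default)
        System.out.println("minimum iteration time: "
                + (periodicRecoveryPeriod + recoveryBackoffPeriod) + "s");
    }
}
```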
Two recovery modules, implementations of the
com.arjuna.ats.arjuna.recovery.RecoveryModule
interface, are supplied with
Narayana
. These modules support various aspects of transaction recovery, including JDBC
recovery. It is possible for advanced users to create their own recovery modules and register them with the
Recovery Manager. The recovery modules are registered with the RecoveryManager using
RecoveryEnvironmentBean.recoveryModuleClassNames
. These will be invoked on each pass of the
periodic recovery in the sort-order of the property names – it is thus possible to predict the ordering, but a
failure in an application process might occur while a periodic recovery pass is in progress. The default Recovery
Extension settings are:
<entry key="RecoveryEnvironmentBean.recoveryModuleClassNames">
com.arjuna.ats.internal.arjuna.recovery.AtomicActionRecoveryModule
com.arjuna.ats.internal.txoj.recovery.TORecoveryModule
com.arjuna.ats.internal.jta.recovery.arjunacore.XARecoveryModule
</entry>
The operation of the recovery subsystem causes some entries to be made in the ObjectStore that are not
removed in
normal progress. The RecoveryManager has a facility for scanning for these and removing items that are very
old. Scans and removals are performed by implementations of the
com.arjuna.ats.arjuna.recovery.ExpiryScanner
interface. These implementations are
loaded by giving the class names as the value of a property
RecoveryEnvironmentBean.expiryScannerClassNames
. The RecoveryManager calls the
scan()
method on each loaded Expiry Scanner implementation at an interval determined by the property
RecoveryEnvironmentBean.expiryScanInterval
. This value is given in hours, and defaults to
12 hours. An
expiryScanInterval
value of zero suppresses any expiry scanning. If the value
supplied is positive, the first scan is performed when RecoveryManager starts. If the value is negative, the first
scan is delayed until after the first interval, using the absolute value.
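The first-scan rules above (zero suppresses scanning, a positive value scans at startup, a negative value delays the first scan by the absolute value) can be sketched as:

```java
// Illustrative decision logic for RecoveryEnvironmentBean.expiryScanInterval;
// returns the delay in hours before the first scan, or -1 when scanning is off.
class ExpiryScanIntervalSketch {
    static int firstScanDelayHours(int expiryScanInterval) {
        if (expiryScanInterval == 0) {
            return -1;                        // zero suppresses expiry scanning
        }
        if (expiryScanInterval > 0) {
            return 0;                         // positive: scan when the manager starts
        }
        return Math.abs(expiryScanInterval);  // negative: delay by the absolute value
    }

    public static void main(String[] args) {
        System.out.println(firstScanDelayHours(12));  // 0: immediate first scan
        System.out.println(firstScanDelayHours(-12)); // 12: delayed first scan
        System.out.println(firstScanDelayHours(0));   // -1: scanning suppressed
    }
}
```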
The kinds of item that are scanned for expiry are:
One TransactionStatusManager item is created by every application process that uses Narayana. It contains the information that allows the RecoveryManager to determine whether the process that initiated the transaction is still alive, and what its status is. The expiry time for these items is set by the property com.arjuna.ats.arjuna.recovery.transactionStatusManagerExpiryTime, expressed in hours. The default is 12, and 0 (zero) means never to expire. The expiry time should be greater than the lifetime of any single process using Narayana.
The Expiry Scanner properties for these are:
<entry key="RecoveryEnvironmentBean.expiryScannerClassNames">
com.arjuna.ats.internal.arjuna.recovery.ExpiredTransactionStatusManagerScanner
</entry>
This section covers the types and causes of errors and exceptions which may be thrown or reported during the execution of a transactional application.
Errors and Exceptions
NO_MEMORY
The application has run out of memory, and has thrown an
OutOfMemoryError
exception.
Narayana
has attempted to do some cleanup, by running the garbage
collector, before re-throwing the exception. This is probably a transient problem and retrying the invocation
should succeed.
com.arjuna.ats.arjuna.exceptions.FatalError
An error has occurred, and the error is of such severity that the transaction system must shut down. Prior to this error being thrown, the transaction service ensures that all running transactions have rolled back. If an application catches this error, it should tidy up and exit. If further work is attempted, application consistency may be violated.
com.arjuna.ats.arjuna.exceptions.ObjectStoreError
An error occurred while the transaction service attempted to use the object store. Further forward progress is not possible.
Object store warnings about access problems on states may occur during the normal execution of crash recovery. This is the result of multiple concurrent attempts to perform recovery on the same transaction. It can be safely ignored.
Two variants of the JTA implementation are accessible through the same interface. These are:
Purely local JTA
Only non-distributed JTA transactions are executed. This is the only version available with the Narayana distribution.
Remote, CORBA-based JTA
Executes distributed JTA transactions. This functionality is provided by the JTS distribution and requires a supported CORBA ORB. Consult the JTS Installation and Administration Guide for more information.
Both of these implementations are fully compatible with the transactional JDBC driver.
Procedure 2.1. Selecting the local JTA implementation
Set the property
JTAEnvironmentBean.jtaTMImplementation
to value
com.arjuna.ats.internal.jta.transaction.arjunacore.TransactionManagerImple
.
Set the property
JTAEnvironmentBean.jtaUTImplementation
to value
com.arjuna.ats.internal.jta.transaction.arjunacore.UserTransactionImple
.
These settings are the default values for the properties, so nothing needs to be changed to use the local implementation.
Narayana supports construction of both local and distributed transactional applications which access databases using the JDBC APIs. JDBC supports two-phase commit of transactions, and is similar to the X/Open XA standard. Narayana provides JDBC support in the package com.arjuna.ats.jdbc. A list of the tested drivers is available from the website.
Only use the transactional JDBC support provided in the package com.arjuna.ats.jdbc when you are running outside of an application server, such as WildFly Application Server, or another container. Otherwise, use the JDBC support provided by your application server or container.
Narayana needs the ability to associate work performed on a JDBC connection with a specific transaction. Therefore, applications need to use a combination of implicit transaction propagation and indirect transaction management. For each JDBC connection, Narayana must be able to determine the invoking thread's current transaction context.
Nested transactions are not supported by JDBC. If you try to use a JDBC connection within a
subtransaction,
Narayana
throws a suitable exception and no work is allowed on that connection. However, if you need nested
transactions, and are comfortable with straying from the JDBC standard, you can set the
com.arjuna.ats.jta.supportSubtransactions
property to
YES
.
The approach Narayana
takes for incorporating JDBC connections within transactions is to provide transactional
JDBC drivers as conduits for all interactions. These drivers intercept all invocations and ensure that they are
registered with, and driven by, appropriate transactions. The driver
com.arjuna.ats.jdbc.TransactionalDriver
handles all JDBC drivers, implementing the
java.sql.Driver
interface. If the database is not transactional, ACID properties
cannot be guaranteed.
Example 2.1. Instantiating and using the driver within an application
TransactionalDriver arjunaJDBC2Driver = new TransactionalDriver();
Example 2.2. Registering the drivers with the JDBC driver manager using the Java system properties
Properties p = System.getProperties();
switch (dbType)
{
case MYSQL:
p.put("jdbc.drivers", "com.mysql.jdbc.Driver");
break;
case PGSQL:
p.put("jdbc.drivers", "org.postgresql.Driver");
break;
}
System.setProperties(p);
The jdbc.drivers property contains a colon-separated list of driver class names, which the JDBC driver manager loads when it is initialized. After the driver is loaded, you can use it to make a connection with a database.
Example 2.3. Using the Class.forName method
Calling
Class.forName()
automatically registers the driver with the JDBC driver
manager. It is also possible to explicitly create an instance of the JDBC driver.
sun.jdbc.odbc.JdbcOdbcDriver drv = new sun.jdbc.odbc.JdbcOdbcDriver();
DriverManager.registerDriver(drv);
Because Narayana provides JDBC connectivity via its own JDBC driver, application code can support transactions with relatively small code changes. Typically, the application programmer only needs to start and terminate transactions.
The Narayana
driver accepts the following properties, all located in class
com.arjuna.ats.jdbc.TransactionalDriver
.
username
the database username
password
the database password
createDb
creates the database automatically if enabled
dynamicClass
specifies a class to instantiate to connect to the database, instead of using JNDI.
JDBC connections are created from appropriate DataSources. Connections which participate in distributed transactions are obtained from XADataSources. When using a JDBC driver, Narayana uses the appropriate DataSource whenever a connection to the database is made. It then obtains XAResources and registers them with the transaction via the JTA interfaces. The transaction service uses these XAResources when the transaction terminates in order to drive the database to either commit or roll back the changes made via the JDBC connection.
Narayana JDBC support can obtain XADataSources through the Java Naming and Directory Interface (JNDI) or dynamic class instantiation.
A JDBC driver can use arbitrary DataSources without having to know specific details about their implementations, by using JNDI. A specific DataSource or XADataSource can be created and registered with an appropriate JNDI implementation, and the application, or JDBC driver, can later bind to and use it. Since JNDI only allows the application to see the DataSource or XADataSource as an instance of the interface (e.g., javax.sql.XADataSource) rather than as an instance of the implementation class (e.g., com.mydb.myXADataSource), the application is not tied at build-time to only use a specific implementation.
For the TransactionalDriver class to use a JNDI-registered XADataSource, you need to create the XADataSource instance and store it in an appropriate JNDI implementation. Details of how to do this can be found in the JDBC tutorial available at the Java web site.
Example 2.4. Storing a datasource in a JNDI implementation
XADataSource ds = new MyXADataSource();
Hashtable env = new Hashtable();
String initialCtx = PropertyManager.getProperty("Context.INITIAL_CONTEXT_FACTORY");
env.put(Context.INITIAL_CONTEXT_FACTORY, initialCtx);
InitialContext ctx = new InitialContext(env);
ctx.bind("jdbc/foo", ds);
The Context.INITIAL_CONTEXT_FACTORY property is the JNDI way of specifying the type of JNDI implementation to use.
The application must pass an appropriate connection URL to the JDBC driver:
Properties dbProps = new Properties();
dbProps.setProperty(TransactionalDriver.userName, "user");
dbProps.setProperty(TransactionalDriver.password, "password");
// the driver uses its own JNDI context info, remember to set it up:
jdbcPropertyManager.propertyManager.setProperty(
"Context.INITIAL_CONTEXT_FACTORY", initialCtx);
jdbcPropertyManager.propertyManager.setProperty(
"Context.PROVIDER_URL", myUrl);
TransactionalDriver arjunaJDBCDriver = new TransactionalDriver();
Connection connection = arjunaJDBCDriver.connect("jdbc:arjuna:jdbc/foo", dbProps);
The JNDI URL must be prepended with
jdbc:arjuna:
in order for the TransactionalDriver to
recognize that the DataSource must participate within transactions and be driven accordingly.
If a JNDI implementation is not available, you can specify an implementation of the
DynamicClass
interface, which is used to get the XADataSource object. This is
not recommended, but provides a fallback for environments where use of JNDI is not feasible.
Use the property
TransactionalDriver.dynamicClass
to specify the implementation to use. An
example is
PropertyFileDynamicClass
, a DynamicClass implementation that reads the
XADataSource implementation class name and configuration properties from a file, then instantiates and
configures it.
The oracle_8_1_6 dynamic class is deprecated and should not be used.
Example 2.5. Instantiating a dynamic class
The application code must specify which dynamic class the TransactionalDriver should instantiate when setting up the connection:
Properties dbProps = new Properties();
dbProps.setProperty(TransactionalDriver.userName, "user");
dbProps.setProperty(TransactionalDriver.password, "password");
dbProps.setProperty(TransactionalDriver.dynamicClass,
"com.arjuna.ats.internal.jdbc.drivers.PropertyFileDynamicClass");
TransactionalDriver arjunaJDBC2Driver = new TransactionalDriver();
Connection connection = arjunaJDBC2Driver.connect("jdbc:arjuna:/path/to/property/file", dbProperties);
Once the connection is established, all operations on the connection are monitored by Narayana. You do not need to use the transactional connection within transactions. If a transaction is not present when the connection is used, operations are performed directly on the database.
JDBC does not support subtransactions.
You can use transaction timeouts to automatically terminate transactions if a connection is not terminated within an appropriate period.
You can use Narayana
connections within multiple transactions simultaneously. An example would be different
threads, with different notions of the current transaction. Narayana
does connection pooling for each
transaction within the JDBC connection. Although multiple threads may use the same instance of the JDBC
connection, internally there may be a separate connection for each transaction. With the exception of method
close
, all operations performed on the connection at the application level are only
performed on this transaction-specific connection.
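The per-transaction pooling described above can be pictured as a map from the current transaction to a physical connection. The transaction ids and connection names below are purely illustrative; the real pooling is internal to the com.arjuna.ats.jdbc driver:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of one application-level connection multiplexing over a
// separate physical connection per transaction (names are hypothetical; the
// real pooling is internal to com.arjuna.ats.jdbc).
class ConnectionHandleSketch {
    private final Map<String, String> perTxConnection = new HashMap<>();
    private int next = 0;

    // Returns the physical connection id for the given transaction, creating
    // one on first use within that transaction.
    String physicalFor(String txId) {
        return perTxConnection.computeIfAbsent(txId, tx -> "phys-" + (next++));
    }

    public static void main(String[] args) {
        ConnectionHandleSketch handle = new ConnectionHandleSketch();
        System.out.println(handle.physicalFor("txA")); // phys-0
        System.out.println(handle.physicalFor("txB")); // phys-1, a different tx
        System.out.println(handle.physicalFor("txA")); // phys-0 again, reused
    }
}
```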
Narayana automatically registers the JDBC driver connection with the transaction via an appropriate resource. When the transaction terminates, this resource either commits or rolls back any changes made to the underlying database via appropriate calls on the JDBC driver.
Once created, the driver and any connection can be used in the same way as any other JDBC driver or connection.
Example 2.6. Creating and using a connection
Statement stmt = conn.createStatement();
try
{
stmt.executeUpdate("CREATE TABLE test_table (a INTEGER,b INTEGER)");
}
catch (SQLException e)
{
// table already exists
}
stmt.executeUpdate("INSERT INTO test_table (a, b) VALUES (1,2)");
ResultSet res1 = stmt.executeQuery("SELECT * FROM test_table");
For each user name and password, Narayana
maintains a single instance of each connection for as long as that
connection is in use. Subsequent requests for the same connection get a reference to the original connection,
rather than a new instance. You can try to close the connection, but the connection will only actually be closed
when all users (including transactions) have either finished with the connection, or issued
close
calls.
Some JDBC drivers allow the reuse of a connection for multiple different transactions once a given
transaction
completes. Unfortunately this is not a common feature, and other drivers require a new connection to be
obtained for each new transaction. By default, the Narayana
transactional driver always obtains a new
connection for each new transaction. However, if an existing connection is available and is currently unused,
Narayana
can reuse this connection. To turn on this feature, add option
reuseconnection=true
to the JDBC URL. For instance,
jdbc:arjuna:sequelink://host:port;databaseName=foo;reuseconnection=true
When a transaction with an associated JDBC connection terminates, because of the application or because a transaction timeout expires, Narayana uses the JDBC driver to drive the database to either commit or roll back any changes made to it. This happens transparently to the application.
If the AutoCommit property of the java.sql.Connection interface is set to true for JDBC, the execution of every SQL statement is a separate top-level transaction, and it is not possible to group multiple statements to be managed within a single OTS transaction. Therefore, Narayana disables AutoCommit on JDBC connections before they can be used. If the application later sets AutoCommit to true, Narayana throws a java.sql.SQLException.
When you use the Narayana JDBC driver, you may need to set the underlying transaction isolation level on the XA connection. By default, this is set to TRANSACTION_SERIALIZABLE, but another value may be more appropriate for your application. To change it, set the property com.arjuna.ats.jdbc.isolationLevel to the appropriate isolation level in string form, for example TRANSACTION_READ_COMMITTED or TRANSACTION_REPEATABLE_READ.
Currently, this property applies to all XA connections created in the JVM.
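As a sketch, the property can be set programmatically before the first transactional connection is created; only the property name and level names come from the text above, while the helper class and method are illustrative, not part of the Narayana API:

```java
public class IsolationLevelConfig {
    // Illustrative helper: validates the level name, then sets the Narayana
    // property. Must run before the first XA connection is obtained, since
    // the property applies to all XA connections created in the JVM.
    public static String setIsolationLevel(String level) {
        if (!level.startsWith("TRANSACTION_")) {
            throw new IllegalArgumentException("not a JDBC isolation level name: " + level);
        }
        System.setProperty("com.arjuna.ats.jdbc.isolationLevel", level);
        return System.getProperty("com.arjuna.ats.jdbc.isolationLevel");
    }

    public static void main(String[] args) {
        System.out.println(setIsolationLevel("TRANSACTION_READ_COMMITTED"));
    }
}
```

Setting the same property with the -D switch on the command line has the same effect.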
Example 2.7. JDBC example
This simplified example assumes that you are using the transactional JDBC driver provided with Narayana. For details about how to configure and use this driver, see the previous chapter.
public class JDBCTest
{
public static void main (String[] args)
{
Connection conn = null;
Connection conn2 = null;
Statement stmt = null; // non-tx statement
Statement stmtx = null; // will be a tx-statement
Properties dbProperties = new Properties();
try
{
System.out.println("\nCreating connection to database: "+url);
/*
* Create conn and conn2 so that they are bound to the JBossTS
* transactional JDBC driver. The details of how to do this will
* depend on your environment, the database you wish to use and
* whether or not you want to use the Direct or JNDI approach. See
* the appropriate chapter in the JTA Programmers Guide.
*/
stmt = conn.createStatement(); // non-tx statement
try
{
stmt.executeUpdate("DROP TABLE test_table");
stmt.executeUpdate("DROP TABLE test_table2");
}
catch (Exception e)
{
// assume not in database.
}
try
{
stmt.executeUpdate("CREATE TABLE test_table (a INTEGER,b INTEGER)");
stmt.executeUpdate("CREATE TABLE test_table2 (a INTEGER,b INTEGER)");
}
catch (Exception e)
{
}
try
{
System.out.println("Starting top-level transaction.");
com.arjuna.ats.jta.UserTransaction.userTransaction().begin();
stmtx = conn.createStatement(); // will be a tx-statement
System.out.println("\nAdding entries to table 1.");
stmtx.executeUpdate("INSERT INTO test_table (a, b) VALUES (1,2)");
ResultSet res1 = null;
System.out.println("\nInspecting table 1.");
res1 = stmtx.executeQuery("SELECT * FROM test_table");
while (res1.next())
{
System.out.println("Column 1: "+res1.getInt(1));
System.out.println("Column 2: "+res1.getInt(2));
}
System.out.println("\nAdding entries to table 2.");
stmtx.executeUpdate("INSERT INTO test_table2 (a, b) VALUES (3,4)");
res1 = stmtx.executeQuery("SELECT * FROM test_table2");
System.out.println("\nInspecting table 2.");
while (res1.next())
{
System.out.println("Column 1: "+res1.getInt(1));
System.out.println("Column 2: "+res1.getInt(2));
}
System.out.print("\nNow attempting to rollback changes.");
com.arjuna.ats.jta.UserTransaction.userTransaction().rollback();
com.arjuna.ats.jta.UserTransaction.userTransaction().begin();
stmtx = conn.createStatement();
ResultSet res2 = null;
System.out.println("\nNow checking state of table 1.");
res2 = stmtx.executeQuery("SELECT * FROM test_table");
while (res2.next())
{
System.out.println("Column 1: "+res2.getInt(1));
System.out.println("Column 2: "+res2.getInt(2));
}
System.out.println("\nNow checking state of table 2.");
stmtx = conn.createStatement();
res2 = stmtx.executeQuery("SELECT * FROM test_table2");
while (res2.next())
{
System.out.println("Column 1: "+res2.getInt(1));
System.out.println("Column 2: "+res2.getInt(2));
}
com.arjuna.ats.jta.UserTransaction.userTransaction().commit();
}
catch (Exception ex)
{
ex.printStackTrace();
System.exit(0);
}
}
catch (Exception sysEx)
{
sysEx.printStackTrace();
System.exit(0);
}
}
}
This class implements the XAResourceRecovery interface for XAResources. The parameter supplied in setParameters can contain arbitrary information necessary to initialize the class once created. In this example, it contains the name of the property file in which the database connection information is specified, as well as the number of connections that this file contains information on. Each item is separated by a semicolon.
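A minimal sketch of parsing that semicolon-separated parameter format; the class and field names here are illustrative, not the Narayana API:

```java
public class RecoveryParameter {
    public final String fileName;
    public final int numberOfConnections;

    public RecoveryParameter(String fileName, int numberOfConnections) {
        this.fileName = fileName;
        this.numberOfConnections = numberOfConnections;
    }

    // Splits "fileName;numberOfConnections"; a parameter with no delimiter
    // is treated as a file name describing a single connection.
    public static RecoveryParameter parse(String parameter) {
        int breakPosition = parameter.indexOf(';');
        if (breakPosition == -1) {
            return new RecoveryParameter(parameter, 1);
        }
        String fileName = parameter.substring(0, breakPosition);
        int count = Integer.parseInt(parameter.substring(breakPosition + 1));
        return new RecoveryParameter(fileName, count);
    }

    public static void main(String[] args) {
        RecoveryParameter p = parse("jdbc-recovery.properties;2");
        System.out.println(p.fileName + " / " + p.numberOfConnections);
    }
}
```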
This is only a small example of the sorts of things an XAResourceRecovery implementer could do. This implementation uses a property file that is assumed to contain sufficient information to recreate connections used during the normal run of an application so that recovery can be performed on them. Typically, user-names and passwords should never be presented in raw text on a production system.
Example 2.8. Database parameter format for the properties file
DB_x_DatabaseURL=
DB_x_DatabaseUser=
DB_x_DatabasePassword=
DB_x_DatabaseDynamicClass=
x is the number of the connection the information refers to.
Some error-handling code is missing from this example, to make it more readable.
Example 2.9. Failure recovery example with BasicXARecovery
/*
* Some XAResourceRecovery implementations will do their startup work here,
* and then do little or nothing in setDetails. Since this one needs to know the
* dynamic class name, the constructor does nothing.
*/
public BasicXARecovery () throws SQLException
{
numberOfConnections = 1;
connectionIndex = 0;
props = null;
}
/**
* The recovery module will have chopped off this class name already. The
* parameter should specify a property file from which the url, user name,
* password, etc. can be read.
*
* @message com.arjuna.ats.internal.jdbc.recovery.initexp An exception
* occurred during initialisation.
*/
public boolean initialise (String parameter) throws SQLException
{
if (parameter == null)
return true;
int breakPosition = parameter.indexOf(BREAKCHARACTER);
String fileName = parameter;
if (breakPosition != -1)
{
fileName = parameter.substring(0, breakPosition); // exclude the delimiter itself
try
{
numberOfConnections = Integer.parseInt(parameter
.substring(breakPosition + 1));
}
catch (NumberFormatException e)
{
return false;
}
}
try
{
String uri = com.arjuna.common.util.FileLocator
.locateFile(fileName);
jdbcPropertyManager.propertyManager.load(XMLFilePlugin.class
.getName(), uri);
props = jdbcPropertyManager.propertyManager.getProperties();
}
catch (Exception e)
{
return false;
}
return true;
}
/**
* @message com.arjuna.ats.internal.jdbc.recovery.xarec {0} could not find
* information for connection!
*/
public synchronized XAResource getXAResource () throws SQLException
{
JDBC2RecoveryConnection conn = null;
if (hasMoreResources())
{
connectionIndex++;
conn = getStandardConnection();
if (conn == null) conn = getJNDIConnection();
}
if (conn == null) // no connection information could be found
throw new SQLException("Could not find information for connection!");
return conn.recoveryConnection().getConnection().getXAResource();
}
public synchronized boolean hasMoreResources ()
{
return connectionIndex != numberOfConnections;
}
private final JDBC2RecoveryConnection getStandardConnection ()
throws SQLException
{
String number = Integer.toString(connectionIndex);
String url = dbTag + number + urlTag;
String password = dbTag + number + passwordTag;
String user = dbTag + number + userTag;
String dynamicClass = dbTag + number + dynamicClassTag;
Properties dbProperties = new Properties();
String theUser = props.getProperty(user);
String thePassword = props.getProperty(password);
if (theUser != null)
{
dbProperties.put(TransactionalDriver.userName, theUser);
dbProperties.put(TransactionalDriver.password, thePassword);
String dc = props.getProperty(dynamicClass);
if (dc != null)
dbProperties.put(TransactionalDriver.dynamicClass, dc);
return new JDBC2RecoveryConnection(url, dbProperties);
}
else
return null;
}
private final JDBC2RecoveryConnection getJNDIConnection ()
throws SQLException
{
String number = Integer.toString(connectionIndex);
String url = dbTag + jndiTag + number + urlTag;
String password = dbTag + jndiTag + number + passwordTag;
String user = dbTag + jndiTag + number + userTag;
Properties dbProperties = new Properties();
String theUser = props.getProperty(user);
String thePassword = props.getProperty(password);
if (theUser != null)
{
dbProperties.put(TransactionalDriver.userName, theUser);
dbProperties.put(TransactionalDriver.password, thePassword);
return new JDBC2RecoveryConnection(url, dbProperties);
}
else
return null;
}
private int numberOfConnections;
private int connectionIndex;
private Properties props;
private static final String dbTag = "DB_";
private static final String urlTag = "_DatabaseURL";
private static final String passwordTag = "_DatabasePassword";
private static final String userTag = "_DatabaseUser";
private static final String dynamicClassTag = "_DatabaseDynamicClass";
private static final String jndiTag = "JNDI_";
/*
* Example:
*
* DB2_DatabaseURL=jdbc\:arjuna\:sequelink\://qa02\:20001
* DB2_DatabaseUser=tester2
* DB2_DatabasePassword=tester
* DB2_DatabaseDynamicClass=com.arjuna.ats.internal.jdbc.drivers.sequelink_5_1
*
* DB_JNDI_DatabaseURL=jdbc\:arjuna\:jndi
* DB_JNDI_DatabaseUser=tester1
* DB_JNDI_DatabasePassword=tester
* DB_JNDI_DatabaseName=empay
* DB_JNDI_Host=qa02
* DB_JNDI_Port=20000
*/
private static final char BREAKCHARACTER = ';'; // delimiter for parameters
You can use the com.arjuna.ats.internal.jdbc.recovery.JDBC2RecoveryConnection class to create a new connection to the database using the same parameters used to create the initial connection.
WildFly Application Server is discussed here. Refer to the documentation for your application server for differences.
When Narayana runs embedded in WildFly Application Server, the transaction subsystem is configured primarily through the jboss-cli configuration tool, which overrides properties read from the default properties file embedded in the .jar file.
Table 2.1. Common configuration attributes
default-timeout | The default transaction timeout to be used for new transactions. Specified as an integer in seconds. |
enable-statistics | Determines whether or not the transaction service should gather statistical information, which can then be viewed using the TransactionStatistics MBean. Specified as a Boolean. The default is to not gather this information. |
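With the jboss-cli tool, these attributes might be set as follows. This is a sketch; the exact attribute names depend on the WildFly version (newer releases name the statistics attribute statistics-enabled):

```
/subsystem=transactions:write-attribute(name=default-timeout,value=300)
/subsystem=transactions:write-attribute(name=enable-statistics,value=true)
```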
See the jboss-cli tool and the WildFly Application Server administration and configuration guide for further information.
To make logging semantically consistent with WildFly Application Server, the TransactionManagerService modifies the level of some log messages by overriding the value of the LoggingEnvironmentBean.loggingFactory property in the jbossts-properties.xml file. Therefore, the value of this property has no effect on logging behavior when running embedded in WildFly Application Server.
By forcing use of the log4j_releveler logger, the TransactionManagerService changes the level of all INFO level messages in the transaction code to DEBUG. Therefore, these messages do not appear in log files if the filter level is INFO. All other log messages behave as normal.
The TransactionManager bean provides transaction management services to other components in WildFly Application Server. There are two different versions of this bean, and they require different configuration. Use jboss-cli to select JTA or JTS mode.
You can coordinate transactions from a coordinator which is not located within the WildFly Application Server, such as when using transactions created by an external OTS server. To ensure the transaction context is propagated via JRMP invocations to the server, the transaction propagation context factory needs to be explicitly set for the JRMP invoker proxy. This is done as follows:
JRMPInvokerProxy.setTPCFactory( new com.arjuna.ats.internal.jbossatx.jts.PropagationContextManager() );
Procedure 2.2. Pre-Installation Steps
Before installing the Narayana software, we recommend the following administrative steps be taken, assuming a default configuration for Narayana.
Install the distribution into the required location.
Typically, the distribution is extracted from a
.ZIP
file.
Specify the Location for the Object Store
Narayana requires a minimum object store for storing the outcome of transactions in the event of system crashes. The location of this store should be specified in the properties file using the ObjectStoreEnvironmentBean.objectStoreDir key, or on the command line; for example:
java -DObjectStoreEnvironmentBean.objectStoreDir=C:\temp foo
Optional: Specify the sub-directory within the Object Store root.
By default, all object states are stored within the defaultStore/ sub-directory of the object store root. For instance, if the object store root is /usr/local/Arjuna/TransactionService/ObjectStore, the subdirectory /usr/local/Arjuna/TransactionService/ObjectStore/defaultStore/ is used.
To change this subdirectory, set the ObjectStoreEnvironmentBean.localOSRoot or com.arjuna.ats.arjuna.objectstore.localOSRoot property variable accordingly.
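In jbossts-properties.xml form, the two store settings above might look like this (the path and sub-directory name are illustrative):

```xml
<entry key="ObjectStoreEnvironmentBean.objectStoreDir">/usr/local/Arjuna/TransactionService/ObjectStore</entry>
<entry key="ObjectStoreEnvironmentBean.localOSRoot">myStore</entry>
```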
Four scripts, located in the
Services\bin\windows
folder, install and uninstall the recovery manager and transaction server services.
Installation Scripts for Microsoft Windows
InstallRecoveryManagerService-NT.bat
InstallTransactionServiceService-NT.bat
Uninstallation Scripts for Microsoft Windows
UninstallRecoveryManagerService-NT.bat
UninstallTransactionServerService-NT.bat
Each of the scripts requires administrative privileges.
After running any of the scripts, a status message indicates success or failure.
Procedure 2.3. Installing Services in Linux / UNIX
Log into the system with root privileges.
The installer needs these privileges to create files in /etc.
Change to the JBOSS_HOME/services/installer directory.
JBOSS_HOME refers to the directory where you extracted Narayana.
Set the JAVA_HOME variable, if necessary.
Set JAVA_HOME to the base directory of the JVM the service will use. The base directory is the directory above bin/java.
Bash: export JAVA_HOME="/opt/java"
CSH: setenv JAVA_HOME="/opt/java"
Run the installer script.
./install_service.sh
The start-up and shut-down scripts are installed.
Information similar to the output below is displayed.
Adding $JAVA_HOME (/opt/java) to $PATH in /opt/arjuna/ats-3.2/services/bin/solaris/recoverymanagerservice.sh
Adding $JAVA_HOME (/opt/java) to $PATH in /opt/arjuna/ats-3.2/services/bin/solaris/transactionserverservice.sh
Installing shutdown scripts into /etc/rcS.d: K01recoverymanagerservice K00transactionserverservice
Installing shutdown scripts into /etc/rc0.d: K01recoverymanagerservice K00transactionserverservice
Installing shutdown scripts into /etc/rc1.d: K01recoverymanagerservice K00transactionserverservice
Installing shutdown scripts into /etc/rc2.d: K01recoverymanagerservice K00transactionserverservice
Installing startup scripts into /etc/rc3.d: S98recoverymanagerservice S99transactionserverservice
The start-up and shut-down scripts are installed for each run-level. Depending on your specific operating system, you may need to explicitly enable the services for automatic start-up.
Procedure 2.4. Uninstalling Services in Linux / UNIX
Log into the system with root privileges.
The installer needs these privileges to delete files in /etc.
Change to the JBOSS_HOME/services/installer directory.
JBOSS_HOME refers to the directory where you extracted Narayana.
Run the installation script with the -u option.
./install_services.sh -u
The start-up and shut-down scripts are removed.
Messages like the ones below indicate that the start-up and shut-down scripts have been removed successfully.
Removing startup scripts from /etc/rc3.d: S98recoverymanagerservice S99transactionserverservice
Removing shutdown scripts from /etc/rcS.d: K01recoverymanagerservice K00transactionserverservice
Removing shutdown scripts from /etc/rc0.d: K01recoverymanagerservice K00transactionserverservice
Removing shutdown scripts from /etc/rc1.d: K01recoverymanagerservice K00transactionserverservice
Removing shutdown scripts from /etc/rc2.d: K01recoverymanagerservice K00transactionserverservice
The recovery manager and the transaction server services produce log files, which are located in the services/logs/ directory. Two log files are created per service.
service-name-service.log
Contains information regarding whether the service is stopped, started, restarted, or in another state.
service-name.log
Contains information logged from the actual service.
To configure what information is logged in these files, edit the appropriate LOG4J configuration files located in services/config/.
To use all of the facilities available within Narayana, you need to add all of the JAR files contained in the lib/ directory of the distribution to the CLASSPATH.
Narayana has been designed to be highly configurable at runtime through the use of various property attributes. Although these attributes can be provided at runtime on the command line, it may be more convenient to specify them through a single properties file or via setter methods on the beans. At runtime, Narayana looks for the file jbossts-properties.xml, in a specific search order:
A location specified by a system property, allowing the normal search path to be overridden.
The directory from which the application was executed.
The home directory of the user that launched Narayana.
The java.home directory.
The CLASSPATH, which normally includes the installation's etc/ directory.
A default set of properties embedded in the JAR file.
Where properties are defined both in the system properties (using the -D switch) and in the properties file, the value from the system property takes precedence. This facilitates overriding individual properties easily on the command line.
The properties file uses the java.util.Properties XML format, for example:
<entry key="CoordinatorEnvironmentBean.asyncCommit">NO</entry>
<entry key="ObjectStoreEnvironmentBean.objectStoreDir">/var/ObjectStore</entry>
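Because this is the standard java.util.Properties XML format, a complete document of this shape can be read with the JDK alone. A self-contained sketch (the class name is ours, not part of Narayana):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;
import java.util.Properties;

public class PropertiesXmlDemo {
    // Loads a java.util.Properties XML document held in a string.
    public static Properties load(String xml) {
        Properties props = new Properties();
        try {
            props.loadFromXML(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return props;
    }

    public static void main(String[] args) {
        String xml =
              "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n"
            + "<!DOCTYPE properties SYSTEM \"http://java.sun.com/dtd/properties.dtd\">\n"
            + "<properties>\n"
            + "  <entry key=\"CoordinatorEnvironmentBean.asyncCommit\">NO</entry>\n"
            + "  <entry key=\"ObjectStoreEnvironmentBean.objectStoreDir\">/var/ObjectStore</entry>\n"
            + "</properties>\n";
        Properties p = load(xml);
        System.out.println(p.getProperty("CoordinatorEnvironmentBean.asyncCommit"));
    }
}
```

The JDK resolves the properties DTD internally, so no network access is needed to parse the document.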
You can override the name of the properties file at runtime by specifying a new file with the com.arjuna.ats.arjuna.common.propertiesFile attribute variable.
Unlike earlier releases, there is no longer one properties file name per module. This properties file name key is now global for all components in the JVM.
This chapter will briefly cover the key features required to construct a JTA application. It is assumed that the reader is familiar with the concepts of the JTA.
The key Java packages (and corresponding jar files) for writing basic JTA applications are:
com.arjuna.ats.jts: this package contains the implementations of the JTS and JTA.
com.arjuna.ats.jta: this package contains local and remote JTA implementation support.
com.arjuna.ats.jdbc: this package contains transactional JDBC support.
All of these packages appear in the lib directory of the installation, and should be added to the programmer’s CLASSPATH.
In order to fully utilize all of the facilities available within Narayana, it will be necessary to add some additional jar files to your classpath. See bin/setup-env.sh or bin\setup-env.bat for details.
Narayana has also been designed to be configurable at runtime through the use of various property attributes. These attributes can be provided at runtime on command line or specified through a properties file.
Narayana requires an object store in order to persistently record the outcomes of transactions in the event of failures. In order to specify the location of the object store it is necessary to specify the location when the application is executed; for example:
java -DObjectStoreEnvironmentBean.objectStoreDir=/var/tmp/ObjectStore myprogram
The default location is a directory under the current execution directory.
By default, all object states will be stored within the defaultStore subdirectory of the object store root, e.g., /usr/local/Arjuna/TransactionService/ObjectStore/defaultStore. However, this subdirectory can be changed by setting the ObjectStoreEnvironmentBean.localOSRoot property variable accordingly.
The Java Transaction API consists of three elements: a high-level application transaction demarcation interface, a high-level transaction manager interface intended for application servers, and a standard Java mapping of the X/Open XA protocol intended for transactional resource managers. All of the JTA classes and interfaces occur within the jakarta.transaction package, and the corresponding Narayana implementations within the com.arjuna.ats.jta package.
The UserTransaction interface provides applications with the ability to control transaction boundaries.
In Narayana, a UserTransaction can be obtained from the static com.arjuna.ats.jta.UserTransaction.userTransaction() method. Once obtained, the UserTransaction object can be used to control transactions.
Example 2.10. User Transaction Example
//get UserTransaction
UserTransaction utx = com.arjuna.ats.jta.UserTransaction.userTransaction();
// start transaction work..
utx.begin();
// perform transactional work
utx.commit();
The TransactionManager interface allows the application server to control transaction boundaries on behalf of the application being managed.
In Narayana, transaction manager implementations can be obtained from the static com.arjuna.ats.jta.TransactionManager.transactionManager() method.
The Transaction interface allows operations to be performed on the transaction associated with the target object. Every top-level transaction is associated with one Transaction object when the transaction is created. The Transaction object can be used to:
enlist the transactional resources in use by the application.
register for transaction synchronization call backs.
commit or rollback the transaction.
obtain the status of the transaction.
A Transaction object can be obtained from the TransactionManager by invoking the getTransaction() method.
Transaction txObj = com.arjuna.ats.jta.TransactionManager.transactionManager().getTransaction();
In order to ensure interoperability between JTA applications, it is recommended to rely on the JTS/OTS specification to ensure transaction propagation among transaction managers.
In order to select the local JTA implementation it is necessary to perform the following steps:
make sure the property JTAEnvironmentBean.jtaTMImplementation is set to com.arjuna.ats.internal.jta.transaction.arjunacore.TransactionManagerImple.
make sure the property JTAEnvironmentBean.jtaUTImplementation is set to com.arjuna.ats.internal.jta.transaction.arjunacore.UserTransactionImple.
In order to select the distributed JTA implementation it is necessary to perform the following steps:
make sure the property JTAEnvironmentBean.jtaTMImplementation is set to com.arjuna.ats.internal.jta.transaction.jts.TransactionManagerImple.
make sure the property JTAEnvironmentBean.jtaUTImplementation is set to com.arjuna.ats.internal.jta.transaction.jts.UserTransactionImple.
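The two selections above can be sketched as system-property settings applied before the transaction manager is first used. The helper class is illustrative; the property names and implementation class names come from the steps above:

```java
public class JtaImplementationSelector {
    // Selects the purely local (non-distributed) JTA implementation.
    public static void selectLocal() {
        System.setProperty("JTAEnvironmentBean.jtaTMImplementation",
                "com.arjuna.ats.internal.jta.transaction.arjunacore.TransactionManagerImple");
        System.setProperty("JTAEnvironmentBean.jtaUTImplementation",
                "com.arjuna.ats.internal.jta.transaction.arjunacore.UserTransactionImple");
    }

    // Selects the distributed (JTS-backed) JTA implementation.
    public static void selectDistributed() {
        System.setProperty("JTAEnvironmentBean.jtaTMImplementation",
                "com.arjuna.ats.internal.jta.transaction.jts.TransactionManagerImple");
        System.setProperty("JTAEnvironmentBean.jtaUTImplementation",
                "com.arjuna.ats.internal.jta.transaction.jts.UserTransactionImple");
    }

    public static void main(String[] args) {
        selectLocal();
        System.out.println(System.getProperty("JTAEnvironmentBean.jtaTMImplementation"));
    }
}
```

The same values can equally be placed in the properties file described earlier.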
JTS supports the construction of both local and distributed transactional applications which access databases using the JDBC APIs. JDBC supports two-phase commit of transactions, and is similar to the XA X/Open standard. The JDBC support is found in the com.arjuna.ats.jdbc package.
The ArjunaJTS approach to incorporating JDBC connections within transactions is to provide transactional JDBC drivers through which all interactions occur. These drivers intercept all invocations and ensure that they are registered with, and driven by, appropriate transactions. (There is a single type of transactional driver through which any JDBC driver can be driven. This driver is com.arjuna.ats.jdbc.TransactionalDriver, which implements the java.sql.Driver interface.)
Once the connection has been established (for example, using the java.sql.DriverManager.getConnection method), all operations on the connection will be monitored by Narayana. Once created, the driver and any connection can be used in the same way as any other JDBC driver or connection.
Narayana connections can be used within multiple different transactions simultaneously, i.e., different threads, with different notions of the current transaction, may use the same JDBC connection. Narayana does connection pooling for each transaction within the JDBC connection. So, although multiple threads may use the same instance of the JDBC connection, internally this may be using a different connection instance per transaction. With the exception of close, all operations performed on the connection at the application level will only be performed on this transaction-specific connection.
Narayana will automatically register the JDBC driver connection with the transaction via an appropriate resource. When the transaction terminates, this resource will be responsible for either committing or rolling back any changes made to the underlying database via appropriate calls on the JDBC driver.
The following table shows some of the configuration features, with default values shown in italics. For more detailed information, the relevant section numbers are provided. You should look at the various Programmers Guides for more options.
You need to prefix certain properties in this table with the string com.arjuna.ats.internal.jta.transaction. The prefix has been removed for formatting reasons, and has been replaced by ...
Configuration Name | Possible Values |
---|---|
com.arjuna.ats.jta.supportSubtransactions | YES NO |
com.arjuna.ats.jta.jtaTMImplementation | ...arjunacore.TransactionManagerImple ...jts.TransactionManagerImple |
com.arjuna.ats.jta.jtaUTImplementation | ...arjunacore.UserTransactionImple ...jts.UserTransactionImple |
com.arjuna.ats.jta.xaBackoffPeriod | Time in seconds. |
com.arjuna.ats.jdbc.isolationLevel | Any supported JDBC isolation level. |
Since the release of 4.1, the Web Services Transaction product has been merged into Narayana. Narayana is thus a single product that is compliant with all of the major distributed transaction standards and specifications.
Knowledge of Web Services is not required to administer a Narayana installation that only uses the CORBA/J2EE component, nor is knowledge of CORBA required to use the Web Services component. Thus, administrative tasks are separated when they touch only one component or the other.
Apart from ensuring that the run-time system is executing normally, there is little continuous administration needed for the Narayana software. Refer to Important Points for Administrators for some specific concerns.
Important Points for Administrators
The present implementation of the Narayana system provides no security or protection for data. The objects stored in the Narayana object store are (typically) owned by the user who ran the application that created them. The Object Store and Object Manager facilities make no attempt to enforce even the limited form of protection that Unix/Windows provides. There is no checking of user or group IDs on access to objects for either reading or writing.
Persistent objects created in the Object Store never go away unless the StateManager.destroy method is invoked on the object or some application program explicitly deletes them. This means that the Object Store gradually accumulates garbage (especially during application development and testing phases). At present we have no automated garbage collection facility. Further, we have not addressed the problem of dangling references. That is, a persistent object, A, may have stored a Uid for another persistent object, B, in its passive representation on disk. There is nothing to prevent an application from deleting B even though A still contains a reference to it. When A is next activated and attempts to access B, a run-time error will occur.
There is presently no support for version control of objects or database reconfiguration in the event of class structure changes. This is a complex research area that we have not addressed. At present, if you change the definition of a class of persistent objects, you are entirely responsible for ensuring that existing instances of the object in the Object Store are converted to the new representation. The Narayana software can neither detect nor correct references to old object state by new operation versions or vice versa.
Object store management is critically important to the transaction service.
By default, the transaction manager starts up in an active state, so that new transactions can be created immediately. If you wish to have more control over this, you can set the CoordinatorEnvironmentBean.startDisabled configuration option to YES, in which case no transactions can be created until the transaction manager is enabled via a call to TxControl.enable.
It is possible to stop the creation of new transactions at any time by calling TxControl.disable. Transactions that are currently executing will not be affected. By default, recovery is allowed to continue, and the transaction system remains available to manage recovery requests from other instances in a distributed environment (see the Failure Recovery Guide for further details). However, if you wish to disable recovery as well as remove any resources it maintains, pass true to TxControl.disable; the default is false.
If you wish to shut the system down completely, it may also be necessary to terminate the background transaction reaper (see the Programmers Guide for information about what the reaper does). To do this, first prevent the creation of new transactions using TxControl.disable (if you are not creating transactions with timeouts, this step is not necessary). Then call TransactionReaper.terminate. This method takes a Boolean parameter: if true, the method waits for the normal timeout periods associated with any transactions to expire before terminating them; if false, transactions are forced to terminate (roll back, or have their outcome set such that they can only ever roll back) immediately.
If you intend to restart the recovery manager later after having terminated it, then you MUST call the TransactionReaper.terminate method with asynchronous behavior set to false.
The run-time support consists of run-time packages and the OTS transaction manager server. By default, Narayana does not use a separate transaction manager server. Instead, transaction managers are co-located with each application process, which improves performance and application fault-tolerance by reducing application dependency on other services.
When running applications which require a separate transaction manager, set the JTSEnvironmentBean.transactionManager environment variable to YES. The system locates the transaction manager server in a manner specific to the ORB being used. This method may be any of:
Being registered with a name server.
Being added to the ORB’s initial references.
Via a specific references file.
By the ORB’s specific location mechanism (if applicable).
You override the default registration mechanism by using the OrbPortabilityEnvironmentBean.resolveService environment variable, which takes the following values:
Table 3.1. Possible values of OrbPortabilityEnvironmentBean.resolveService
CONFIGURATION_FILE | This is the default, and causes the system to use the CosServices.cfg initial reference file. |
NAME_SERVICE | Attempts to use a name service to register the transaction factory. If this is not supported, an exception is thrown. |
BIND_CONNECT | Uses the ORB-specific bind mechanism. If this is not supported, an exception is thrown. |
RESOLVE_INITIAL_REFERENCES | Attempts to register the transaction service with the ORB's initial service references. If the ORB does not support this, an exception is thrown, and another option must be used. |
Similar to resolve_initial_references, Narayana supports an initial reference file where references for specific services can be stored and used at runtime. The file, CosServices.cfg, consists of two columns: the service name (in the case of the OTS server, TransactionService) and the IOR, separated by a single space.
CosServices.cfg is located at runtime by the following OrbPortabilityEnvironmentBean properties:
initialReferencesRoot | The directory where the file is located, defaulting to the current working directory. |
initialReferencesFile | The name of the configuration file itself, defaulting to CosServices.cfg. |
The OTS server automatically registers itself in the
CosServices.cfg
file if the
CONFIGURATION_FILE
option of
OrbPortabilityEnvironmentBean.resolveService
is used, creating the file if necessary. Stale
information is also automatically removed. Machines sharing the same transaction server should have access to
this file, or a local copy of it.
Example 3.1. Example ORB reference file settings
OrbPortabilityEnvironmentBean.initialReferencesFile
=myFile
OrbPortabilityEnvironmentBean.initialReferencesRoot
=/tmp
If your ORB supports a name service, and is configured to use it, the transaction manager is registered with it automatically. There is no further work required.
This option is not used for JacORB.
Each XA Xid that Narayana
creates must have a unique node identifier encoded within it. Narayana
only recovers
transactions and states that match a specified node identifier. Provide the node identifier with the
CoreEnvironmentBean.nodeIdentifier
property. This value must be unique across your Narayana
instances. If you do not provide a value, Narayana
generates one and reports the value via the logging
infrastructure.
When running XA recovery, you need to specify which types of Xid Narayana
can recover. Use the
JTAEnvironmentBean.xaRecoveryNodes
property to provide one or more values, in a space-separated
list.
A value of ‘*’ forces Narayana to recover, and possibly roll back, all transactions, regardless of their node identifier. Use this value with extreme caution.
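For example, the two properties could be set as follows in jbossts-properties.xml (the node names here are hypothetical):

```xml
<entry key="CoreEnvironmentBean.nodeIdentifier">node1</entry>
<entry key="JTAEnvironmentBean.xaRecoveryNodes">node1 node2</entry>
```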
Two variants of the JTA implementation are now provided and accessible through the same interface. These are:
Local: only non-distributed JTA transactions can be executed. This is the only version available with the Narayana product.
Remote: distributed JTA transactions can be executed. This version is only available with the $PARENT_PRODUCT product and requires a supported CORBA ORB.
Both of these implementations are fully compatible with the transactional JDBC driver provided with Narayana.
Procedure 3.1. Selecting the local JTA implementation
Set the property
JTAEnvironmentBean.jtaTMImplementation
to value
com.arjuna.ats.internal.jta.transaction.arjunacore.TransactionManagerImple
.
Set the property
JTAEnvironmentBean.jtaUTImplementation
to value
com.arjuna.ats.internal.jta.transaction.arjunacore.UserTransactionImple
.
These settings are the default values for the properties and do not need to be specified if the local implementation is required.
Procedure 3.2. Selecting the remote JTA implementation
Set the property
JTAEnvironmentBean.jtaTMImplementation
to value
com.arjuna.ats.internal.jta.transaction.jts.TransactionManagerImple
.
Set the property
JTAEnvironmentBean.jtaUTImplementation
to value
com.arjuna.ats.internal.jta.transaction.jts.UserTransactionImple
.
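In jbossts-properties.xml form, the two settings for the remote implementation would be:

```xml
<entry key="JTAEnvironmentBean.jtaTMImplementation">com.arjuna.ats.internal.jta.transaction.jts.TransactionManagerImple</entry>
<entry key="JTAEnvironmentBean.jtaUTImplementation">com.arjuna.ats.internal.jta.transaction.jts.UserTransactionImple</entry>
```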
The failure recovery subsystem of Narayana
will ensure that results of a transaction are applied consistently to
all resources affected by the transaction, even if any of the application processes or the machine hosting them
crash or lose network connectivity. In the case of machine (system) crash or network failure, the recovery will not
take place until the system or network are restored, but the original application does not need to be
restarted. Recovery responsibility is delegated to
Section 2.1.5.1, “The Recovery Manager”
. Recovery after failure
requires that information about the transaction and the resources involved survives the failure and is accessible
afterward: this information is held in the
ActionStore
, which is part of the
ObjectStore
.
If the
ObjectStore
is destroyed or modified, recovery may not be possible.
Until the recovery procedures are complete, resources affected by a transaction that was in progress at the time
of
the failure may be inaccessible. For database resources, this may be reported as tables or rows held by “in-doubt
transactions”. For
TransactionalObjects for Java
resources, an attempt to activate the
Transactional Object
(as when trying to get a lock) will fail.
The failure recovery subsystem of Narayana
requires that the stand-alone Recovery Manager process be running for
each
ObjectStore
(typically one for each node on the network that is running Narayana
applications). The
RecoveryManager
class is located in the Narayana JAR file, as
com.arjuna.ats.arjuna.recovery.RecoveryManager
. To start the Recovery Manager, issue the following command:
java com.arjuna.ats.arjuna.recovery.RecoveryManager
If the
-test
flag is used with the Recovery Manager then it will display a
Ready
message when initialized, i.e.,
java com.arjuna.ats.arjuna.recovery.RecoveryManager -test
The RecoveryManager reads the properties defined in the
jbossts-properties.xml
file.
A default version of
jbossts-properties.xml
is supplied with the distribution. This can
be used without modification, except possibly the debug tracing fields, as shown in
Section 2.1.5.3, “Output”
.
It is likely that installations will want some form of output from the RecoveryManager, to provide a record of what recovery activity has taken place. The RecoveryManager uses the logging mechanism provided by JBoss Logging, which provides a high-level interface that hides the differences between existing logging APIs such as Jakarta log4j or the JDK logging API.
The configuration of JBoss Logging depends on the underlying logging framework that is used, which is determined by the availability and ordering of alternatives on the classpath. Please consult the JBoss Logging documentation for details. Each log message has an associated log level that gives the importance and urgency of the message. The set of possible log levels, in increasing order of severity (and decreasing verbosity), is:
TRACE
DEBUG
INFO
WARN
ERROR
FATAL
Messages describing the start and the periodic behavior of the RecoveryManager are output using the
INFO
level. If other debug tracing is wanted, the finer DEBUG or TRACE levels should be set
appropriately.
Setting the normal recovery messages to the
INFO
level allows the RecoveryManager to produce a
moderate level of reporting. If nothing is going on, it just reports the entry into each module for each periodic
pass. To disable
INFO
messages produced by the Recovery Manager, the logging level could be set
to the higher level of
ERROR
, which means that the RecoveryManager will only produce
ERROR
and
FATAL
messages.
The RecoveryManager scans the ObjectStore and other locations of information, looking for transactions and
resources that require, or may require, recovery. The scans and recovery processing are performed by recovery
modules. These recovery modules are instances of classes that implement the
com.arjuna.ats.arjuna.recovery.RecoveryModule interface
. Each module has
responsibility for a particular category of transaction or resource. The set of recovery modules used is
dynamically loaded, using properties found in the RecoveryManager property file.
The interface has two methods:
periodicWorkFirstPass
and
periodicWorkSecondPass
. At an interval
defined by property
com.arjuna.ats.arjuna.recovery.periodicRecoveryPeriod
, the RecoveryManager
calls the first pass method on each module, then waits for a brief period, defined by property
com.arjuna.ats.arjuna.recovery.recoveryBackoffPeriod
. Next, it calls the second pass of each
module. Typically, in the first pass, the module scans the relevant part of the ObjectStore to find transactions or
resources that are in-doubt. An in-doubt transaction may be part of the way through the commitment process,
for
instance. On the second pass, if any of the same items are still in-doubt, the original application process may have
crashed, and the item is a candidate for recovery.
An attempt by the RecoveryManager to recover a transaction that is still progressing in the original process is likely to break consistency. Accordingly, the recovery modules use a mechanism, implemented in the com.arjuna.ats.arjuna.recovery.TransactionStatusManager package, to check whether the original process is still alive, and whether the transaction is still in progress. The RecoveryManager only proceeds with recovery if the original process has gone or, if it is still alive, the transaction has completed. If a server process or machine crashes but the transaction-initiating process survives, the transaction completes, usually generating a warning. Recovery of such a transaction is the responsibility of the RecoveryManager.
It is clearly important to set the interval periods appropriately. The total iteration time is the sum of the periodicRecoveryPeriod, the recoveryBackoffPeriod, and the length of time it takes to scan the stores and to attempt recovery of any in-doubt transactions found, for all the recovery modules. With the defaults, each iteration therefore takes at least 120 + 10 = 130 seconds, plus scan and recovery time. The recovery attempt time may include connection timeouts while trying to communicate with processes or machines that have crashed or are inaccessible. There are mechanisms in the recovery system to avoid trying to recover the same transaction indefinitely. The total iteration time affects how long a resource will remain inaccessible after a failure, so periodicRecoveryPeriod should be set accordingly. Its default is 120 seconds. The recoveryBackoffPeriod can be comparatively short, and defaults to 10 seconds. Its purpose is mainly to reduce the number of transactions that are candidates for recovery and which thus require a call to the original process to see if they are still in progress.
In previous versions of Narayana, there was no contact mechanism, and the back-off period needed to be long enough to avoid catching transactions in flight at all. From version 3.0 onward, there is no such risk.
Several recovery modules, implementations of the
com.arjuna.ats.arjuna.recovery.RecoveryModule
interface, are supplied with
Narayana
. These modules support various aspects of transaction recovery, including
JDBC recovery. It is possible for advanced users to create their own recovery modules and register them with the
Recovery Manager. The recovery modules are registered with the RecoveryManager using
RecoveryEnvironmentBean.recoveryModuleClassNames
. These will be invoked on each pass of the
periodic recovery, in the sort order of the property names; it is thus possible to predict the ordering, although a
failure in an application process might occur while a periodic recovery pass is in progress. The default Recovery
Extension settings are:
<entry key="RecoveryEnvironmentBean.recoveryModuleClassNames">
com.arjuna.ats.internal.arjuna.recovery.AtomicActionRecoveryModule
com.arjuna.ats.internal.txoj.recovery.TORecoveryModule
com.arjuna.ats.internal.jts.recovery.transactions.TopLevelTransactionRecoveryModule
com.arjuna.ats.internal.jts.recovery.transactions.ServerTransactionRecoveryModule
com.arjuna.ats.internal.jta.recovery.jts.XARecoveryModule
</entry>
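As a sketch of what writing a custom module involves, consider the class below. The real interface is com.arjuna.ats.arjuna.recovery.RecoveryModule from the Narayana jars; it is mirrored locally here, and the module body is hypothetical, so that the example stands alone:

```java
// Local mirror of com.arjuna.ats.arjuna.recovery.RecoveryModule,
// included only so this sketch compiles without the Narayana jars.
interface RecoveryModule {
    void periodicWorkFirstPass();
    void periodicWorkSecondPass();
}

// Hypothetical module that merely counts passes. A real module would
// scan its part of the ObjectStore on the first pass, and attempt
// recovery of anything still in-doubt on the second pass.
class CountingRecoveryModule implements RecoveryModule {
    int firstPasses;
    int secondPasses;

    public void periodicWorkFirstPass() {
        firstPasses++;   // first pass: record in-doubt candidates
    }

    public void periodicWorkSecondPass() {
        secondPasses++;  // second pass: recover surviving candidates
    }
}
```

A real module of this kind would be registered by adding its class name to the RecoveryEnvironmentBean.recoveryModuleClassNames entry shown above.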
The operation of the recovery subsystem causes some entries to be made in the ObjectStore that are not
removed in
normal progress. The RecoveryManager has a facility for scanning for these and removing items that are very
old. Scans and removals are performed by implementations of the
com.arjuna.ats.arjuna.recovery.ExpiryScanner
interface. These implementations are
loaded by giving the class names as the value of a property
RecoveryEnvironmentBean.expiryScannerClassNames
. The RecoveryManager calls the
scan()
method on each loaded Expiry Scanner implementation at an interval determined by the property
RecoveryEnvironmentBean.expiryScanInterval
. This value is given in hours, and defaults to
12 hours. An
expiryScanInterval
value of zero suppresses any expiry scanning. If the value
supplied is positive, the first scan is performed when RecoveryManager starts. If the value is negative, the first
scan is delayed until after the first interval, using the absolute value.
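The first-scan rules above can be expressed as a small helper; this is purely illustrative and not a Narayana API:

```java
// Encodes the documented expiryScanInterval semantics (value in hours):
//   zero     -> expiry scanning is suppressed entirely
//   positive -> the first scan runs when the RecoveryManager starts
//   negative -> the first scan is delayed by the absolute value
class ExpiryScanPolicy {
    static final long SUPPRESSED = -1;

    static long firstScanDelayHours(long expiryScanInterval) {
        if (expiryScanInterval == 0) {
            return SUPPRESSED;                 // no expiry scanning at all
        }
        return expiryScanInterval > 0 ? 0      // scan immediately at start-up
                                      : -expiryScanInterval; // delayed first scan
    }
}
```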
The kinds of item that are scanned for expiry are:
One TransactionStatusManager item is created by every application process that uses Narayana. It contains the information that allows the RecoveryManager to determine if the process that initiated the transaction is still alive, and what its status is. The expiry time for these items is set by the property com.arjuna.ats.arjuna.recovery.transactionStatusManagerExpiryTime, expressed in hours. The default is 12, and 0 (zero) means never to expire. The expiry time should be greater than the lifetime of any single process using Narayana.
The Expiry Scanner properties for these are:
<entry key="RecoveryEnvironmentBean.expiryScannerClassNames">
com.arjuna.ats.internal.arjuna.recovery.ExpiredTransactionStatusManagerScanner
</entry>
For JacORB to function correctly it needs a valid
jacorb.properties
or
.jacorb_properties
file in one of the following places, in search order:
The classpath
The home directory of the user running the
Service. The home directory is retrieved using
System.getProperty("user.home");
The current directory
The
lib/
directory of the JDK used to run your application. This is retrieved using
System.getProperty("java.home");
A template
jacorb.properties
file is located in the JacORB installation directory.
Within the JacORB properties file there are two important properties which must be tailored to suit your application.
jacorb.poa.thread_pool_max
jacorb.poa.thread_pool_min
These properties specify the minimum and maximum number of request processing threads that JacORB uses in its thread pool. If no threads are available, requests may block until a thread becomes available. For more information on configuring JacORB, refer to the JacORB documentation.
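For example, a jacorb.properties fragment might contain the following; the pool sizes shown are illustrative only, and should be chosen to suit your application's load:

```
jacorb.poa.thread_pool_min=10
jacorb.poa.thread_pool_max=20
```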
JacORB includes its own implementation of the classes defined in the
CosTransactions.idl
file. Unfortunately, these are incompatible with the version shipped with Narayana.
Therefore, the Narayana
jar files must appear in the CLASSPATH before any JacORB jars.
When running the recovery manager, always use the same well-known port on each machine where it
runs. Do not use the
OAPort
property provided by JacORB unless the recovery manager has its own
jacorb.properties
file or the property is provided on the command line when starting the
recovery manager. If the recovery manager and other components of
share the same
jacorb.properties
file, use the
JTSEnvironmentBean.recoveryManagerPort
and
JTSEnvironmentBean.recoveryManagerAddress
properties.
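For example, the two properties could be set as follows; the port number and address here are hypothetical, so use values appropriate to your deployment:

```xml
<entry key="JTSEnvironmentBean.recoveryManagerPort">4712</entry>
<entry key="JTSEnvironmentBean.recoveryManagerAddress">recovery.example.com</entry>
```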
The ORB must be initialized correctly before any application object is created. To guarantee this, use the
initORB
and
create_POA
methods described in the
Orb
Portability Guide
. Consult the Orb Portability Guide if you need to use the underlying
ORB_init
and
create_POA
methods provided by the ORB instead of the
methods provided by the ORB instead of the ORB Portability equivalents.
A transaction is a unit of work that encapsulates multiple database actions such that either all the encapsulated actions fail or all succeed.
Transactions ensure data integrity when an application interacts with multiple datasources.
Practical Example. If you subscribe to a newspaper using a credit card, you are using a transactional system. Multiple systems are involved, and each of the systems needs the ability to roll back its work, and to cause the entire transaction to roll back if necessary. For instance, if the newspaper's subscription system goes offline halfway through your transaction, you don't want your credit card to be charged. If the credit card is over its limit, the newspaper doesn't want your subscription to go through. In either of these cases, the entire transaction should fail if any part of it fails. Neither you as the customer, nor the newspaper, nor the credit card processor wants an unpredictable (indeterminate) outcome to the transaction.
This ability to roll back an operation if any part of it fails is what Narayana is all about. This guide assists you in writing transactional applications to protect your data.
"Transactions" in this guide refers to atomic transactions, which embody the "all-or-nothing" concept outlined above. Transactions are used to guarantee the consistency of data in the presence of failures. Transactions fulfill the requirements of ACID: Atomicity, Consistency, Isolation, Durability.
ACID Properties
Atomicity: the transaction completes successfully (commits), or if it fails (aborts), all of its effects are undone (rolled back).
Consistency: transactions produce consistent results and preserve application-specific invariants.
Isolation: intermediate states produced while a transaction is executing are not visible to others. Furthermore, transactions appear to execute serially, even if they are actually executed concurrently.
Durability: the effects of a committed transaction are never lost (except by a catastrophic failure).
A transaction can be terminated in two ways: committed or aborted (rolled back). When a transaction is committed, all changes made within it are made durable (forced on to stable storage, e.g., disk). When a transaction is aborted, all of the changes are undone. Atomic actions can also be nested; the effects of a nested action are provisional upon the commit/abort of the outermost (top-level) atomic action.
A two-phase commit protocol guarantees that all of the transaction participants either commit or abort any changes made. Figure 3.1, “Two-Phase Commit” illustrates the main aspects of the commit protocol.
Procedure 3.3. Two-phase commit protocol
During phase 1, the action coordinator, C, attempts to communicate with all of the action participants, A and B, to determine whether they will commit or abort.
An abort reply from any participant acts as a veto, causing the entire action to abort.
Based upon these responses, or the lack of them, the coordinator chooses to commit or abort the action.
If the action will commit, the coordinator records this decision on stable storage, and the protocol enters phase 2, where the coordinator forces the participants to carry out the decision. The coordinator also informs the participants if the action aborts.
When each participant receives the coordinator’s phase-one message, it records sufficient information on stable storage to either commit or abort changes made during the action.
After returning the phase-one response, each participant who returned a commit response must remain blocked until it has received the coordinator’s phase-two message.
Until they receive this message, these resources are unavailable for use by other actions. If the coordinator fails before delivery of this message, these resources remain blocked. However, if crashed machines eventually recover, crash recovery mechanisms can be employed to unblock the protocol and terminate the action.
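The protocol described in Procedure 3.3 can be sketched in a few lines of Java. The interfaces and classes here are hypothetical illustrations of the protocol, not Narayana APIs, and the sketch omits the stable-storage logging and crash recovery that a real coordinator performs:

```java
import java.util.List;

// Hypothetical participant in a two-phase commit.
interface Participant {
    boolean prepare();   // phase 1: vote to commit (true) or abort (false)
    void commit();       // phase 2: make the changes durable
    void rollback();     // phase 2: undo the changes
}

class Coordinator {
    // Runs the two-phase protocol; returns true if the action committed.
    static boolean complete(List<Participant> participants) {
        // Phase 1: collect votes; any abort vote acts as a veto.
        boolean commit = true;
        for (Participant p : participants) {
            if (!p.prepare()) {
                commit = false;
                break;
            }
        }
        // A real coordinator records the decision on stable storage here,
        // so the outcome survives a coordinator crash.
        // Phase 2: push the decision to every participant. (For simplicity,
        // rollback is sent to all participants, including any that voted no.)
        for (Participant p : participants) {
            if (commit) {
                p.commit();
            } else {
                p.rollback();
            }
        }
        return commit;
    }
}
```

If every participant votes to commit, the coordinator drives all of them through commit; a single abort vote causes all of them to roll back, which is exactly the veto behavior described above.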
The action coordinator maintains a transaction context where resources taking part in the action need to be registered. Resources must obey the transaction commit protocol to guarantee ACID properties. Typically, the resource provides specific operations which the action can invoke during the commit/abort protocol. However, some resources may not be able to be transactional in this way. This may happen if you have legacy code which cannot be modified. Transactional proxies allow you to use these anomalous resources within an action.
The proxy is registered with, and manipulated by, the action as though it were a transactional resource, and the proxy performs implementation specific work to make the resource it represents transactional. The proxy must participate within the commit and abort protocols. Because the work of the proxy is performed as part of the action, it is guaranteed to be completed or undone despite failures of the action coordinator or action participants.
Given a system that provides transactions for certain operations, you can combine them to form another operation, which is also required to be a transaction. The resulting transaction’s effects are a combination of the effects of its constituent transactions. This paradigm creates the concept of nested subtransactions, and the resulting combined transaction is called the enclosing transaction. The enclosing transaction is sometimes referred to as the parent of a nested (or child) transaction. It can also be viewed as a hierarchical relationship, with a top-level transaction consisting of several subordinate transactions.
An important difference exists between nested and top-level transactions.
The effect of a nested transaction is provisional upon the commit/roll back of its enclosing transactions. The effects are recovered if the enclosing transaction aborts, even if the nested transaction has committed.
Subtransactions are a useful mechanism for two reasons:
If a subtransaction rolls back, perhaps because an object it is using fails, the enclosing transaction does not need to roll back.
If a transaction is already associated with a call when a new transaction begins, the new transaction is nested within it. Therefore, if you know that an object requires transactions, you can use them within the object. If the object’s methods are invoked without a client transaction, then the object’s transactions are top-level. Otherwise, they are nested within the scope of the client's transactions. Likewise, a client does not need to know whether an object is transactional; it can begin its own transaction.
The CORBA architecture, as defined by the OMG, is a standard which promotes the construction of interoperable applications that are based upon the concepts of distributed objects. The architecture principally contains the following components:
Enables objects to transparently send and receive requests in a distributed, heterogeneous environment. This component is the core of the OMG reference model.
A collection of services that support functions for using and implementing objects. Such services are necessary for the construction of any distributed application. The Object Transaction Service (OTS) is the most relevant to Narayana.
Other useful services that applications may need, but which are not considered to be fundamental. Desktop management and help facilities fit this category.
The CORBA architecture allows both implementation and integration of a wide variety of object systems. In particular, applications are independent of the location of an object and the language in which an object is implemented, unless the interface the object explicitly supports reveals such details. As defined in the OMG CORBA Services documentation, object services are defined as a collection of services (interfaces and objects) that support the basic functions for using and implementing objects. These services are necessary to construct distributed application, and are always independent of an application domain. The standards specify several core services including naming, event management, persistence, concurrency control and transactions.
The OTS specification allows, but does not require, nested transactions. Narayana is a fully compliant implementation of the OTS version 1.1 draft 5, and supports nested transactions.
The transaction service provides interfaces that allow multiple distributed objects to cooperate in a
transaction,
committing or rolling back their changes as a group. However, the OTS does not require all objects to have
transactional behavior. An object's support of transactions can be none at all, for some operations, or
fully. Transaction information may be propagated between client and server explicitly, or implicitly. You have
fine-grained control over an object's support of transactions. If your object supports partial or complete
transactional behavior, it needs interfaces derived from interface
TransactionalObject
.
The Transaction Service specification also distinguishes between recoverable objects and transactional objects. Recoverable objects are those that contain the actual state that may be changed by a transaction, and must therefore be informed when the transaction commits or aborts to ensure the consistency of the state changes. This is achieved by registering appropriate objects that support the Resource interface (or the derived SubtransactionAwareResource interface) with the current transaction. Recoverable objects are also, by definition, transactional objects.
In contrast, a simple transactional object does not necessarily need to be recoverable if its state is actually implemented using other recoverable objects. A simple transactional object does not need to participate in the commit protocol used to determine the outcome of the transaction, since it maintains no state information of its own.
The OTS is a protocol engine that guarantees obedience to transactional behavior. It does not directly support all of the transaction properties, but relies on some cooperating services:
Persistence/Recovery Service |
Supports properties of atomicity and durability. |
Concurrency Control Service |
Supports the isolation properties. |
You are responsible for using the appropriate services to ensure that transactional objects have the necessary ACID properties.
Narayana is based upon the original Arjuna system developed at the University of Newcastle between 1986 and 1995. Arjuna predates the OTS specification and includes many features not found in the OTS. Narayana is a superset of the OTS. Applications written using the standard OTS interfaces are portable across OTS implementations.
Narayana features in terms of the OTS specification:
full draft 5 compliance, with support for Synchronization objects and PropagationContexts.
support for subtransactions.
implicit context propagation where support from the ORB is available.
support for multi-threaded applications.
fully distributed transaction managers, i.e., there is no central transaction manager, and the creator of a top-level transaction is responsible for its termination. Separate transaction manager support is also available, however.
transaction interposition.
X/Open compliance, including checked transactions. This checking can optionally be disabled. Note: checked transactions are disabled by default, i.e., any thread can terminate a transaction.
JDBC support.
Full Jakarta Transactions support.
You can use Narayana at three different levels, which correspond to the sections in this chapter, and are each explored in their own chapters as well.
Because of differences in ORB implementations, Narayana uses a separate ORB Portability library which acts as an abstraction layer. Many of the examples used throughout this manual use this library. Refer to the ORB Portability Manual for more details.
The OTS is only a protocol engine for driving registered resources through a two-phase commit protocol.
You are
responsible for building and registering the
Resource
objects which handle
persistence and concurrency control, ensuring ACID properties for transactional application objects. You need to
register
Resources
at appropriate times, and ensure that a given
Resource
is only registered within a single transaction. Programming at the raw
OTS level is extremely basic. You as the programmer are responsible for almost everything to do with
transactions, including managing persistence and concurrency control on behalf of every transactional object.
The OTS implementation of nested transactions is extremely limited, and can lead to the generation of heuristic results. An example of such a result is when a subtransaction coordinator discovers part of the way through committing that some resources cannot commit, but it is unable to tell the already-committed resources to abort. Narayana allows nested transactions to execute a full two-phase commit protocol, which removes the possibility that some resources will commit while others roll back.
When resources are registered with a transaction, you have no control over the order in which these resources are invoked during the commit/abort protocol. For example, previously registered resources may be replaced with newly registered resources, and resources registered with a subtransaction are merged with the subtransaction's parent. Narayana provides an additional Resource subtype which gives you this level of control.
The OTS does not provide any
Resource
implementations. You are responsible for
implementing these interfaces. The interfaces defined within the OTS specification are too low-level for most
application programmers. Therefore, Narayana
includes
Transactional Objects for Java
(TXOJ)
, which makes use of the raw Common Object Services interfaces but provides a higher-level
API for building transactional applications and frameworks. This API automates much of the activities concerned
with participating in an OTS transaction, freeing you to concentrate on application development, rather than
transactions.
The architecture of the system is shown in Figure 2. The API interacts with the concurrency control and persistence services, and automatically registers appropriate resources for transactional objects. These resources may also use the persistence and concurrency services.
Narayana exploits object-oriented techniques to provide you with a toolkit of Java classes which application classes can inherit to obtain transactional properties. These classes form a hierarchy, illustrated in Figure 3.2, “class hierarchy”.
Your main responsibilities are specifying the scope of transactions and setting appropriate locks within
objects.
Narayana guarantees that transactional objects will be registered with, and be driven by, the
appropriate transactions. Crash recovery mechanisms are invoked automatically in the event of failures. When
using the provided interfaces, you do not need to create or register
Resource
objects or
call services controlling persistence or recovery. If a transaction is nested, resources are automatically
propagated to the transaction’s parent upon commit.
The design and implementation goal of Narayana was to provide a programming system for constructing fault-tolerant distributed applications. Three system properties were considered highly important:
Integration of Mechanisms |
Fault-tolerant distributed systems require a variety of system functions for naming, locating and invoking operations upon objects, as well as for concurrency control, error detection and recovery from failures. These mechanisms are integrated in a way that is easy for you to use. |
Flexibility |
Mechanisms must be flexible, permitting implementation of application-specific enhancements, such as type-specific concurrency and recovery control, using system defaults. |
Portability |
Narayana needs to be able to run on any ORB. |
Narayana is implemented in Java and extensively uses the type-inheritance facilities provided by the language to provide user-defined objects with characteristics such as persistence and recoverability.
The OTS specification is written with flexibility in mind, to cope with different application requirements for transactions. Narayana supports all optional parts of the OTS specification. In addition, if the specification allows functionality to be implemented in a variety of different ways, Narayana supports all possible implementations.
Table 3.2. Narayana implementation of OTS specifications
OTS specification | default implementation |
---|---|
If the transaction service chooses to restrict the availability of the transaction context, then it should raise the |
Narayana does not restrict the availability of the transaction context. |
An implementation of the transaction service need not initialize the transaction context for every request. |
Narayana only initializes the transaction context if the interface supported by the target object extends the
TransactionalObject
interface. |
An implementation of the transaction service may restrict the ability for the |
Narayana does not impose restrictions on the propagation of these objects. |
The transaction service may restrict the termination of a transaction to the client that started it. |
Narayana allows the termination of a transaction by any client that uses the |
A |
Narayana provides multiple ways in which the |
A transaction service implementation may use the Event Service to report heuristic decisions. |
Narayana does not use the Event Service to report heuristic decisions. |
An implementation of the transaction service does not need to support nested transactions. |
Narayana supports nested transactions. |
|
Narayana allows |
A transaction service implementation is not required to support interposition. |
Narayana supports various types of interposition. |
Narayana is fully multi-threaded and supports the OTS notion of allowing multiple threads to be active within a transaction, and of a single thread executing multiple transactions. A thread can only be active within a single transaction at a time, however. By default, if a thread is created within the scope of a transaction, the new thread is not associated with the transaction. If the thread needs to be associated with the transaction, use the resume method of either the AtomicTransaction class or the Current class. However, if newly created threads need to automatically inherit the transaction context of their parent, they should extend the OTS_Thread class.
Example 3.2.
Extending the
OTS_Thread
class
public class OTS_Thread extends Thread
{
public void terminate ();
public void run ();
protected OTS_Thread ();
};
Call the run method of OTS_Thread at the start of the application thread class's run method. Call terminate before you exit the body of the application thread's run method.
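The pattern can be sketched with plain Java threads. This is an illustrative model only, not the Narayana classes: a hypothetical subclass captures the creating thread's transaction context in its constructor (which runs in the parent thread) and resumes it inside run.

```java
// Minimal model (not the real Narayana classes) of the OTS_Thread pattern:
// the constructor runs in the parent thread, so it can capture the parent's
// transaction context; run() then resumes that context in the child thread.
public class InheritingThread extends Thread {
    // Stand-in for the per-thread transaction context held by Current.
    static final ThreadLocal<String> CONTEXT = new ThreadLocal<>();

    private final String parentContext;   // captured at construction time
    private final Runnable body;

    InheritingThread(Runnable body) {
        this.parentContext = CONTEXT.get(); // executes in the parent thread
        this.body = body;
    }

    @Override public void run() {
        CONTEXT.set(parentContext);         // "resume" the parent's transaction
        try {
            body.run();
        } finally {
            CONTEXT.remove();               // "terminate": drop the association
        }
    }

    public static void main(String[] args) throws InterruptedException {
        CONTEXT.set("tx-1");                // parent thread is in transaction tx-1
        String[] seen = new String[1];
        Thread t = new InheritingThread(() -> seen[0] = CONTEXT.get());
        t.start();
        t.join();
        System.out.println(seen[0]);
    }
}
```

The real OTS_Thread wraps this capture-and-resume logic behind its protected constructor and run method, which is why application code must call them at the points described above.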
Although the CORBA specification is a standard, it is written so that an ORB can be implemented in multiple ways. As such, writing portable client and server code can be difficult. Because Narayana has been ported to most of the widely available ORBs, it includes a series of ORB Portability classes and macros. If you write your application using these classes, it should be mostly portable between different ORBs. These classes are described in the separate ORB Portability Manual.
Basic Narayana programming involves using the OTS interfaces provided in the CosTransactions module, which is specified in CosTransactions.idl. This chapter is based on the OTS Specification, and deals specifically with the aspects of OTS that are valuable for developing OTS applications using Narayana.
Where relevant, each section describes Narayana implementation decisions and runtime choices available to you. These choices are also summarized at the end of this chapter. Subsequent chapters illustrate using these interfaces to construct transactional applications.
The raw
CosTransactions
interfaces reside in package
org.omg.CosTransactions.
The Narayana implementations of these interfaces reside in package com.arjuna.CosTransactions and its sub-packages.
You can override many run-time decisions of Narayana using Java properties specified at run-time. The property names are listed in the com.arjuna.ats.jts.common.Environment class.
A client application program can manage a transaction using direct or indirect context management.
Indirect context management
means that an application uses the pseudo-object
Current
, provided by the Transaction Service, to associate the transaction context with
the application thread of control.
For
direct context management
, an application manipulates the
Control
object and the other objects associated with the transaction.
An object may require transactions to be either explicitly or implicitly propagated to its operations.
Explicit propagation
means that an application propagates a transaction context by
passing objects defined by the Transaction Service as explicit parameters. Typically the object is the
PropagationContext
structure.
Implicit propagation
means that requests are implicitly associated with the client’s
transaction, by sharing the client's transaction context. The context is transmitted to the objects without
direct client intervention. Implicit propagation depends on indirect context management, since it propagates
the transaction context associated with the
Current
pseudo-object. An object that
supports implicit propagation should not receive any Transaction Service object as an explicit parameter.
A client may use one or both forms of context management, and may communicate with objects that use either method of transaction propagation. This results in four ways in which client applications may communicate with transactional objects:
The client application directly accesses the
Control
object, and the other objects
which describe the state of the transaction. To propagate the transaction to an object, the client must
include the appropriate Transaction Service object as an explicit parameter of an operation. Typically, the
object is the
PropagationContext
structure.
The client application uses operations on the
Current
pseudo-object to create and
control its transactions. When it issues requests on transactional objects, the transaction context
associated with the current thread is implicitly propagated to the object.
For an implicit-model application to use explicit propagation, it can gain access to the Control using the get_control operation on the Current pseudo-object. It can then use a Transaction Service object as an explicit parameter to a transactional object. For efficiency, this should be the PropagationContext structure, obtained by calling get_txcontext on the appropriate Coordinator reference. This is explicit propagation.
A client that accesses the Transaction Service objects directly can use the
resume
pseudo-object operation to set the implicit transaction context associated with its thread. This
way, the
client can invoke operations of an object that requires implicit propagation of the transaction context.
The main difference between direct and indirect context management is the effect on the invoking thread’s
transaction context. Indirect context management causes the thread’s transaction context to be modified
automatically by the OTS. For instance, if method
begin
is called, the thread’s notion of
the current transaction is modified to the newly-created transaction. When the transaction is terminated, the
transaction previously associated with the thread, if one existed, is restored as the thread’s context. This
assumes that subtransactions are supported by the OTS implementation.
If you use direct management, no changes to the thread's transaction context are made by the OTS, leaving the responsibility to you.
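The stack-like behavior of indirect context management can be modeled in a few lines of plain Java. The class and method names below are illustrative stand-ins, not the Narayana API:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Minimal model (names are illustrative, not the Narayana API) of how
// indirect context management maintains a per-thread stack of transactions:
// begin() makes a new (sub)transaction current, and terminating it restores
// whatever transaction the thread was associated with before.
public class CurrentModel {
    private final Deque<String> stack = new ArrayDeque<>();
    private int counter = 0;

    String begin() {                 // new transaction becomes the current one
        String tx = "tx-" + (++counter);
        stack.push(tx);
        return tx;
    }

    void commit() {                  // terminating restores the previous context
        stack.pop();
    }

    String current() {               // the thread's notion of "the" transaction
        return stack.peek();
    }

    public static void main(String[] args) {
        CurrentModel current = new CurrentModel();
        current.begin();                       // top-level transaction
        System.out.println(current.current());
        current.begin();                       // nested transaction
        System.out.println(current.current());
        current.commit();                      // subtransaction ends;
        System.out.println(current.current()); // the parent is restored
    }
}
```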
Table 3.3. Interfaces
Function | Used by | Direct context mgmt | Indirect context mgmt |
---|---|---|---|
Create a transaction |
Transaction originator |
Factory::create
|
begin set_timeout |
Terminate a transaction |
Transaction originator (implicit) All (explicit) |
| commit rollback |
Rollback transaction | Server |
|
|
Propagation of transaction to server | Server | Declaration of method parameter |
|
Client control of transaction propagation to server | All | Request parameters |
|
Register with a transaction | Recoverable Server |
| N/A |
Miscellaneous | All |
|
N/A |
For clarity, subtransaction operations are not shown.
The
TransactionFactory
interface allows the transaction originator to begin a
top-level transaction. Subtransactions must be created using the
begin
method of
Current
, or the
create_subtransaction
method of the parent’s
Coordinator. Operations on the factory and
Coordinator
to create new transactions use
direct context management, and therefore do not modify the calling thread’s transaction context.
The
create
operation creates a new top-level transaction and returns its
Control
object, which you can use to manage or control participation in the new
transaction. Method
create
takes a parameter that is an application-specific timeout
value, in seconds. If the transaction does not complete before this timeout elapses, it is rolled back. If the
parameter is
0
, no application-specific timeout is established.
Subtransactions do not have a timeout associated with them.
The Transaction Service implementation allows the
TransactionFactory
to be a separate
server from the application, shared by transactions clients, and which manages transactions on their
behalf. However, the specification also allows the TransactionFactory to be implemented by an object within each
transactional client. This is the default implementation used by Narayana,
because it removes the need for a
separate service to be available in order for transactional applications to execute, and therefore reduces a point
of failure.
If your applications require a separate transaction manager, set the
OTS_TRANSACTION_MANAGER
environment variable to the value
YES
. The system locates the transaction manager server in a
manner specific to the ORB being used. The server can be located in a number of ways.
Registration with a name server.
Addition to the ORB’s initial references, using a specific references file.
The ORB’s specific location mechanism, if applicable.
Similar to resolve_initial_references, Narayana supports an initial reference file where you can store references for specific services, and use these references at runtime. The file, CosServices.cfg, consists of two columns, separated by a single space.
The service name, which is
TransactionService
in the case of the OTS server.
The IOR of the service.
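For illustration, an entry in CosServices.cfg takes the following form; the stringified IOR is elided here, as real entries contain the full reference:

```
TransactionService IOR:...
```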
CosServices.cfg is usually located in the etc/ directory of the Narayana installation. The OTS server automatically registers itself in this file, creating it if necessary, if you use the configuration file mechanism. Stale information is also automatically removed.
Transaction
Service locates
CosServices.cfg
at runtime, using the
OrbPortabilityEnvironmentBean
properties
initialReferencesRoot
and
initialReferencesFile
.
initialReferencesRoot
names a directory, and
defaults to the current working directory.
initialReferencesFile
refers to a file within the
initialReferencesRoot
, and defaults to the name
CosServices.cfg
.
If your ORB supports a name service, and you configure Narayana to use it, the transaction manager is automatically registered with it.
You can override the default location mechanism with the
RESOLVE_SERVICE
property variable,
which can have any of three possible values.
CONFIGURATION_FILE |
This is the default option, and directs the system to use the CosServices.cfg file. |
NAME_SERVICE |
Narayana tries to use a name service to locate the transaction factory. If the ORB does not support the name service mechanism, an exception is thrown. |
BIND_CONNECT |
Narayana uses the ORB-specific bind mechanism. If the ORB does not support such a mechanism, an exception is thrown. |
If
RESOLVE_SERVICE
is specified when running the transaction factory, the factory registers
itself with the specified resolution mechanism.
As of release 4.5, transaction timeouts are unified across all transaction components and are controlled by ArjunaCore. Refer to the ArjunaCore Development Guide for more information.
Transaction contexts are fundamental to the OTS architecture. Each thread is associated with a context in one of two ways.
Null |
The thread has no associated transaction. |
A transaction ID | The thread is associated with a transaction. |
Contexts may be shared across multiple threads. In the presence of nested transactions, a context remembers
the
stack of transactions started within the environment, so that the context of the thread can be restored to the
state before the nested transaction started, when the nested transaction ends. Threads most commonly use object
Current
to manipulate transactional information, which is represented by
Control
objects.
Current
is the broker between a transaction and
Control
objects.
Your application can manage transaction contexts either directly or indirectly. In the direct approach, the
transaction originator issues a request to a
TransactionFactory
to begin a new top-level
transaction. The factory returns a
Control
object that enables both a
Terminator
interface and a
Coordinator
interface.
Terminator
ends a
transaction.
Coordinator
associates a thread with a transaction, or begins a nested
transaction. You need to pass each interface as an explicit parameter in invocations of operations, because
creating a transaction with them does not change a thread's current context. If you use the factory, and need to
set the current context for a thread to the context which its control object returns, use the
resume
method of interface
Current
.
Example 3.3.
Interfaces
Terminator
,
Coordinator
, and
Control
interface Terminator
{
void commit (in boolean report_heuristics) raises (HeuristicMixed, HeuristicHazard);
void rollback ();
};
interface Coordinator
{
Status get_status ();
Status get_parent_status ();
Status get_top_level_status ();
RecoveryCoordinator register_resource (in Resource r) raises (Inactive);
Control create_subtransaction () raises (SubtransactionsUnavailable,
Inactive);
void rollback_only () raises (Inactive);
...
};
interface Control
{
Terminator get_terminator () raises (Unavailable);
Coordinator get_coordinator () raises (Unavailable);
};
interface TransactionFactory
{
Control create (in unsigned long time_out);
};
When the factory creates a transaction, you can specify a timeout value in seconds. If the transaction times
out,
it is subject to possible roll-back. Set the timeout to
0
to disable application-specific
timeout.
The
Current
interface handles implicit context management. Implicit context
management provides simplified transaction management functionality, and automatically creates nested transactions
as required. Transactions created using
Current
do not alter a thread’s current
transaction context.
Example 3.4.
Interface
Current
interface Current : CORBA::Current
{
void begin () raises (SubtransactionsUnavailable);
void commit (in boolean report_heuristics) raises (NoTransaction,
HeuristicMixed,
HeuristicHazard);
void rollback () raises (NoTransaction);
void rollback_only () raises (NoTransaction);
. . .
Control get_control ();
Control suspend ();
void resume (in Control which) raises (InvalidControl);
};
Subtransactions are a useful mechanism for two reasons:
If a subtransaction rolls back, the enclosing transaction does not also need to roll back. This preserves as much of the work done so far, as possible.
Indirect transaction management does not require special syntax for creating subtransactions. Begin a transaction, and if another transaction is associated with the calling thread, the new transaction is nested within the existing one. If you know that an object requires transactions, you can use them within the object. If the object's methods are invoked without a client transaction, the object's transaction is top-level. Otherwise, it is nested within the client's transaction. A client does not need to know whether an object is transactional.
The outermost transaction of the hierarchy formed by nested transactions is called the top-level transaction. The inner components are called subtransactions. Unlike top-level transactions, the commits of subtransactions depend upon the commit/rollback of the enclosing transactions. Resources acquired within a subtransaction should be inherited by parent transactions when the top-level transaction completes. If a subtransaction rolls back, it can release its resources and undo any changes to its inherited resources.
In the OTS, subtransactions behave differently from top-level transactions at commit time. Top-level transactions undergo a two-phase commit protocol, but nested transactions do not actually perform a commit protocol themselves. When a program commits a nested transaction, it only informs registered resources of its outcome. If a resource cannot commit, an exception is thrown, and the OTS implementation can ignore the exception or roll back the subtransaction. You cannot roll back a subtransaction if any resources have been informed that the transaction committed.
The OTS supports both implicit and explicit propagation of transactional behavior.
Implicit propagation means that an operation signature specifies no transactional behavior, and each invocation automatically carries the transaction context associated with the calling thread.
Explicit propagation means that applications must define their own mechanism for propagating transactions. This has the following features:
A client can control whether its transaction is propagated with any operation invocation.
A client can invoke operations on both transactional and non-transactional objects within a transaction.
Transaction context management and transaction propagation are different things that may be controlled independently of each other. Mixing direct and indirect context management with implicit and explicit transaction propagation is supported. Using implicit propagation requires cooperation from the ORB: the client must send the transaction context associated with the current thread with any operation invocations, and the server must extract it before calling the targeted operation.
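The contrast between the two propagation styles can be shown with a toy model. This sketch is illustrative only (it stands in for the ORB machinery; none of these names are Narayana APIs): explicit propagation passes the context as an ordinary parameter, while implicit propagation picks it up from the calling thread's association.

```java
// Toy model contrasting the two propagation styles.
public class Propagation {
    // Stand-in for the per-thread transaction association held by Current.
    static final ThreadLocal<String> CURRENT = new ThreadLocal<>();

    // Explicit: the transaction context is part of the operation signature.
    static String explicitOperation(String arg, String propagationContext) {
        return arg + " in " + propagationContext;
    }

    // Implicit: the signature says nothing about transactions; the context
    // travels with the invocation (here: the thread association).
    static String implicitOperation(String arg) {
        return arg + " in " + CURRENT.get();
    }

    public static void main(String[] args) {
        CURRENT.set("tx-1");
        System.out.println(explicitOperation("op", CURRENT.get()));
        System.out.println(implicitOperation("op"));
    }
}
```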
If you need implicit context propagation, ensure that Narayana is correctly initialized before you create objects. Both client and server must agree to use implicit propagation. To use implicit context propagation,
your ORB needs to support filters or interceptors, or the
CosTSPortability
interface.
Implicit context propagation |
Property variable
|
Interposition |
Property variable
|
Interposition is required to use the Advanced API.
Example 3.5. Simple transactional client using direct context management and explicit transaction propagation
{
...
org.omg.CosTransactions.Control c;
org.omg.CosTransactions.Terminator t;
org.omg.CosTransactions.PropagationContext pgtx;
c = transFact.create(0); // create top-level action
pgtx = c.get_coordinator().get_txcontext();
...
trans_object.operation(arg, pgtx); // explicit propagation
...
t = c.get_terminator(); // get terminator
t.commit(false); // so it can be used to commit
...
}
The next example rewrites the same program to use indirect context management and implicit propagation. This example is considerably simpler, because the application only needs to start and either commit or abort actions.
Example 3.6. Indirect context management and implicit propagation
{
...
current.begin(); // create new action
...
trans_object2.operation(arg); // implicit propagation
...
current.commit(false); // simple commit
...
}
The last example illustrates the flexibility of OTS by using both direct and indirect context management in conjunction with explicit and implicit transaction propagation.
Example 3.7. Direct and indirect context management with explicit and implicit propagation
{
...
org.omg.CosTransactions.Control c;
org.omg.CosTransactions.Terminator t;
org.omg.CosTransactions.PropagationContext pgtx;
c = transFact.create(0); // create top-level action
pgtx = c.get_coordinator().get_txcontext();
current.resume(c); // set implicit context
...
trans_object.operation(arg, pgtx); // explicit propagation
trans_object2.operation(arg); // implicit propagation
...
current.rollback(); // oops! rollback
...
}
The
Control
interface allows a program to explicitly manage or propagate a
transaction context. An object supporting the
Control
interface is associated with
one specific transaction. The
Control
interface supports two operations:
get_terminator
and
get_coordinator
.
get_terminator
returns an instance of the
Terminator
interface.
get_coordinator
returns an instance
of the
Coordinator
interface. Both of these methods throw the
Unavailable
exception if the
Control
cannot provide the
requested object. The OTS implementation can restrict the ability to use the Terminator and Coordinator in other
execution environments or threads. At a minimum, the creator must be able to use them.
Obtain the Control object for a transaction when it is created, using either the TransactionFactory, or the create_subtransaction method defined by the Coordinator interface. Obtain a Control for the transaction associated with the current thread using the get_control or suspend methods defined by the Current interface.
The transaction creator must be able to use its
Control
, but the OTS
implementation decides whether other threads can use
Control
.
Narayana places no restrictions on the users of the Control.
The OTS specification does not provide a means to indicate to the transaction system that information
and
objects associated with a given transaction can be purged from the system. In Narayana, the
the
Current
interface destroys all information about a transaction when it
terminates. For that reason, do not use any
Control
references to the transaction
after it commits or rolls back.
However, if the transaction is terminated using the Terminator interface, it is up to the programmer to signal that the transaction information is no longer required: this can be done using the destroyControl method of the OTS class in the com.arjuna.CosTransactions package. Once the program has indicated that the transaction information is no longer required, the same restrictions on using Control references apply as described above. If destroyControl is not called then transaction information will persist until garbage collected by the Java runtime.
In Narayana,
you can propagate
Coordinators
and
Terminators
between execution environments.
The
Terminator
interface supports
commit
and
rollback
operations. Typically, the transaction originator uses these operations. Each
object supporting the Terminator interface is associated with a single transaction. Direct context management via
the Terminator interface does not change the client thread’s notion of the current transaction.
The
commit
operation attempts to commit the transaction. To successfully commit, the
transaction must not be marked
rollback only
, and all of its participants must agree to
commit. Otherwise, the
TRANSACTION_ROLLEDBACK
exception is thrown. If the
report_heuristics
parameter is
true
, the Transaction Service reports
inconsistent results using the
HeuristicMixed
and
HeuristicHazard
exceptions.
When a transaction is committed, the coordinator drives any registered
Resources
using
their
prepare
or
commit
methods. These Resources are responsible for ensuring that any state changes to recoverable objects are made permanent, to guarantee the ACID properties.
When
rollback
is called, the registered
Resources
need to
guarantee that all changes to recoverable objects made within the scope of the transaction, and its descendants,
are undone. All resources locked by the transaction are made available to other transactions, as appropriate to the
degree of isolation the resources enforce.
See
Section 3.2.3.7.1, “
specifics
”
for how long
Terminator
references remain valid after a transaction terminates.
When a transaction is committing, it must make certain state changes persistent, so that it can recover
if a
failure occurs, and continue to commit, or rollback. To guarantee ACID properties, flush these state changes to
the persistence store implementation before the transaction proceeds to commit. Otherwise, the application may
assume that the transaction has committed, when the state changes may still be in volatile storage, and may be lost by
a subsequent hardware failure. By default, Narayana makes sure that such state changes are flushed. However,
these flushes can impose a significant performance penalty to the application. To prevent transaction state
flushes, set the
TRANSACTION_SYNC
variable to
OFF
. Obviously, do this at
your own risk.
When a transaction commits, if only a single resource is registered, the transaction manager does not
need to
perform the two-phase protocol. A single phase commit is possible, and the outcome of the transaction is
determined by the resource. In a distributed environment, this optimization represents a significant performance
improvement. As such, Narayana defaults to performing single-phase commit in this situation. Override this
behavior at runtime by setting the
COMMIT_ONE_PHASE
property variable to
NO
.
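The routing decision can be sketched as follows. This is an illustrative model, not the Narayana coordinator internals; the interface and method names are simplified stand-ins for the OTS Resource operations:

```java
import java.util.List;

// Sketch of the one-phase commit optimization: with exactly one registered
// resource the coordinator skips the prepare round entirely and lets that
// resource decide the outcome via commit_one_phase.
public class OnePhase {
    interface Resource {
        boolean prepare();          // vote: true means "ready to commit"
        void commit();
        void commitOnePhase();
    }

    static String complete(List<Resource> registered) {
        if (registered.size() == 1) {
            registered.get(0).commitOnePhase(); // no prepare round needed
            return "one-phase commit";
        }
        for (Resource r : registered) {         // otherwise, full two-phase
            if (!r.prepare()) return "rollback";
        }
        registered.forEach(Resource::commit);
        return "two-phase commit";
    }

    public static void main(String[] args) {
        Resource r = new Resource() {
            public boolean prepare() { return true; }
            public void commit() { }
            public void commitOnePhase() { }
        };
        System.out.println(complete(List.of(r)));
        System.out.println(complete(List.of(r, r)));
    }
}
```

In a distributed setting, skipping the prepare round saves one network exchange per participant, which is why the optimization matters.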
The Coordinator interface is returned by the
get_coordinator
method of the
Control
interface. It supports the operations resources need to participate in a
transaction. These participants are usually either recoverable objects or agents of recoverable objects, such as
subordinate coordinators. Each object supporting the
Coordinator
interface is
associated with a single transaction. Direct context management via the Coordinator interface does not change the
client thread’s notion of the current transaction. You can terminate a transaction directly, through the
Terminator
interface. In that case, trying to terminate the transaction a second
time using
Current
causes an exception to be thrown for the second termination
attempt.
The operations supported by the Coordinator interface of interest to application programmers are:
Table 3.4.
Operations supported by the
Coordinator
interface
get_status get_parent_status get_top_level_status |
Return the status of the associated transaction. At any given time a transaction can have one of a number of status values representing its progress. |
is_same_transaction and related comparison operations |
You can use these operations for transaction comparison. Resources may use these various operations to guarantee that they are registered only once with a specific transaction. |
hash_transaction |
Returns a hash code for the specified transaction. |
register_resource |
Registers the specified Resource as a participant in the transaction. The
|
register_subtran_aware |
Registers the specified subtransaction-aware resource with the current transaction, so that it knows when the subtransaction commits or rolls back. This method cannot register the resource as a participant in
the top-level transaction. The
|
register_synchronization |
Registers the
|
rollback_only |
Marks the transaction so that the only possible outcome is for it to rollback. The Inactive exception is raised if the transaction has already been prepared/completed. |
create_subtransaction |
A new subtransaction is created. Its parent is the current transaction. The
|
See
Section 3.2.3.7.1, “
specifics
”
to control how long
Coordinator
references remain valid after a transaction terminates.
To disable subtransactions, set the
OTS_SUPPORT_SUBTRANSACTIONS
property variable to
NO
.
The OTS permits individual resources to make heuristic decisions. Heuristic decisions are unilateral decisions made by one or more participants to commit or abort the transaction, without waiting for the consensus decision from the transaction service. Use heuristic decisions with care and only in exceptional circumstances, because they can lead to a loss of integrity in the system. If a participant makes a heuristic decision, an appropriate exception is raised during commit or abort processing.
Table 3.5. Possible heuristic outcomes
HeuristicRollback |
Raised on an attempt to commit, to indicate that the resource already unilaterally rolled back the transaction. |
HeuristicCommit |
Raised on an attempt to roll back, to indicate that the resource already unilaterally committed the transaction. |
HeuristicMixed |
Indicates that a heuristic decision has been made. Some updates committed while others rolled back. |
HeuristicHazard |
Indicates that a heuristic decision may have been made, and the outcome of some of the updates is unknown. For those updates which are known, they either all committed or all rolled back. |
HeuristicMixed takes priority over HeuristicHazard. Heuristic decisions are only reported back to the
originator
if the
report_heuristics
argument is set to
true
when you invoke the
commit operation.
The
Current
interface defines operations that allow a client to explicitly manage
the association between threads and transactions, using indirect context management. It defines operations that
simplify the use of the Transaction Service.
Table 3.6.
Methods of
Current
begin |
Creates a new transaction and associates it with the current thread. If the client
thread is currently
associated with a transaction, and the OTS implementation supports nested transactions, the new
transaction becomes a subtransaction of that transaction. Otherwise, the new transaction is a top-level
transaction. If the OTS implementation does not support nested transactions, the
|
commit |
Commits the transaction. If the client thread does not have permission to commit the
transaction, the
standard exception
|
rollback |
Rolls back the transaction. If the client thread does not have permission to terminate
the transaction,
the standard exception
|
rollback_only |
Limits the transaction's outcome to rollback only. If the transaction has already been terminated, or is in the process of terminating, an appropriate exception is thrown. |
get_status |
Returns the status of the current transaction, or exception
|
set_timeout |
Modifies the timeout associated with top-level transactions for subsequent
|
get_control |
Obtains a
|
suspend |
Obtains an object representing a transaction's context. If the client thread is not
associated with a
transaction, a null object reference is returned. You can pass this object to the
|
resume |
Associates the client thread with a transaction. If the parameter is a null object reference, the client thread becomes associated with no transaction. The thread loses association with any previous transactions. |
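The suspend and resume pair can be modeled with a thread-local association. This is an illustrative sketch only (a String stands in for the Control reference; these are not the Narayana classes):

```java
// Minimal model of suspend/resume: suspend detaches and returns the thread's
// transaction context, resume re-attaches a previously suspended one
// (or clears the association when passed null).
public class SuspendResume {
    private static final ThreadLocal<String> CURRENT = new ThreadLocal<>();

    static String suspend() {
        String control = CURRENT.get();
        CURRENT.remove();              // thread is no longer associated
        return control;                // may be passed to resume() later
    }

    static void resume(String control) {
        if (control == null) CURRENT.remove();
        else CURRENT.set(control);
    }

    public static void main(String[] args) {
        resume("tx-1");                        // associate thread with tx-1
        String saved = suspend();              // detach it
        System.out.println(CURRENT.get());     // no current transaction
        resume(saved);                         // re-attach
        System.out.println(CURRENT.get());
    }
}
```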
Ideally, you should obtain Current by using the life-cycle service factory finder. However, very few ORBs support this. Narayana therefore provides a get_current method for this purpose, which hides any ORB-specific mechanisms required for obtaining Current.
If no timeout value is associated with Current, Narayana associates no timeout with the transaction. The current OTS specification does not provide a means whereby the timeout associated with transaction creation can be obtained. However, the Narayana implementation of Current supports a get_timeout method.
By default, the Narayana implementation of
Current
does not use a separate
TransactionFactory
server when creating new top-level transactions. Each transactional
client has a
TransactionFactory
co-located with it. Override this by setting the
OTS_TRANSACTION_MANAGER
variable to
YES
.
The transaction factory is located in the bin/ directory of the Narayana distribution. Start it by executing the OTS script.
Current
locates the factory
in a manner specific to the ORB: using the name service, through
resolve_initial_references
, or via the
CosServices.cfg
file. The
CosServices.cfg
file is similar to
resolve_initial_references
,
and
is automatically updated when the transaction factory is started on a particular machine. Copy the file to each
instance which needs to share the same transaction factory.
If you do not need subtransaction support, set the
OTS_SUPPORT_SUBTRANSACTIONS
property
variable to
NO
. The
setCheckedAction
method overrides the
CheckedAction
implementation associated with each transaction created by the
thread.
The Transaction Service uses a two-phase commit protocol to complete a top-level transaction with each registered resource.
Example 3.8. Completing a top-level transaction
interface Resource
{
Vote prepare ();
void rollback () raises (HeuristicCommit, HeuristicMixed,
HeuristicHazard);
void commit () raises (NotPrepared, HeuristicRollback,
HeuristicMixed, HeuristicHazard);
void commit_one_phase () raises (HeuristicRollback, HeuristicMixed,
HeuristicHazard);
void forget ();
};
The
Resource
interface defines the operations invoked by the transaction
service. Each
Resource
object is implicitly associated with a single top-level
transaction. Do not register a
Resource
with the same transaction more than
once. When you tell a
Resource
to prepare, commit, or abort, it must do so on
behalf of a specific transaction. However, the
Resource
methods do not specify the
transaction identity. It is implicit, since a
Resource
can only be registered with
a single transaction.
Transactional objects must use the
register_resource
method to register objects
supporting the
Resource
interface with the current transaction. An object
supporting the
Coordinator
interface is either passed as a parameter in the case of
explicit propagation, or retrieved using operations on the
Current
interface in the
case of implicit propagation. If the transaction is nested, the
Resource
is not
informed of the subtransaction’s completion, and is registered with its parent upon commit.
Do not register a given
Resource
with the same transaction more than once, or it
will receive multiple termination calls. When a
Resource
is directed to prepare,
commit, or abort, it needs to link these actions to a specific transaction. Because
Resource
methods do not specify the transaction identity, but can only be associated with a
single transaction, you can infer the identity.
A single
Resource
or group of
Resources
guarantees
the ACID properties for the recoverable object they represent. A Resource's work depends on the phase of its
transaction.
If none of the persistent data associated with the resource is modified by the transaction, the
Resource can
return
VoteReadOnly
and forget about the transaction. It does not need to know the
outcome of the second phase of the commit protocol, since it hasn't made any changes.
If the resource can write, or has already written, all the data needed to commit the transaction
to stable
storage, as well as an indication that it has prepared the transaction, it can return
VoteCommit
. After receiving this response, the Transaction Service either commits
or rolls back. To support recovery, the resource should store the
RecoveryCoordinator
reference in stable storage.
The resource can return
VoteRollback
under any circumstances. After returning this
response, the resource can forget the transaction.
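The three possible votes can be sketched as a simple decision. The class, enum, and parameter names below are illustrative only, not part of the OTS or Narayana API:

```java
// Hypothetical sketch of a participant's prepare() decision. The names here
// (PrepareSketch, modifiedData, stateLogged) are illustrative, not an OTS API.
public class PrepareSketch {
    public enum Vote { VoteReadOnly, VoteCommit, VoteRollback }

    // modifiedData: did this transaction change any persistent state?
    // stateLogged: were the changes, plus a "prepared" record, forced to stable storage?
    public static Vote prepare(boolean modifiedData, boolean stateLogged) {
        if (!modifiedData) {
            // Nothing to undo or redo: the resource can forget the
            // transaction and skip the second phase entirely.
            return Vote.VoteReadOnly;
        }
        if (stateLogged) {
            // The resource can now honour either commit or rollback.
            return Vote.VoteCommit;
        }
        // Unable to guarantee durability: force the transaction to abort.
        return Vote.VoteRollback;
    }

    public static void main(String[] args) {
        System.out.println(prepare(false, false)); // read-only participant
        System.out.println(prepare(true, true));   // fully prepared participant
        System.out.println(prepare(true, false));  // cannot prepare
    }
}
```

A resource that returns VoteReadOnly or VoteRollback never sees the second phase, which is why only the VoteCommit path needs the RecoveryCoordinator reference in stable storage.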
The Resource reports inconsistent outcomes using the HeuristicMixed and HeuristicHazard exceptions. One example is that a Resource reports that it can commit and later decides to roll back. Heuristic decisions must be made persistent and remembered by the Resource until the transaction coordinator issues the forget method. This method tells the Resource that the heuristic decision has been noted, and possibly resolved.
The resource should undo any changes made as part of the transaction. Heuristic exceptions can be used to report heuristic decisions related to the resource. If a heuristic exception is raised, the resource must remember this outcome until the forget operation is performed so that it can return the same outcome in case rollback is performed again. Otherwise, the resource can forget the transaction.
If necessary, the resource should commit all changes made as part of this transaction. As with
rollback
, it can raise heuristic exceptions. The
NotPrepared
exception is raised if the resource has not been prepared.
Since there can be only a single resource, the
HeuristicHazard
exception reports
heuristic decisions related to that resource.
Performed after the resource raises a heuristic exception. After the coordinator determines that
the
heuristic situation is addressed, it issues
forget
on the resource. The resource
can forget all knowledge of the transaction.
Recoverable objects that need to participate within a nested transaction may support the
SubtransactionAwareResource
interface, a specialization of the
Resource
interface.
Example 3.9.
Interface
SubtransactionAwareResource
interface SubtransactionAwareResource : Resource
{
    void commit_subtransaction (in Coordinator parent);
    void rollback_subtransaction ();
};
A recoverable object is only informed of the completion of a nested transaction if it registers a SubtransactionAwareResource. Register the object with either the register_resource method of the Coordinator interface, or the register_subtran_aware method of the Current interface. A recoverable object registers Resources to participate within the completion of top-level transactions, and SubtransactionAwareResources keep track of the completion of subtransactions. The commit_subtransaction method uses a reference to the parent transaction to allow subtransaction resources to register with these transactions.
SubtransactionAwareResources find out about the completion of a transaction after it terminates. They cannot affect the outcome of the transaction. Different OTS implementations deal with exceptions raised by SubtransactionAwareResources in implementation-specific ways.
Use the register_resource or register_subtran_aware method to register a SubtransactionAwareResource with a transaction.
If the transaction is a subtransaction, the resource is informed of its completion, and automatically registered with the subtransaction’s parent if the parent commits.
If the transaction is not a subtransaction, an exception is thrown. Otherwise, the resource is
informed when
the subtransaction completes. Unlike
register_resource
, the resource is not
propagated to the subtransaction’s parent if the transaction commits. If you need this propagation,
re-register using the supplied parent parameter.
In either case, the resource cannot affect the outcome of the transaction completion. It can only act on the transaction's decision, after the decision is made. However, if the resource cannot respond appropriately, it can raise an exception. The OTS handles these exceptions in an implementation-specific way.
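The difference between the two registration paths can be modelled with a toy subtransaction. All names here are illustrative; a real OTS implementation drives genuine Resource objects rather than strings:

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of the two registration paths for a subtransaction. Plain
// Resources (register_resource) are silently propagated to the parent on
// commit; subtransaction-aware resources (register_subtran_aware) are
// notified of completion but NOT propagated. Names are illustrative only.
public class SubtxSketch {
    public final List<String> resources = new ArrayList<>();    // register_resource
    public final List<String> subtranAware = new ArrayList<>(); // register_subtran_aware
    public final List<String> notified = new ArrayList<>();

    // Returns the set of participants handed up to the parent transaction.
    public List<String> commit() {
        List<String> propagatedToParent = new ArrayList<>(resources);
        for (String r : subtranAware) {
            // Informed of the outcome, but must re-register with the supplied
            // parent explicitly if it wants to stay involved.
            notified.add(r + ":commit_subtransaction");
        }
        return propagatedToParent;
    }

    public static void main(String[] args) {
        SubtxSketch sub = new SubtxSketch();
        sub.resources.add("resA");
        sub.subtranAware.add("resB");
        System.out.println(sub.commit());  // [resA]
        System.out.println(sub.notified);  // [resB:commit_subtransaction]
    }
}
```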
A SubtransactionAwareResource which raises an exception to the commitment of a transaction may create inconsistencies within the transaction if other SubtransactionAwareResources think the transaction committed. To prevent this possibility of inconsistency, Narayana forces the enclosing transaction to abort if an exception is raised. Narayana also provides extended subtransaction-aware resources to overcome this and other problems. See Section for further details.
If an object needs notification before a transaction commits, it can register an object which implements the Synchronization interface, using the register_synchronization operation of the Coordinator interface. Synchronizations flush volatile state data to a recoverable object or database before the transaction commits. You can only associate Synchronizations with top-level transactions. If you try to associate a Synchronization with a nested transaction, an exception is thrown. Each object supporting the Synchronization interface is associated with a single top-level transaction.
Example 3.10. Synchronization
interface Synchronization : TransactionalObject
{
    void before_completion ();
    void after_completion (in Status s);
};
The method
before_completion
is called before the two-phase commit protocol starts, and
after_completion
is called after the protocol completes. The final status of the
transaction is given as a parameter to
after_completion
. If
before_completion
raises an exception, the transaction rolls back. Any exceptions thrown
by
after_completion
do not affect the transaction outcome.
The OTS only requires Synchronizations to be invoked if the transaction commits. If it rolls back, registered Synchronizations are not informed.
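The call ordering around two-phase commit can be sketched as follows. The interface shape mirrors Synchronization, but all names are illustrative and the coordinator is reduced to a log of events:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of how a coordinator drives a Synchronization around two-phase
// commit. Names are illustrative; a real coordinator invokes CORBA objects.
public class SyncSketch {
    interface Synchronization {
        void beforeCompletion();           // flush volatile state to stable store
        void afterCompletion(String status);
    }

    public static List<String> run(boolean commits) {
        List<String> log = new ArrayList<>();
        Synchronization sync = new Synchronization() {
            public void beforeCompletion() { log.add("flush"); }
            public void afterCompletion(String s) { log.add("done:" + s); }
        };
        if (commits) {
            sync.beforeCompletion();       // before the 2PC protocol starts
            log.add("2PC");                // prepare/commit with all Resources
            sync.afterCompletion("Committed");
        } else {
            // The OTS only requires Synchronizations on commit; on rollback,
            // registered Synchronizations need not be informed.
            log.add("rollback");
        }
        return log;
    }

    public static void main(String[] args) {
        System.out.println(run(true));  // [flush, 2PC, done:Committed]
        System.out.println(run(false)); // [rollback]
    }
}
```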
Given the previous description of
Control
,
Resource
,
SubtransactionAwareResource
, and Synchronization, the following UML relationship
diagram can be drawn:
Figure 3.9. Relationship between Control, Resource, SubtransactionAwareResource, and Synchronization
Synchronizations must be called before the top-level transaction commit protocol starts, and after it completes. By default, if the transaction is instructed to roll back, the Synchronizations associated with the transaction are not contacted. To override this, and call Synchronizations regardless of the transaction's outcome, set the OTS_SUPPORT_ROLLBACK_SYNC property variable to YES.
If you use distributed transactions and interposition, a local proxy for the top-level transaction
coordinator
is created for any recipient of the transaction context. The proxy looks like a
Resource
or
SubtransactionAwareResource
, and registers itself as such with the actual top-level
transaction coordinator. The local recipient uses it to register
Resources
and
Synchronizations
locally.
The local proxy can affect how Synchronizations are invoked during top-level transaction commit. Without
the
proxy, all Synchronizations are invoked before any Resource or SubtransactionAwareResource objects are
processed. However, with interposition, only those Synchronizations registered locally to the transaction
coordinator are called. Synchronizations registered with remote participants are only called when the interposed
proxy is invoked. The local proxy may only be invoked after locally-registered Resource or
SubtransactionAwareResource objects are invoked. With the
OTS_SUPPORT_INTERPOSED_SYNCHRONIZATION
property variable set to
YES
, all
Synchronizations are invoked before any Resource or SubtransactionAwareResource, no matter where they are
registered.
In
Figure 3.11, “Subtransaction commit”
, a subtransaction with both
Resource
and
SubtransactionAwareResource
objects commits. The
SubtransactionAwareResources
were registered using
register_subtran_aware
. The
Resources
do not know the
subtransaction terminated, but the
SubtransactionAwareResources
do. Only the
Resources
are automatically propagated to the parent transaction.
Figure 3.12, “Subtransaction rollback”
illustrates the impact of a subtransaction rolling back. Any registered
resources are discarded, and all
SubtransactionAwareResources
are informed of the
transaction outcome.
Figure 3.13, “Top-level commit”
shows the activity diagram for committing a top-level
transaction. Subtransactions within the top-level transaction which have also successfully committed propagate
SubtransactionAwareResources
to the top-level transaction. These
SubtransactionAwareResources
then participate within the two-phase commit protocol. Any
registered
Synchronizations
are contacted before
prepare
is
called. Because of indirect context management, when the transaction commits, the transaction service changes the
invoking thread’s transaction context.
The
TransactionalObject
interface indicates to an object that it is
transactional. By supporting this interface, an object indicates that it wants to associate the transaction
context associated with the client thread with all operations on its interface. The
TransactionalObject
interface defines no operations.
OTS specifications do not require an OTS to initialize the transaction context of every request handler. It
is
only a requirement if the interface supported by the target object is derived from
TransactionalObject
. Otherwise, the initial transaction context of the thread is
undefined. A transaction service implementation can raise the
TRANSACTION_REQUIRED
exception if a
TransactionalObject
is invoked outside the scope of a transaction.
In a single-address space application, transaction contexts are implicitly shared between clients and objects, regardless of whether or not the objects support the TransactionalObject interface. To preserve distribution transparency, where implicit transaction propagation is supported, you can direct Narayana to always propagate transaction contexts to objects. The default is only to propagate if the object is a TransactionalObject. Set the OTS_ALWAYS_PROPAGATE_CONTEXT property variable to NO to override this behavior.
By default, Narayana does not require objects which support the TransactionalObject interface to be invoked within the scope of a transaction. The object determines whether it should be invoked within a transaction. If so, it must throw the TransactionRequired exception. Override this default by setting the OTS_NEED_TRAN_CONTEXT shell environment variable to YES.
Make sure that the settings for
OTS_ALWAYS_PROPAGATE_CONTEXT
and
OTS_NEED_TRAN_CONTEXT
are identical at the client and the server. If they are not identical
at both ends, your application may terminate abnormally.
OTS objects supporting interfaces such as the
Control
interface are standard CORBA
objects. When an interface is passed as a parameter in an operation call to a remote server, only an object
reference is passed. This ensures that any operations that the remote server performs on the interface are
correctly performed on the real object. However, this can have substantial penalties for the application, because
of the overhead of remote invocation. For example, when the server registers a
Resource
with the current transaction, the invocation might be remote to the originator of the transaction.
To avoid this overhead, your OTS may support interposition. This permits a server to create a local control object which acts as a local coordinator, and fields registration requests that would normally be passed back to the originator. This coordinator must register itself with the original coordinator, so that it can correctly participate in the commit protocol. Interposed coordinators form a tree structure with their parent coordinators.
To use interposition, ensure that Narayana is correctly initialized before creating objects. Also, the client and server must both use interposition. Your ORB must support filters or interceptors, or the CosTSPortability interface, since interposition requires the use of implicit transaction propagation. To use interposition, set the OTS_CONTEXT_PROP_MODE property variable to INTERPOSITION.
Interposition is not required if you use the advanced API.
A reference to a
RecoveryCoordinator
is returned as a result of successfully calling
register_resource
on the transaction's
Coordinator
. Each
RecoveryCoordinator
is implicitly associated with a single
Resource
. It can drive the
Resource
through recovery procedures in
the event of a failure which occurs during the transaction.
The OTS supports both checked and unchecked transaction behavior.
Integrity constraints of checked transactions
A transaction will not commit until all transactional objects involved in the transaction have completed their transactional requests.
Only the transaction originator can commit the transaction
Checked transactional behavior is typical transaction behavior, and is widely implemented. Checked behavior requires implicit propagation, because explicit propagation prevents the OTS from tracking which objects are involved in the transaction.
Unchecked behavior allows you to implement relaxed models of atomicity. Any use of explicit propagation
implies
the possibility of unchecked behavior, since you as the programmer are in control of the behavior. Even if you use
implicit propagation, a server may unilaterally abort or commit the transaction using the
Current
interface, causing unchecked behavior.
Some OTS implementations enforce checked behavior for the transactions they support, to provide an extra level of transaction integrity. The checks ensure that all transactional requests made by the application complete their processing before the transaction is committed. A checked Transaction Service guarantees that commit fails unless all transactional objects involved in the transaction complete the processing of their transactional requests. Rolling back the transaction does not require such a check, since all outstanding transactional activities will eventually roll back if they are not directed to commit.
There are many possible implementations of checking in a Transaction Service. One provides equivalent function to that provided by the request and response inter-process communication models defined by X/Open. The X/Open Transaction Service model of checking is widely implemented. It describes the transaction integrity guarantees provided by many existing transaction systems. These transaction systems provide the same level of transaction integrity for object-based applications, by providing a Transaction Service interface that implements the X/Open checks.
In X/Open, completion of the processing of a request means that the object has completed execution of its method and replied to the request. The level of transaction integrity provided by a Transaction Service implementing the X/Open model provides equivalent function to that provided by the XATMI and TxRPC interfaces defined by X/Open for transactional applications. X/Open DTP Transaction Managers are examples of transaction management functions that implement checked transaction behavior.
This implementation of checked behavior depends on implicit transaction propagation. When implicit propagation is used, the objects involved in a transaction at any given time form a tree, called the request tree for the transaction. The beginner of the transaction is the root of the tree. Requests add nodes to the tree, and replies remove the replying node from the tree. Synchronous requests, or the checks described below for deferred synchronous requests, ensure that the tree collapses to a single node before commit is issued.
If a transaction uses explicit propagation, the Transaction Service has no way to know which objects are or will be involved in the transaction. Therefore, the use of explicit propagation is not permitted by a Transaction Service implementation that enforces X/Open-style checked behavior.
Applications that use synchronous requests exhibit checked behavior. If your application uses deferred synchronous requests, all clients and objects need to be under the control of a checking Transaction Service. In that case, the Transaction Service can enforce checked behavior by applying a reply check and a commit check. The Transaction Service must also apply a resume check, so that the transaction is only resumed by applications in the correct part of the request tree.
reply check
Before an object replies to a transactional request, a check is made to ensure that the object has received replies to all the deferred synchronous requests that propagated the transaction in the original request. If this condition is not met, an exception is raised and the transaction is marked as rollback-only. A Transaction Service may check that a reply is issued within the context of the transaction associated with the request.
commit check
Before a commit can proceed, a check is made to ensure that the commit request for the transaction is being issued from the same execution environment that created the transaction, and that the client issuing commit has received replies to all the deferred synchronous requests it made that propagated the transaction.
resume check
Before a client or object associates a transaction context with its thread of control, a check is made to ensure that this transaction context was previously associated with the execution environment of the thread. This association would exist if the thread either created the transaction or received it in a transactional operation.
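The commit check can be modelled as bookkeeping over outstanding deferred replies. The class and method names below are illustrative, not an OTS API:

```java
import java.util.HashSet;
import java.util.Set;

// Toy model of the X/Open commit check: commit may proceed only when it is
// issued from the environment that created the transaction and every deferred
// synchronous request that propagated the transaction has been replied to.
public class CommitCheckSketch {
    private final String creator;
    private final Set<String> outstandingReplies = new HashSet<>();

    public CommitCheckSketch(String creator) { this.creator = creator; }

    public void requestSent(String id)   { outstandingReplies.add(id); }
    public void replyReceived(String id) { outstandingReplies.remove(id); }

    // The request tree must have collapsed back to the root before commit.
    public boolean commitAllowed(String issuer) {
        return issuer.equals(creator) && outstandingReplies.isEmpty();
    }

    public static void main(String[] args) {
        CommitCheckSketch tx = new CommitCheckSketch("client-1");
        tx.requestSent("req-1");
        System.out.println(tx.commitAllowed("client-1")); // false: reply outstanding
        tx.replyReceived("req-1");
        System.out.println(tx.commitAllowed("server-2")); // false: wrong environment
        System.out.println(tx.commitAllowed("client-1")); // true
    }
}
```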
Where support from the ORB is available, Narayana supports X/Open checked transaction behavior. However, unless the OTS_CHECKED_TRANSACTIONS property variable is set to YES, checked transactions are disabled. This is the default setting.
Checked transactions are only possible with a co-located transaction manager.
In a multi-threaded application, multiple threads may be associated with a transaction during its lifetime, sharing the context. In addition, if one thread terminates a transaction, other threads may still be active within it. In a distributed environment, it can be difficult to guarantee that all threads have finished with a transaction when it terminates. By default, Narayana issues a warning if a thread terminates a transaction when other threads are still active within it, but allows the transaction termination to continue. You can choose to block the thread which is terminating the transaction until all other threads have disassociated themselves from its context, or use other methods to solve the problem.
Narayana provides the com.arjuna.ats.arjuna.coordinator.CheckedAction class, which allows you to override the thread and transaction termination policy. Each transaction has an instance of this class associated with it, and you can implement the class on a per-transaction basis.
Example 3.11.
CheckedAction
implementation
public class CheckedAction
{
    public CheckedAction ();

    public synchronized void check (boolean isCommit, Uid actUid,
                                    BasicList list);
};
When a thread attempts to terminate the transaction and other active threads exist within it, the system invokes the check method on the transaction’s CheckedAction object. The parameters to the check method are:
isCommit
Indicates whether the transaction is in the process of committing or rolling back.
actUid
The transaction identifier.
list
A list of all of the threads currently marked as active within this transaction.
When
check
returns, the transaction termination continues. Obviously the state of the
transaction at this point may be different from that when check was called.
Set the
CheckedAction
instance associated with a given transaction with the
setCheckedAction
method of
Current
.
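A custom policy might simply record the anomaly. The following stand-alone sketch mirrors the shape of check without depending on the Narayana classes; plain String and List stand in for Uid and BasicList, and the class name is hypothetical:

```java
import java.util.Arrays;
import java.util.List;

// Stand-alone sketch of a CheckedAction-style termination policy. The real
// class lives in com.arjuna.ats.arjuna.coordinator and takes Uid/BasicList;
// String and List<String> stand in for them here.
public class LoggingCheckedAction {
    public static String check(boolean isCommit, String actUid,
                               List<String> activeThreads) {
        // A real policy could block until the list empties, interrupt the
        // threads, or escalate; here we just describe the situation.
        return (isCommit ? "commit" : "rollback") + " of " + actUid
                + " with " + activeThreads.size() + " thread(s) still active";
    }

    public static void main(String[] args) {
        System.out.println(check(true, "tx-42",
                Arrays.asList("worker-1", "worker-2")));
    }
}
```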
Any execution environment (thread, process) can use a transaction Control.
Controls, Coordinators, and Terminators are valid for use for the duration of the transaction if implicit transaction control is used, via Current. If you use explicit control, via the TransactionFactory and Terminator, then use the destroyControl method of the OTS class in com.arjuna.CosTransactions to signal when the information can be garbage collected. You can propagate Coordinators and Terminators between execution environments.
If you try to commit a transaction when there are still active subtransactions within it, Narayana rolls back the parent and the subtransactions.
Narayana includes full support for nested transactions. However, if a resource raises an exception to the commitment of a subtransaction after other resources have previously been told that the transaction committed, Narayana forces the enclosing transaction to abort. This guarantees that all resources used within the subtransaction are returned to a consistent state. You can disable support for subtransactions by setting the OTS_SUPPORT_SUBTRANSACTIONS variable to NO.
Obtain
Current
from the
get_current
method of the OTS.
A timeout value of zero seconds is assumed for a transaction if none is specified using
set_timeout
.
By default, Current does not use a separate transaction manager server. Override this behavior by setting the OTS_TRANSACTION_MANAGER environment variable. The location of the transaction manager is ORB-specific.
Checked transactions are disabled by default. To enable them, set the
OTS_CHECKED_TRANSACTIONS
property to
YES
.
Narayana must be correctly initialized before you create any application object. To guarantee this, use the
initORB
and
POA
methods described in the
Orb
Portability Guide
. Consult the
Orb Portability Guide
if you need direct use
of the
ORB_init
and
create_POA
methods provided by the
underlying ORB.
If you need implicit context propagation and interposition, initialize Narayana correctly before you create any
objects. You can only use implicit context propagation on an ORB which supports filters and interceptors, or the
CosTSPortability
interface. You can set
OTS_CONTEXT_PROP_MODE
to
CONTEXT
or
INTERPOSITION
,
depending on which functionality you need. If
you are using the Narayana API, you need to use interposition.
Steps to participate in an OTS transaction
Create
Resource
and
SubtransactionAwareResource
objects for each
object which will participate within the transaction or subtransaction. These resources manage the
persistence, concurrency control, and recovery for the object. The OTS invokes these objects during the
prepare, commit, or abort phase of the transaction or subtransaction, and the Resources perform the work of
the application.
Register
Resource
and
SubtransactionAwareResource
objects at the
correct time within the transaction, and ensure that the object is only registered once within a given
transaction. As part of registration, a
Resource
receives a reference to a
RecoveryCoordinator
. This reference must be made persistent, so that the transaction
can recover in the event of a failure.
Correctly propagate resources such as locks to parent transactions and
SubtransactionAwareResource
objects.
Drive the crash recovery for each resource which was participating within the transaction, in the event of a failure.
The OTS does not provide any
Resource
implementations. You need to provide these
implementations. The interfaces defined within the OTS specification are too low-level for most
situations. Narayana is designed to make use of raw Common Object Services (COS) interfaces, but provides a higher-level API for building transactional applications and frameworks. This API automates much of the work involved with participating in an OTS transaction.
If you use implicit transaction propagation, ensure that appropriate objects support the
TransactionalObject
interface. Otherwise, you need to pass the transaction contexts
as parameters to the relevant operations.
Example 3.12. Indirect and implicit transaction originator
...
txn_crt.begin();
// should test the exceptions that might be raised
...
// the client issues requests, some of which involve
// transactional objects;
BankAccount1.makeDeposit(deposit);
...
A transaction originator uses indirect context management and implicit transaction
propagation.
txn_crt
is a pseudo object supporting the
Current
interface. The client uses the
begin
operation
to start the transaction, which becomes implicitly associated with the originator’s thread of control.
The program commits the transaction associated with the client thread. The
report_heuristics
argument is set to
false
, so the Transaction
Service makes no reports about possible heuristic decisions.
...
txn_crt.commit(false);
...
Example 3.13. Direct and explicit transaction originator
...
org.omg.CosTransactions.Control c;
org.omg.CosTransactions.Terminator t;
org.omg.CosTransactions.Coordinator co;
org.omg.CosTransactions.PropagationContext pgtx;
c = TFactory.create(0);
t = c.get_terminator();
pgtx = c.get_coordinator().get_txcontext();
...
This transaction originator uses direct context management and explicit transaction propagation. The
client
uses a factory object supporting the
CosTransactions::TransactionFactory
interface to create a new transaction, and uses the returned
Control
object to retrieve
the
Terminator
and
Coordinator
objects.
The client issues requests, some of which involve transactional objects. This example uses explicit
propagation of the context. The
Control
object reference is passed as an explicit
parameter of the request. It is declared in the OMG IDL of the interface.
...
transactional_object.do_operation(arg, pgtx);
The transaction originator uses the
Terminator
object to commit the transaction. The
report_heuristics
argument is set to
false
, so the Transaction
Service makes no reports about possible heuristic decisions.
...
t.commit(false);
The
commit
operation of
Current
or the
Terminator
interface takes the
boolean
report_heuristics
parameter. If the
report_heuristics
argument is
false
, the commit operation can complete as soon as the
Coordinator
makes the decision to commit or roll back the transaction. The application does not need to wait for the
Coordinator
to complete the commit protocol by informing all the participants of the
outcome of the transaction. This can significantly reduce the elapsed time for the commit operation, especially
where participant
Resource
objects are located on remote network nodes. However, no
heuristic conditions can be reported to the application in this case.
Using the
report_heuristics
option guarantees that the commit operation does not complete until
the
Coordinator
completes the commit protocol with all
Resource
objects involved in the transaction. This guarantees that the application is informed of any non-atomic
outcomes
of the transaction, through one of the exceptions
HeuristicMixed
or
HeuristicHazard
. However, it increases the application-perceived elapsed time for the
commit operation.
A Recoverable Server includes at least one transactional object and one resource object, each of which has distinct responsibilities. The transactional object implements the application's transactional operations and registers a Resource object with the Coordinator, so that the Recoverable Server's resources, including any necessary recovery, can commit.
The
Resource
object identifies the involvement of the Recoverable Server in a particular
transaction. This requires a
Resource
object to only be registered in one transaction at
a time. Register a different
Resource
object for each transaction in which a recoverable
server is concurrently involved. A transactional object may receive multiple requests within the scope of a
single transaction. It only needs to register its involvement in the transaction once. The
is_same_transaction
operation allows the transactional object to determine if the
transaction associated with the request is one in which the transactional object is already registered.
The
hash_transaction
operations allow the transactional object to reduce the number of
transaction comparisons it has to make. All
Coordinators
for the same transaction return
the same hash code. The
is_same_transaction
operation only needs to be called on
Coordinators
with the same hash code as the
Coordinator
of the
current request.
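The hash-based filtering can be sketched as a lookup that only falls through to the expensive comparison when the hash codes collide. The names below are illustrative; a real server would hold Coordinator references and call is_same_transaction remotely:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of using a transaction hash to avoid repeated is_same_transaction
// calls: the (notionally remote) full comparison is only attempted against
// registrations that share the incoming request's hash code. Illustrative only.
public class HashFilterSketch {
    // hash code -> transaction id we registered under that hash
    private final Map<Integer, String> registered = new HashMap<>();

    public void register(int hash, String txId) { registered.put(hash, txId); }

    // Returns true if we are already registered with this transaction.
    public boolean alreadyRegistered(int hash, String txId) {
        String candidate = registered.get(hash);
        // Only when a candidate with the same hash exists do we pay for the
        // full comparison, modelled here by equals().
        return candidate != null && candidate.equals(txId);
    }

    public static void main(String[] args) {
        HashFilterSketch server = new HashFilterSketch();
        server.register(7, "txA");
        System.out.println(server.alreadyRegistered(7, "txA")); // true
        System.out.println(server.alreadyRegistered(7, "txB")); // false: different transaction
        System.out.println(server.alreadyRegistered(3, "txA")); // false: unseen hash
    }
}
```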
A
Resource
object participates in the completion of the transaction, updates the
resources of the Recoverable Server in accordance with the transaction outcome, and ensures termination of the
transaction, including across failures.
A
Reliable Server
is a special case of a Recoverable Server. A Reliable Server can use
the same interface as a Recoverable Server to ensure application integrity for objects that do not have
recoverable state. In the case of a Reliable Server, the transactional object can register a
Resource
object that replies
VoteReadOnly
to
prepare
if its integrity constraints are satisfied. It replies
VoteRollback
if it finds a problem. This approach allows the server to apply integrity
constraints which apply to the transaction as a whole, rather than to individual requests to the server.
Example 3.14. Reliable server
/*
BankAccount1 is an object with internal resources. It inherits from both the TransactionalObject and the Resource interfaces:
*/
interface BankAccount1:
CosTransactions::TransactionalObject, CosTransactions::Resource
{
...
void makeDeposit (in float amt);
...
};
/* The corresponding Java class is: */
public class BankAccount1
{
public void makeDeposit(float amt);
...
};
/*
Upon entering, the context of the transaction is implicitly associated with the object’s thread. The pseudo object
supporting the Current interface is used to retrieve the Coordinator object associated with the transaction.
*/
void makeDeposit (float amt)
{
org.omg.CosTransactions.Control c;
org.omg.CosTransactions.Coordinator co;
c = txn_crt.get_control();
co = c.get_coordinator();
...
/*
Before registering the resource, the object should check whether it has already been registered for the same
transaction. This is done using the hash_transaction and is_same_transaction operations. Note that this object registers
itself as a resource. This imposes the restriction that the object may only be involved in one transaction at a
time. This is not the recommended way for recoverable objects to participate within transactions, and is only used as an
example. If more parallelism is required, separate resource objects should be registered for involvement in the same
transaction.
*/
RecoveryCoordinator r;
r = co.register_resource(this);
// performs some transactional activity locally
balance = balance + amt;
num_transactions++;
...
// end of transactional operation
};
Example 3.15. Transactional object
/* A BankAccount2 is an object with external resources that inherits from the TransactionalObject interface: */
interface BankAccount2: CosTransactions::TransactionalObject
{
...
void makeDeposit(in float amt);
...
};
public class BankAccount2
{
public void makeDeposit(float amt);
...
}
/*
Upon entering, the context of the transaction is implicitly associated with the object’s thread. The makeDeposit
operation performs some transactional requests on external, recoverable servers. The objects res1 and res2 are
recoverable objects. The current transaction context is implicitly propagated to these objects.
*/
void makeDeposit(float amt)
{
balance = res1.get_balance(amt);
balance = balance + amt;
res1.set_balance(balance);
res2.increment_num_transactions();
} // end of transactional operation
The Transaction Service provides atomic outcomes for transactions in the presence of application, system or communication failures. From the viewpoint of each user object role, two types of failure are relevant:
A local failure, which affects the object itself.
An external failure, such as failure of another object or failure in the communication with an object.
The transaction originator and transactional server handle these failures in different ways.
If a transaction originator fails before the originator issues
commit
, the
transaction is rolled back. If the originator fails after issuing commit and before the outcome is
reported, the transaction can either commit or roll back, depending on timing. In this case, the
transaction completes without regard to the failure of the originator.
Any external failure which affects the transaction before the originator issues
commit
causes the transaction to roll back. The standard exception
TransactionRolledBack
is raised in the originator when it issues
commit
.
If a failure occurs after commit and before the outcome is reported, the client may not be
informed of the
outcome of the transaction. This depends on the nature of the failure, and the use of the
report_heuristics
option of
commit
. For example, the transaction
outcome is not reported to the client if communication between the client and the
Coordinator
fails.
A client can determine the outcome of the transaction by using method
get_status
on the
Coordinator
. However, this is not reliable because it may return the status
NoTransaction
, which is ambiguous. The transaction could have committed and been
forgotten, or it could have rolled back and been forgotten.
An originator is only guaranteed to know the transaction outcome in one of two ways:
If its implementation includes a
Resource
object, so that it can participate in the two-phase commit procedure.
If the originator and
Coordinator
are located in the same failure domain.
If the Transactional Server fails, optional checks by a Transaction Service implementation may cause the
transaction to roll back. Without such checks, whether the transaction rolls back depends on whether the
commit decision is already made, such as when an unchecked client invokes
commit
before receiving all replies from servers.
Any external failure affecting the transaction during the execution of a Transactional
Server causes the
transaction to be rolled back. If the failure occurs while the transactional object’s method is executing,
the failure has no effect on the execution of this method. The method may terminate normally, returning
the reply to its client. Eventually the
TransactionRolledBack
exception is
returned to a client issuing
commit
.
Behavior of a recoverable server when failures occur is determined by the two phase commit
protocol
between the
Coordinator
and the recoverable server’s
Resource
object.
When you develop OTS applications which use the raw OTS interfaces, be aware of the following items:
Create
Resource
and
SubtransactionAwareResource
objects for each
object which will participate within the transaction or subtransaction. These resources handle the
persistence, concurrency control, and recovery for the object. The OTS invokes these objects during the
prepare, commit, and abort phases of the transaction or subtransaction, and the
Resources
then perform all appropriate work.
Register
Resource
and
SubtransactionAwareResource
objects at the
correct time within the transaction, and ensure that the object is only registered once within a given
transaction. As part of registration, a
Resource
receives a reference to a
RecoveryCoordinator
, which must be made persistent so that recovery can occur in the
event of a failure.
For nested transactions, make sure that any propagation of resources, such as locks to parent
transactions,
is done correctly. You also need to manage propagation of
SubtransactionAwareResource
objects to parents.
In the event of failures, drive the crash recovery for each
Resource
which participates
within the transaction.
The OTS does not provide any
Resource
implementations.
This chapter describes the classes you can use to extend the OTS interfaces. These advanced interfaces are all written on top of the basic OTS engine described previously, and applications which use them will still run on other OTS implementations, though without the added functionality.
Features
Provides a more manageable interface to the OTS transaction than
CosTransactions::Current
. It automatically keeps track of transaction scope,
and allows you to create nested top-level transactions in a more natural manner than the one provided by the
OTS.
Allow nested transactions to use a two-phase commit protocol. These Resources can also be ordered
within Narayana, enabling you to control the order in which
Resource
s are called during the
commit or abort protocol.
Where available, uses implicit context propagation between client and server. Otherwise, provides an explicit interposition class, which simplifies the work involved in interposition. The API, Transactional Objects for Java (TXOJ) , requires either explicit or implicit interposition. This is even true in a stand-alone mode when using a separate transaction manager. TXOJ is fully described in the ArjunaCore Development Guide .
The extensions to the
CosTransactions.idl
are located in the
com.arjuna.ArjunaOTS
package and the
ArjunaOTS.idl
file.
The OTS implementation of nested transactions is extremely limited, and can lead to the generation of inconsistent results. One example is a scenario in which a subtransaction coordinator discovers part of the way through committing that a resource cannot commit. It may not be able to tell the resources that have already committed to abort.
In most transactional systems which support subtransactions, the subtransaction commit protocol is the same
as a
top-level transaction’s. There are two phases, a
prepare
phase and a
commit
or
abort
phase. Using a multi-phase commit protocol
avoids the above problem of discovering that one resource cannot commit after others have already been told to
commit. The
prepare
phase generates consensus on the commit outcome, and the
commit
or
abort
phase enforces the outcome.
Narayana supports the strict OTS implementation of subtransactions for those resources derived from
CosTransactions::SubtransactionAwareResource
. However, if a resource is derived
from
ArjunaOTS::ArjunaSubtranAwareResource
, it is driven by a two-phase commit
protocol whenever a nested transaction commits.
Example 3.16. ArjunaSubtranAwareResource
interface ArjunaSubtranAwareResource :
CosTransactions::SubtransactionAwareResource
{
CosTransactions::Vote prepare_subtransaction ();
};
During the first phase of the commit protocol the
prepare_subtransaction
method is
called, and the resource behaves as though it were being driven by a top-level transaction, making any state
changes provisional upon the second phase of the protocol. Any changes to persistent state must still be
provisional upon the second phase of the top-level transaction, as well. Based on the votes of all registered
resources, Narayana then calls either
commit_subtransaction
or
rollback_subtransaction
.
This scheme only works successfully if all resources registered within a given subtransaction are
instances of
the
ArjunaSubtranAwareResource
interface, and if, after a resource tells the
coordinator it can prepare, it does not change its mind.
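The protocol described above can be sketched with a small, self-contained simulation. The Vote values and method names mirror the IDL interfaces shown earlier, but the coordinator logic below is a hypothetical illustration, not Narayana's implementation:

```java
import java.util.List;

public class SubtranCommitSketch
{
    enum Vote { VoteCommit, VoteRollback }

    // Mirrors ArjunaOTS::ArjunaSubtranAwareResource: prepare_subtransaction is
    // added to the usual commit/rollback callbacks of a subtransaction-aware resource.
    interface SubtranResource
    {
        Vote prepare_subtransaction();
        void commit_subtransaction();
        void rollback_subtransaction();
    }

    // Hypothetical coordinator: phase one gathers votes from every registered
    // resource; phase two enforces the consensus on all of them.
    static boolean commitSubtransaction(List<SubtranResource> resources)
    {
        boolean commit = true;
        for (SubtranResource r : resources)
            if (r.prepare_subtransaction() == Vote.VoteRollback)
                commit = false;   // a single VoteRollback forces the whole subtransaction back

        for (SubtranResource r : resources)
        {
            if (commit)
                r.commit_subtransaction();
            else
                r.rollback_subtransaction();
        }
        return commit;
    }
}
```

Because consensus is gathered before any resource is told to commit, the inconsistency described earlier (some resources committed, one unable to) cannot arise.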
When resources are registered with a transaction, the transaction maintains them within a list, called the
intentions list.
At termination time, the transaction uses the intentions list to drive
each resource appropriately, to commit or abort. However, you have no control over the order in which resources
are called, or whether previously-registered resources should be replaced with newly registered resources. The
interface
ArjunaOTS::OTSAbstractRecord
gives you this level of control.
Example 3.17. OTSAbstractRecord
interface OTSAbstractRecord : ArjunaSubtranAwareResource
{
readonly attribute long typeId;
readonly attribute string uid;
boolean propagateOnAbort ();
boolean propagateOnCommit ();
boolean saveRecord ();
void merge (in OTSAbstractRecord record);
void alter (in OTSAbstractRecord record);
boolean shouldAdd (in OTSAbstractRecord record);
boolean shouldAlter (in OTSAbstractRecord record);
boolean shouldMerge (in OTSAbstractRecord record);
boolean shouldReplace (in OTSAbstractRecord record);
};
typeId | returns the record type of the instance. This is one of the values of the enumerated type Record_type. |
uid | a stringified Uid for this record. |
propagateOnAbort | by default, instances of OTSAbstractRecord should not be propagated to the parent transaction if the current transaction rolls back; returning TRUE overrides this. |
propagateOnCommit | returning TRUE from this method causes the instance to be propagated to the parent transaction if the current transaction commits; returning FALSE causes it to be discarded. |
saveRecord | returning TRUE from this method causes the transaction service to save the instance in the object store, so that it can be recovered in the event of a failure; returning FALSE means it can be discarded. |
merge | used when two records need to merge together. |
alter | used when a record should be altered. |
shouldAdd | returns true if the record should be added to the intentions list, false if it is to be discarded. |
shouldAlter | returns true if the existing record should be altered, false otherwise. |
shouldMerge | returns true if the two records should be merged into a single record, false otherwise. |
shouldReplace | returns true if the record should replace an existing one, false otherwise. |
When inserting a new record into the transaction’s intentions list, Narayana uses the following algorithm:
If a record with the same type and uid has already been inserted, then the methods
shouldAdd
, and related methods, are invoked to determine whether this record should
also be added.
If no such match occurs, then the record is inserted in the intentions list based on the
type
field, and ordered according to the uid. All of the records with the same type
appear ordered in the intentions list.
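The insertion algorithm can be sketched as follows. The record class below is a simplified stand-in for OTSAbstractRecord (only typeId, uid, and shouldAdd are modelled), and the list handling is illustrative rather than Narayana's actual code:

```java
import java.util.LinkedList;
import java.util.List;

public class IntentionsListSketch
{
    // Simplified stand-in for ArjunaOTS::OTSAbstractRecord.
    static class Record
    {
        final long typeId;
        final String uid;

        Record(long typeId, String uid) { this.typeId = typeId; this.uid = uid; }

        // Hook mirroring shouldAdd(): called when a record with the same type
        // and uid is already present. Default: do not add a duplicate.
        boolean shouldAdd(Record existing) { return false; }
    }

    private final List<Record> intentionsList = new LinkedList<>();

    /** Returns true if the record was inserted into the intentions list. */
    public boolean insert(Record rec)
    {
        // Step 1: a record with the same type and uid already present?
        for (Record existing : intentionsList)
            if (existing.typeId == rec.typeId && existing.uid.equals(rec.uid)
                    && !rec.shouldAdd(existing))
                return false;   // duplicate suppressed

        // Step 2: no match, so insert ordered by type, then uid, keeping all
        // records of the same type together for the commit/abort protocol.
        int i = 0;
        for (Record existing : intentionsList)
        {
            if (existing.typeId > rec.typeId
                || (existing.typeId == rec.typeId && existing.uid.compareTo(rec.uid) > 0))
                break;
            i++;
        }
        intentionsList.add(i, rec);
        return true;
    }

    public List<Record> records() { return intentionsList; }
}
```

Ordering by type keeps, for example, all lock records adjacent, so they are driven together at termination time.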
OTSAbstractRecord
is derived from
ArjunaSubtranAwareResource
. Therefore, all instances of
OTSAbstractRecord
inherit the benefits of this interface.
In terms of the OTS,
AtomicTransaction
is the preferred interface to the OTS
protocol engine. It is equivalent to
CosTransactions::Current
, but with more
emphasis on easing application development. For example, if an instance of
AtomicTransaction
goes out of scope before it is terminated, the transaction
automatically rolls back.
CosTransactions::Current
cannot provide this
functionality. When building applications using Narayana,
use
AtomicTransaction
for
the added benefits it provides. It is located in the
com.arjuna.ats.jts.extensions
package.
Example 3.18. AtomicTransaction
public class AtomicTransaction
{
public AtomicTransaction ();
public void begin () throws SystemException, SubtransactionsUnavailable,
NoTransaction;
public void commit (boolean report_heuristics) throws SystemException,
NoTransaction, HeuristicMixed,
HeuristicHazard,TransactionRolledBack;
public void rollback () throws SystemException, NoTransaction;
public Control control () throws SystemException, NoTransaction;
public Status get_status () throws SystemException;
/* Allow action commit to be suppressed */
public void rollbackOnly () throws SystemException, NoTransaction;
public void registerResource (Resource r) throws SystemException, Inactive;
public void
registerSubtransactionAwareResource (SubtransactionAwareResource sr)
throws SystemException, NotSubtransaction;
public void
registerSynchronization(Synchronization s) throws SystemException,
Inactive;
};
Table 3.7. AtomicTransaction's Methods
begin |
Starts an action |
commit |
Commits an action |
rollback |
Aborts an action |
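As a sketch of the calling pattern, the stand-in class below mimics only the lifecycle of Example 3.18; the real class lives in the package named above and throws the OTS exceptions shown there. It illustrates how rollbackOnly suppresses a later commit:

```java
public class AtomicTransactionSketch
{
    enum Status { NoTransaction, Active, Committed, RolledBack }

    private Status status = Status.NoTransaction;
    private boolean rollbackOnly = false;

    public void begin()
    {
        status = Status.Active;   // the real class also tracks nesting and thread scope
        rollbackOnly = false;
    }

    // Mirrors commit(boolean report_heuristics): a transaction marked
    // rollback-only can never commit.
    public void commit()
    {
        if (status != Status.Active)
            throw new IllegalStateException("NoTransaction");
        status = rollbackOnly ? Status.RolledBack : Status.Committed;
    }

    public void rollback()
    {
        if (status != Status.Active)
            throw new IllegalStateException("NoTransaction");
        status = Status.RolledBack;
    }

    // Allow action commit to be suppressed.
    public void rollbackOnly() { rollbackOnly = true; }

    public Status get_status() { return status; }
}
```

The usual bracket is begin, do work (registering resources as needed), then commit or rollback; the real class additionally rolls back automatically if the instance goes out of scope unterminated.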
Transaction nesting is determined dynamically. Any transaction started within the scope of another running transaction is nested.
The
TopLevelTransaction
class, which is derived from
AtomicTransaction
, allows creation of nested top-level transactions. Such
transactions allow non-serializable and potentially non-recoverable side effects to be initiated from within a
transaction, so use them with caution. You can create nested top-level transactions with a combination of the
CosTransactions::TransactionFactory
and the
suspend
and
resume
methods of
CosTransactions::Current
. However, the
TopLevelTransaction
class provides a more user-friendly interface.
AtomicTransaction
and
TopLevelTransaction
are completely compatible
with
CosTransactions::Current
. You can use the two transaction mechanisms
interchangeably within the same application or object.
AtomicTransaction
and
TopLevelTransaction
are similar to
CosTransactions::Current
. They both simplify the interface between you and the OTS.
However, you gain two advantages by using
AtomicTransaction
or
TopLevelTransaction
.
The ability to create nested top-level transactions which are automatically associated with the current thread. When the transaction ends, the previous transaction associated with the thread, if any, becomes the thread’s current transaction.
Instances of
AtomicTransaction
track scope, and if such an instance goes out of scope
before it is terminated, it is automatically aborted, along with its children.
When using TXOJ in a distributed manner,
Narayana requires you to use interposition between client and object. This
requirement also exists if the application is local, but the transaction manager is remote. In the case of
implicit context propagation, where the application object is derived from
CosTransactions::TransactionalObject
,
you do not need to do anything
further.
Narayana automatically provides interposition. However, where implicit propagation is not supported by the
ORB, or your application does not use it, you must take additional action to enable interposition.
The class
com.arjuna.ats.jts.ExplicitInterposition
allows an application to create a local
control object which acts as a local coordinator, fielding registration requests that would normally be passed
back to the originator. This surrogate registers itself with the original coordinator, so that it can correctly
participate in the commit protocol. The application thread context becomes the surrogate transaction
hierarchy. Any transaction context currently associated with the thread is lost. The interposition lasts for
the
lifetime of the explicit interposition object, at which point the application thread is no longer associated with
a transaction context. Instead, it is set to
null
.
Interposition is intended only for those situations where the transactional object and the transaction occur within different processes, rather than being co-located. If the transaction is created locally to the client, do not use the explicit interposition class. The transaction is implicitly associated with the transactional object because it resides within the same process.
Example 3.19. ExplicitInterposition
public class ExplicitInterposition
{
public ExplicitInterposition ();
public void registerTransaction (Control control) throws InterpositionFailed, SystemException;
public void unregisterTransaction () throws InvalidTransaction,
SystemException;
};
A transaction context can be propagated between client and server in two ways: either as a reference to the client’s transaction Control, or explicitly sent by the client. Therefore, there are two ways in which the interposed transaction hierarchy can be created and registered. For example, consider the class Example which is derived from LockManager and has a method increment:
Example 3.20. ExplicitInterposition Example
public boolean increment (Control control)
{
    ExplicitInterposition inter = new ExplicitInterposition();

    try
    {
        inter.registerTransaction(control);
    }
    catch (Exception e)
    {
        return false;
    }

    try
    {
        // do real work
    }
    finally
    {
        try
        {
            inter.unregisterTransaction();
        }
        catch (Exception e)
        {
            // handle InvalidTransaction or SystemException appropriately
        }
    }

    // return value dependent upon outcome
}
If the
Control
passed to the
register
operation of
ExplicitInterposition
is
null
, no exception is thrown. The system
assumes that the client did not send a transaction context to the server. A transaction created within the object
will thus be a top-level transaction.
When the application returns, or when it finishes with the interposed hierarchy, the program should call
unregisterTransaction
to disassociate the thread of control from the hierarchy. This
occurs automatically when the
ExplicitInterposition
object is garbage collected. However,
since this may be after the transaction terminates,
Narayana assumes the thread is still associated with the
transaction and issues a warning about trying to terminate a transaction while threads are still active within it.
This example illustrates the concepts and the implementation details for a simple client/server example using implicit context propagation and indirect context management.
This example only includes a single unit of work within the scope of the transaction. Consequently, only a one-phase commit is needed.
The client and server processes are both invoked using the
implicit propagation
and
interposition
command-line options.
For the purposes of this worked example, a single method implements the
DemoInterface
interface. This method is used in the DemoClient program.
Example 3.21. idl interface
#include <idl/CosTransactions.idl>
#pragma javaPackage ""
module Demo
{
exception DemoException {};
interface DemoInterface : CosTransactions::TransactionalObject
{
void work() raises (DemoException);
};
};
This section deals with the pieces needed to implement the example interface.
The example overrides the methods of the
Resource
implementation class. The
DemoResource
implementation includes the placement of
System.out.println
statements at judicious points, to highlight when a particular
method is invoked.
Only a single unit of work is included within the scope of the transaction. Therefore, the
prepare
or
commit
methods should never be invoked, but the
commit_one_phase
method should be invoked.
Example 3.22. DemoResource
1 import org.omg.CosTransactions.*;
2 import org.omg.CORBA.SystemException;
3
4 public class DemoResource extends org.omg.CosTransactions.ResourcePOA
5 {
6 public Vote prepare() throws HeuristicMixed, HeuristicHazard,
7 SystemException
8 {
9 System.out.println("prepare called");
10
11 return Vote.VoteCommit;
12 }
13
14 public void rollback() throws HeuristicCommit, HeuristicMixed,
15 HeuristicHazard, SystemException
16 {
17 System.out.println("rollback called");
18 }
19
20 public void commit() throws NotPrepared, HeuristicRollback,
21 HeuristicMixed, HeuristicHazard, SystemException
22 {
23 System.out.println("commit called");
24 }
25
26 public void commit_one_phase() throws HeuristicHazard, SystemException
27 {
28 System.out.println("commit_one_phase called");
29 }
30
31 public void forget() throws SystemException
32 {
33 System.out.println("forget called");
34 }
35 }
At this stage, the
Demo.idl
has been processed by the ORB’s idl compiler to generate the
necessary client and server packages.
Line 13 returns the transactional context for the
Current
pseudo object. After
obtaining a
Control
object, you can derive the Coordinator object (line 15).
Lines 16 and 18 create a resource for the transaction, and then inform the ORB that the resource is ready to receive incoming method invocations.
Line 19 uses the
Coordinator
to register a
DemoResource
object
as a participant in the transaction. When the transaction terminates, the resource receives requests to commit
or rollback the updates performed as part of the transaction.
Example 3.23. Transactional implementation
1 import Demo.*;
2 import org.omg.CosTransactions.*;
3 import com.arjuna.ats.jts.*;
4 import com.arjuna.orbportability.*;
5
6 public class DemoImplementation extends Demo.DemoInterfacePOA
7 {
8 public void work() throws DemoException
9 {
10 try
11 {
12
13 Control control = OTSManager.get_current().get_control();
14
15 Coordinator coordinator = control.get_coordinator();
16 DemoResource resource = new DemoResource();
17
18 ORBManager.getPOA().objectIsReady(resource);
19 coordinator.register_resource(resource);
20
21 }
22 catch (Exception e)
23 {
24 throw new DemoException();
25 }
26 }
27
28 }
First, you need to initialize the ORB and the POA. Lines 10 through 14 accomplish these tasks.
The servant class
DemoImplementation
contains the implementation code for the
DemoInterface
interface. The servant services a particular client request. Line
16 instantiates a servant object for the subsequent servicing of client requests.
Once a servant is instantiated, connect the servant to the POA, so that it can recognize the invocations on it, and pass the invocations to the correct servant. Line 18 performs this task.
Lines 20 and 21 register the service through the default naming mechanism. More information about the options available can be found in the ORB Portability Guide.
If this registration is successful, line 23 outputs a
sanity check
message.
Finally, line 25 places the server process into a state where it can begin to accept requests from client processes.
Example 3.24. DemoServer
1 import java.io.*;
2 import com.arjuna.orbportability.*;
3
4 public class DemoServer
5 {
6 public static void main (String[] args)
7 {
8 try
9 {
10 ORB myORB = ORB.getInstance("test"); myORB.initORB(args, null);
11 RootOA myOA = OA.getRootOA(myORB); myOA.initOA();
12
13 ORBManager.setORB(myORB);
14 ORBManager.setPOA(myOA);
15
16 DemoImplementation obj = new DemoImplementation();
17
18 myOA.objectIsReady(obj);
19
20 Services serv = new Services(myORB);
21 serv.registerService(myOA.corbaReference(obj), "DemoObjReference", null);
22
23 System.out.println("Object published.");
24
25 myOA.run();
26 }
27 catch (Exception e)
28 {
29 System.err.println(e);
30 }
31 }
32 }
After the server compiles, you can use the command line options defined below to start a server
process. By
specifying the usage of a filter on the command line, you can override settings in the
TransactionService.properties
file.
If you specify the interposition filter, you also imply usage of implicit context propagation.
The client, like the server, requires you to first initialize the ORB and the POA. Lines 14 through 18 accomplish these tasks.
After a server process is started, you can obtain the object reference through the default
publication
mechanism used to publish it in the server. This is done in lines 20 and 21. Initially the reference is an
instance of
Object
. However, to invoke a method on the servant object, you need to
narrow this instance to an instance of the
DemoInterface
interface. This is
shown in line 21.
Once we have a reference to this servant object, we can start a transaction (line 23), perform a unit of work (line 25) and commit the transaction (line 27).
Example 3.25. DemoClient
1 import Demo.*;
2 import java.io.*;
3 import com.arjuna.orbportability.*;
4 import com.arjuna.ats.jts.*;
5 import org.omg.CosTransactions.*;
6 import org.omg.*;
7
8 public class DemoClient
9 {
10 public static void main(String[] args)
11 {
12 try
13 {
14 ORB myORB = ORB.getInstance("test"); myORB.initORB(args, null);
15 RootOA myOA = OA.getRootOA(myORB); myOA.initOA();
16
17 ORBManager.setORB(myORB);
18 ORBManager.setPOA(myOA);
19
20 Services serv = new Services(myORB);
21 DemoInterface d = (DemoInterface) DemoInterfaceHelper.narrow(serv.getService("DemoObjReference"));
22
23 OTS.get_current().begin();
24
25 d.work();
26
27 OTS.get_current().commit(true);
28 }
29 catch (Exception e)
30 {
31 System.err.println(e);
32 }
33 }
34 }
The sequence diagram illustrates the method invocations that occur between the client and server. The following aspects are important:
You do not need to pass the transactional context as a parameter in method
work
,
since you are using implicit context propagation.
Specifying the use of interposition when the client and server processes are started, by using appropriate filters and interceptors, creates an interposed coordinator that the servant process can use, negating any requirement for cross-process invocations. The interposed coordinator is automatically registered with the root coordinator at the client.
The resource that commits or rolls back modifications made to the transactional object is associated, or registered, with the interposed coordinator.
The
commit
invocation in the client process calls the root coordinator. The root
coordinator calls the interposed coordinator, which in turn calls the
commit_one_phase
method for the resource.
The server process first stringifies the servant instance, and writes the servant IOR to a temporary file. The first line of output is the sanity check that the operation was successful.
In this simplified example, the coordinator object has only a single registered resource.
Consequently, it
performs a
commit_one_phase
operation on the resource object, instead of performing a
prepare
operation, followed by a
commit
or
rollback
.
The output is identical, regardless of whether implicit context propagation or interposition is used, since interposition is essentially a performance aid. Ordinarily, you may need to do a lot of marshaling between a client and server process.
These settings are defaults, and you can override them at run-time by using property variables, or in the
properties file in the
etc/
directory of the installation.
Unless a CORBA object is derived from
CosTransactions::TransactionalObject
, you do not need to propagate any context. In order to preserve distribution transparency, Narayana defaults to always propagating a transaction context when calling remote objects, regardless of whether they are marked as transactional objects. You can override this by setting the
com.arjuna.ats.jts.alwaysPropagateContext
property variable to
NO
.
If an object is derived from
CosTransactions::TransactionalObject
,
and no client context is present when an invocation is made, Narayana transmits a null context. Subsequent
transactions begun by the object are top-level. If a context is required, then set the
com.arjuna.ats.jts.needTranContext
property variable to
YES
,
in which case Narayana raises the
TransactionRequired
exception.
Narayana needs a persistent object store, so that it can record information about transactions in the event
of
failures. If all transactions complete successfully, this object store has no entries. The default location
for this must be set using the
ObjectStoreEnvironmentBean.objectStoreDir
variable in the
properties file.
If you use a separate transaction manager for
Current
, its location is obtained
from the
CosServices.cfg
file.
CosServices.cfg
is located at runtime
by the
OrbPortabilityEnvironmentBean
properties
initialReferencesRoot
and
initialReferencesFile
. The former is a directory, defaulting to the current working
directory. The latter is a file name, relative to the directory. The default value is
CosServices.cfg
.
Checked transactions are not enabled by default. This means that threads other than the transaction
creator may
terminate the transaction, and no check is made to ensure all outstanding requests have finished prior to
transaction termination. To override this, set the
JTSEnvironmentBean.checkedTransactions
property variable to
YES
.
As of 4.5, transaction timeouts are unified across all transaction components and are controlled by ArjunaCore. The old JTS configuration property com.arjuna.ats.jts.defaultTimeout still remains but is deprecated.
If a value of
0
is specified for the timeout of a top-level transaction, or no timeout is
specified, Narayana does not impose any timeout on the transaction. To override this default timeout, set the
CoordinatorEnvironmentBean.defaultTimeout
property variable to the required timeout value
in seconds.
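Collecting the settings described above, a fragment of the properties file might look like the following. The values are illustrative only, and the object store path is a hypothetical example:

```
com.arjuna.ats.jts.alwaysPropagateContext=NO
com.arjuna.ats.jts.needTranContext=YES
ObjectStoreEnvironmentBean.objectStoreDir=/var/ObjectStore
JTSEnvironmentBean.checkedTransactions=YES
CoordinatorEnvironmentBean.defaultTimeout=300
```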
Narayana assures complete, accurate business transactions for any Java based applications, including those written for the Jakarta EE and EJB frameworks.
Narayana is a 100% Java implementation of a distributed transaction management system based on the Jakarta EE Java Transaction Service (JTS) standard. Our implementation of the JTS utilizes the Object Management Group's (OMG) Object Transaction Service (OTS) model for transaction interoperability, as recommended in the Jakarta EE and EJB standards. Although any JTS-compliant product will allow Java objects to participate in transactions, one of the key features of Narayana is its 100% Java implementation. This allows Narayana to support fully distributed transactions that can be coordinated by distributed parties.
Narayana can be run as an embedded distributed service of an application server (e.g. WildFly Application Server), affording the user all the added benefits of the application server environment, such as real-time load balancing, linear scalability and fault tolerance, allowing you to deliver an always-on solution to your customers. It is also available as a free-standing Java Transaction Service.
In addition to providing full compliance with the latest version of the JTS specification, Narayana leads the market in providing many advanced features, such as fully distributed transactions and ORB portability with POA support.
Narayana is tested on HP-UX 11i, Red Hat Linux, Windows Server 2003, and Sun Solaris 10, using Sun's JDK 5. It should, however, work on any system with JDK 5 or 6.
The Java Transaction API support for Narayana comes in two flavours:
Key features
This trail map will help you get started with running the Narayana product. It is structured as follows:
In addition to the trails listed above, a set of trails giving more explanation of concepts around transaction processing and standards, and also quick access to a section explaining how to configure Narayana, are listed in the section "Additional Trails".
Note: When running the local JTS transactions part of the trailmap, you will need to start the recovery manager: java com.arjuna.ats.arjuna.recovery.RecoveryManager -test
This model consists of the following components (illustrated in Figure 1):
Figure 1 - The X/Open DTP model
The functions that each RM provides for the TM are called the xa_*() functions. For example, the TM calls xa_start() in each participating RM to start an RM-internal transaction as part of a new global transaction. Later, the TM may call xa_end(), xa_prepare(), and xa_commit() in sequence to coordinate a (successful, in this case) two-phase commit protocol. The functions that the TM provides for each RM are called the ax_*() functions. For example, an RM calls ax_reg() to register dynamically with the TM.
XA is a bidirectional interface between resource managers and transaction managers. This interface specifies two sets of functions. The first set, called the xa_*() functions, is implemented by resource managers for use by the transaction manager.
Table 1 - XA Interface of X/Open DTP Model for the transaction manager
Function | Purpose |
xa_start | Directs a resource manager to associate the subsequent requests by application programs to a transaction identified by the supplied identifier. |
xa_end | Ends the association of a resource manager with the transaction. |
xa_prepare | Prepares the resource manager for the commit operation. Issued by the transaction manager in the first phase of the two-phase commit operation. |
xa_commit | Commits the transactional operations. Issued by the transaction manager in the second phase of the two-phase commit operation. |
xa_recover | Retrieves a list of prepared and heuristically committed or heuristically rolled back transactions. |
xa_forget | Forgets the heuristic transaction associated with the given transaction identifier. |
The second set of functions, called the ax_*() functions, is implemented by the transaction manager for use by resource managers.
Table 2 - XA Interface of X/Open DTP Model for resource managers
Function | Purpose |
ax_reg() | Dynamically enlists with the transaction manager. |
ax_unreg() | Dynamically delists from the transaction manager. |
Transaction management is one of the most crucial requirements for enterprise application development. Most of the large enterprise applications in the domains of finance, banking and electronic commerce rely on transaction processing for delivering their business functionality.
Enterprise applications often require concurrent access to distributed data shared amongst multiple components, to perform operations on data. Such applications should maintain integrity of data (as defined by the business rules of the application) under the following circumstances:
In such cases, it may be required that a group of operations on (distributed) resources be treated as one unit of work. In a unit of work, all the participating operations should either succeed or fail and recover together. This problem is more complicated when
In either case, it is required that success or failure of a unit of work be maintained by the application. In case of a failure, all the resources should bring back the state of the data to the previous state (i.e., the state prior to the commencement of the unit of work).
From the programmer's perspective a transaction is a scoping mechanism for a collection of actions which must complete as a unit. It provides a simplified model for exception handling since only two outcomes are possible:
To illustrate the reliability expected by the application, let's consider the familiar funds transfer example.
A money transfer involves two operations: a deposit and a withdrawal. The complexity of the implementation does not matter; money moves from one place to another. For instance, the accounts involved may be located in the same relational table within one database, or on different databases.
A simple transfer moves money from savings to checking, while a complex transfer may be performed at the end of day as a reconciliation between international banks.
The concept of a transaction, and of a transaction manager (or transaction processing service), simplifies the construction of such enterprise-level distributed applications while maintaining the integrity of data in a unit of work.
A transaction is a unit of work that has the following properties:
These properties, called the ACID properties, guarantee that a transaction is never incomplete, the data is never inconsistent, concurrent transactions are independent, and the effects of a transaction are persistent.
A collection of actions is said to be transactional if it possesses the ACID properties. These properties are assumed to hold, even in the presence of failures, if the actions involved in the transaction are performed by a transactional system. A transaction system includes a set of components, each with a particular role. The main components are described below.
Application programs are the clients of the transactional resources. These are the programs with which the application developer implements business transactions. With the help of the transaction manager, these components create global transactions and operate on the transactional resources within the scope of those transactions. These components are not responsible for implementing mechanisms that preserve the ACID properties of transactions. However, as part of the application logic, they generally decide whether to commit or roll back transactions.
Application responsibilities can be summarized as follows:
A resource manager is, in general, a component that manages a persistent and stable data storage system, and participates in the two-phase commit and recovery protocols with the transaction manager.
A resource manager is typically a driver that provides two sets of interfaces: one set for application components to obtain connections and operate on the data, and the other for participating in the two-phase commit and recovery protocols coordinated by a transaction manager. This component may also, directly or indirectly, register resources with the transaction manager so that the transaction manager can keep track of all the resources participating in a transaction. This process is called resource enlistment.
Resource manager responsibilities can be summarized as follows:
The transaction manager is the core component of a transaction processing environment. Its main responsibilities are to create transactions when requested by application components, allow resource enlistment and delistment, and to manage the two-phase commit or recovery protocol with the resource managers.
A typical transactional application begins a transaction by issuing a request to a transaction manager to initiate a transaction. In response, the transaction manager starts a transaction and associates it with the calling thread. The transaction manager also establishes a transaction context. All application components and/or threads participating in the transaction share the transaction context. The thread that initially issued the request for beginning the transaction, or, if the transaction manager allows, any other thread may eventually terminate the transaction by issuing a commit or rollback request.
Before a transaction is terminated, any number of components and/or threads may perform transactional operations on any number of transactional resources known to the transaction manager. If allowed by the transaction manager, a transaction may be suspended or resumed before finally completing the transaction.
Once the application issues the commit request, the transaction manager prepares all the resources for a commit operation, and based on whether all resources are ready for a commit or not, issues a commit or rollback request to all the resources.
Transaction manager responsibilities can be summarized as follows:
A transaction that involves only one transactional resource, such as a database, is considered a local transaction, while a transaction that involves more than one transactional resource that must be coordinated to reach a consistent state is considered a distributed transaction.
A transaction can be specified by what is known as transaction demarcation. Transaction demarcation enables work done by distributed components to be bound by a global transaction. It is a way of marking groups of operations to constitute a transaction.
The most common approach to demarcation is to mark the thread executing the operations for transaction processing. This is called programmatic demarcation. The transaction so established can be suspended by unmarking the thread, and resumed later by explicitly propagating the transaction context from the point of suspension to the point of resumption.
The transaction demarcation ends after a commit or a rollback request to the transaction manager. The commit request directs all the participating resources managers to record the effects of the operations of the transaction permanently. The rollback request makes the resource managers undo the effects of all operations on the transaction.
Since multiple application components and resources participate in a transaction, it is necessary for the transaction manager to establish and maintain the state of the transaction as it occurs. This is usually done in the form of transaction context.
Transaction context is an association between the transactional operations on the resources, and the components invoking the operations. During the course of a transaction, all the threads participating in the transaction share the transaction context. Thus the transaction context logically envelops all the operations performed on transactional resources during a transaction. The transaction context is usually maintained transparently by the underlying transaction manager.
Resource enlistment is the process by which resource managers inform the transaction manager of their participation in a transaction. This process enables the transaction manager to keep track of all the resources participating in a transaction. The transaction manager uses this information to coordinate the transactional work performed by the resource managers and to drive the two-phase commit and recovery protocols. At the end of a transaction (after a commit or rollback) the transaction manager delists the resources.
This protocol between the transaction manager and all the resources enlisted for a transaction ensures that either all the resource managers commit the transaction or they all abort it. In this protocol, when the application requests that the transaction be committed, the transaction manager issues a prepare request to all the resource managers involved. Each of these resources replies, indicating whether or not it is ready to commit. The transaction manager issues a commit request to all the resource managers only when all of them are ready to commit; otherwise, it issues a rollback request and the transaction is rolled back.
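The all-or-nothing decision rule described above can be sketched as follows. This is an illustrative sketch only: the Participant interface and class names are invented for the example and do not belong to any real transaction manager API.

```java
import java.util.List;

public class TwoPhaseCommit {
    // Hypothetical participant contract: vote in phase 1, obey the
    // coordinator's decision in phase 2.
    public interface Participant {
        boolean prepare();   // vote: true = ready to commit
        void commit();
        void rollback();
    }

    // Phase 1: ask every participant to prepare. Phase 2: commit only if
    // every vote was positive; otherwise roll back all participants.
    public static boolean complete(List<Participant> participants) {
        boolean allPrepared = true;
        for (Participant p : participants) {
            if (!p.prepare()) {
                allPrepared = false;   // one negative vote aborts the transaction
                break;
            }
        }
        for (Participant p : participants) {
            if (allPrepared) p.commit(); else p.rollback();
        }
        return allPrepared;
    }
}
```

In a real transaction manager the prepare and commit decisions are also logged to stable storage, which is what the recovery discussion below relies on.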
Basically, recovery is the mechanism that preserves transaction atomicity in the presence of failures. The basic technique for implementing transactions in the presence of failures is the use of logs: a transaction system records enough information to ensure that it can return to a previous state in case of failure, or that the changes committed by a transaction are properly stored.
In addition to being able to store the appropriate information, all participants in a distributed transaction must log similar information, allowing them all to reach the same decision: either to set the data to its final state or to its initial state.
Two techniques are generally used to ensure transaction atomicity. The first focuses on the manipulated data, such as the Do/Undo/Redo protocol (considered a recovery mechanism in a centralized system), which allows a participant to set its data to the final values or to restore the initial values. The second relies on a distributed protocol, the two-phase commit, which ensures that all participants involved in a distributed transaction set their data either to the final values or to the initial values. In other words, all participants must commit or all must roll back.
In addition to the failures found in centralized systems, such as system crashes, communication failures (due, for instance, to network outages or message loss) must be considered during the recovery of a distributed transaction.
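The Do/Undo idea can be illustrated with a small sketch: before each update, the old value is recorded so that an abort can restore the initial state. All names here are invented for the example.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

public class UndoLog {
    private final Map<String, Integer> data = new HashMap<>();
    private final Deque<Runnable> undo = new ArrayDeque<>();

    public void put(String key, int value) {
        final Integer old = data.put(key, value);       // "Do": apply the update
        undo.push(() -> {                               // record its "Undo"
            if (old == null) data.remove(key); else data.put(key, old);
        });
    }

    public Integer get(String key) { return data.get(key); }

    // On abort, undo records are applied in reverse order of the updates,
    // restoring every item to its initial value.
    public void abort() {
        while (!undo.isEmpty()) undo.pop().run();
    }

    // On commit, the final values are kept and the undo records discarded.
    public void commit() { undo.clear(); }
}
```

A Redo log works symmetrically: it records the new values so that committed changes can be reapplied after a crash.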
In order to provide an efficient and optimized mechanism for dealing with failures, modern transactional systems typically adopt a "presumed abort" strategy, which simplifies transaction management.
The presumed abort strategy can be stated as «when in doubt, abort». With this strategy, when the recovery mechanism has no information about the transaction, it presumes that the transaction has been aborted.
A particularity of the presumed-abort assumption is that the coordinator need not log anything before the commit decision, and the participants need not log anything before they prepare. Any failure that occurs before the two-phase commit starts therefore leads to aborting the transaction. Furthermore, from the coordinator's point of view, any communication failure detected by a timeout, or an exception raised on sending the prepare request, is treated as a negative vote, which also aborts the transaction. Within a distributed transaction, a coordinator or a participant may thus fail in two ways: either it crashes, or it times out waiting for a message it was expecting. When a coordinator or a participant crashes and then restarts, it uses the information on stable storage to determine how to perform recovery. As we will see, the presumed-abort strategy enables an optimized recovery behavior.
The importance of common interfaces between participants, as well as the complexity of their implementation, becomes obvious in an open systems environment. To this end, various distributed transaction processing standards have been developed by international standards organizations. Among these organizations, we list three that are mainly relevant to the product:
OTS is based on the Open Group's DTP model and is designed so that it can be implemented using a common kernel for both the OTS and Open Group APIs. In addition to the functions defined by DTP, OTS contains enhancements specifically designed to support the object environment. Nested transactions and explicit propagation are two examples.
The CORBA model also makes some of the functions in DTP unnecessary so these have been consciously omitted. Static registration and the communications resource manager are unnecessary in the CORBA environment.
A key feature of OTS is its ability to share a common transaction with XA compliant resource managers. This permits the incremental addition of objects into an environment of existing procedural applications.
Figure 1 - OTS Architecture
The OTS architecture, shown in Figure 1, consists of the following components:
An object may require transactions to be either explicitly or implicitly propagated to its operations.
In the code fragments below, a transaction originator uses indirect context management and implicit transaction propagation; txn_crt is an example of an object supporting the Current interface. The client uses the begin operation to start the transaction, which becomes implicitly associated with the originator's thread of control.
...
txn_crt.begin();
// should test the exceptions that might be raised
...
// the client issues requests, some of which involve
// transactional objects;
BankAccount.makeDeposit(deposit);
...
txn_crt.commit(false);
The program commits the transaction associated with the client thread. The report_heuristics argument is set to false so no report will be made by the Transaction Service about possible heuristic decisions.
In the following example, a transaction originator uses direct context management and explicit transaction propagation. The client uses a factory object supporting the CosTransactions::TransactionFactory interface to create a new transaction, and uses the returned Control object to retrieve the Terminator and Coordinator objects.
...
CosTransactions::Control ctrl;
CosTransactions::Terminator ter;
CosTransactions::Coordinator coo;
ctrl = TFactory.create(0);
ter = ctrl.get_terminator();
coo = ctrl.get_coordinator();
...
transactional_object.do_operation(arg, ctrl);
...
ter.commit(false);
The client issues requests, some of which involve transactional objects; in this case, explicit propagation of the context is used. The Control object reference is passed as an explicit parameter of the request; it is declared in the OMG IDL of the interface. The transaction originator uses the Terminator object to commit the transaction; the report_heuristics argument is set to false, so no report will be made by the Transaction Service about possible heuristic decisions.
The main difference between direct and indirect context management is the effect on the invoking thread's transaction context. With indirect management (i.e., invoking operations through the Current pseudo-object), the thread's transaction context is modified automatically by the OTS: for example, if begin is called, the thread's notion of the current transaction is changed to the newly created transaction; when that transaction is terminated, the transaction previously associated with the thread (if any) is restored as the thread's context (assuming subtransactions are supported by the OTS implementation). With direct management, however, no changes to the thread's transaction context are performed by the OTS: the application programmer assumes responsibility for this.
Figure 2 - OTS interfaces and their interactions
Table 1 - OTS Interfaces and their role.
Interface | Role and operations |
Current |
|
TransactionFactory |
Explicit transaction creation
|
Control |
Explicit transaction context management
|
Terminator | Commit (commit) or rollback (rollback) a transaction in a direct transaction management mode |
Coordinator |
|
RecoveryCoordinator | Allows recovery to be coordinated in case of failure ( replay_completion ) |
Resource | Participation in two-phase commit and recovery protocol ( prepare, rollback, commit, commit_one_phase, forget ) |
Synchronization | Application synchronization before beginning and after completion of two-phase commit ( before_completion, after_completion ) |
SubtransactionAwareResource | Commit or rollback a subtransaction ( commit_subtransaction, rollback_subtransaction) |
TransactionalObject | A marker interface to be implemented by all transactional objects (no operation defined) |
JTS specifies the implementation of a Java transaction manager. This transaction manager supports the JTA, with which application servers can be built to support transactional Java applications. Internally, the JTS implements the Java mapping of the OMG OTS specification.
The JTA specifies an architecture for building transactional application servers and defines a set of interfaces for the various components of this architecture. The components are: the application, the resource managers, and the application server.
The JTS thus provides a new architecture for transactional application servers and applications, while complying with the OMG OTS 1.1 interfaces internally. This allows JTA-compliant applications to interoperate with other OTS 1.1-compliant applications through standard IIOP.
As shown in the Figure 1, in the Java transaction model, the Java application components can conduct transactional operations on JTA compliant resources via the JTS. The JTS acts as a layer over the OTS. The applications can therefore initiate global transactions to include other OTS transaction managers, or participate in global transactions initiated by other OTS compliant transaction managers.
Figure 1 - The JTA/JTS transaction model
The Java Transaction Service is architected around an application server and a transaction manager. The architecture is shown in Figure 2.
Figure 2 - The JTA/JTS Architecture
The JTS architecture consists of the following components:
Figure 3 - JTA Interfaces
Flag | Purpose |
STATUS_ACTIVE | Transaction is active (started but not prepared) |
STATUS_COMMITTED | Transaction is committed |
STATUS_COMMITTING | Transaction is in the process of committing. |
STATUS_MARKED_ROLLBACK | Transaction is marked for rollback. |
STATUS_NO_TRANSACTION | There is no transaction associated with the current Transaction, UserTransaction or TransactionManager objects. |
STATUS_PREPARED | Voting phase of the two phase commit is over and the transaction is prepared. |
STATUS_PREPARING | Transaction is in the process of preparing. |
STATUS_ROLLEDBACK | The outcome of the transaction has been determined as rollback. Heuristic outcomes are likely to exist. |
STATUS_ROLLING_BACK | Transaction is in the process of rolling back. |
STATUS_UNKNOWN | A transaction exists but its current status cannot be determined. This is a transient condition. |
Table 1: Transaction Status Flags
The jakarta.transaction.Transaction, jakarta.transaction.TransactionManager, and jakarta.transaction.UserTransaction interfaces provide a getStatus method that returns one of the above status flags.
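As a quick illustration, the int returned by getStatus can be mapped back to the flag names in Table 1. The helper class below is invented for the example; the numeric values mirror the constants defined by jakarta.transaction.Status in the JTA specification.

```java
// Maps the status code returned by getStatus() to the flag names above.
public class StatusNames {
    public static String name(int status) {
        switch (status) {
            case 0:  return "STATUS_ACTIVE";
            case 1:  return "STATUS_MARKED_ROLLBACK";
            case 2:  return "STATUS_PREPARED";
            case 3:  return "STATUS_COMMITTED";
            case 4:  return "STATUS_ROLLEDBACK";
            case 5:  return "STATUS_UNKNOWN";
            case 6:  return "STATUS_NO_TRANSACTION";
            case 7:  return "STATUS_PREPARING";
            case 8:  return "STATUS_COMMITTING";
            case 9:  return "STATUS_ROLLING_BACK";
            default: return "INVALID_STATUS";
        }
    }
}
```

For example, logging `StatusNames.name(userTransaction.getStatus())` before and after commit makes the transaction's life cycle visible.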
The application component can then use this object to begin, commit and rollback transactions. In this approach, association between the calling thread and the transaction, and transaction context propagation are handled transparently by the transaction manager.
Usage:
// Get a UserTransaction object (in a managed environment, typically via JNDI)
UserTransaction userTransaction =
    (UserTransaction) new InitialContext().lookup("java:comp/UserTransaction");
// Begin a transaction
userTransaction.begin();
// Transactional operations ...
// End the transaction
userTransaction.commit();
Usage:
// Begin a transaction
transactionManager.begin();
Transaction transaction = transactionManager.getTransaction();
// Transactional operations ...
// End the transaction
transactionManager.commit();
Jakarta Transactions is much more closely integrated with the XA concept of resources than with arbitrary objects. For each resource in use by the application, the application server invokes the enlistResource method with an XAResource object that identifies the resource in use.
The enlistment request results in the transaction manager informing the resource manager to start associating the transaction with the work performed through the corresponding resource. The transaction manager is responsible for passing the appropriate flag in its XAResource.start method call to the resource manager.
The delistResource method is used to disassociate the specified resource from the transaction context in the target object. The application server invokes the method with the two parameters: the XAResource object that represents the resource, and a flag to indicate whether the operation is due to the transaction being suspended (TMSUSPEND), a portion of the work has failed (TMFAIL), or a normal resource release by the application (TMSUCCESS).
The de-list request results in the transaction manager informing the resource manager to end the association of the transaction with the target XAResource. The flag value allows the application server to indicate whether it intends to come back to the same resource whereby the resource states must be kept intact. The transaction manager passes the appropriate flag value in its XAResource.end method call to the underlying resource manager.
The application server can enlist and delist resource managers with the transaction manager using the jakarta.transaction.Transaction interface
Usage: Resource enlistment is generally performed by the application server when an application requests a connection to a transactional resource.
// ... an implementation of the application server
// Get a reference to the underlying TransactionManager object.
...
// Get the current Transaction object from the TransactionManager.
transaction = transactionManager.getTransaction();
// Get an XAResource object from a transactional resource.
...
// Create a Transaction object.
...
// Enlist the resource
transaction.enlistResource(xaResource);
...
// Return the connection to the application.
...
Resource delistment is done similarly after the application closes connections to transactional resources.
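The choice of delistment flag described above can be sketched as follows. The Reason enum and helper class are invented for illustration; the TM* constants are the standard ones defined by javax.transaction.xa.XAResource (part of Java SE).

```java
import javax.transaction.xa.XAResource;

public class DelistFlags {
    // Hypothetical classification of why the application server is
    // delisting the resource.
    public enum Reason { SUSPENDED, FAILED, COMPLETED }

    public static int flagFor(Reason reason) {
        switch (reason) {
            case SUSPENDED: return XAResource.TMSUSPEND; // intends to resume later
            case FAILED:    return XAResource.TMFAIL;    // the work cannot complete
            default:        return XAResource.TMSUCCESS; // normal resource release
        }
    }
}
```

An application server might then call, for instance, `transaction.delistResource(xaResource, DelistFlags.flagFor(DelistFlags.Reason.COMPLETED));` when the application closes its connection normally.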
The jakarta.transaction.Transaction interface provides the registerSynchronization method to register jakarta.transaction.Synchronization objects with the transaction manager. The transaction manager then uses the synchronization protocol and calls the beforeCompletion and afterCompletion methods before and after the two phase commit process.
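The callback ordering of the synchronization protocol can be illustrated with a small self-contained sketch. This is a toy stand-in, not the jakarta.transaction API itself: beforeCompletion runs before the two-phase commit starts, and afterCompletion runs once the outcome is known.

```java
import java.util.ArrayList;
import java.util.List;

public class SyncDemo {
    // Mirrors the shape of jakarta.transaction.Synchronization for the example.
    public interface Synchronization {
        void beforeCompletion();
        void afterCompletion(boolean committed);
    }

    private final List<Synchronization> syncs = new ArrayList<>();

    public void registerSynchronization(Synchronization s) { syncs.add(s); }

    // Drives the callbacks around the (elided) two-phase commit; the trace
    // list records the order in which things happen.
    public void commit(List<String> trace) {
        for (Synchronization s : syncs) s.beforeCompletion();
        trace.add("two-phase commit");
        for (Synchronization s : syncs) s.afterCompletion(true);
    }
}
```

A typical use of beforeCompletion is flushing cached state to the database while the transaction is still active; afterCompletion is used to release locks or caches once the outcome is final.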
The EJB framework specifies the construction, deployment and invocation of components called enterprise beans. The EJB specification classifies enterprise beans into two categories: entity beans and session beans. While entity beans abstract persistent domain data, session beans provide session-specific application logic. Both types of beans are maintained by EJB-compliant servers in what are called containers. A container provides the run-time environment for an enterprise bean. Figure 4 shows a simplified architecture of transaction management in EJB-compliant application servers.
Figure 4 - EJB and Transactions
An enterprise bean is specified by two interfaces: the home interface and the remote interface. The home interface specifies how a bean can be created or found. With the help of this interface, a client or another bean can obtain a reference to a bean residing in a container on an EJB server. The remote interface specifies application-specific methods that are relevant to entity or session beans.
Clients obtain references to home interfaces of enterprise beans via the Java Naming and Directory Interface (JNDI) mechanism. An EJB server should provide a JNDI implementation for any naming and directory server. Using this reference to the home interface, a client can obtain a reference to the remote interface. The client can then access methods specified in the remote interface. The EJB specification specifies the Java Remote Method Invocation (RMI) as the application level protocol for remote method invocation. However, an implementation can use IIOP as the wire-level protocol.
In Figure 5, the client first obtains a reference to the home interface, and then a reference to an instance of Bean A via the home interface. The same procedure is applicable for instance of Bean A to obtain a reference and invoke methods on an instance of Bean B.
The EJB framework allows both programmatic and declarative demarcation of transactions. Declarative demarcation is available for all enterprise beans deployed on the EJB server. In addition, EJB clients can also initiate and end transactions programmatically.
The container performs automatic demarcation depending on the transaction attributes specified at the time of deploying an enterprise bean in a container. The following attributes determine how transactions are created.
Java Database Connectivity (JDBC) provides Java programs with a way to connect to and use relational databases. The JDBC API lets you invoke SQL commands from Java programming language methods. In the simplest terms, JDBC allows you to do three things: establish a connection with a database, send SQL statements, and process the results.
The following code fragment gives a simple example of these three steps:
Connection con = DriverManager.getConnection(
"jdbc:myDriver:wombat", "myLogin", "myPassword");
Statement stmt = con.createStatement();
ResultSet rs = stmt.executeQuery("SELECT a, b, c FROM Table1");
while (rs.next()) {
int x = rs.getInt("a");
String s = rs.getString("b");
float f = rs.getFloat("c");
}
Before version 2.0 of JDBC, only local transactions, controlled by the transaction manager of the DBMS, were possible. To code a JDBC transaction, you invoke the commit and rollback methods of the java.sql.Connection interface. The beginning of a transaction is implicit: a transaction begins with the first SQL statement that follows the most recent commit, rollback, or connect statement. (This rule is generally true, but may vary between DBMS vendors.) The following example illustrates how transactions are managed with the JDBC API.
public void withdraw(double amount) throws Exception {
    try {
        // A connection opened with JDBC is in auto-commit mode by default,
        // meaning that each SQL statement is committed as soon as it completes.
        // setAutoCommit(false) disables this behaviour.
        connection.setAutoCommit(false);
        // perform an SQL update to withdraw money from the account
        connection.commit();
    } catch (Exception ex) {
        try {
            connection.rollback();
        } catch (Exception sqx) {
            throw new Exception("Rollback failed: " + sqx.getMessage());
        }
        throw new Exception("Transaction failed: " + ex.getMessage());
    }
}
From version 2.0, a JDBC driver can be involved in a distributed transaction, since it supports the XAResource interface that allows it to participate in the 2PC protocol. An application that needs to include more than one database can create a JTA transaction. To demarcate a JTA transaction, the application program invokes the begin, commit, and rollback methods of the jakarta.transaction.UserTransaction interface. The following code, which can appear in a bean-managed transaction, demonstrates the UserTransaction methods. The begin and commit invocations delimit the updates to the database. If the updates fail, the code invokes the rollback method and throws an exception.
public void transfer(double amount) throws Exception {
UserTransaction ut = context.getUserTransaction();
try {
ut.begin();
// Perform SQL command to debit account 1
// Perform SQL command to debit account 2
ut.commit();
} catch (Exception ex) {
try {
ut.rollback();
} catch (Exception ex1) {
throw new Exception ("Rollback failed: " + ex1.getMessage());
}
throw new Exception ("Transaction failed: " + ex.getMessage());
}
}
This trail provides information on how to configure the environment variables needed to define the behaviour of transactional applications managed with Narayana. Basically, the behaviour of the product is configurable through property attributes. Although these property attributes may be specified as command-line arguments, it is more convenient to organise and initialise them through properties files.
The properties file named jbossts-properties.xml, located under the <ats_installation_directory>/etc directory, is organised as a collection of properties of the form:
<property name="a_name" value="a_value"/>
Some properties must be specified by the developer, while others do not need to be defined and can be used with their default values. The jbossts-properties.xml file is the properties file that does not provide default values for all of its properties.
The following table describes some properties in the jbossts-properties.xml, where:
Name | Description | Possible Value | Default Value |
com.arjuna.ats.arjuna.objectstore.localOSRoot | By default, all object states will be stored within the "defaultStore" subdirectory of the object store root. However, this subdirectory can be changed by setting the localOSRoot property variable accordingly | Directory name | defaultStore |
com.arjuna.ats.arjuna.objectstore.objectStoreDir | Specify the location of the ObjectStore | Directory name | PutObjectStoreDirHere |
com.arjuna.ats.arjuna.common.varDir | Narayana needs to be able to write temporary files to a well-known location during execution. By default this location is var, but it can be overridden by setting the varDir property variable. | Directory name | var/tmp |
The location of the ObjectStore is specified via the property com.arjuna.ats.arjuna.objectstore.objectStoreDir, which can be passed with the java flag "-D". For convenience this property is defined in the properties file jbossts-properties.xml, and its value is set during installation. The location of the ObjectStore may be changed at any time.
Sometimes it is desirable, mainly for debugging, to have some form of output during execution in order to trace the internal actions performed. Narayana uses the logging and tracing mechanism provided by the Arjuna Common Logging Framework (CLF) version 2.4, which provides a high-level interface that hides the differences between logging APIs such as Jakarta log4j, the JDK 1.4 logging API or the .NET logging API.
With the CLF, applications make logging calls on commonLogger objects. These commonLogger objects pass log messages to a Handler for publication. Both commonLoggers and Handlers may use logging Levels to decide whether they are interested in a particular log message. Each log message has an associated log Level, which gives the importance and urgency of the message. The possible log Levels are DEBUG, INFO, WARN, ERROR and FATAL, ordered according to their integer values as follows: DEBUG < INFO < WARN < ERROR < FATAL.
The CLF provides an extension to filter logging messages according to a finer granularity an application may define. That is, when a log message is passed to the commonLogger with the DEBUG level, additional conditions can be specified to determine whether the log message is enabled.
Note: These conditions are applied if and only if the DEBUG level is enabled and the log request performed by the application specifies a debugging granularity. When enabled, debugging is filtered conditionally on three variables:
According to these variables, the Common Logging Framework defines three interfaces. A particular product may implement its own classes according to its own finer granularity. Narayana uses the default Debugging level and the default Visibility level provided by the CLF, but defines its own Facility Codes. Narayana uses the default level assigned to its commonLogger objects (DEBUG); however, it uses the finer debugging features to enable or disable debug messages. The finer values used by Narayana are defined below:
Debug Level | Value | Description |
NO_DEBUGGING | 0x0000 | A commonLogger object assigned this value discards all debug requests |
CONSTRUCTORS | 0x0001 | Diagnostics from constructors |
DESTRUCTORS | 0x0002 | Diagnostics from finalizers. |
CONSTRUCT_AND_DESTRUCT | CONSTRUCTORS | DESTRUCTORS | Diagnostics from constructors and finalizers |
FUNCTIONS | 0x0010 | Diagnostics from functions |
OPERATORS | 0x0020 | Diagnostics from operators, such as equals |
FUNCS_AND_OPS | FUNCTIONS | OPERATORS | Diagnostics from functions and operations. |
ALL_NON_TRIVIAL | CONSTRUCT_AND_DESTRUCT | FUNCTIONS | OPERATORS | Diagnostics from all non-trivial operations |
TRIVIAL_FUNCS | 0x0100 | Diagnostics from trivial functions. |
TRIVIAL_OPERATORS | 0x0200 | Diagnostics from trivial operations and operators. |
ALL_TRIVIAL | TRIVIAL_FUNCS | TRIVIAL_OPERATORS | Diagnostics from all trivial operations |
FULL_DEBUGGING | 0xffff | Full diagnostics. |
Visibility Level | Value | Description |
VIS_NONE | 0x0000 | No Diagnostic |
VIS_PRIVATE | 0x0001 | only from private methods. |
VIS_PROTECTED | 0x0002 | only from protected methods. |
VIS_PUBLIC | 0x0004 | only from public methods. |
VIS_PACKAGE | 0x0008 | only from package methods. |
VIS_ALL | 0xffff | Full Diagnostic |
Facility Code Level | Value | Description |
FAC_ATOMIC_ACTION | 0x00000001 | atomic action core module |
FAC_BUFFER_MAN | 0x00000004 | state management (buffer) classes |
FAC_ABSTRACT_REC | 0x00000008 | abstract records |
FAC_OBJECT_STORE | 0x00000010 | object store implementations |
FAC_STATE_MAN | 0x00000020 | state management (StateManager) classes |
FAC_SHMEM | 0x00000040 | shared memory implementation classes |
FAC_GENERAL | 0x00000080 | general classes |
FAC_CRASH_RECOVERY | 0x00000800 | detailed trace of crash recovery module and classes |
FAC_THREADING | 0x00002000 | threading classes |
FAC_JDBC | 0x00008000 | JDBC 1.0 and 2.0 support |
FAC_RECOVERY_NORMAL | 0x00040000 | normal output for crash recovery manager |
To ensure appropriate output, it is necessary to set some of the finer debug properties explicitly as follows:
<properties>
<!-- CLF 2.4 properties -->
<property
name="com.arjuna.common.util.logging.DebugLevel"
value="0x00000000"/>
<property
name="com.arjuna.common.util.logging.FacilityLevel"
value="0xffffffff"/>
<property
name="com.arjuna.common.util.logging.VisibilityLevel"
value="0xffffffff"/>
<property
name="com.arjuna.common.util.logger"
value="log4j"/>
</properties>
By default, debugging messages are not enabled, since the DebugLevel is set to NO_DEBUGGING (0x00000000). You can enable debugging by providing one of the appropriate values listed above. For instance, if you wish to see all internal actions performed by the RecoveryManager to recover transactions after a failure, set the DebugLevel to FULL_DEBUGGING (0xffffffff) and the FacilityCode level to FAC_CRASH_RECOVERY.
Note: to enable finer debug messages, the logging level should be set to the DEBUG level as described below.
From the program's point of view, the same API is used whatever the underlying logging mechanism; from a configuration point of view, however, the user is entirely responsible for the configuration of the underlying logging system. Hence, the properties of the underlying log system are configured in a manner specific to that log system, e.g., a log4j.properties file in the case that log4j logging is used. To set the logging level to the DEBUG value, the log4j.properties file can be edited to set that value.
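For example, with log4j the DEBUG threshold can be set in log4j.properties; the appender name and layout pattern below are illustrative choices, not mandated by the product:

```properties
# Illustrative log4j.properties: route DEBUG-level output to the console
log4j.rootCategory=DEBUG, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d [%t] %-5p %c - %m%n
```

This file is picked up from the CLASSPATH, or can be given explicitly through the log4j.configuration system property.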
The property com.arjuna.common.util.logger selects the underlying logging system. Possible values are listed in the following table.
Property Value | Description |
log4j | Log4j logging (log4j classes must be available in the classpath); configuration through the log4j.properties file, which is picked up from the CLASSPATH or given through a System property: log4j.configuration |
jdk14 | JDK 1.4 logging API (only supported on JVMs of version 1.4 or higher). Configuration is done through a file logging.properties in the jre/lib directory. |
simple | Selects the simple JDK 1.1 compatible console-based logger provided by Jakarta Commons Logging |
csf | Selects CSF-based logging (CSF embeddor must be available) |
jakarta | Uses the default log system selection algorithm of the Jakarta Commons Logging framework |
dotnet | Selects a .net logging implementation. Since a dotnet logger is not currently implemented, this is currently identical to simple, a purely JDK 1.1 console-based log implementation. |
avalon | Uses the Avalon Logkit implementation |
noop | Disables all logging |
Many ORBs currently in use support different versions of CORBA and/or the Java language mapping.
Only the Portable Object Adapter (POA) architecture described in the CORBA 2.3 specification is supported, as a replacement for the Basic Object Adapter (BOA). Unlike the BOA, which was weakly specified and led to a number of different (and often conflicting) implementations, the POA was deliberately designed to reduce the differences between ORB implementations, and thus minimise the amount of re-coding needed when porting applications from one ORB to another. However, there is still scope for slight differences between ORB implementations, notably in the area of threading. Note that instead of talking about the POA, this manual will consider the Object Adapter (OA).
Because must be able to run on a number of different ORBs, we have developed an ORB portability interface which allows entire applications to be moved between ORBs with little or no modifications. This portability interface is available to the application programmer in the form of several Java classes.
The ORB class provided in the package com.arjuna.orbportability.ORB shown below provides a uniform way of using the ORB. There are methods for obtaining a reference to the ORB, and for placing the application into a mode where it listens for incoming connections. There are also methods for registering application specific classes to be invoked before or after ORB initialisation.
public class ORB
{
public static ORB getInstance(String uniqueId);
// Given the various parameters, this method initialises the ORB and
// retains a reference to it within the ORB class.
public synchronized void initORB () throws SystemException;
public synchronized void initORB (Applet a, Properties p)
throws SystemException;
public synchronized void initORB (String[] s, Properties p)
throws SystemException;
//The orb method returns a reference to the ORB.
//After shutdown is called this reference may be null.
public synchronized org.omg.CORBA.ORB orb ();
public synchronized boolean setOrb (org.omg.CORBA.ORB theORB);
// If supported, this method cleanly shuts down the ORB.
// Any pre- and post- ORB shutdown classes which
//have been registered will also be called.
public synchronized void shutdown ();
public synchronized boolean addAttribute (Attribute p);
public synchronized void addPreShutdown (PreShutdown c);
public synchronized void addPostShutdown (PostShutdown c);
public synchronized void destroy () throws SystemException;
//these methods place the ORB into a listening mode,
//where it waits for incoming invocations.
public void run ();
public void run (String name);
};
Note, some of the methods are not supported on all ORBs, and in this situation, a suitable exception will be thrown. The ORB class is a factory class which has no public constructor. To create an instance of an ORB you must call the getInstance method passing a unique name as a parameter. If this unique name has not been passed in a previous call to getInstance you will be returned a new ORB instance. Two invocations of getInstance made with the same unique name, within the same JVM, will return the same ORB instance.
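The per-name factory contract described above can be sketched in plain Java. The OrbFactory class below is a hypothetical illustration of the getInstance semantics, not part of the ORB portability API:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the getInstance contract: within one JVM, the
// same unique name always yields the same instance, and there is no
// public constructor.
public class OrbFactory {
    private static final Map<String, OrbFactory> instances = new HashMap<>();
    private final String name;

    private OrbFactory(String name) { // no public constructor
        this.name = name;
    }

    public static synchronized OrbFactory getInstance(String uniqueId) {
        // Create a new instance only if this name has not been seen before.
        return instances.computeIfAbsent(uniqueId, OrbFactory::new);
    }

    public String getName() { return name; }
}
```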
The OA classes shown below provide a uniform way of using Object Adapters (OA). There are methods for obtaining a reference to the OA. There are also methods for registering application specific classes to be invoked before or after OA initialisation. Note, some of the methods are not supported on all ORBs, and in this situation, a suitable exception will be thrown. The OA class is an abstract class and provides the basic interface to an Object Adapter. It has two sub-classes, RootOA and ChildOA; these classes expose the interfaces specific to the root Object Adapter and a child Object Adapter respectively. You can obtain a reference to the RootOA for a given ORB by using the static method getRootOA. To create a ChildOA instance, use the createPOA method on the RootOA.
As described below, the OA class and its sub-classes provide most operations provided by the POA as specified in the POA specification.
public abstract class OA
{
public synchronized static RootOA getRootOA(ORB associatedORB);
public synchronized void initPOA () throws SystemException;
public synchronized void initPOA (String[] args) throws SystemException;
public synchronized void initOA () throws SystemException;
public synchronized void initOA (String[] args) throws SystemException;
public synchronized ChildOA createPOA (String adapterName,
PolicyList policies) throws AdapterAlreadyExists, InvalidPolicy;
public synchronized org.omg.PortableServer.POA rootPoa ();
public synchronized boolean setPoa (org.omg.PortableServer.POA thePOA);
public synchronized org.omg.PortableServer.POA poa (String adapterName);
public synchronized boolean setPoa (String adapterName,
org.omg.PortableServer.POA thePOA);
...
};
public class RootOA extends OA
{
public synchronized void destroy() throws SystemException;
public org.omg.CORBA.Object corbaReference (Servant obj);
public boolean objectIsReady (Servant obj, byte[] id);
public boolean objectIsReady (Servant obj);
public boolean shutdownObject (org.omg.CORBA.Object obj);
public boolean shutdownObject (Servant obj);
};
public class ChildOA extends OA
{
public synchronized boolean setRootPoa (POA thePOA);
public synchronized void destroy() throws SystemException;
public org.omg.CORBA.Object corbaReference (Servant obj);
public boolean objectIsReady (Servant obj, byte[] id)
throws SystemException;
public boolean objectIsReady (Servant obj) throws SystemException;
public boolean shutdownObject (org.omg.CORBA.Object obj);
public boolean shutdownObject (Servant obj);
};
The following example illustrates how to use the ORB Portability API to create and initialize an ORB and an OA, and to shut them down cleanly:
import com.arjuna.orbportability.ORB;
import com.arjuna.orbportability.OA;
import com.arjuna.orbportability.RootOA;
public class ORBExample
{
    public static void main(String[] args)
    {
        try
        {
            // Create an ORB instance
            ORB orb = ORB.getInstance("orb_test");
            RootOA oa = OA.getRootOA(orb); // Get the root POA
            orb.initORB(args, null);       // Initialize the ORB
            oa.initOA(args);               // Initialize the OA
            // Do Work
            oa.destroy();                  // Destroy the OA
            orb.shutdown();                // Shut down the ORB
        }
        catch (Exception e)
        {
            e.printStackTrace();
        }
    }
}
JDK releases from 1.2.2 onwards include a minimal ORB implementation from Sun. If using such a JDK in conjunction with another ORB, it is necessary to tell the JVM which ORB to use, by specifying the org.omg.CORBA.ORBClass and org.omg.CORBA.ORBSingletonClass properties. If used, the ORB Portability classes will ensure that these properties are automatically set when required, i.e., during ORB initialisation.
The ORB portability library attempts to detect which ORB is in use by looking for the ORB implementation class of each ORB it supports. This means that if classes for more than one ORB are present in the classpath, the wrong ORB can be detected, so it is best to have only one ORB in your classpath. If it is necessary to have multiple ORBs in the classpath, the property com.arjuna.orbportability.orbImplementation must be set to the value specified in the table below.
ORB | Property Value |
JacORB v2.0 | com.arjuna.orbportability.internal.orbspecific.jacorb.orb.implementations.jacorb_2_0 |
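Following the table above, selecting JacORB 2.0 explicitly would look like this in the properties file (using the same property-file format as the other configuration examples in this manual):

```xml
<property
    name="com.arjuna.orbportability.orbImplementation"
    value="com.arjuna.orbportability.internal.orbspecific.jacorb.orb.implementations.jacorb_2_0"/>
```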
For additional details on the features provided by the ORB Portability API refer to the documentation provided by the distribution.
The failure recovery subsystem of will ensure that the results of a transaction are applied consistently to all resources affected by the transaction, even if any of the application processes or the machine hosting them crash or lose network connectivity. In the case of a machine (system) crash or network failure, recovery does not take place until the system or network are restored, but the original application does not need to be restarted: recovery responsibility is delegated to the Recovery Manager process (see below). Recovery after failure requires that information about the transaction and the resources involved survives the failure and is accessible afterwards: this information is held in the ActionStore, which is part of the ObjectStore. If the ObjectStore is destroyed or modified, recovery may not be possible.
Until the recovery procedures are complete, resources affected by a transaction that was in progress at the time of the failure may be inaccessible. For database resources, this may be reported as tables or rows held by "in-doubt transactions".
The Recovery Manager is started with the following command:
java com.arjuna.ats.arjuna.recovery.RecoveryManager
If the -test flag is used with the Recovery Manager, then it will display a "Ready" message when initialised, i.e.,
java com.arjuna.ats.arjuna.recovery.RecoveryManager -test
On initialization the Recovery Manager first loads in configuration information
via a properties file. This configuration includes a number of recovery
activators and recovery modules, which are then dynamically loaded.
Each recovery activator, which implements the com.arjuna.ats.arjuna.recovery.RecoveryActivator interface, is used to instantiate a recovery class related to the underlying communication protocol. Indeed, since version 3.0, the Recovery Manager is no longer tied to a specific Object Request Broker (ORB): the RecoveryActivator interface is provided so that a recovery instance able to manage a specific transaction protocol, such as the OTS recovery protocol, can be specified. For instance, when used with OTS, the RecoveryActivator has the responsibility to create a RecoveryCoordinator object able to respond to the replay_completion operation.
All RecoveryActivator instances inherit the same interface. They are loaded via the following recovery extension property:
<property
name="com.arjuna.ats.arjuna.recovery.recoveryActivator_<number>"
value="RecoveryClass"/>
For instance, the RecoveryActivator provided in the JTS/OTS distribution, which should not be commented out, is as follows:
<property
name="com.arjuna.ats.arjuna.recovery.recoveryActivator_1"
value="com.arjuna.ats.internal.jts.
orbspecific.recovery.RecoveryEnablement"/>
Each recovery module, which implements the com.arjuna.ats.arjuna.recovery.RecoveryModule
interface, is used to recover a different type of transaction/resource,
however each recovery module inherits the same basic behaviour.
Recovery consists of two separate passes/phases separated by two timeout periods. The first pass examines the object store for potentially failed transactions; the second pass performs crash recovery on failed transactions. The timeout between the first and second pass is known as the backoff period. The timeout between the end of the second pass and the start of the first pass is the recovery period. The recovery period is larger than the backoff period.
The Recovery Manager invokes the first pass upon each recovery module, applies the backoff period timeout, invokes the second pass upon each recovery module and finally applies the recovery period timeout before restarting the first pass again.
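The cycle described above can be sketched in plain Java. The RecoveryModule interface below mirrors the pass structure of com.arjuna.ats.arjuna.recovery.RecoveryModule, but the PeriodicRecoverySketch class itself is a hypothetical illustration, not Narayana code:

```java
import java.util.List;

// Illustrative sketch of the two-pass periodic recovery cycle:
// first pass on every module, backoff period, second pass on every
// module, then the (longer) recovery period before the next cycle.
public class PeriodicRecoverySketch {
    public interface RecoveryModule {
        void periodicWorkFirstPass();   // scan for potentially failed transactions
        void periodicWorkSecondPass();  // perform crash recovery on failed transactions
    }

    private final List<RecoveryModule> modules;
    private final long backoffMillis;   // between first and second pass
    private final long recoveryMillis;  // between cycles; larger than backoff

    public PeriodicRecoverySketch(List<RecoveryModule> modules,
                                  long backoffMillis, long recoveryMillis) {
        this.modules = modules;
        this.backoffMillis = backoffMillis;
        this.recoveryMillis = recoveryMillis;
    }

    // One full recovery cycle, as the Recovery Manager would run it repeatedly.
    public void runOneCycle() throws InterruptedException {
        for (RecoveryModule m : modules) m.periodicWorkFirstPass();
        Thread.sleep(backoffMillis);    // backoff period
        for (RecoveryModule m : modules) m.periodicWorkSecondPass();
        Thread.sleep(recoveryMillis);   // recovery period
    }
}
```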
The recovery modules are loaded via the following recovery extension property:
com.arjuna.ats.arjuna.recovery.recoveryExtension<number>=<RecoveryClass>
The default RecoveryExtension settings are:
<property name="com.arjuna.ats.arjuna.recovery.recoveryExtension1"
value="com.arjuna.ats.internal.
arjuna.recovery.AtomicActionRecoveryModule"/>
<property name="com.arjuna.ats.arjuna.recovery.recoveryExtension2"
value="com.arjuna.ats.internal.
txoj.recovery.TORecoveryModule"/>
<property name="com.arjuna.ats.arjuna.recovery.recoveryExtension3"
value="com.arjuna.ats.internal.
jts.recovery.transactions.TopLevelTransactionRecoveryModule"/>
<property name="com.arjuna.ats.arjuna.recovery.recoveryExtension4"
value="com.arjuna.ats.internal.
jts.recovery.transactions.ServerTransactionRecoveryModule"/>
com.arjuna.ats.arjuna.recovery.recoveryBackoffPeriod (default 10 secs)
com.arjuna.ats.arjuna.recovery.periodicRecovery (default 120 secs)
The RecoveryManager calls the scan() method on each loaded ExpiryScanner implementation at an interval determined by the property com.arjuna.ats.arjuna.recovery.expiryScanInterval. This value is given in hours; the default is 12. An expiryScanInterval value of zero suppresses any expiry scanning. If the supplied value is positive, the first scan is performed when the RecoveryManager starts; if the value is negative, the first scan is delayed until after the first interval (using the absolute value).
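The sign convention above can be captured in a few lines; the ExpiryScanPolicy helper below is a hypothetical illustration of the rules, not part of the Narayana API:

```java
// Sketch of the expiryScanInterval semantics: the absolute value fixes
// the interval in hours, the sign decides whether the first scan runs at
// start-up (positive) or only after one interval (negative), and zero
// suppresses scanning entirely.
public class ExpiryScanPolicy {
    public static boolean scanningEnabled(int intervalHours) {
        return intervalHours != 0;
    }

    // Effective interval between scans, in hours.
    public static long intervalHours(int intervalHours) {
        return Math.abs(intervalHours);
    }

    // Delay, in hours, before the first scan after start-up.
    public static long firstScanDelayHours(int intervalHours) {
        return intervalHours > 0 ? 0 : Math.abs(intervalHours);
    }
}
```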
The default ExpiryScanner is:
<property
name="com.arjuna.ats.arjuna.recovery.
expiryScannerTransactionStatusManager"
value="com.arjuna.ats.internal.arjuna.recovery.
ExpiredTransactionStatusManagerScanner"/>
The following table summarizes the properties used by the Recovery Manager. These properties are defined by default in the properties file named RecoveryManager-properties.xml.
Name | Description | Possible Value | Default Value |
com.arjuna.ats.arjuna.recovery.periodicRecoveryPeriod | Interval in seconds between initiating the periodic recovery modules | Value in seconds | 120 |
com.arjuna.ats.arjuna.recovery.recoveryBackoffPeriod | Interval in seconds between first and second pass of periodic recovery | Value in seconds | 10 |
com.arjuna.ats.arjuna.recovery.recoveryExtensionX | Indicates a periodic recovery module to use. X is the occurrence number of the recovery module among a set of recovery modules. These modules are invoked in sort-order of names | The class name of the periodic recovery module | A set of classes given in the RecoveryManager-properties.xml file |
com.arjuna.ats.arjuna.recovery.recoveryActivator_X | Indicates a recovery activator to use. X is the occurrence number of the recovery activator among a set of recovery activators. | The class name of the recovery activator | One class that manages the recovery protocol specified by the OTS specification |
com.arjuna.ats.arjuna.recovery.expiryScannerXXX | Expiry scanners to use (order of invocation is random). Names must begin with "com.arjuna.ats.arjuna.recovery.expiryScanner" | Class name | provides one class given in the RecoveryManager-properties.xml file |
com.arjuna.ats.arjuna.recovery.expiryScanInterval | Interval, in hours, between running the expiry scanners. This can be quite long. The absolute value determines the interval - if the value is negative, the scan will NOT be run until after one interval has elapsed. If positive the first scan will be immediately after startup. Zero will prevent any scanning. | Value in hours | 12 |
com.arjuna.ats.arjuna.recovery.transactionStatusManagerExpiryTime | Age, in hours, for removal of transaction status manager item. This should be longer than any ts-using process will remain running. Zero = Never removed. | Value in Hours | 12 |
com.arjuna.ats.arjuna.recovery.transactionStatusManagerPort | Use this to fix the port on which the TransactionStatusManager listens | Port number (short) | A free port is used |
To ensure that your installation is fully operational, we will run the simple demo.
Please follow these steps before running the transactional applications
Ensure that the jar files appear in the classpath before the JacORB jar files.
java com.arjuna.demo.simple.HelloServer
java com.arjuna.demo.simple.HelloClient
In the client window you should see the following lines:
Creating a transaction !
Call the Hello Server !
Commit transaction
Done
In the server, which must be stopped by hand, you should see:
Hello - called within a scope of a transaction
More details on the way to configure the behavior of can be found in the section on configuration.
JDK releases from 1.2.2 onwards include a minimum ORB implementation from Sun. If using such a JDK in conjunction with another ORB it is necessary to tell the JVM which ORB to use. This happens by specifying the org.omg.CORBA.ORBClass and org.omg.CORBA.ORBSingletonClass properties. In earlier versions it was necessary to specify these properties explicitly, either on the command line or in the properties file. However, it is no longer a requirement to do this, as the ORB Portability classes will ensure that these properties are automatically set when required. Of course, it is still possible to specify these values explicitly (and necessary if not using the ORB initialization methods).
Transaction management is one of the most crucial requirements for enterprise application development. Most of the large enterprise applications in the domains of finance, banking and electronic commerce rely on transaction processing for delivering their business functionality.
Enterprise applications often require concurrent access to distributed data shared amongst multiple components, to perform operations on data. Such applications should maintain integrity of data (as defined by the business rules of the application) under the following circumstances:
In such cases, it may be required that a group of operations on (distributed) resources be treated as one unit of work. In a unit of work, all the participating operations should either succeed or fail and recover together. This problem is more complicated when
In either case, it is required that success or failure of a unit of work be maintained by the application. In case of a failure, all the resources should bring back the state of the data to the previous state ( i.e., the state prior to the commencement of the unit of work).
From the programmer's perspective a transaction is a scoping mechanism for a collection of actions which must complete as a unit. It provides a simplified model for exception handling since only two outcomes are possible:
To illustrate the reliability expected by the application let’s consider the funds transfer example which is familiar to all of us.
The money transfer involves two operations: deposit and withdrawal. The complexity of the implementation doesn't matter; money moves from one place to another. For instance, the accounts involved may be located in the same relational table within a database, or on different databases.
A simple transfer consists of moving money from savings to checking, while a complex transfer may be performed at the end of day, according to a reconciliation between international banks.
The concept of a transaction, and a transaction manager (or a transaction processing service) simplifies construction of such enterprise level distributed applications while maintaining integrity of data in a unit of work.
A transaction is a unit of work that has the following properties:
These properties, called the ACID properties, guarantee that a transaction is never incomplete, that the data is never inconsistent, that concurrent transactions are independent, and that the effects of a transaction are persistent.
A collection of actions is said to be transactional if they possess the ACID properties. These properties are assumed to hold, even in the presence of failures, if the actions involved in the transaction are performed by a transactional system. A transaction system includes a set of components, each of which has a particular role. The main components are described below.
Application programs are clients of the transactional resources. These are the programs with which the application developer implements business transactions. With the help of the transaction manager, these components create global transactions and operate on the transactional resources within the scope of these transactions. These components are not responsible for implementing mechanisms for preserving the ACID properties of transactions. However, as part of the application logic, these components generally make the decision whether to commit or rollback transactions.
Application responsibilities can be summarized as follows:
A resource manager is, in general, a component that manages a persistent and stable data storage system, and participates in the two-phase commit and recovery protocols with the transaction manager.
A resource manager is typically a driver that provides two sets of interfaces: one set for the application components to get connections and operate on data, and the other for participating in the two-phase commit and recovery protocols coordinated by a transaction manager. This component may also, directly or indirectly, register resources with the transaction manager so that the transaction manager can keep track of all the resources participating in a transaction. This process is called resource enlistment.
Resource manager responsibilities can be summarized as follows:
The transaction manager is the core component of a transaction processing environment. Its main responsibilities are to create transactions when requested by application components, allow resource enlistment and delistment, and to manage the two-phase commit or recovery protocol with the resource managers.
A typical transactional application begins a transaction by issuing a request to a transaction manager to initiate a transaction. In response, the transaction manager starts a transaction and associates it with the calling thread. The transaction manager also establishes a transaction context. All application components and/or threads participating in the transaction share the transaction context. The thread that initially issued the request for beginning the transaction, or, if the transaction manager allows, any other thread may eventually terminate the transaction by issuing a commit or rollback request.
Before a transaction is terminated, any number of components and/or threads may perform transactional operations on any number of transactional resources known to the transaction manager. If allowed by the transaction manager, a transaction may be suspended or resumed before finally completing the transaction.
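The thread association, suspension and resumption described above can be sketched with a ThreadLocal. The TxContext class below is a hypothetical illustration of the mechanism, not the transaction manager's actual API:

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch of transaction-to-thread association: begin binds a
// transaction context to the calling thread, suspend detaches it, and
// resume re-attaches a previously suspended context (possibly on another
// thread, if the transaction manager allows it).
public class TxContext {
    private static final AtomicLong ids = new AtomicLong();
    private static final ThreadLocal<TxContext> current = new ThreadLocal<>();

    private final long txId = ids.incrementAndGet();

    public static TxContext begin() {
        TxContext tx = new TxContext();
        current.set(tx);                 // associate with the calling thread
        return tx;
    }

    public static TxContext currentTransaction() {
        return current.get();            // null if no transaction is bound
    }

    public static TxContext suspend() {
        TxContext tx = current.get();
        current.remove();                // detach from the calling thread
        return tx;
    }

    public static void resume(TxContext tx) {
        current.set(tx);                 // re-associate with this thread
    }

    public long id() { return txId; }
}
```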
Once the application issues the commit request, the transaction manager prepares all the resources for a commit operation, and based on whether all resources are ready for a commit or not, issues a commit or rollback request to all the resources.
Transaction manager responsibilities can be summarized as follows:
A transaction that involves only one transactional resource, such as a database, is considered a local transaction, while a transaction that involves more than one transactional resource that needs to be coordinated to reach a consistent state is considered a distributed transaction.
A transaction can be specified by what is known as transaction demarcation. Transaction demarcation enables work done by distributed components to be bound by a global transaction. It is a way of marking groups of operations to constitute a transaction.
The most common approach to demarcation is to mark the thread executing the operations for transaction processing. This is called programmatic demarcation. The transaction so established can be suspended by unmarking the thread, and resumed later by explicitly propagating the transaction context from the point of suspension to the point of resumption.
The transaction demarcation ends after a commit or a rollback request to the transaction manager. The commit request directs all the participating resource managers to record the effects of the operations of the transaction permanently. The rollback request makes the resource managers undo the effects of all operations of the transaction.
Since multiple application components and resources participate in a transaction, it is necessary for the transaction manager to establish and maintain the state of the transaction as it occurs. This is usually done in the form of transaction context.
Transaction context is an association between the transactional operations on the resources, and the components invoking the operations. During the course of a transaction, all the threads participating in the transaction share the transaction context. Thus the transaction context logically envelops all the operations performed on transactional resources during a transaction. The transaction context is usually maintained transparently by the underlying transaction manager.
Resource enlistment is the process by which resource managers inform the transaction manager of their participation in a transaction. This process enables the transaction manager to keep track of all the resources participating in a transaction. The transaction manager uses this information to coordinate transactional work performed by the resource managers and to drive the two-phase commit and recovery protocols. At the end of a transaction (after a commit or rollback), the transaction manager delists the resources.
This protocol between the transaction manager and all the resources enlisted for a transaction ensures that either all the resource managers commit the transaction or they all abort. In this protocol, when the application requests that the transaction be committed, the transaction manager issues a prepare request to all the resource managers involved. Each of these resources in turn sends a reply indicating whether or not it is ready to commit. The transaction manager issues a commit request to all the resource managers only when every resource manager is ready to commit. Otherwise, the transaction manager issues a rollback request and the transaction is rolled back.
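The protocol just described can be sketched in a few lines of plain Java. The Participant interface below stands in for an enlisted resource manager; TwoPhaseCommitSketch is a hypothetical illustration of the coordinator's decision logic, not a transaction manager implementation:

```java
import java.util.List;

// Illustrative sketch of two-phase commit: phase 1 collects prepare
// votes, phase 2 commits only if every participant voted yes.
public class TwoPhaseCommitSketch {
    public interface Participant {
        boolean prepare();   // vote: true = ready to commit
        void commit();
        void rollback();
    }

    // Returns true if the transaction committed, false if rolled back.
    public static boolean complete(List<Participant> participants) {
        // Phase 1: ask every participant to prepare; any "no" vote aborts.
        for (Participant p : participants) {
            if (!p.prepare()) {
                // Sketch simplification: roll back every participant,
                // prepared or not.
                for (Participant q : participants) q.rollback();
                return false;
            }
        }
        // Phase 2: all voted yes, so tell every participant to commit.
        for (Participant p : participants) p.commit();
        return true;
    }
}
```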
Recovery is the mechanism which preserves transaction atomicity in the presence of failures. The basic technique for implementing transactions in the presence of failures is based on the use of logs: a transaction system has to record enough information to ensure that it can return to a previous state in case of failure, or to ensure that changes committed by a transaction are properly stored.
In addition to being able to store appropriate information, all participants in a distributed transaction must log similar information, which allows them to reach the same decision: to set the data either to its final state or to its initial state.
Two techniques are generally used to ensure a transaction's atomicity. The first focuses on the manipulated data, such as the Do/Undo/Redo protocol (considered a recovery mechanism in a centralized system), which allows a participant to set its data to its final values or restore it to its initial values. The second relies on a distributed protocol, the two-phase commit, ensuring that all participants involved in a distributed transaction set their data either to their final values or to their initial values. In other words, all participants must commit or all must rollback.
In addition to the failures we refer to as centralized, such as system crashes, communication failures due, for instance, to network outages or message loss have to be considered during the recovery process of a distributed transaction.
In order to provide an efficient and optimized mechanism to deal with failures, modern transactional systems typically adopt a "presumed abort" strategy, which simplifies transaction management.
The presumed abort strategy can be stated as «when in doubt, abort». With this strategy, when the recovery mechanism has no information about the transaction, it presumes that the transaction has been aborted.
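The "when in doubt, abort" rule can be stated compactly in code. The PresumedAbort helper below is a hypothetical illustration of the recovery decision, not Narayana's recovery implementation:

```java
import java.util.Map;

// Sketch of the presumed-abort rule: if the recovery mechanism finds no
// durable record of a transaction's outcome, it presumes the
// transaction aborted.
public class PresumedAbort {
    public enum Outcome { COMMITTED, ABORTED }

    // log maps transaction id -> recorded outcome; absence means abort.
    public static Outcome recover(Map<String, Outcome> log, String txId) {
        return log.getOrDefault(txId, Outcome.ABORTED);
    }
}
```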
A particularity of the presumed-abort assumption is that the coordinator need not log anything before the commit decision, and the participants need not log anything before they prepare. Any failure which occurs before the two-phase commit starts therefore leads to aborting the transaction. Furthermore, from the coordinator's point of view, any communication failure detected by a timeout or an exception raised on sending prepare is considered a negative vote, which leads to aborting the transaction. So, within a distributed transaction, a coordinator or a participant may fail in two ways: either it crashes, or it times out waiting for a message it was expecting. When a coordinator or a participant crashes and then restarts, it uses information on stable storage to determine how to perform recovery. As we will see, the presumed-abort strategy enables optimized behaviour during recovery.
The importance of common interfaces between participants, as well as the complexity of their implementation, becomes obvious in an open systems environment. To this end, various distributed transaction processing standards have been developed by international standards organizations. Among these organizations, we list three which are mainly considered in the product:
assures complete, accurate business transactions for any Java based applications, including those written for the Jakarta EE and EJB frameworks.
is a 100% Java implementation of a distributed transaction management system based on the Jakarta EE Java Transaction Service (JTS) standard. Our implementation of the JTS utilizes the Object Management Group's (OMG) Object Transaction Service (OTS) model for transaction interoperability, as recommended in the Jakarta EE and EJB standards. Although any JTS-compliant product will allow Java objects to participate in transactions, one of its key features is its 100% Java implementation. This allows it to support fully distributed transactions that can be coordinated by distributed parties.
can be run as an embedded distributed service of an application server (e.g. WildFly Application Server), affording the user all the added benefits of the application server environment, such as real-time load balancing, unlimited linear scalability and unmatched fault tolerance that allows you to deliver an always-on solution to your customers. It is also available as a free-standing Java Transaction Service.
In addition to providing full compliance with the latest version of the JTS specification, leads the market in providing many advanced features such as fully distributed transactions and ORB portability with POA support.
works on a number of operating systems including Red Hat Linux, Sun Solaris and Microsoft Windows XP. It requires a Java 5 or later environment.
The Java Transaction API support for comes in two flavours:
The sample application consists of a banking application that involves a bank able to manage accounts on behalf of clients. Clients can obtain information on accounts and perform operations such as crediting, withdrawing, and transferring money from one account to another.
Figure 1 - The Banking Applications
Each operation provided to the client leads to the creation of a transaction; therefore, in order to commit or roll back changes made on an account, a resource is associated with the account to participate in the transaction commitment protocol. According to the final transaction decision, the resource is able to set the Account either to its initial state (in case of rollback) or to the final state (in case of commit). From the transactional view, Figure 2 depicts the transactional components.
Figure 2 - The Banking Application and the transactional Component
Assuming that the product has been installed, this trail provides a set of examples that show how to build transactional applications. Two types of transactional applications are presented: those using the JTA interface and those accessing the JTS (OTS) interfaces.
Please follow these steps before running the transactional applications
<property name="com.arjuna.ats.jta.jtaTMImplementation"
    value="com.arjuna.ats.internal.jta.transaction.arjunacore.TransactionManagerImple"/>
<property name="com.arjuna.ats.jta.jtaUTImplementation"
    value="com.arjuna.ats.internal.jta.transaction.arjunacore.UserTransactionImple"/>
<property name="com.arjuna.ats.jta.jtaTMImplementation"
    value="com.arjuna.ats.internal.jta.transaction.jts.TransactionManagerImple"/>
<property name="com.arjuna.ats.jta.jtaUTImplementation"
    value="com.arjuna.ats.internal.jta.transaction.jts.UserTransactionImple"/>
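The same implementation classes can also be selected programmatically. The helper below is a minimal sketch (not part of the product) that sets the two property names shown above as system properties before the first UserTransaction lookup; whether your installation honours system properties or only the properties file depends on how it is configured:

```java
public class JtaImplementationSelector {

    // Local (ArjunaCore) JTA implementation, as in the first fragment above.
    public static void selectLocalJta() {
        System.setProperty("com.arjuna.ats.jta.jtaTMImplementation",
            "com.arjuna.ats.internal.jta.transaction.arjunacore.TransactionManagerImple");
        System.setProperty("com.arjuna.ats.jta.jtaUTImplementation",
            "com.arjuna.ats.internal.jta.transaction.arjunacore.UserTransactionImple");
    }

    // JTS-backed JTA implementation, as in the second fragment above.
    public static void selectJts() {
        System.setProperty("com.arjuna.ats.jta.jtaTMImplementation",
            "com.arjuna.ats.internal.jta.transaction.jts.TransactionManagerImple");
        System.setProperty("com.arjuna.ats.jta.jtaUTImplementation",
            "com.arjuna.ats.internal.jta.transaction.jts.UserTransactionImple");
    }
}
```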
Using JTA to create a distributed transaction requires the creation of an ORB instance, as is done by a JTS application (see the JTS versions of the banking application); the difference is in the interface used to demarcate and control transactions.
To illustrate the possibilities of the programming interfaces, the banking application is provided in several versions: one that uses the JTA API and a second that uses the JTS/OTS interfaces.
This trail focuses on understanding concepts related to the creation of transactions and the behavior of the commitment protocol, while the next trail illustrates a similar application with persistent data.
Applications that create transactions using the JTA interface may invoke both local and remote services. When a remote invocation is performed, the current transactional context needs to be propagated to the remote service in order to involve it in the transaction in progress. This feature is provided using the facilities of JTS and the ORB. More precisely, the product needs to be configured to determine whether the JTA interface is used for local or distributed transactions.
To launch the JTA version of the Banking application, which creates only local transactions, execute the following java program:
java com.arjuna.demo.jta.localbank.BankClient
Once one of the programs given above is launched, the following lines are displayed:
-------------------------------------------------
                 Bank client
-------------------------------------------------
Select an option :
   0. Quit
   1. Create a new account.
   2. Get an account information.
   3. Make a transfer.
   4. Credit an account.
   5. Withdraw from an account
Your choice :
After entering your choice, the appropriate operation is performed by the Bank object, to get the requested account, and by the account to execute the credit or withdrawal or to return the current balance. Let's consider the following execution.
Enter the number 1 as your choice, then give the name "Foo" as the account name and "1000" as an initial value of the account to create. You should get the following lines:
Your choice : 1
- Create a new account -
------------------------
Name : Foo
Initial balance : 1000
Beginning a User transaction to create account
XA_START[]
Attempt to commit the account creation transaction
XA_END[]
XA_COMMIT (ONE_PHASE)[]
In the same way create a second account with the name "Bar" and the initial balance set to 500.
As a choice now, enter "3" to make a transfer (300) from "Foo" to "Bar".
Your choice : 3
- Make a transfer -
-------------------
Take money from : Foo
Put money to : Bar
Transfer amount : 300
Beginning a User transaction to get balance
XA_START[]
XA_START[]
XA_END[]
XA_PREPARE[]
XA_END[]
XA_PREPARE[]
XA_COMMIT[]
XA_COMMIT[]
Any attempt to manipulate an account that does not exist leads to the NotExistingAccount exception being thrown and the transaction in progress being rolled back. For instance, let's withdraw money from an account FooBar that was not previously created.
Your choice : 5
- Withdraw from an Account -
----------------------------
Give the Account name : FooBar
Amount to withdraw : 200
Beginning a User transaction to withdraw from an account
The requested account does not exist!
ERROR - jakarta.transaction.RollbackException
From the JTA architectural point of view, the bank client is considered an application program able to manage transactions via the jakarta.transaction.UserTransaction interface. The following portion of code illustrates how a JTA transaction is started and terminated when the client asks to transfer money from one account to another. It also shows which packages need to be used in order to obtain the appropriate object instances (such as UserTransaction).
Note: The code below is a simplified view of the BankClient.java program. Only the transfer operation is illustrated; other operations manage transactions in the same way. (See src/com/arjuna/demo/jta/localbank/BankClient.java for details.)
package com.arjuna.demo.jta.localbank;
public class BankClient
{
private Bank _bank;
// This operation is used to make a transfer
//from an account to another account
private void makeTransfer()
{
System.out.print("Take money from : ");
String name_supplier = input();
System.out.print("Put money to : ");
String name_consumer = input();
System.out.print("Transfer amount : ");
String amount = input();
float famount = 0;
try
{
famount = new Float( amount ).floatValue();
}
catch ( java.lang.Exception ex )
{
System.out.println("Invalid float number, abort operation...");
return;
}
try
{
//the following instruction asks a specific
//class to obtain a UserTransaction instance
jakarta.transaction.UserTransaction userTran =
com.arjuna.ats.jta.UserTransaction.userTransaction();
System.out.println("Beginning a User transaction to get balance");
userTran.begin();
Account supplier = _bank.get_account( name_supplier );
Account consumer = _bank.get_account( name_consumer );
supplier.debit( famount );
consumer.credit( famount );
userTran.commit( );
}
catch (Exception e)
{
System.err.println("ERROR - "+e);
}
}
......
}
The Bank object has mainly two operations: creating an account, which is added to the account list, and returning an Account object. No transactional instruction is performed by the Bank object.
package com.arjuna.demo.jta.localbank;
public class Bank {
private java.util.Hashtable _accounts;
public Bank()
{
_accounts = new java.util.Hashtable();
}
public Account create_account( String name )
{
Account acc = new Account(name);
_accounts.put( name, acc );
return acc;
}
public Account get_account(String name)
throws NotExistingAccount
{
Account acc = ( Account ) _accounts.get( name );
if ( acc == null )
throw new NotExistingAccount("The Account requested does not exist");
return acc;
}
}
The Account object provides mainly three methods: balance, credit and debit. However, in order to provide the transactional behaviour, rather than modifying the current account directly (on credit or debit), this task is delegated to an AccountResource object that is able, according to the transaction outcome, to set the account value either to its initial state or to its final state.
The AccountResource object is in fact an object that implements javax.transaction.xa.XAResource and is thus able to participate in the transaction commitment. To this end, the Account object has to register (enlist) the AccountResource object as a participant, after having obtained the reference of the jakarta.transaction.Transaction object via the jakarta.transaction.TransactionManager object.
package com.arjuna.demo.jta.localbank;
public class Account
{
float _balance;
String _name;
AccountResource accRes = null;
public Account(String name)
{
_name = name;
_balance = 0;
}
public float balance()
{
return getXAResource().balance();
}
public void credit( float value )
{
getXAResource().credit( value );
}
public void debit( float value )
{
getXAResource().debit( value );
}
public AccountResource getXAResource()
{
try
{
jakarta.transaction.TransactionManager transactionManager =
com.arjuna.ats.jta.TransactionManager.transactionManager();
jakarta.transaction.Transaction currentTrans =
transactionManager.getTransaction();
if (accRes == null) {
currentTrans.enlistResource(
accRes = new AccountResource(this, _name) );
}
currentTrans.delistResource( accRes, XAResource.TMSUCCESS );
}
catch (Exception e)
{
System.err.println("ERROR - "+e);
}
return accRes;
}
...
}
The AccountResource class, which implements the javax.transaction.xa.XAResource interface, provides methods similar to those of the Account class (credit, debit and balance), as well as all the methods specified by javax.transaction.xa.XAResource. The following portion of code describes how the prepare, commit and rollback methods are implemented.
public class AccountResource implements XAResource
{
public AccountResource(Account account, String name )
{
_name = name;
_account = account;
_initial_balance = account._balance;
_current_balance = _initial_balance;
}
public float balance()
{
return _current_balance;
}
public void credit( float value )
{
_current_balance += value;
}
public void debit( float value )
{
_current_balance -= value;
}
public void commit(Xid id, boolean onePhase) throws XAException
{
//The value of the associated Account object is modified
_account._balance = _current_balance;
}
public int prepare(Xid xid) throws XAException
{
if ( _initial_balance == _current_balance ) //account not modified
return (XA_RDONLY);
if ( _current_balance < 0 )
throw new XAException(XAException.XA_RBINTEGRITY);
//If the integrity of the account is corrupted then vote rollback
return (XA_OK); //return OK
}
public void rollback(Xid xid) throws XAException
{
//Nothing is done
}
private String _name;
private float _initial_balance;
private float _current_balance;
private Account _account;
}
Full source code for the banking application with JTA is included to provide you with a starting point for experimentation.
The JTS version of the Banking application means that the Object Request Broker will be used. The distribution is provided to work with the bundled JacORB version.
To describe the possibilities provided by to build a transactional application according to the programming models defined by the OTS specification, the Banking Application is programmed in different ways.
JTS Local Transactions
JTS Distributed Transactions
The JTS version of the Banking application means that the Object Request Broker will be used. The distribution is provided to work with the bundled JacORB version.
Note: Ensure that the JacORB jar files are added to your CLASSPATH.
To launch the JTS version of the Banking application, execute the following Java program:
java com.arjuna.demo.jts.localbank.BankClient
Once one of the programs given above is launched, the following lines are displayed:
-------------------------------------------------
                 Bank client
-------------------------------------------------
Select an option :
   0. Quit
   1. Create a new account.
   2. Get an account information.
   3. Make a transfer.
   4. Credit an account.
   5. Withdraw from an account
Your choice :
After entering your choice, the appropriate operation is performed by the Bank object, to get the requested account, and by the account to execute the credit or withdrawal or to return the current balance. Let's consider the following execution.
Enter the number 1 as your choice, then give the name "Foo" as the account name and "1000" as an initial value of the account to create. You should get the following lines:
Your choice : 1
- Create a new account -
------------------------
Name : Foo
Initial balance : 1000
Beginning a User transaction to create account
[ Connected to 192.168.0.2:4799 from local port 4924 ]
Attempt to commit the account creation transaction
[ Resource for Foo : Commit one phase ]
In the same way create a second account with the name "Bar" and the initial balance set to 500.
As a choice now, enter "3" to make a transfer (300) from "Foo" to "Bar".
Your choice : 3
- Make a transfer -
-------------------
Take money from : Foo
Put money to : Bar
Transfer amount : 300
Beginning a User transaction to Transfer money
[ Resource for Foo : Prepare ]
[ Resource for Bar : Prepare ]
[ Resource for Foo : Commit ]
[ Resource for Bar : Commit ]
Any attempt to manipulate an account that does not exist leads to the NotExistingAccount exception being thrown and the transaction in progress being rolled back. For instance, let's withdraw money from an account FooBar that was not previously created.
Your choice : 5
- Withdraw from an Account -
----------------------------
Give the Account name : FooBar
Amount to withdraw : 200
Beginning a User transaction to withdraw from an account
The requested account does not exist!
ERROR - org.omg.CORBA.TRANSACTION_ROLLEDBACK: minor code: 50001 completed: No
By default, a separate transaction manager server is not used: transaction managers are co-located with each application process to improve performance and application fault-tolerance. When running applications which require a separate transaction manager, you must set the com.arjuna.ats.jts.transactionManager property, in the (jbossts_install_dir)/etc/jbossts-properties.xml file, to YES.
In a separate window, the stand-alone Transaction Server is launched as follows:
java com.arjuna.ats.jts.TransactionServer [-test]
The -test option causes the message "Ready" to be displayed when the Transaction Server has started.
The Banking application presented above gives the same output.
The JTS version of the Banking application means that the Object Request Broker will be used. The distribution is provided to work with the bundled JacORB version.
Note: Ensure that the JacORB jar files are added to your CLASSPATH.
java com.arjuna.ats.arjuna.recovery.RecoveryManager
java com.arjuna.demo.jts.remotebank.BankServer
java com.arjuna.demo.jts.remotebank.BankClient
java com.arjuna.demo.jts.explicitremotebank.BankServer
java com.arjuna.demo.jts.explicitremotebank.BankClient
In both cases (implicit and explicit), the Bank Server, which can be stopped by hand, displays the following lines:
The bank server is now ready...
In both cases (implicit and explicit), the Bank Client window displays the following lines:
-------------------------------------------------
                 Bank client
-------------------------------------------------
Select an option :
   0. Quit
   1. Create a new account.
   2. Get an account information.
   3. Make a transfer.
   4. Credit an account.
   5. Withdraw from an account
Your choice :
After entering your choice, the appropriate operation is performed by the remote Bank object, to get the requested account, and by the account to execute the credit or withdraw or to return the current balance. Let's consider the following execution.
Enter the number 1 as your choice, then give the name "Foo" as the account name and "1000" as an initial value of the account to create. You should get in the server window a result that terminates with the following line
[ Resource for Foo : Commit one phase ]
In the same way create a second account with the name "Bar" and the initial balance set to 500.
As a choice now, enter in the client window "3" to make a transfer (300) from "Foo" to "Bar".
Your choice : 3 - Make a transfer - ------------------- Take money from : Foo Put money to : Bar Transfer amount : 300
In the Server window you should see a result with the following lines
[ Resource for Foo : Prepare ] [ Resource for Bar : Prepare ] [ Resource for Foo : Commit ] [ Resource for Bar : Commit ]
Any attempt to manipulate an account that does not exist leads to the NotExistingAccount exception being thrown and the transaction in progress being rolled back. For instance, let's withdraw money from an account FooBar that was not previously created.
Your choice : 5
- Withdraw from an Account -
----------------------------
Amount to withdraw : 200
Beginning a User transaction to withdraw from an account
The requested account does not exist!
ERROR - org.omg.CORBA.TRANSACTION_ROLLEDBACK: minor code: 50001 completed: No
By default, a separate transaction manager server is not used: transaction managers are co-located with each application process to improve performance and application fault-tolerance. When running applications which require a separate transaction manager, you must set the com.arjuna.ats.jts.transactionManager property, in the jbossts-properties.xml file, to YES.
In a separate window, the stand-alone Transaction Server is launched as follows:
java com.arjuna.ats.jts.TransactionServer [-test]
The -test option causes the message "Ready" to be displayed when the Transaction Server has started.
The Banking application presented above gives the same output.
It is possible to run the Transaction Service and recovery manager processes on a different machine and have clients access these centralized services in a hub-and-spoke style architecture.
All that must be done is to provide the clients with enough information to contact the transaction service (such as the ORB's NameService). However, configuring the ORB is beyond the remit of this trailmap, so we shall opt for a simpler mechanism whereby the transaction service's IOR is shared via a common file.
This trailmap stage assumes that the transaction service has been appropriately installed and configured (the setenv.[bat|sh] script has been run) on two hosts (for the purpose of explanation we shall refer to these hosts as host1 and host2).
java com.arjuna.ats.arjuna.recovery.RecoveryManager [-test]
java com.arjuna.ats.jts.TransactionServer [-test]
Open a command prompt on host2 and copy the CosServices.cfg file from the <narayana-jts_install_root>/etc directory on host1.
For example, using the popular scp package, open a shell prompt and issue the following command:
scp user@host1:<ats_root>/etc/CosServices.cfg <host2_ats_root>/etc/
NOTE: See the section above entitled "Using a stand-alone Transaction Server" for more information on how to configure these applications to use a remote transaction service.
java com.arjuna.demo.jts.remotebank.BankServer
java com.arjuna.demo.jts.remotebank.BankClient
java com.arjuna.demo.jts.explicitremotebank.BankServer
java com.arjuna.demo.jts.explicitremotebank.BankClient
From the JTS architectural point of view, the bank client is considered an application program able to manage transactions either in a direct or an indirect management mode: respectively, with the org.omg.CosTransactions.TransactionFactory and org.omg.CosTransactions.Terminator interfaces, or with the org.omg.CosTransactions.Current interface. Transactions created by the client in the Banking application use the indirect mode.
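The banking client shown below uses the indirect mode via Current. For contrast, a direct-mode fragment would look roughly like the sketch below. This is an illustration only, following the standard OTS interfaces; the factory reference must first be resolved through the ORB, which is not shown:

```java
// Direct mode (sketch): the client holds the Control itself instead of
// relying on the implicit, thread-associated transaction of Current.
org.omg.CosTransactions.Control control =
    factory.create(0); // factory: a resolved TransactionFactory; 0 = no timeout
// ... invoke operations, passing `control` explicitly where required ...
org.omg.CosTransactions.Terminator terminator = control.get_terminator();
terminator.commit(true); // true = report heuristics
```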
The following portion of code illustrates how a JTS transaction is started and terminated when the client asks to transfer money from one account to another. It also shows which packages need to be used in order to obtain the appropriate object instances (such as Current).
Note: The code below is a simplified view of the BankClient.java program. Only the transfer operation is illustrated; other operations manage transactions in the same way. (See src/com/arjuna/demo/jts/localbank/BankClient.java for details.)
package com.arjuna.demo.jts.localbank;
import com.arjuna.ats.jts.OTSManager;
import com.arjuna.ats.internal.jts.ORBManager;
public class BankClient
{
private Bank _bank; // Initialised on BankClient initialisation
....
// This operation is used to make a transfer
// from an account to another account
private void makeTransfer()
{
System.out.print("Take money from : ");
String name_supplier = input();
System.out.print("Put money to : ");
String name_consumer = input();
System.out.print("Transfer amount : ");
String amount = input();
float famount = 0;
try
{
famount = new Float( amount ).floatValue();
}
catch ( java.lang.Exception ex )
{
System.out.println("Invalid float number, abort operation...");
return;
}
try
{
//the following instruction asks a specific
//class to obtain a Current instance
Current current = OTSManager.get_current();
System.out.println("Beginning a User transaction to get balance");
current.begin();
Account supplier = _bank.get_account( name_supplier );
Account consumer = _bank.get_account( name_consumer );
supplier.debit( famount );
consumer.credit( famount );
current.commit( true ); //true: report heuristics
}
catch (Exception e)
{
System.err.println("ERROR - "+e);
}
}
Since JTS is used, invocations against an ORB are needed, such as ORB and Object Adapter instantiation and initialisation. To ensure better portability, the ORB Portability API provides a set of methods that can be used as described below.
public static void main( String [] args )
{
try
{
myORB = ORB.getInstance("test"); // Create an ORB instance
myOA = OA.getRootOA(myORB); // Obtain the Root POA
myORB.initORB(args, null); // Initialise the ORB
myOA.initOA(); // Initialise the POA
// The ORBManager is a class provided to facilitate the association
// of the ORB/POA with the transaction service
ORBManager.setORB(myORB);
ORBManager.setPOA(myOA);
....
}
catch(Exception e)
{
e.printStackTrace(System.err);
}
}
The Bank object has mainly two operations: creating an account, which is added to the account list, and returning an Account object. No transactional instruction is performed by the Bank object.
package com.arjuna.demo.jts.localbank;
public class Bank
{
private java.util.Hashtable _accounts;
public Bank()
{
_accounts = new java.util.Hashtable();
}
public Account create_account( String name )
{
Account acc = new Account(name);
_accounts.put( name, acc );
return acc;
}
public Account get_account(String name)
throws NotExistingAccount
{
Account acc = ( Account ) _accounts.get( name );
if ( acc == null )
throw new NotExistingAccount("The Account requested does not exist");
return acc;
}
}
The Account object provides mainly three methods: balance, credit and debit. However, in order to provide the transactional behaviour, rather than modifying the current account directly (on credit or debit), this task is delegated to an AccountResource object that is able, according to the transaction outcome, to set the account value either to its initial state or to its final state.
The AccountResource object is in fact an object that implements org.omg.CosTransactions.Resource and is thus able to participate in the transaction commitment. To this end, the Account object has to register the AccountResource object as a participant, after having obtained the reference of the org.omg.CosTransactions.Coordinator object, itself obtained via the org.omg.CosTransactions.Control object.
package com.arjuna.demo.jts.localbank;
public class Account
{
float _balance;
String _name;
AccountResource accRes = null;
public Account(String name )
{
_name = name;
_balance = 0;
}
public float balance()
{
return getResource().balance();
}
public void credit( float value )
{
getResource().credit( value );
}
public void debit( float value )
{
getResource().debit( value );
}
public AccountResource getResource()
{
try
{
if (accRes == null) {
accRes = new AccountResource(this, _name) ;
Resource ref = org.omg.CosTransactions.ResourceHelper.
narrow(ORBManager.getPOA().corbaReference(accRes));
// Note the possibilities provided by the ORBManager to access the POA
// and then obtain the CORBA reference of the created AccountResource object
RecoveryCoordinator recoverycoordinator = OTSManager.get_current().
get_control().get_coordinator().register_resource(ref);
}
}
catch (Exception e)
{
System.err.println("ERROR - "+e);
}
return accRes;
}
...
}
To be considered an org.omg.CosTransactions.Resource, the AccountResource class must extend the org.omg.CosTransactions.ResourcePOA class generated by the CORBA IDL compiler. AccountResource provides methods similar to those of the Account class (credit, debit and balance), together with the appropriate methods to participate in the 2PC protocol. The following portion of code describes how the prepare, commit and rollback methods are implemented.
public class AccountResource extends org.omg.CosTransactions.ResourcePOA
{
public AccountResource(Account account, String name )
{
_name = name;
_account = account;
_initial_balance = account._balance;
_current_balance = _initial_balance;
}
public float balance()
{
return _current_balance;
}
public void credit( float value )
{
_current_balance += value;
}
public void debit( float value )
{
_current_balance -= value;
}
public org.omg.CosTransactions.Vote prepare()
throws org.omg.CosTransactions.HeuristicMixed,
org.omg.CosTransactions.HeuristicHazard
{
if ( _initial_balance == _current_balance ) //account not modified
return org.omg.CosTransactions.Vote.VoteReadOnly;
if ( _current_balance < 0 ) //integrity of the account is corrupted
return org.omg.CosTransactions.Vote.VoteRollback;
return org.omg.CosTransactions.Vote.VoteCommit;
}
public void rollback()
throws org.omg.CosTransactions.HeuristicCommit,
org.omg.CosTransactions.HeuristicMixed,
org.omg.CosTransactions.HeuristicHazard
{
//Nothing to do
}
public void commit()
throws org.omg.CosTransactions.NotPrepared,
org.omg.CosTransactions.HeuristicRollback,
org.omg.CosTransactions.HeuristicMixed,
org.omg.CosTransactions.HeuristicHazard
{
_account._balance = _current_balance;
}
public void commit_one_phase()
throws org.omg.CosTransactions.HeuristicHazard
{
_account._balance = _current_balance;
}
.....
private String _name;
private float _initial_balance;
private float _current_balance;
private Account _account;
}
Full source code for the banking application is included to provide you with a starting point for experimentation.
The bank client is an application program able to manage transactions either in a direct or an indirect management mode: respectively, with the org.omg.CosTransactions.TransactionFactory and org.omg.CosTransactions.Terminator interfaces, or with the org.omg.CosTransactions.Current interface. Transactions created by the client in the Banking application use the indirect mode.
Invoking a remote object within a CORBA environment means that the remote object implements a CORBA interface defined in a CORBA IDL file. The following Bank.idl describes the interfaces, and thus the possible kinds of distributed CORBA objects, involved in the banking application. No interface inherits the CosTransactions::TransactionalObject interface, which means that for remote invocations the transactional context is not propagated implicitly. However, since the Account object may have to register Resource objects that participate in transaction completion, a context is needed. In the following Bank.idl file, the operations defined in the Account interface explicitly have a CosTransactions::Control argument in their signature, meaning that it is passed explicitly by the caller - in this case the Bank Client program.
module arjuna {
module demo {
module jts {
module explicitremotebank {
interface Account
{
float balance(in CosTransactions::Control ctrl);
void credit( in CosTransactions::Control ctrl, in float value );
void debit( in CosTransactions::Control ctrl, in float value );
};
exception NotExistingAccount
{ };
interface Bank
{
Account create_account( in string name );
Account get_account( in string name )
raises( NotExistingAccount );
};
};
};
};
};
The following portion of code illustrates how a JTS transaction is started and terminated when the client asks to transfer money from one account to another. It also shows which packages need to be used in order to obtain the appropriate object instances (such as Current).
Note: The code below is a simplified view of the BankClient.java program. Only the transfer operation is illustrated; other operations manage transactions in the same way. (See src/com/arjuna/demo/jts/explicitremotebank/BankClient.java for details.)
package com.arjuna.demo.jts.explicitremotebank;
import com.arjuna.ats.jts.OTSManager;
public class BankClient
{
private Bank _bank;
....
// This operation is used to make a transfer
//from an account to another account
private void makeTransfer()
{
//get the name of the supplier(name_supplier) and
// the consumer(name_consumer)
// get the amount to transfer (famount)
...
try
{
//the following instruction asks a specific
// class to obtain a Current instance
Current current = OTSManager.get_current();
System.out.println("Beginning a User transaction to get balance");
current.begin();
Account supplier = _bank.get_account( name_supplier );
Account consumer = _bank.get_account( name_consumer );
supplier.debit( current.get_control(), famount );
//The Control is explicitly propagated
consumer.credit( current.get_control(), famount );
current.commit( true ); //true: report heuristics
}
catch (Exception e)
{
...
}
}
Since JTS is used, invocations against an ORB are needed, such as ORB and Object Adapter instantiation and initialisation. To ensure better portability, the ORB Portability API provides a set of methods that can be used as described below.
public static void main( String [] args )
{
....
myORB = ORB.getInstance("test");// Create an ORB instance
myORB.initORB(args, null); //Initialise the ORB
org.omg.CORBA.Object obj = null;
try
{
//Read the reference string from a file then convert to Object
....
obj = myORB.orb().string_to_object(stringTarget);
}
catch ( java.io.IOException ex )
{
...
}
Bank bank = BankHelper.narrow(obj);
....
}
The Bank object has mainly two operations: creating an account, which is added to the account list, and returning an Account object. No transactional instruction is performed by the Bank object. The following lines describe the implementation of the Bank CORBA object.
public class BankImpl extends BankPOA {
public BankImpl(OA oa)
{
_accounts = new java.util.Hashtable();
_oa = oa;
}
public Account create_account( String name )
{
AccountImpl acc = new AccountImpl(name);
_accounts.put( name, acc );
return com.arjuna.demo.jts.remotebank.AccountHelper.
narrow(_oa.corbaReference(acc));
}
public Account get_account(String name)
throws NotExistingAccount
{
AccountImpl acc = ( AccountImpl ) _accounts.get( name );
if ( acc == null )
throw new NotExistingAccount("The Account requested does not exist");
return com.arjuna.demo.jts.remotebank.AccountHelper.
narrow(_oa.corbaReference(acc));
}
private java.util.Hashtable _accounts;// Accounts created by the Bank
private OA _oa;
}
After having defined an implementation of the Bank object, we should now create an instance and make it available for client requests. This is the role of the Bank Server, which has the responsibility to create the ORB and the Object Adapter instances, then the Bank CORBA object, whose object reference is stored in a file well known to the bank client. The following lines describe how the Bank server is implemented.
public class BankServer
{
public static void main( String [] args )
{
ORB myORB = null;
RootOA myOA = null;
try
{
myORB = ORB.getInstance("ServerSide");
myOA = OA.getRootOA(myORB);
myORB.initORB(args, null);
myOA.initOA();
....
BankImpl bank = new BankImpl(myOA);
String reference = myORB.orb().
object_to_string(myOA.corbaReference(bank));
//Store the Object reference in the file
...
System.out.println("The bank server is now ready...");
myOA.run();
}
}
The Account object provides three main methods: balance, credit and debit. To provide transactional behaviour, rather than modifying the account directly, credit and debit delegate the work to an AccountResource object that, according to the transaction outcome, sets the account value either to its initial state or to its final state.
The AccountResource object implements the org.omg.CosTransactions.Resource interface and is therefore able to participate in the transaction commitment. To this end, the Account object registers the AccountResource object as a participant, after obtaining a reference to the org.omg.CosTransactions.Coordinator object, itself obtained via the org.omg.CosTransactions.Control object.
package com.arjuna.demo.jta.remotebank;
import org.omg.CosTransactions.*;
import ....
public class AccountImpl extends AccountPOA
{
float _balance;
AccountResource accRes = null;
public AccountImpl(String name )
{
_name = name;
_balance = 0;
}
public float balance(Control ctrl)
{
return getResource(ctrl).balance();
}
public void credit(Control ctrl, float value )
{
getResource(ctrl).credit( value );
}
public void debit(Control ctrl, float value )
{
getResource(ctrl).debit( value );
}
public AccountResource getResource(Control control)
{
try
{
if (accRes == null) {
accRes = new AccountResource(this, _name) ;
//The invocation on the ORB illustrates the fact that the same
//ORB instance created by the Bank Server is returned.
Resource ref = org.omg.CosTransactions.ResourceHelper.
narrow(OA.getRootOA(ORB.getInstance("ServerSide")).
corbaReference(accRes));
RecoveryCoordinator recoverycoordinator =
control.get_coordinator().register_resource(ref);
}
}
catch (Exception e){...}
return accRes;
}
...
}
To be considered a org.omg.CosTransactions.Resource, the AccountResource class must extend the class org.omg.CosTransactions.ResourcePOA generated by the CORBA IDL compiler. AccountResource provides methods similar to those of the Account class (credit, debit and balance), together with the methods needed to participate in the 2PC protocol. The following portion of code describes how the prepare, commit and rollback methods are implemented.
public class AccountResource extends org.omg.CosTransactions.ResourcePOA
{
public AccountResource(Account account, String name )
{
_name = name;
_account = account;
_initial_balance = account._balance;
_current_balance = _initial_balance;
}
public float balance()
{
return _current_balance;
}
public void credit( float value )
{
_current_balance += value;
}
public void debit( float value )
{
_current_balance -= value;
}
public org.omg.CosTransactions.Vote prepare()
throws org.omg.CosTransactions.HeuristicMixed,
org.omg.CosTransactions.HeuristicHazard
{
if ( _initial_balance == _current_balance )
return org.omg.CosTransactions.Vote.VoteReadOnly;
if ( _current_balance < 0 )
return org.omg.CosTransactions.Vote.VoteRollback;
return org.omg.CosTransactions.Vote.VoteCommit;
}
public void rollback()
throws org.omg.CosTransactions.HeuristicCommit,
org.omg.CosTransactions.HeuristicMixed,
org.omg.CosTransactions.HeuristicHazard
{
//Nothing to do
}
public void commit()
throws org.omg.CosTransactions.NotPrepared,
org.omg.CosTransactions.HeuristicRollback,
org.omg.CosTransactions.HeuristicMixed,
org.omg.CosTransactions.HeuristicHazard
{
_account._balance = _current_balance;
}
public void commit_one_phase()
throws org.omg.CosTransactions.HeuristicHazard
{
_account._balance = _current_balance;
}
.....
private float _initial_balance;
private float _current_balance;
private Account _account;
}
Full source code for the banking application is included to provide you with a starting point for experimentation.
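The voting rules used in prepare() above can be exercised in plain Java. In this sketch, Vote is a local stand-in enum for org.omg.CosTransactions.Vote and VoteDemo is a hypothetical helper; only the decision logic is taken from the listing above:

```java
// Minimal stand-in for org.omg.CosTransactions.Vote (illustration only).
enum Vote { VoteCommit, VoteRollback, VoteReadOnly }

// Mirrors the decision rules of AccountResource.prepare() above:
// unchanged balance -> read-only; negative balance -> rollback; else commit.
final class VoteDemo {
    static Vote prepare(float initialBalance, float currentBalance) {
        if (initialBalance == currentBalance)
            return Vote.VoteReadOnly;   // no change: coordinator can skip phase 2
        if (currentBalance < 0)
            return Vote.VoteRollback;   // invariant violated: force rollback
        return Vote.VoteCommit;
    }

    public static void main(String[] args) {
        System.out.println(prepare(100f, 100f)); // VoteReadOnly
        System.out.println(prepare(100f, -10f)); // VoteRollback
        System.out.println(prepare(100f, 150f)); // VoteCommit
    }
}
```

A VoteReadOnly answer lets the coordinator drop this participant from the second phase entirely, which is why an unchanged balance is worth detecting.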
The bank client is an application program able to manage transactions in either a direct or an indirect management mode: directly with the org.omg.CosTransactions.TransactionFactory and org.omg.CosTransactions.Terminator interfaces, or indirectly with the org.omg.CosTransactions.Current interface. Transactions created by the client in the Banking application use the indirect mode.
Invoking a remote object within a CORBA environment means that the remote object implements a CORBA interface defined in a CORBA IDL file. The following Bank.idl describes the interfaces, and thus the possible kinds of distributed CORBA objects, involved in the banking application. Only the Account interface inherits the CosTransactions::TransactionalObject interface; this means that an Account CORBA object is expected to be invoked within the scope of a transaction, and that the transaction context is implicitly propagated.
module arjuna {
module demo {
module jts {
module remotebank {
interface Account : CosTransactions::TransactionalObject
{
float balance();
void credit( in float value );
void debit( in float value );
};
exception NotExistingAccount
{ };
interface Bank
{
Account create_account( in string name );
Account get_account( in string name )
raises( NotExistingAccount );
};
};
};
};
};
The following portion of code illustrates how a JTS transaction is started and terminated when the client asks to transfer money from one account to another. It also shows which packages need to be used in order to obtain instances of the standard JTS API objects (such as Current).
Note: The code below is a simplified view of the BankClient.java program. Only the transfer operation is illustrated; other operations manage transactions in the same way. (For details, see src/com/arjuna/demo/jts/localbank/BankClient.java.)
package com.arjuna.demo.jta.remotebank;
import com.arjuna.ats.jts.OTSManager;
public class BankClient
{
private Bank _bank;
....
// This operation is used to make a transfer
// from an account to another account
private void makeTransfer()
{
//get the name of the supplier(name_supplier)
// and the consumer(name_consumer)
// get the amount to transfer (famount)
...
try
{
//the following instruction asks a
// specific class
// to obtain a Current instance
Current current = OTSManager.get_current();
System.out.println("Beginning a User transaction to get balance");
current.begin();
Account supplier = _bank.get_account( name_supplier );
Account consumer = _bank.get_account( name_consumer );
supplier.debit( famount );
consumer.credit( famount );
current.commit( );
}
catch (Exception e)
{
...
}
}
Since JTS is used, the application must perform ORB-related operations, such as instantiating and initialising the ORB and the Object Adapter. To improve portability, the ORB Portability API provides a set of methods for this purpose, used as described below.
public static void main( String [] args )
{ ....
myORB = ORB.getInstance("test");
myORB.initORB(args, null); //Initialise the ORB
org.omg.CORBA.Object obj = null;
try
{
//Read the reference string from
// a file then convert to Object
....
obj = myORB.orb().string_to_object(stringTarget);
}
catch ( java.io.IOException ex )
{
...
}
Bank bank = BankHelper.narrow(obj);
....
}
The Bank object provides two main operations: creating an account, which is added to the account list, and returning an existing Account object. The Bank object itself performs no transactional instructions. The following lines describe the implementation of the Bank CORBA object.
public class BankImpl extends BankPOA {
public BankImpl(OA oa)
{
_accounts = new java.util.Hashtable();
_oa = oa;
}
public Account create_account( String name )
{
AccountImpl acc = new AccountImpl(name);
_accounts.put( name, acc );
return com.arjuna.demo.jts.remotebank.AccountHelper.
narrow(_oa.corbaReference(acc));
}
public Account get_account(String name)
throws NotExistingAccount
{
AccountImpl acc = ( AccountImpl ) _accounts.get( name );
if ( acc == null )
throw new NotExistingAccount("The Account requested does not exist");
return com.arjuna.demo.jts.remotebank.AccountHelper.
narrow(_oa.corbaReference(acc));
}
private java.util.Hashtable _accounts;
// Accounts created by the Bank
private OA _oa;
}
Having defined an implementation of the Bank object, we now create an instance and make it available for client requests. This is the role of the Bank Server, which is responsible for creating the ORB and Object Adapter instances, then the Bank CORBA object, whose object reference is stored in a file well known to the bank client. The following lines describe how the Bank server is implemented.
public class BankServer
{
public static void main( String [] args )
{
ORB myORB = null;
RootOA myOA = null;
try
{
myORB = ORB.getInstance("ServerSide");
myOA = OA.getRootOA(myORB);
myORB.initORB(args, null);
myOA.initOA();
....
BankImpl bank = new BankImpl(myOA);
String reference = myORB.orb().
object_to_string(myOA.corbaReference(bank));
//Store the Object reference in the file
...
System.out.println("The bank server is now ready...");
myOA.run();
}
}
The Account object provides three main methods: balance, credit and debit. To provide transactional behaviour, rather than modifying the account directly, credit and debit delegate the work to an AccountResource object that, according to the transaction outcome, sets the account value either to its initial state or to its final state.
The AccountResource object implements the org.omg.CosTransactions.Resource interface and is therefore able to participate in the transaction commitment. To this end, the Account object registers the AccountResource object as a participant, after obtaining a reference to the org.omg.CosTransactions.Coordinator object, itself obtained via the org.omg.CosTransactions.Control object.
package com.arjuna.demo.jta.remotebank;
import ....
public class AccountImpl extends AccountPOA
{
float _balance;
AccountResource accRes = null;
public AccountImpl(String name )
{
_name = name;
_balance = 0;
}
public float balance()
{
return getResource().balance();
}
public void credit( float value )
{
getResource().credit( value );
}
public void debit( float value )
{
getResource().debit( value );
}
public AccountResource getResource()
{
try
{
if (accRes == null) {
accRes = new AccountResource(this, _name) ;
//The invocation on the ORB illustrates the
// fact that the same ORB instance created
// by the Bank Server is returned.
Resource ref = org.omg.CosTransactions.ResourceHelper.
narrow(OA.getRootOA(ORB.getInstance("ServerSide")).
corbaReference(accRes));
RecoveryCoordinator recoverycoordinator = OTSManager.get_current().
get_control().get_coordinator().register_resource(ref);
}
}
catch (Exception e)
{....}
return accRes;
}
...
}
To be considered a org.omg.CosTransactions.Resource, the AccountResource class must extend the class org.omg.CosTransactions.ResourcePOA generated by the CORBA IDL compiler. AccountResource provides methods similar to those of the Account class (credit, debit and balance), together with the methods needed to participate in the 2PC protocol. The following portion of code describes how the prepare, commit and rollback methods are implemented.
public class AccountResource
extends org.omg.CosTransactions.ResourcePOA
{
public AccountResource(Account account, String name )
{
_name = name;
_account = account;
_initial_balance = account._balance;
_current_balance = _initial_balance;
}
public float balance()
{
return _current_balance;
}
public void credit( float value )
{
_current_balance += value;
}
public void debit( float value )
{
_current_balance -= value;
}
public org.omg.CosTransactions.Vote prepare()
throws org.omg.CosTransactions.HeuristicMixed,
org.omg.CosTransactions.HeuristicHazard
{
if ( _initial_balance == _current_balance )
return org.omg.CosTransactions.Vote.VoteReadOnly;
if ( _current_balance < 0 )
return org.omg.CosTransactions.Vote.VoteRollback;
return org.omg.CosTransactions.Vote.VoteCommit;
}
public void rollback()
throws org.omg.CosTransactions.HeuristicCommit,
org.omg.CosTransactions.HeuristicMixed,
org.omg.CosTransactions.HeuristicHazard
{
//Nothing to do
}
public void commit()
throws org.omg.CosTransactions.NotPrepared,
org.omg.CosTransactions.HeuristicRollback,
org.omg.CosTransactions.HeuristicMixed,
org.omg.CosTransactions.HeuristicHazard
{
_account._balance = _current_balance;
}
public void commit_one_phase()
throws org.omg.CosTransactions.HeuristicHazard
{
_account._balance = _current_balance;
}
....
private float _initial_balance;
private float _current_balance;
private Account _account;
}
Full source code for the banking application is included to provide you with a starting point for experimentation.
From the JTS architectural point of view, the bank client is considered an application program able to manage transactions in either a direct or an indirect management mode: directly with the org.omg.CosTransactions.TransactionFactory and org.omg.CosTransactions.Terminator interfaces, or indirectly with the org.omg.CosTransactions.Current interface. Transactions created by the client in the Banking application use the indirect mode.
The following portion of code illustrates how a JTS transaction is started and terminated when the client asks to transfer money from one account to another. It also shows which packages need to be used in order to obtain instances of the appropriate objects (such as Current).
Note: The code below is a simplified view of the BankClient.java program. Only the transfer operation is illustrated; other operations manage transactions in the same way. (For details, see src/com/arjuna/demo/jts/localbank/BankClient.java.)
package com.arjuna.demo.jta.localbank;
import com.arjuna.ats.jts.OTSManager;
public class BankClient
{
private Bank _bank;
....
// This operation is used to make a transfer
// from an account to another account
private void makeTransfer()
{
System.out.print("Take money from : ");
String name_supplier = input();
System.out.print("Put money to : ");
String name_consumer = input();
System.out.print("Transfer amount : ");
String amount = input();
float famount = 0;
try
{
famount = new Float( amount ).floatValue();
}
catch ( java.lang.Exception ex )
{
System.out.println("Invalid float number, abort operation...");
return;
}
try
{
//the following instruction asks a specific
// class to obtain a Current instance
Current current = OTSManager.get_current();
System.out.println("Beginning a User transaction to get balance");
current.begin();
Account supplier = _bank.get_account( name_supplier );
Account consumer = _bank.get_account( name_consumer );
supplier.debit( famount );
consumer.credit( famount );
current.commit( );
}
catch (Exception e)
{
System.err.println("ERROR - "+e);
}
}
Since JTS is used, the application must perform ORB-related operations, such as instantiating and initialising the ORB and the Object Adapter. To improve portability, the ORB Portability API provides a set of methods for this purpose, used as described below.
public static void main( String [] args )
{
try
{
// Create an ORB instance
myORB = ORB.getInstance("test");
//Obtain the Root POA
myOA = OA.getRootOA(myORB);
//Initialise the ORB
myORB.initORB(args, null);
//Initialise the POA
myOA.initOA();
....
}
catch(Exception e)
{ .... }
}
The Bank object provides two main operations: creating an account, which is added to the account list, and returning an existing Account object. The Bank object itself performs no transactional instructions.
package com.arjuna.demo.jta.localbank;
public class Bank {
private java.util.Hashtable _accounts;
public Bank()
{
_accounts = new java.util.Hashtable();
}
public Account create_account( String name )
{
Account acc = new Account(name);
_accounts.put( name, acc );
return acc;
}
public Account get_account(String name)
throws NotExistingAccount
{
Account acc = ( Account ) _accounts.get( name );
if ( acc == null )
throw new NotExistingAccount("The Account requested does not exist");
return acc;
}
}
The Account object provides three main methods: balance, credit and debit. To provide transactional behaviour, rather than modifying the account directly, credit and debit delegate the work to an AccountResource object that, according to the transaction outcome, sets the account value either to its initial state or to its final state.
The AccountResource object implements the org.omg.CosTransactions.Resource interface and is therefore able to participate in the transaction commitment. To this end, the Account object registers the AccountResource object as a participant, after obtaining a reference to the org.omg.CosTransactions.Coordinator object, itself obtained via the org.omg.CosTransactions.Control object.
package com.arjuna.demo.jta.localbank;
public class Account
{
float _balance;
AccountResource accRes = null;
public Account(String name )
{
_name = name;
_balance = 0;
}
public float balance()
{
return getResource().balance();
}
public void credit( float value )
{
getResource().credit( value );
}
public void debit( float value )
{
getResource().debit( value );
}
public AccountResource getResource()
{
try
{
if (accRes == null) {
accRes = new AccountResource(this, _name) ;
Resource ref = org.omg.CosTransactions.ResourceHelper.
narrow(OA.getRootOA(ORB.getInstance("test")).corbaReference(accRes));
RecoveryCoordinator recoverycoordinator = OTSManager.get_current().
get_control().get_coordinator().register_resource(ref);
}
}
catch (Exception e)
{...}
return accRes;
}
...
}
To be considered a org.omg.CosTransactions.Resource, the AccountResource class must extend the class org.omg.CosTransactions.ResourcePOA generated by the CORBA IDL compiler. AccountResource provides methods similar to those of the Account class (credit, debit and balance), together with the methods needed to participate in the 2PC protocol. The following portion of code describes how the prepare, commit and rollback methods are implemented.
public class AccountResource extends org.omg.CosTransactions.ResourcePOA
{
public AccountResource(Account account, String name )
{
_name = name;
_account = account;
_initial_balance = account._balance;
_current_balance = _initial_balance;
}
public float balance()
{
return _current_balance;
}
public void credit( float value )
{
_current_balance += value;
}
public void debit( float value )
{
_current_balance -= value;
}
public org.omg.CosTransactions.Vote prepare()
throws org.omg.CosTransactions.HeuristicMixed,
org.omg.CosTransactions.HeuristicHazard
{
if ( _initial_balance == _current_balance )
return org.omg.CosTransactions.Vote.VoteReadOnly;
if ( _current_balance < 0 )
return org.omg.CosTransactions.Vote.VoteRollback;
return org.omg.CosTransactions.Vote.VoteCommit;
}
public void rollback()
throws org.omg.CosTransactions.HeuristicCommit,
org.omg.CosTransactions.HeuristicMixed,
org.omg.CosTransactions.HeuristicHazard
{
//Nothing to do
}
public void commit()
throws org.omg.CosTransactions.NotPrepared,
org.omg.CosTransactions.HeuristicRollback,
org.omg.CosTransactions.HeuristicMixed,
org.omg.CosTransactions.HeuristicHazard
{
_account._balance = _current_balance;
}
public void commit_one_phase()
throws org.omg.CosTransactions.HeuristicHazard
{
_account._balance = _current_balance;
}
.....
private float _initial_balance;
private float _current_balance;
private Account _account;
}
Full source code for the banking application is included to provide you with a starting point for experimentation.
The way the banking application is built and deployed in the previous trail does not make it persistent: accounts cannot be retrieved after the bank server is stopped or the application crashes. Moreover, it does not allow concurrent access to accounts without risking inconsistent values.
This trail presents two ways to build the banking application as a persistent and sharable application:
ArjunaCore exploits object-oriented techniques to present programmers with a toolkit of Java classes from which application classes can inherit to obtain desired properties, such as persistence and concurrency control. These classes form a hierarchy, part of which is shown below.
Figure 1 - ArjunaCore class hierarchy.
Apart from specifying the scopes of transactions, and setting appropriate locks within objects, the application programmer does not have any other responsibilities: ArjunaCore and Transactional Objects for Java (TXOJ) guarantee that transactional objects will be registered with, and be driven by, the appropriate transactions, and crash recovery mechanisms are invoked automatically in the event of failures.
Making an object persistent and recoverable means being able to store its final state, or to restore its initial state, according to the final status of a transaction, even in the presence of failures. ArjunaCore provides a set of techniques to save object states to, and retrieve them from, the Object Store. All objects made persistent with these ArjunaCore mechanisms are assigned unique identifiers (instances of the Uid class) when they are created, used to identify them within the object store. Because several applications require common functionality for persistence and recovery, objects are stored and retrieved from the object store using the same mechanism: the classes OutputObjectState and InputObjectState.
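The idea of an object store keyed by Uid can be sketched in plain Java. Here java.util.UUID stands in for Narayana's Uid, and a Map for the real object store; ToyObjectStore and its methods are illustrative, not the ArjunaCore API:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Toy "object store": serialized states keyed by a unique identifier,
// mirroring how ArjunaCore keys persistent states by Uid.
final class ToyObjectStore {
    private final Map<UUID, String> states = new HashMap<>();

    // deactivate: save the object's state under its Uid
    void write(UUID uid, String state) { states.put(uid, state); }

    // activate: reload the state for a given Uid
    String read(UUID uid) { return states.get(uid); }
}

final class UidDemo {
    public static void main(String[] args) {
        ToyObjectStore store = new ToyObjectStore();
        UUID uid = UUID.randomUUID();      // assigned once, at creation time
        store.write(uid, "balance=42.0");  // deactivation saves the state
        // A later activation, possibly after a restart, uses the same Uid:
        System.out.println(store.read(uid)); // prints balance=42.0
    }
}
```

The point is that the identifier, not an in-memory reference, is what survives across process lifetimes; this is why the Uid-taking constructor shown later in this trail is enough to re-materialise an account.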
At the root of the class hierarchy, given in Figure 1, is the class StateManager. This class is responsible for object activation and deactivation and object recovery. The simplified signature of the class is:
public abstract class StateManager
{
public boolean activate ();
public boolean deactivate (boolean commit);
public Uid get_uid (); // object's identifier
// methods to be provided by a derived class
public boolean restore_state (InputObjectState os);
public boolean save_state (OutputObjectState os);
protected StateManager ();
protected StateManager (Uid id);
};
Objects are assumed to be of three possible flavours. They may simply be recoverable, in which case StateManager will attempt to generate and maintain appropriate recovery information for the object. Such objects have lifetimes that do not exceed the application program that creates them. Objects may be recoverable and persistent, in which case the lifetime of the object is assumed to be greater than that of the creating or accessing application, so that in addition to maintaining recovery information StateManager will attempt to automatically load (unload) any existing persistent state for the object by calling the activate (deactivate) operation at appropriate times. Finally, objects may possess none of these capabilities, in which case no recovery information is ever kept nor is object activation/deactivation ever automatically attempted.
According to its activation or deactivation, a transactional object for Java moves from a passive state to an active state and vice versa. The fundamental life cycle of a persistent object in TXOJ is shown in Figure 2.
Figure 2 - The life cycle of a persistent object.
When a transactional object for Java is deactivated or activated, the save_state and restore_state operations are respectively invoked. These operations must be implemented by the programmer, since StateManager cannot detect user-level state changes. This gives the programmer the ability to decide which parts of an object's state should be made persistent. For example, for a spreadsheet it may not be necessary to save all entries if some values can simply be recomputed. The save_state implementation for a class Example that has two integer member variables called A and B and one String member variable called C could simply be:
public boolean save_state(OutputObjectState o)
{
if (!super.save_state(o))
return false;
try
{
o.packInt(A);
o.packInt(B);
o.packString(C);
}
catch (Exception e)
{
return false;
}
return true;
}
The corresponding restore_state implementation, which retrieves those values, is:
public boolean restore_state(InputObjectState o)
{
if (!super.restore_state(o))
return false;
try
{
A = o.unpackInt();
B = o.unpackInt();
C = o.unpackString();
}
catch (Exception e)
{
return false;
}
return true;
}
The OutputObjectState and InputObjectState classes provide operations to pack and unpack, respectively, instances of the standard Java data types. In other words, for a standard Java data type such as Long or Short, there are corresponding pack and unpack methods, i.e., packLong or packShort and unpackLong or unpackShort.
Note: it is necessary for all save_state and restore_state methods to call super.save_state and super.restore_state. This is to cater for improvements in the crash recovery mechanisms.
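The pack/unpack discipline above can be demonstrated with the standard java.io streams, which follow the same rule as OutputObjectState/InputObjectState: fields must be unpacked in exactly the order they were packed. DataOutputStream here is only an analogy for the Narayana classes:

```java
import java.io.*;

final class PackDemo {
    public static void main(String[] args) throws IOException {
        int a = 7, b = 11;
        String c = "hello";

        // "save_state": pack A, B, C in a fixed order
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buf);
        out.writeInt(a);
        out.writeInt(b);
        out.writeUTF(c);
        out.flush();

        // "restore_state": unpack in exactly the same order
        DataInputStream in =
            new DataInputStream(new ByteArrayInputStream(buf.toByteArray()));
        int a2 = in.readInt();
        int b2 = in.readInt();
        String c2 = in.readUTF();

        System.out.println(a2 + " " + b2 + " " + c2); // prints 7 11 hello
    }
}
```

Swapping the order of any two reads would silently corrupt the restored state, which is exactly why save_state and restore_state must mirror each other field for field.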
The concurrency controller is implemented by the class LockManager, which provides sensible default behaviour while allowing the programmer to override it if deemed necessary by the particular semantics of the class being programmed. The primary programmer interface to the concurrency controller is the setlock operation. By default, the runtime system enforces strict two-phase locking following a multiple reader, single writer policy on a per-object basis. However, as shown in Figure 1, by inheriting from the Lock class it is possible for programmers to provide their own lock implementations with different lock conflict rules to enable type-specific concurrency control.
Lock acquisition is (of necessity) under programmer control, since just as StateManager cannot determine if an operation modifies an object, LockManager cannot determine if an operation requires a read or write lock. Lock release, however, is under control of the system and requires no further intervention by the programmer. This ensures that the two-phase property can be correctly maintained.
public abstract class LockManager extends StateManager
{
public LockResult setlock (Lock toSet, int retry, int timeout);
};
The LockManager class is primarily responsible for managing requests to set a lock on an object or to release a lock as appropriate. However, since it is derived from StateManager, it can also control when some of the inherited facilities are invoked. For example, LockManager assumes that the setting of a write lock implies that the invoking operation must be about to modify the object. This may in turn cause recovery information to be saved if the object is recoverable. In a similar fashion, successful lock acquisition causes activate to be invoked.
The code below shows how we may try to obtain a write lock on an object:
public class Example extends LockManager
{
public boolean foobar ()
{
AtomicAction A = new AtomicAction();
/*
* The ArjunaCore AtomicAction class is here used to create
* a transaction. Any interface provided by the JTA or
* JTS interfaces that allow to create transactions can
* be used in association with the Locking mechanisms
* described in this trail.
*/
boolean result = false;
A.begin();
if (setlock(new Lock(LockMode.WRITE), 0) == LockResult.GRANTED)
{
/*
* Do some work, and TXOJ will
* guarantee ACID properties.
*/
// automatically aborts if fails
if (A.commit() == AtomicAction.COMMITTED)
{
result = true;
}
}
else
A.rollback();
return result;
}
}
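The multiple reader, single writer policy that LockManager enforces can be illustrated with the JDK's own ReentrantReadWriteLock. This is an analogy only: TXOJ additionally ties lock release to transaction completion, which plain JDK locks do not do:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

final class RwDemo {
    public static void main(String[] args) {
        ReentrantReadWriteLock rw = new ReentrantReadWriteLock();

        // Several readers may hold the read lock at the same time...
        boolean r1 = rw.readLock().tryLock();
        boolean r2 = rw.readLock().tryLock();
        // ...but a write lock is refused while any read lock is held.
        boolean w = rw.writeLock().tryLock();

        System.out.println(r1 + " " + r2 + " " + w); // prints true true false

        rw.readLock().unlock();
        rw.readLock().unlock();
        // With all readers gone, the write lock can now be acquired.
        System.out.println(rw.writeLock().tryLock()); // prints true
    }
}
```

The same conflict table underlies LockMode.READ and LockMode.WRITE: READ conflicts only with WRITE, while WRITE conflicts with everything.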
More details on Transactional Object For Java can be found in the ArjunaCore Programming Guide.
The banking application consists of a Bank object that contains a list of Account objects, each of which has a String (the name) and a float (the value) as member variables. Clearly, from the persistence point of view, an Account object needs to store its name and its current balance, while the Bank object needs to store the list of accounts that it manages.
The banking application with Transactional Objects for Java (TXOJ) is configured to use the JTS interfaces as the API for creating transactions, and an ORB to deploy it. The distribution is provided to work with the bundled JacORB version.
Note: Ensure that the JacORB jar files are added to your CLASSPATH.
- Start the Server
java com.arjuna.demo.jts.txojbank.BankServer
- In a separate window, start the client
java com.arjuna.demo.jts.txojbank.BankClient
As in the demonstrations presented in the previous trails, the client is presented with the same menu of operations, such as creating an account, crediting/withdrawing money to/from an account, and making a transfer.
Building the banking application with TXOJ tools
Since a distributed version has been adopted to present the application with Transactional Objects for Java, an IDL file named Bank.idl, described below, is needed. The difference from the Bank.idl presented in previous trails is that the Bank interface also inherits the CosTransactions::TransactionalObject interface: since the Bank object now needs to modify its account list transactionally, it is considered a transactional CORBA object.
module arjuna {
module demo {
module jts {
module txojbank {
interface Account : CosTransactions::TransactionalObject
{
float balance();
void credit( in float value );
void debit( in float value );
};
exception NotExistingAccount
{ };
interface Bank : CosTransactions::TransactionalObject
{
Account create_account( in string name );
Account get_account( in string name )
raises( NotExistingAccount );
};
};
};
};
};
Basically the client program (src/com/arjuna/demo/jts/txojbank/BankClient.java) is equivalent to the one described in the distributed JTS version with implicit propagation; the difference is the package name.
To benefit from the persistence and locking mechanisms provided by ArjunaCore, a user class can inherit from the appropriate class (StateManager for recovery, LockManager for recovery and concurrency control). The AccountImpl class that implements the Account interface inherits from LockManager and implements the AccountOperations interface generated by the CORBA IDL compiler. Since multiple inheritance is not allowed in Java, inheriting from the AccountPOA class, as done in the simple JTS remote version, in addition to LockManager is not possible. That is why this version uses the CORBA TIE mechanism to associate a servant with a CORBA object reference.
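The TIE approach can be sketched in plain Java: the generated tie class extends the POA skeleton and forwards every operation to a delegate that implements the *Operations interface, which leaves the delegate free to extend LockManager instead. The classes below are illustrative stand-ins for the IDL-generated ones, not the real generated code:

```java
// Stand-in for the IDL-generated AccountOperations interface.
interface AccountOps {
    float balance();
}

// Stand-in for the IDL-generated tie class: in real CORBA code it
// would extend AccountPOA and forward each operation to the delegate.
final class AccountTie /* extends AccountPOA */ implements AccountOps {
    private final AccountOps delegate;
    AccountTie(AccountOps delegate) { this.delegate = delegate; }
    public float balance() { return delegate.balance(); }
}

// The servant is free to extend another class (LockManager in TXOJ)
// because it only needs to implement the operations interface.
final class AccountServant /* extends LockManager */ implements AccountOps {
    public float balance() { return 42.0f; }
}

final class TieDemo {
    public static void main(String[] args) {
        AccountOps account = new AccountTie(new AccountServant());
        System.out.println(account.balance()); // prints 42.0
    }
}
```

The single-inheritance slot of the servant is thus freed for LockManager, while the tie object occupies the POA inheritance slot on its behalf.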
The Java interface definition of the AccountImpl class is given below:
public class AccountImpl extends LockManager implements AccountOperations
{
float _balance;
String _name;
public AccountImpl(String name );
public AccountImpl(Uid uid);
public void finalize ();
public float balance();
public void credit( float value );
public void debit( float value );
public boolean save_state (OutputObjectState os, int ObjectType);
public boolean restore_state (InputObjectState os, int ObjectType);
public String type();
}
public void finalize ()
{
super.terminate();
}
public String type ()
{
return "/StateManager/LockManager/BankingAccounts";
}
To use an existing persistent object, a special constructor is required that takes the Uid of the persistent object; the implementation of such a constructor is given below:
public AccountImpl(Uid uid)
{
super(uid);
// Invoking super will lead to invoke the
//restore_state method of this AccountImpl class
}
No particular behaviour is applied by the constructor that takes the Uid parameter. The following constructor is used to create a new Account:
public AccountImpl(String name )
{
super(ObjectType.ANDPERSISTENT);
_name = name;
_balance = 0;
}
The finalize method of the AccountImpl class is only required to call the terminate operation of LockManager.
The implementations of save_state and restore_state are relatively simple for this example:
public boolean save_state (OutputObjectState os, int ObjectType)
{
if (!super.save_state(os, ObjectType))
return false;
try
{
os.packString(_name);
os.packFloat(_balance);
return true;
}
catch (Exception e)
{
return false;
}
}
public boolean restore_state (InputObjectState os, int ObjectType)
{
if (!super.restore_state(os, ObjectType))
return false;
try
{
_name = os.unpackString();
_balance = os.unpackFloat();
return true;
}
catch (Exception e)
{
return false;
}
}
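The two methods must be symmetric: restore_state must unpack fields in exactly the order in which save_state packed them. As an illustration only, the following self-contained sketch mimics that round trip using plain java.io streams; DataOutputStream and DataInputStream stand in for the ArjunaCore OutputObjectState and InputObjectState classes, and AccountStateSketch is an invented name, not part of the product API:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Illustrative stand-in for AccountImpl's save_state/restore_state logic.
public class AccountStateSketch {
    String name;
    float balance;

    // Pack the fields in a fixed order, as save_state does with packString/packFloat.
    byte[] saveState() {
        try {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            DataOutputStream os = new DataOutputStream(buf);
            os.writeUTF(name);      // analogous to os.packString(_name)
            os.writeFloat(balance); // analogous to os.packFloat(_balance)
            return buf.toByteArray();
        } catch (IOException e) {
            throw new RuntimeException(e); // cannot happen with an in-memory stream
        }
    }

    // Unpack in the same order, as restore_state does with unpackString/unpackFloat.
    void restoreState(byte[] state) {
        try {
            DataInputStream is = new DataInputStream(new ByteArrayInputStream(state));
            name = is.readUTF();
            balance = is.readFloat();
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        AccountStateSketch a = new AccountStateSketch();
        a.name = "savings";
        a.balance = 100.0f;

        AccountStateSketch b = new AccountStateSketch();
        b.restoreState(a.saveState());
        System.out.println(b.name + " " + b.balance); // savings 100.0
    }
}
```

If the pack and unpack orders diverge, the restored state is silently corrupted, which is why the real methods delegate to super first and then handle their own fields in a fixed sequence.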
Because the AccountImpl class is derived from the LockManager class, each of its operations must first acquire an appropriate lock. The balance operation is implemented as follows:
public float balance()
{
float result = 0;
if (setlock(new Lock(LockMode.READ), 0) == LockResult.GRANTED)
{
result = _balance;
}
...
return result;
}
Since the balance operation only reads the current balance, acquiring a lock in READ mode is sufficient. This is not the case for the credit and debit methods, which modify the current balance and therefore require a lock in WRITE mode.
public void credit( float value )
{
if (setlock(new Lock(LockMode.WRITE), 0) == LockResult.GRANTED)
{
_balance += value;
}
...
}
public void debit( float value )
{
if (setlock(new Lock(LockMode.WRITE), 0) == LockResult.GRANTED)
{
_balance -= value;
}
...
}
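The READ/WRITE compatibility rules behind setlock can be pictured with the standard java.util.concurrent read-write lock, which follows the same sharing rules. This is only an analogy: LockManager locks are transactional and are released at transaction commit or abort, not explicitly as below.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Demonstrates the lock-compatibility rules behind LockMode.READ and LockMode.WRITE:
// many readers may hold the lock together, but a writer requires exclusive access.
public class LockModeSketch {
    public static void main(String[] args) {
        ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

        // Two READ locks are compatible (balance() running in two transactions).
        boolean firstRead = lock.readLock().tryLock();
        boolean secondRead = lock.readLock().tryLock();
        System.out.println("two readers granted: " + (firstRead && secondRead));

        // A WRITE lock (credit()/debit()) conflicts with outstanding readers.
        System.out.println("writer granted while readers hold: "
                + lock.writeLock().tryLock());

        lock.readLock().unlock();
        lock.readLock().unlock();

        // With the readers gone, the writer can proceed.
        System.out.println("writer granted after release: "
                + lock.writeLock().tryLock());
    }
}
```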
Full source code for the AccountImpl class (src/com/arjuna/demo/jts/txojbank/AccountImpl.java) is included to provide you with a starting point for experimentation.
To take advantage of the persistence and locking mechanisms provided by ArjunaCore, a user class can inherit from the appropriate class (StateManager for recovery, or LockManager for recovery and concurrency control). The BankImpl class, which implements the Bank interface, inherits from LockManager and implements the BankOperations interface generated by the CORBA IDL compiler. Since multiple inheritance is not allowed in Java, it is not possible to inherit from the BankPOA class, as was done in the simple JTS remote version, in addition to LockManager. For this reason, this version uses the CORBA TIE mechanism to associate a servant with a CORBA object reference.
The Java interface definition of the BankImpl class is given below:
public class BankImpl extends LockManager implements BankOperations
{
public BankImpl(OA oa);
public BankImpl(Uid uid, OA oa);
public BankImpl(Uid uid);
public Account create_account( String name );
public Account get_account( String name );
public boolean save_state (OutputObjectState os, int ObjectType);
public boolean restore_state (InputObjectState os, int ObjectType);
public String type();
public static final int ACCOUNT_SIZE = 10;
// ACCOUNT_SIZE is the maximum number of accounts
private String [] accounts;
private int numberOfAccounts;
private ORB _orb;
private OA _oa;
private java.util.Hashtable _accounts; //The list of accounts
}
Using an existing persistent object requires a special constructor that takes the Uid of the persistent object; the implementation of such a constructor is given below:
public BankImpl(Uid uid)
{
super(uid);
_accounts = new java.util.Hashtable();
numberOfAccounts = 0;
accounts = new String[ACCOUNT_SIZE];
}
The following constructor is invoked during the first creation of the Bank Object.
public BankImpl(OA oa)
{ super(ObjectType.ANDPERSISTENT);
_accounts = new java.util.Hashtable();
_oa = oa;
numberOfAccounts = 0;
accounts = new String[ACCOUNT_SIZE];
}
The following constructor is invoked on each subsequent BankServer restart: a bank already exists and must be recreated. Invoking super, the constructor of the inherited class, causes the restore_state method of the BankImpl class, described below, to be executed, rebuilding the list of previously created accounts, if any.
public BankImpl(Uid uid, OA oa)
{ super(uid);
_accounts = new java.util.Hashtable();
_oa = oa;
numberOfAccounts = 0;
accounts = new String[ACCOUNT_SIZE];
}
The finalize method of the BankImpl class is only required to call the terminate operation of LockManager.
public void finalize () { super.terminate(); }
public Account create_account( String name )
{
AccountImpl acc;
AccountPOA account = null;
//Attempt to obtain the lock for change
if (setlock(new Lock(LockMode.WRITE), 0) == LockResult.GRANTED)
{
//Check if the maximum number of accounts is not reached
if (numberOfAccounts < ACCOUNT_SIZE)
{
acc = new AccountImpl(name); //Create a new account
//Use the TIE mechanism to create a CORBA object
account = new AccountPOATie(acc);
//Add the account to the list of accounts that
//facilitate to retrieve accounts
_accounts.put( name, acc);
//The Uid of the created account is put in the array
accounts[numberOfAccounts] = acc.get_uid().toString();
numberOfAccounts++;
}
}
return com.arjuna.demo.jts.txojbank.
AccountHelper.narrow(_oa.corbaReference(account));
}
public Account get_account(String name)
throws NotExistingAccount
{
// Only the hashtable list is used to retrieve the account
AccountImpl acc = ( AccountImpl ) _accounts.get( name );
if ( acc == null )
throw new NotExistingAccount("The Account requested does not exist");
// The null check must precede creation of the TIE object
AccountPOA account = new AccountPOATie(acc);
return com.arjuna.demo.jts.txojbank.
AccountHelper.narrow(_oa.corbaReference(account));
}
public boolean save_state (OutputObjectState os, int ObjectType)
{
if (!super.save_state(os, ObjectType))
return false;
try
{
os.packInt(numberOfAccounts);
if (numberOfAccounts > 0)
{
// All Uid located in the array will be saved
for (int i = 0; i < numberOfAccounts; i++)
os.packString(accounts[i]);
}
return true;
}
catch (Exception e)
{
return false;
}
}
public boolean restore_state (InputObjectState os, int ObjectType)
{
if (!super.restore_state(os, ObjectType))
{
return false;
}
try
{
numberOfAccounts = os.unpackInt();
if (numberOfAccounts > 0)
{
for (int i = 0; i < numberOfAccounts; i++)
{
accounts[i] = os.unpackString();
//each stored Uid is re-used to recreate
//a stored account object
AccountImpl acc = new AccountImpl(new Uid(accounts[i]));
acc.activate();
//Once recreated the account object
//is activated and added to the list.
_accounts.put( acc.getName(), acc);
}
}
return true;
}
catch (Exception e)
{
return false;
}
}
public String type ()
{
return "/StateManager/LockManager/BankServer";
}
Full source code for the BankImpl class (src/com/arjuna/demo/jts/txojbank/BankImpl.java) is included to provide you with a starting point for experimentation.
The role of the BankServer class is mainly to initialise the ORB and the Object Adapter, and to create the default Bank object responsible for creating banking accounts.
Globally the BankServer has the following structure.
...
myORB = ORB.getInstance("ServerSide");
myOA = OA.getRootOA(myORB);
myORB.initORB(args, null);
myOA.initOA();
...
This is done using the ORB Portability API.

If the Bank object already exists, its Uid is read from a file and used to re-activate the object:

...
java.io.FileInputStream file = new java.io.FileInputStream("UidBankFile");
java.io.InputStreamReader input = new java.io.InputStreamReader(file);
java.io.BufferedReader reader = new java.io.BufferedReader(input);
String stringUid = reader.readLine();
file.close();
_bank = new BankImpl(new Uid(stringUid), myOA);
boolean result = _bank.activate();
...

Otherwise, a new Bank object is created and its Uid is saved to the file for use on later restarts:

...
_bank = new BankImpl(myOA);
java.io.FileOutputStream file = new java.io.FileOutputStream("UidBankFile");
java.io.PrintStream pfile = new java.io.PrintStream(file);
pfile.println(_bank.get_uid().toString());
file.close();
...
JTS supports the construction of both local and distributed transactional applications which access databases using the JDBC APIs. JDBC supports two-phase commit of transactions, and is similar to the XA X/Open standard. The JDBC support is found in the com.arjuna.ats.jdbc package.
The JTS approach to incorporating JDBC connections within transactions is to provide transactional JDBC drivers through which all interactions occur. These drivers intercept all invocations and ensure that they are registered with, and driven by, appropriate transactions. There is a single type of transactional driver through which any JDBC driver can be driven; obviously if the database is not transactional then ACID properties cannot be guaranteed. This driver is com.arjuna.ats.jdbc.TransactionalDriver, which implements the java.sql.Driver interface.
The driver may be directly instantiated and used within an application. For example:
TransactionalDriver arjunaJDBC2Driver = new TransactionalDriver();
Alternatively, it can be registered with the JDBC driver manager (java.sql.DriverManager) by adding it to the jdbc.drivers Java system property. This property contains a colon-separated list of driver class names that are loaded by the JDBC driver manager when it is initialised, for instance:
jdbc.drivers=foo.bar.Driver:mydata.sql.Driver:bar.test.myDriver
On running an application, it is the DriverManager's responsibility to load all the drivers found in the jdbc.drivers system property. For example, this is where the driver for the Oracle database may be defined. When opening a connection to a database, the DriverManager chooses the most appropriate driver from the previously loaded drivers.
A program can also explicitly load JDBC drivers at any time. For example, the my.sql.Driver is loaded with the following statement:
Class.forName("my.sql.Driver");
Calling Class.forName() automatically registers the driver with the JDBC driver manager. It is also possible to explicitly register an instance of the JDBC driver using the registerDriver method of the DriverManager. This is the case, for instance, for the TransactionalDriver, which can be registered as follows:
TransactionalDriver arjunaJDBC2Driver = new TransactionalDriver();
DriverManager.registerDriver(arjunaJDBC2Driver);
When you have loaded a driver, it is available for making a connection with a DBMS.
Once a driver is loaded and ready for a connection to be made, instances of the Connection class can be created using the getConnection method on the DriverManager, as follows:
Connection con = DriverManager.getConnection(url, username, password);
Version 2.0 of the JDBC API introduced a new way to obtain instances of the Connection class, via the DataSource and XADataSource interfaces; the latter creates transactional connections. When using a JDBC 2.0 driver, Narayana will use the appropriate DataSource whenever a connection to the database is made. It will then obtain XAResources and register them with the transaction via the JTA interfaces. It is these XAResources which the transaction service will use when the transaction terminates in order to drive the database to either commit or rollback the changes made via the JDBC connection.
There are two ways in which the JDBC 2.0 support can obtain XADataSources. These will be explained in the following sections. Note, for simplicity we shall assume that the JDBC 2.0 driver is instantiated directly by the application.
Java Naming and Directory Interface (JNDI)
To get the ArjunaJDBC2Driver class to use a JNDI-registered XADataSource, it is first necessary to create the XADataSource instance and store it in an appropriate JNDI implementation. Details of how to do this can be found in the JDBC 2.0 tutorial available at JavaSoft. An example is shown below:
XADataSource ds = new MyXADataSource();
Hashtable env = new Hashtable();
String initialCtx = PropertyManager.
getProperty("Context.INITIAL_CONTEXT_FACTORY");
env.put(Context.INITIAL_CONTEXT_FACTORY, initialCtx);
InitialContext ctx = new InitialContext(env);
ctx.bind("jdbc/foo", ds);
Where the Context.INITIAL_CONTEXT_FACTORY property is the JNDI way of specifying the type of JNDI implementation to use.
Then the application must pass an appropriate connection URL to the JDBC 2.0 driver:
Properties dbProps = new Properties();
dbProps.setProperty(TransactionalDriver.userName, "user");
dbProps.setProperty(TransactionalDriver.password, "password");
TransactionalDriver arjunaJDBC2Driver = new TransactionalDriver();
Connection connection = arjunaJDBC2Driver.
connect("jdbc:arjuna:jdbc/foo", dbProps);
The JNDI URL must be pre-pended with jdbc:arjuna: in order for the ArjunaJDBC2Driver to recognise that the DataSource must participate within transactions and be driven accordingly.
Dynamic class instantiation
Many JDBC implementations provide proprietary implementations of XADataSources with non-standard extensions to the specification. To allow the application to remain isolated from the actual JDBC 2.0 implementation it is using, and yet continue to be able to use these extensions, Narayana hides the details of these proprietary implementations using dynamic class instantiation. In addition, JNDI is not required when using this mechanism, because the actual implementation of the XADataSource is instantiated directly, albeit in a manner which does not tie an application or driver to a specific implementation. Narayana therefore has several classes which are specific to particular JDBC implementations; these can be selected at runtime by the application setting the dynamicClass property appropriately:
Database Type     Property Name
Cloudscape 3.6    com.arjuna.ats.internal.jdbc.drivers.cloudscape_3_6
Sequelink 5.1     com.arjuna.ats.internal.jdbc.drivers.sequelink_5_1
Oracle 8.1.6      com.arjuna.ats.internal.jdbc.drivers.oracle_8_1_6
SQL Server 2000   com.arjuna.ats.internal.jdbc.drivers.sqlserver_2_2
The application code must specify which dynamic class the TransactionalDriver should instantiate when setting up the connection:
Properties dbProps = new Properties();
dbProps.setProperty(TransactionalDriver.userName, "user");
dbProps.setProperty(TransactionalDriver.password, "password");
dbProps.setProperty(TransactionalDriver.dynamicClass,
"com.arjuna.ats.internal.jdbc.drivers.sequelink_5_0");
TransactionalDriver arjunaJDBC2Driver = new TransactionalDriver();
Connection connection = arjunaJDBC2Driver.connect(
"jdbc:arjuna:sequelink://host:port;databaseName=foo", dbProps);
Note on properties used by the com.arjuna.ats.jdbc.TransactionalDriver class
Once the connection has been established (for example, using the java.sql.DriverManager.getConnection method), all operations on the connection will be monitored by Narayana. Once created, the driver and any connection can be used in the same way as any other JDBC driver or connection.
Narayana connections can be used within multiple different transactions simultaneously, i.e., different threads, with different notions of the current transaction, may use the same JDBC connection. Narayana performs connection pooling for each transaction within the JDBC connection. So, although multiple threads may use the same instance of the JDBC connection, internally this may be using a different connection instance per transaction. With the exception of close, all operations performed on the connection at the application level will only be performed on this transaction-specific connection.
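As a rough mental model only (the class below is invented for illustration and is not part of the Narayana API), this per-transaction multiplexing behaves like a map from transaction identifier to physical connection, held behind the one logical connection:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative model of one logical JDBC connection multiplexing
// per-transaction physical connections, as the transactional driver
// does internally.
public class LogicalConnectionSketch {
    // Stand-in for a physical database connection.
    static class PhysicalConnection { }

    private final Map<String, PhysicalConnection> perTransaction = new HashMap<>();

    // Each transaction gets its own physical connection; repeated calls
    // from the same transaction reuse it.
    PhysicalConnection forTransaction(String txId) {
        return perTransaction.computeIfAbsent(txId, id -> new PhysicalConnection());
    }

    public static void main(String[] args) {
        LogicalConnectionSketch logical = new LogicalConnectionSketch();
        PhysicalConnection a1 = logical.forTransaction("tx-A");
        PhysicalConnection a2 = logical.forTransaction("tx-A");
        PhysicalConnection b = logical.forTransaction("tx-B");
        System.out.println("same tx reuses connection: " + (a1 == a2));
        System.out.println("different tx gets its own: " + (a1 != b));
    }
}
```

This is why two threads working in different transactions never see each other's uncommitted work through the shared logical connection.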
Narayana will automatically register the JDBC driver connection with the transaction via an appropriate resource. When the transaction terminates, this resource will be responsible for either committing or rolling back any changes made to the underlying database via appropriate calls on the JDBC driver.
More details on the way to manage applications using the JDBC API can be found in the Programming Guide.
Compared with its structure in the previous trails, the banking application described here has been slightly simplified. In this version, which creates local JTA transactions, the accounts managed by a bank object are in fact rows within a SQL relational table named "accounts". When the Bank object is asked, for instance, to create an account or to get information on an account, it performs SQL statements such as INSERT or SELECT.
To execute the demonstration, launch the following program:
java com.arjuna.demo.jta.jdbcbank.BankClient -host <hostName>
-port portNumber -username <userName> -dbName <DBName>
-password <password> -clean|-create
Where:
Note: Due to an issue with Oracle, it is possible that an XA exception is thrown when attempting to perform this test (see the Release Notes). If an XA error is returned, you can set the property com.arjuna.ats.jdbc.isolationLevel to TRANSACTION_READ_COMMITTED.
This property can be added to the previous command as follows:
java -Dcom.arjuna.ats.jdbc.isolationLevel=TRANSACTION_READ_COMMITTED
com.arjuna.demo.jta.jdbcbank.BankClient -host <hostName>
-port portNumber -username <userName>
-password <password> -clean|-create
The following Banking application illustrates some methods that use the JDBC API. In this application, a JDBC connection is created via an XADataSource obtained through JNDI operations, as explained in the previous trail (the JDBC introduction). The BankClient class instantiates an XADataSource and binds it to a JNDI name so that it can be retrieved later to create transactional connections. The portion of code below illustrates how this is done with Oracle (tested on version 9i). Similar code could be tested against another database by providing the appropriate XADataSource implementation. Details of the BankClient class can be found in the file src/com/arjuna/demo/jta/jdbcbank/BankClient.java
package com.arjuna.demo.jta.jdbcbank;
import javax.naming.*;
import java.util.Hashtable;
import oracle.jdbc.xa.client.OracleXADataSource;
import com.arjuna.ats.jdbc.common.jdbcPropertyManager;
public class BankClient
{
.....
public static void main(String[] args)
{
//Provide the appropriate information to access the database
for (int i = 0; i < args.length; i++)
{
if (args[i].compareTo("-host") == 0)
host = args[i + 1];
if (args[i].compareTo("-port") == 0)
port = args[i + 1];
if (args[i].compareTo("-username") == 0)
user = args[i + 1];
if (args[i].compareTo("-password") == 0)
password = args[i + 1];
if (args[i].compareTo("-dbName") == 0)
dbName = args[i + 1];
....
}
try
{
// create DataSource
OracleXADataSource ds = new OracleXADataSource();
ds.setURL("jdbc:oracle:thin:@"+host+":"+port+":"+dbName);
// now stick it into JNDI
Hashtable env = new Hashtable();
env.put (Context.INITIAL_CONTEXT_FACTORY,
"com.sun.jndi.fscontext.RefFSContextFactory");
env.put (Context.PROVIDER_URL, "file:/tmp/JNDI");
InitialContext ctx = new InitialContext(env);
ctx.rebind("jdbc/DB", ds);
}
catch (Exception ex)
{ }
//Set the JNDI information to be used by the Arjuna JDBC Property Manager
jdbcPropertyManager.propertyManager.setProperty("Context.INITIAL_CONTEXT_FACTORY",
"com.sun.jndi.fscontext.RefFSContextFactory");
jdbcPropertyManager.propertyManager.setProperty("Context.PROVIDER_URL",
"file:/tmp/JNDI");
Bank bank = new Bank();
BankClient client = new BankClient(bank);
}
While the BankClient class is responsible for obtaining the information needed to access the database, creating the XADataSource and binding it to JNDI, and also taking orders from a user (create_account, debit, transfer, ...), the Bank class is responsible for creating JDBC connections to perform the user's requests. The Bank class is illustrated below. Not all of its methods are shown here, but they have similar behaviour; they can be found in detail in the src/com/arjuna/demo/jta/jdbcbank/Bank.java program. Note that for simplicity, much error-checking code has been removed.
public Bank()
{
try
{
DriverManager.registerDriver(new TransactionalDriver());
dbProperties = new Properties();
dbProperties.put(TransactionalDriver.userName, user);
dbProperties.put(TransactionalDriver.password, password);
arjunaJDBC2Driver = new TransactionalDriver(); //
create_table();
}
catch (Exception e)
{
e.printStackTrace();
System.exit(0);
}
_accounts = new java.util.Hashtable();
reuseConnection = true;
}
public void create_account( String _name, float _value )
{
try
{
Connection conne = arjunaJDBC2Driver.connect("jdbc:arjuna:jdbc/DB", dbProperties);
Statement stmtx = conne.createStatement(); // tx statement
stmtx.executeUpdate(
"INSERT INTO accounts (name, value) VALUES ('" + _name + "'," + _value + ")");
}
catch (SQLException e)
{
e.printStackTrace();
}
}
public float get_balance(String _name)
throws NotExistingAccount
{
float theBalance = 0;
try
{
Connection conne = arjunaJDBC2Driver.connect("jdbc:arjuna:jdbc/DB", dbProperties);
Statement stmtx = conne.createStatement(); // tx statement
ResultSet rs = stmtx.executeQuery(
"SELECT value FROM accounts WHERE name = '" + _name + "'");
while (rs.next()) {
theBalance = rs.getFloat("value");
}
}
catch (SQLException e)
{
e.printStackTrace();
throw new NotExistingAccount("The Account requested does not exist");
}
return theBalance;
}
...
}
Recovery is the mechanism which preserves transaction atomicity in the presence of failures. The basic technique for implementing transactions in the presence of failures is based on the use of logs: a transaction system has to record enough information to ensure that it can return to a previous state in case of failure, or to ensure that changes committed by a transaction are properly stored.
Narayana ensures that the results of a transaction are applied consistently to all resources involved in the transaction, even in the presence of failure. To recover from failure, Narayana relies on its Recovery Manager.
Basically, the Recovery Manager is a daemon process that periodically invokes a set of well-known Recovery Modules in two steps: a first step to determine which transactions are in a doubt state, and a second step to continue the completion of the transactions found in the first step. Since different types of resources may be involved in a transaction, different types of Recovery Modules may exist. Narayana provides several types of module that manage resources according to their position in the transaction tree (root, subordinate, leaf) or the nature of the data itself, such as Transactional Objects for Java or XAResources, as seen in the previous trail.
Whatever the nature of the involved resource, recovery is based on information, or logs, held in the Object Store, which contains specific subdirectories holding information according to the nature of the participant.
This section provides only brief information on running the recovery manager from provided scripts. For complete information on the recovery manager (including how to configure it), see the recovery information.
To run the Recovery Manager as a Windows service, simply:
Note: This directory also contains the uninstall script, which is run in the same manner.
To launch the Recovery Manager as a Windows process, simply:
The recovery manager provides support for recovering XAResources whether or not they are Serializable. XAResources that do implement the Serializable interface are handled without requiring additional programmer defined classes. For those XAResources that need to recover but which cannot implement Serializable, it is possible to provide a small class which is used to help recover them.
This example shows the recovery manager recovering a Serializable XAResource and a non-Serializable XAResource.
When recovering from failures, Narayana requires the ability to reconnect to the resource managers that were in use prior to the failures, in order to resolve any outstanding transactions. To recreate those connections for non-Serializable XAResources, it is necessary to provide implementations of the com.arjuna.ats.jta.recovery.XAResourceRecovery interface.
To inform the recovery system about each of the XAResourceRecovery instances, it is necessary to specify their class names through property variables in the jbossts-properties.xml file. Any property variable which starts with the name XAResourceRecovery will be assumed to represent one of these instances, and its value should be the class name.
When running XA transaction recovery, it is necessary to tell Narayana which types of Xid it can recover. Each Xid that Narayana creates has a unique node identifier encoded within it, and Narayana will only recover transactions and states that match a specified node identifier. The node identifier to use should be provided via a property that starts with the name com.arjuna.ats.jta.xaRecoveryNode (multiple values may be provided). A value of * will force Narayana to recover (and possibly roll back) all transactions irrespective of their node identifier, and should be used with caution.
The recovery module for the non-Serializable XAResource must be deployed in order to provide support for recovering the non-Serializable XAResource. If this step were missed out, the Serializable XAResource would recover correctly, but would have no knowledge of the non-Serializable XAResource and so could not recover it. To register the non-Serializable XAResource's XAResourceRecovery module, add an entry to jbossts-properties.xml.
Under the element <properties depends="jts" name="jta">, add:
<property name="com.arjuna.ats.jta.recovery.XAResourceRecovery1" value="com.arjuna.demo.recovery.xaresource.NonSerializableExampleXAResourceRecovery"/>
<property name="com.arjuna.ats.jta.xaRecoveryNode" value="*"/>
By default, the recovery manager is configured to perform a pass over resources to be recovered every two minutes. It will then wait for ten seconds before re-checking the resources. Although the test will run OK with this configuration, it is possible to configure the recovery manager scan times to reduce the time waiting. To configure the intervals, edit the jbossts-properties.xml as follows:
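For illustration, assuming the property names used by JBossTS-era releases (check the Failure Recovery Guide for your version, as the names are an assumption here), the two intervals could be set in jbossts-properties.xml as follows, giving a 30-second scan period and a 5-second backoff:

```xml
<property name="com.arjuna.ats.arjuna.recovery.periodicRecoveryPeriod" value="30"/>
<property name="com.arjuna.ats.arjuna.recovery.recoveryBackoffPeriod" value="5"/>
```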
The recovery manager works in the same manner for either the JTA or the JTS implementation. By default, Narayana is configured to use a JTS transaction manager; to configure it to use a JTA transaction manager instead, a change must again be made to jbossts-properties.xml. See "Testing JTA" for more information on how to configure the transaction manager to use JTA rather than JTS.
If you do change the transaction manager type remember to reconfigure the recovery manager as follows:
If you are using the ArjunaCore (raw JTA) transaction manager implementation comment out the element in jbossts-properties.xml containing the following text:
internal.jta.recovery.jts.XARecoveryModule
If you are using the JTS transaction manager implementation comment out the element in jbossts-properties.xml containing the following text:
internal.jta.recovery.arjunacore.XARecoveryModule
To launch the Test Recovery Module, execute the following java program
Note: As you can see, the Serializable XAResource does not need its recover() method called, as the transaction manager is aware of all the information about this resource.
WARNING: Implementing a RecoveryModule and AbstractRecord is a very advanced feature of the transaction service. It should only be attempted by users familiar with all the concepts used in the product. Please see the ArjunaCore guide for more information about RecoveryModules and AbstractRecords.
The following sample gives an overview how the Recovery Manager invokes a module to recover from failure. This basic sample does not aim to present a complete process to recover from failure, but mainly to illustrate the way to implement a recovery module. More details can be found in "Failure Recovery Guide".
The application used here consists of creating an atomic transaction, registering a participant within the created transaction, and finally terminating it either by commit or abort. A set of arguments is provided:
- During the prepare phase, it writes a simple message, "I'm prepared", to a well-known file on disk.
- During the commit phase, it writes another message, "I'm committed", to the same file used during prepare.
- If it receives an abort message, it removes the file used during prepare from the disk, if any.
- If a crash has been decided for the test, it crashes during the commit phase; the file remains on disk containing the message "I'm prepared".
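The participant behaviour listed above can be sketched in plain Java. The class and file names below are invented for illustration and do not match the demo sources:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch of a participant that records its progress in a well-known file,
// as the demo participant does during prepare and commit.
public class FileParticipantSketch {
    private final Path logFile; // the "well known file"

    FileParticipantSketch(Path logFile) {
        this.logFile = logFile;
    }

    void prepare() throws IOException {
        Files.write(logFile, "I'm prepared".getBytes());
    }

    void commit() throws IOException {
        Files.write(logFile, "I'm committed".getBytes());
    }

    void abort() throws IOException {
        Files.deleteIfExists(logFile); // remove the prepare record
    }

    // Runs the prepare -> commit -> abort cycle, checking the file at each step.
    static boolean demo() {
        try {
            Path file = Files.createTempFile("participant", ".log");
            FileParticipantSketch p = new FileParticipantSketch(file);
            p.prepare();
            boolean prepared = new String(Files.readAllBytes(file)).equals("I'm prepared");
            p.commit();
            boolean committed = new String(Files.readAllBytes(file)).equals("I'm committed");
            p.abort();
            return prepared && committed && !Files.exists(file);
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println("cycle behaved as described: " + demo());
    }
}
```

A crash between prepare() and commit() leaves the file containing "I'm prepared", which is exactly the state the recovery module later detects.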
<property name="com.arjuna.ats.arjuna.recovery.recoveryExtension<i>" value="com.arjuna.demo.recoverymodule.SimpleRecoveryModule"/>
java com.arjuna.ats.arjuna.recovery.RecoveryManager -test
The failure recovery subsystem of Narayana ensures that the results of a transaction are applied consistently to all resources affected by the transaction, even if any of the application processes or the hardware hosting them crash or lose network connectivity. In the case of hardware crashes or network failures, the recovery does not take place until the system or network is restored, but the original application does not need to be restarted. Recovery is handled by the Recovery Manager process. For recovery to take place, information about the transaction and the resources involved needs to survive the failure and be accessible afterward. This information is held in the ActionStore, which is part of the ObjectStore. If the ObjectStore is destroyed or modified, recovery may not be possible.

Until the recovery procedures are complete, resources affected by a transaction which was in progress at the time of the failure may be inaccessible. Database resources may report this as tables or rows held by in-doubt transactions. For TXOJ resources, an attempt to activate the Transactional Object, such as when trying to get a lock, fails.
Although some ORB-specific configuration is necessary to configure the ORB sub-system, the basic settings are ORB-independent. The configuration which applies to Narayana is in the RecoveryManager-properties.xml file and the orportability-properties.xml file. The contents of each file are shown below.
Example 3.27. RecoveryManager-properties.xml
<entry key="RecoveryEnvironmentBean.recoveryActivatorClassNames">
com.arjuna.ats.internal.jts.orbspecific.recovery.RecoveryEnablement
</entry>
Example 3.28. orportability-properties.xml
<entry key="com.arjuna.orbportability.orb.PostInit2">com.arjuna.ats.internal.jts.recovery.RecoveryInit</entry>
These entries cause instances of the named classes to be loaded. The named classes then load the ORB-specific classes needed and perform other initialization. This enables failure recovery for transactions initiated by, or involving, applications using this property file. The default RecoveryManager-properties.xml and orportability-properties.xml files shipped with the distribution include these entries.
Failure recovery is NOT supported with the JavaIDL ORB that is part of the JDK. Failure recovery is supported for JacORB only.
To disable recovery, remove or comment out the RecoveryEnablement line in the property file.
Recovery of XA resources accessed via JDBC is handled by the XARecoveryModule. This module includes both transaction-initiated and resource-initiated recovery.
Transaction-initiated recovery is possible where the particular transaction branch progressed far enough for a JTA_ResourceRecord to be written in the ObjectStore. The record contains the information needed to link the transaction, as known to the rest of Narayana, to the information held in the database.
Resource-initiated recovery is necessary for branches where a failure occurred after the database made a persistent record of the transaction, but before the JTA_ResourceRecord was written. Resource-initiated recovery is also necessary for datasources for which it is impossible to hold information in the JTA_ResourceRecord that allows the recreation in the RecoveryManager of the XAConnection or XAResource used in the original application.
Transaction-initiated recovery is automatic. The XARecoveryModule finds the JTA_ResourceRecord which needs recovery, using the two-pass mechanism described above. It then uses the normal recovery mechanisms to find the status of the transaction the resource was involved in, by running replay_completion on the RecoveryCoordinator for the transaction branch. Next, it creates or recreates the appropriate XAResource and issues commit or rollback on it as appropriate. The XAResource creation uses the same database name, username, password, and other information as the application.
Resource-initiated recovery must be specifically configured, by supplying the RecoveryManager with the appropriate information for it to interrogate all the XADataSources accessed by any application. The access to each XADataSource is handled by a class that implements the com.arjuna.ats.jta.recovery.XAResourceRecovery interface. Instances of this class are dynamically loaded, as controlled by the property JTAEnvironmentBean.xaResourceRecoveryInstances.
The
XARecoveryModule
uses the
XAResourceRecovery
implementation to
get an
XAResource
to the target datasource. On each invocation of
periodicWorkSecondPass
, the recovery module issues an
XAResource.recover
request. This request returns a list of the transaction identifiers
that are known to the
datasource and are in an in-doubt state. The lists of in-doubt Xids returned by
successive invocations of
periodicWorkSecondPass
are compared. Any Xid that appears in both
lists, and for which no
JTA_ResourceRecord
is found by the intervening
transaction-initiated recovery, is assumed to belong to a
transaction involved in a crash before any
JTA_ResourceRecord
was written, and a
rollback
is issued for
that transaction on the
XAResource
.
This double-scan mechanism is used because it is possible that the Xid was obtained from the datasource just as the original application process was about to create the corresponding JTA_ResourceRecord. The interval between the scans should allow time for the record to be written unless the application crashes (and if it does, rollback is the right answer).
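The double-scan comparison can be sketched as a simple set intersection. This is an illustrative fragment, not Narayana code: Xids are modelled as plain strings, and the names firstScan, secondScan, and knownRecords are assumptions made for the example.

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Illustrative sketch of the double-scan: only Xids seen in two
// successive recover() scans, and not matched by a JTA_ResourceRecord
// during the intervening transaction-initiated recovery, become
// rollback candidates.
public class DoubleScanSketch
{
    public static Set<String> rollbackCandidates(List<String> firstScan,
                                                 List<String> secondScan,
                                                 Set<String> knownRecords)
    {
        Set<String> candidates = new HashSet<>(firstScan);
        candidates.retainAll(secondScan);   // present in both scans
        candidates.removeAll(knownRecords); // no JTA_ResourceRecord found in between
        return candidates;
    }
}
```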
An
XAResourceRecovery
implementation class can contain all the information needed to
perform recovery for a specific
datasource. Alternatively, a single class can handle multiple datasources that
have some
similar features. The constructor of the implementation class must have an empty parameter
list,
because it is loaded dynamically. The interface includes an
initialise
method, which
passes in further information as a
string
. The content of the string is taken from the property
value that provides the class name.
Everything after the first semi-colon is passed as the value of the
string. The
XAResourceRecovery
implementation class determines how to use the string.
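The split on the first semi-colon can be sketched as follows. This is an illustrative fragment, not the recovery module's actual code; the class and method names are invented for the example.

```java
// Illustrative sketch of how a configured property value of the form
// "className;anything-else" is divided: the class name comes before the
// first semi-colon, and everything after it is passed to initialise.
public class RecoveryPropertySketch
{
    public static String[] splitPropertyValue(String propertyValue)
    {
        int sep = propertyValue.indexOf(';');

        if (sep == -1)
            return new String[] { propertyValue, "" };

        return new String[] { propertyValue.substring(0, sep),
                              propertyValue.substring(sep + 1) };
    }
}
```

Applied to the BasicXARecovery property value shown below, the class name is everything before the first semi-colon, and the string "2;OraRecoveryInfo" is what initialise receives.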
An
XAResourceRecovery
implementation class,
com.arjuna.ats.internal.jdbc.recovery.BasicXARecovery
, supports resource-initiated recovery for any XADataSource. For this class, the string
received in method
initialise
is assumed to contain the number of connections to recover, and the name of the
properties
file containing the dynamic class name, the database username, the database password and the
database
connection URL. The following example is for an Oracle 8.1.6 database accessed via
the Sequelink 5.1 driver:
XAConnectionRecoveryEmpay=com.arjuna.ats.internal.jdbc.recovery.BasicXARecovery;2;OraRecoveryInfo
This implementation is only meant as an example, because it relies upon usernames and
passwords appearing in
plain text properties files. You can create your own implementations
of
XAConnectionRecovery
. See the javadocs and the example
com.arjuna.ats.internal.jdbc.recovery.BasicXARecovery
.
Example 3.29. XAConnectionRecovery implementation
package com.arjuna.ats.internal.jdbc.recovery;

import com.arjuna.ats.jdbc.TransactionalDriver;
import com.arjuna.ats.jdbc.common.jdbcPropertyManager;
import com.arjuna.ats.jdbc.logging.jdbcLogger;
import com.arjuna.ats.internal.jdbc.*;
import com.arjuna.ats.jta.recovery.XAConnectionRecovery;
import com.arjuna.ats.arjuna.common.*;
import com.arjuna.common.util.logging.*;

import java.sql.*;
import javax.sql.*;
import jakarta.transaction.*;
import javax.transaction.xa.*;
import java.util.*;
import java.lang.NumberFormatException;

/**
 * This class implements the XAConnectionRecovery interface for XAResources.
 * The parameter supplied in initialise can contain arbitrary information
 * necessary to initialise the class once created. In this instance it contains
 * the name of the property file in which the db connection information is
 * specified, as well as the number of connections that this file contains
 * information on (separated by ;).
 *
 * IMPORTANT: this is only an *example* of the sorts of things an
 * XAConnectionRecovery implementor could do. This implementation uses
 * a property file which is assumed to contain sufficient information to
 * recreate connections used during the normal run of an application so that
 * we can perform recovery on them. It is not recommended that information such
 * as user name and password appear in such a raw text format as it opens up
 * a potential security hole.
 *
 * The db parameters specified in the property file are assumed to be
 * in the format:
 *
 * DB_x_DatabaseURL=
 * DB_x_DatabaseUser=
 * DB_x_DatabasePassword=
 * DB_x_DatabaseDynamicClass=
 *
 * DB_JNDI_x_DatabaseURL=
 * DB_JNDI_x_DatabaseUser=
 * DB_JNDI_x_DatabasePassword=
 *
 * where x is the number of the connection information.
 *
 * @since JTS 2.1.
 */
public class BasicXARecovery implements XAConnectionRecovery
{
    /*
     * Some XAConnectionRecovery implementations will do their startup work
     * here, and then do little or nothing in initialise. Since this one needs
     * to know the dynamic class name, the constructor does nothing.
     */
    public BasicXARecovery () throws SQLException
    {
        numberOfConnections = 1;
        connectionIndex = 0;
        props = null;
    }

    /**
     * The recovery module will have chopped off this class name already.
     * The parameter should specify a property file from which the url,
     * user name, password, etc. can be read.
     */
    public boolean initialise (String parameter) throws SQLException
    {
        int breakPosition = parameter.indexOf(BREAKCHARACTER);
        String fileName = parameter;

        if (breakPosition != -1)
        {
            fileName = parameter.substring(0, breakPosition);

            try
            {
                numberOfConnections = Integer.parseInt(parameter.substring(breakPosition + 1));
            }
            catch (NumberFormatException e)
            {
                // Produce a warning message
                return false;
            }
        }

        PropertyManager.addPropertiesFile(fileName);

        try
        {
            PropertyManager.loadProperties(true);
            props = PropertyManager.getProperties();
        }
        catch (Exception e)
        {
            // Produce a warning message
            return false;
        }

        return true;
    }

    public synchronized XAConnection getConnection () throws SQLException
    {
        JDBC2RecoveryConnection conn = null;

        if (hasMoreConnections())
        {
            connectionIndex++;

            conn = getStandardConnection();

            if (conn == null)
                conn = getJNDIConnection();

            if (conn == null)
            {
                // Produce a warning message
            }
        }

        return conn;
    }

    public synchronized boolean hasMoreConnections ()
    {
        return connectionIndex != numberOfConnections;
    }

    private final JDBC2RecoveryConnection getStandardConnection () throws SQLException
    {
        String number = Integer.toString(connectionIndex);
        String url = dbTag + number + urlTag;
        String password = dbTag + number + passwordTag;
        String user = dbTag + number + userTag;
        String dynamicClass = dbTag + number + dynamicClassTag;

        Properties dbProperties = new Properties();

        String theUser = props.getProperty(user);
        String thePassword = props.getProperty(password);

        if (theUser != null && thePassword != null)
        {
            dbProperties.put(ArjunaJDBC2Driver.userName, theUser);
            dbProperties.put(ArjunaJDBC2Driver.password, thePassword);

            String dc = props.getProperty(dynamicClass);

            if (dc != null)
                dbProperties.put(ArjunaJDBC2Driver.dynamicClass, dc);

            return new JDBC2RecoveryConnection(url, dbProperties);
        }
        else
            return null;
    }

    private final JDBC2RecoveryConnection getJNDIConnection () throws SQLException
    {
        String number = Integer.toString(connectionIndex);
        String url = dbTag + jndiTag + number + urlTag;
        String password = dbTag + jndiTag + number + passwordTag;
        String user = dbTag + jndiTag + number + userTag;

        Properties dbProperties = new Properties();

        String theUser = props.getProperty(user);
        String thePassword = props.getProperty(password);

        if (theUser != null && thePassword != null)
        {
            dbProperties.put(ArjunaJDBC2Driver.userName, theUser);
            dbProperties.put(ArjunaJDBC2Driver.password, thePassword);

            return new JDBC2RecoveryConnection(url, dbProperties);
        }
        else
            return null;
    }

    private int numberOfConnections;
    private int connectionIndex;
    private Properties props;

    private static final String dbTag = "DB_";
    private static final String urlTag = "_DatabaseURL";
    private static final String passwordTag = "_DatabasePassword";
    private static final String userTag = "_DatabaseUser";
    private static final String dynamicClassTag = "_DatabaseDynamicClass";
    private static final String jndiTag = "JNDI_";

    /*
     * Example:
     *
     * DB2_DatabaseURL=jdbc\:arjuna\:sequelink\://qa02\:20001
     * DB2_DatabaseUser=tester2
     * DB2_DatabasePassword=tester
     * DB2_DatabaseDynamicClass=com.arjuna.ats.internal.jdbc.drivers.sequelink_5_1
     *
     * DB_JNDI_DatabaseURL=jdbc\:arjuna\:jndi
     * DB_JNDI_DatabaseUser=tester1
     * DB_JNDI_DatabasePassword=tester
     * DB_JNDI_DatabaseName=empay
     * DB_JNDI_Host=qa02
     * DB_JNDI_Port=20000
     */

    private static final char BREAKCHARACTER = ';'; // delimiter for parameters
}
XAResource.recover
returns the list of all transactions that are in-doubt within the
datasource. If multiple
recovery domains are used with a single datasource, resource-initiated recovery sees
transactions from other domains. Since it does not have a
JTA_ResourceRecord
available, it rolls back the transaction in the database, if the Xid appears in successive
recover calls. To
suppress resource-initiated recovery, do not supply an
XAConnectionRecovery
property, or
confine it to one recovery domain.
Property
OTS_ISSUE_RECOVERY_ROLLBACK
controls whether the
RecoveryManager
explicitly issues a rollback request when
replay_completion
asks for the status of a transaction that is unknown. According to
the
presume-abort
mechanism used by OTS and JTS, the transaction can be assumed to have
rolled back, and this
is the response returned to the
Resource
(which may be a
subordinate coordinator) in this case. The
Resource
should then apply that result to the
underlying resources. However, it is also legitimate for
the superior to issue a rollback, if
OTS_ISSUE_RECOVERY_ROLLBACK
is set to
YES
.
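As a sketch, assuming a plain property-file configuration mechanism (the exact syntax depends on how the RecoveryManager is configured in your installation), the setting might look like this:

```properties
OTS_ISSUE_RECOVERY_ROLLBACK=YES
```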
The OTS transaction identification mechanism makes it possible for a transaction coordinator
to hold a
Resource
reference that will never be usable. This can occur in two cases:
The process holding the
Resource
crashes before receiving the commit or rollback
request from the coordinator.
The
Resource
receives the commit or rollback, and responds. However, the message is
lost or the
coordinator process has crashed.
In the first case, the
RecoveryManager
for the ObjectStore of the
Resource
eventually reconstructs a new
Resource
(with a
different CORBA object reference, or IOR) and issues a
replay_completion
request
containing the new
Resource
IOR. The
RecoveryManager
for the
coordinator substitutes this in place of the original, useless one, and issues
commit
to the new reconstructed
Resource
. The
Resource
has to have been
in a commit state, or there would be no transaction intention list. Until
the
replay_completion
is received, the
RecoveryManager
tries to send
commit
to its
Resource
reference. This fails with a CORBA
system exception; exactly which exception depends on the ORB
and other details.
In the second case, the
Resource
no longer exists. The
RecoveryManager
at the coordinator will never get through, and will receive System
Exceptions forever.
The
RecoveryManager
cannot distinguish these two cases by any protocol mechanism. There
is a perceptible cost in
repeatedly attempting to send the commit to an inaccessible
Resource
. In particular, the timeouts involved will extend the recovery iteration time,
and thus
potentially leave resources inaccessible for longer.
To avoid this, the
RecoveryManager
only attempts to send
commit
to a
Resource
a limited number of times. After that, it considers the transaction
assumed complete
. It retains the information about the transaction, by changing the object type
in the
ActionStore
, and if the
Resource
eventually does wake up
and a
replay_completion
request is received, the
RecoveryManager
activates the transaction and issues the commit request to the new Resource IOR. The number
of times the
RecoveryManager
attempts to issue
commit
as part of the periodic
recovery is controlled by the property variable
COMMITTED_TRANSACTION_RETRY_LIMIT
, and