JBoss.org Community Documentation

Narayana Project Documentation

Mark Little

Jonathan Halliday

Andrew Dinn

Kevin Connor

Michael Musgrove

Gytis Trikleris

Amos Feng

Abstract

The Narayana Project Documentation contains information on how to use Narayana to develop applications that use transaction technology to manage business processes.


Preface
1. Document Conventions
1.1. Typographic Conventions
1.2. Pull-quote Conventions
1.3. Notes and Warnings
2. We Need Feedback!
1. Narayana Core
1.1. Overview
1.1.1. ArjunaCore
1.1.2. Saving object states
1.1.3. The object store
1.1.4. Recovery and persistence
1.1.5. The life cycle of a Transactional Object for Java
1.1.6. The concurrency controller
1.1.7. The transactional protocol engine
1.1.8. The class hierarchy
1.2. Using ArjunaCore
1.2.1. State management
1.2.2. Lock management and concurrency control
1.3. Advanced transaction issues with ArjunaCore
1.3.1. Last resource commit optimization (LRCO)
1.3.2. Heuristic outcomes
1.3.3. Nested transactions
1.3.4. Asynchronously committing a transaction
1.3.5. Independent top-level transactions
1.3.6. Transactions within save_state and restore_state methods
1.3.7. Garbage collecting objects
1.3.8. Transaction timeouts
1.4. Hints and tips
1.4.1. General
1.4.2. Direct use of StateManager
1.5. Constructing a Transactional Objects for Java application
1.5.1. Queue description
1.5.2. Constructors and finalizers
1.5.3. Required methods
1.5.4. The client
1.5.5. Comments
1.6. Failure Recovery
1.6.1. Embedding the Recovery Manager
1.6.2. Understanding Recovery Modules
1.6.3. Writing a Recovery Module
2. JTA
2.1. Administration
2.1.1. Introduction
2.1.2. Starting and Stopping the Transaction Manager
2.1.3. ObjectStore Management
2.1.4. Narayana Runtime Information
2.1.5. Failure Recovery Administration
2.1.6. Errors and Exceptions
2.1.7. Selecting the JTA implementation
2.2. Development
2.2.1. JDBC and Transactions
2.2.2. Examples
2.2.3. Using Narayana in application servers
2.3. Installation
2.3.1. Preparing Your System
2.3.2. Operating System Services
2.3.3. Logging
2.3.4. Additional JAR Requirements
2.3.5. Setting Properties
2.4. Quick Start to JTA
2.4.1. Introduction
2.4.2. Package layout
2.4.3. Setting properties
2.4.4. Demarcating Transactions
2.4.5. Local vs Distributed JTA implementations
2.4.6. JDBC and Transactions
2.4.7. Configurable options
3. JTS
3.1. Administration
3.1.1. Introduction
3.1.2. Starting and Stopping the Transaction Manager
3.1.3. OTS and Jakarta EE Transaction Service Management
3.1.4. Failure Recovery Administration
3.1.5. ORB-specific Configurations
3.1.6. Initializing Applications
3.2. Development
3.2.1. Transaction Processing Overview
3.2.2. Basics
3.2.3. Introduction to the OTS
3.2.4. Constructing an OTS application
3.2.5. interfaces for extending the OTS
3.2.6. Example
3.2.7. Trail map
3.2.8. Failure Recovery
3.2.9. JTA and JTS
3.2.10. ORB-specific configuration
3.3. ORB Portability
3.3.1. ORB Portability Introduction
3.3.2. ORB Portability API
3.4. Quick Start to JTS/OTS
3.4.1. Introduction
3.4.2. Package layout
3.4.3. Setting properties
3.4.4. Starting and terminating the ORB and BOA/POA
3.4.5. Specifying the object store location
3.4.6. Implicit transaction propagation and interposition
3.4.7. Obtaining Current
3.4.8. Transaction termination
3.4.9. Transaction factory
3.4.10. Recovery manager
4. XTS
4.1. Introduction
4.1.1. Managing service-Based Processes
4.1.2. Servlets
4.1.3. SOAP
4.1.4. Web Services Description Language (WSDL)
4.2. Transactions Overview
4.2.1. The Coordinator
4.2.2. The Transaction Context
4.2.3. Participants
4.2.4. ACID Transactions
4.2.5. Two Phase Commit
4.2.6. The Synchronization Protocol
4.2.7. Optimizations to the Protocol
4.2.8. Non-Atomic Transactions and Heuristic Outcomes
4.2.9. Interposition
4.2.10. A New Transaction Protocol
4.3. Overview of Protocols Used by XTS
4.3.1. WS-Coordination
4.3.2. WS-Transaction
4.3.3. Summary
5. Long Running Actions (LRA)
5.1. Overview
5.2. JAX-RS services
5.3. Non JAX-RS services
5.4. Examples
5.4.1. LRA Quickstart Examples
5.4.2. Participating in Long Running Actions
5.4.3. Making JAX-RS Invocations from JAX-RS Resource Methods
5.5. Runtime Integration
6. RTS
6.1. Overview
6.2. Transaction Model
6.2.1. Architecture
6.2.2. State Transitions
6.2.3. The Transaction Manager Resource
6.3. Client Responsibilities
6.3.1. Starting a Transaction
6.3.2. Obtaining The Transaction Status
6.3.3. Propagating the Context
6.3.4. Discovering Existing Transactions
6.3.5. Ending the Transaction
6.4. Service Responsibilities
6.4.1. Joining the Transaction
6.4.2. Leaving the Transaction
6.4.3. Preparing and Committing Work
6.4.4. Recovery
6.4.5. Pre- and Post- Two-Phase Commit Processing
6.5. Container Integration
6.5.1. Deploying as a Wildfly Subsystem
6.5.2. Deploying into a Servlet Container
6.6. Examples
6.6.1. Support For Java based Services
6.7. Interoperating With Other Transaction Models
6.7.1. JTA Bridge
6.7.2. Web Services Transactions
7. STM
7.1. An STM Example
7.2. Annotations
7.3. Containers, Volatility and Durability
7.4. Sharing STM Objects
7.5. State Management
7.6. Optimistic Concurrency Control
7.7. A Typical Use Case
8. Compensating transactions
8.1. Overview
8.2. Compensations Framework
8.2.1. CDI annotations
8.2.2. Recovery
8.2.3. Limitation
8.3. Resources
8.4. Notes
9. OSGi
9.1. Integrate with Karaf
9.1.1. Introduction
9.1.2. Quickstart
9.1.3. Admin Commands Support
10. Appendixes

This manual uses several conventions to highlight certain words and phrases and draw attention to specific pieces of information.

In PDF and paper editions, this manual uses typefaces drawn from the Liberation Fonts set. The Liberation Fonts set is also used in HTML editions if the set is installed on your system. If not, alternative but equivalent typefaces are displayed. Note: Red Hat Enterprise Linux 5 and later includes the Liberation Fonts set by default.

Four typographic conventions are used to call attention to specific words and phrases. These conventions, and the circumstances they apply to, are as follows.

Mono-spaced Bold

Used to highlight system input, including shell commands, file names and paths. Also used to highlight keycaps and key combinations. For example:

To see the contents of the file my_next_bestselling_novel in your current working directory, enter the cat my_next_bestselling_novel command at the shell prompt and press Enter to execute the command.

The above includes a file name, a shell command and a keycap, all presented in mono-spaced bold and all distinguishable thanks to context.

Key combinations can be distinguished from keycaps by the plus sign connecting each part of a key combination. For example:

Press Enter to execute the command.

Press Ctrl+Alt+F2 to switch to the first virtual terminal. Press Ctrl+Alt+F1 to return to your X-Windows session.

The first paragraph highlights the particular keycap to press. The second highlights two key combinations (each a set of three keycaps with each set pressed simultaneously).

If source code is discussed, class names, methods, functions, variable names and returned values mentioned within a paragraph will be presented as above, in mono-spaced bold. For example:

File-related classes include filesystem for file systems, file for files, and dir for directories. Each class has its own associated set of permissions.

Proportional Bold

This denotes words or phrases encountered on a system, including application names; dialog box text; labeled buttons; check-box and radio button labels; menu titles and sub-menu titles. For example:

Choose System → Preferences → Mouse from the main menu bar to launch Mouse Preferences. In the Buttons tab, click the Left-handed mouse check box and click Close to switch the primary mouse button from the left to the right (making the mouse suitable for use in the left hand).

To insert a special character into a gedit file, choose Applications → Accessories → Character Map from the main menu bar. Next, choose Search → Find… from the Character Map menu bar, type the name of the character in the Search field and click Next. The character you sought will be highlighted in the Character Table. Double-click this highlighted character to place it in the Text to copy field and then click the Copy button. Now switch back to your document and choose Edit → Paste from the gedit menu bar.

The above text includes application names; system-wide menu names and items; application-specific menu names; and buttons and text found within a GUI interface, all presented in proportional bold and all distinguishable by context.

Mono-spaced Bold Italic or Proportional Bold Italic

Whether mono-spaced bold or proportional bold, the addition of italics indicates replaceable or variable text. Italics denotes text you do not input literally or displayed text that changes depending on circumstance. For example:

To connect to a remote machine using ssh, type ssh username@domain.name at a shell prompt. If the remote machine is example.com and your username on that machine is john, type ssh john@example.com.

The mount -o remount file-system command remounts the named file system. For example, to remount the /home file system, the command is mount -o remount /home.

To see the version of a currently installed package, use the rpm -q package command. It will return a result as follows: package-version-release.

Note the words in bold italics above — username, domain.name, file-system, package, version and release. Each word is a placeholder, either for text you enter when issuing a command or for text displayed by the system.

Aside from standard usage for presenting the title of a work, italics denotes the first use of a new and important term. For example:

Publican is a DocBook publishing system.

A transaction is a unit of work that encapsulates multiple database actions such that either all the encapsulated actions fail or all succeed.

Transactions ensure data integrity when an application interacts with multiple datasources.

This chapter contains a description of the use of the ArjunaCore transaction engine and the Transactional Objects for Java (TXOJ) classes and facilities. The classes mentioned in this chapter are the key to writing fault-tolerant applications using transactions. Thus, they are described and then applied in the construction of a simple application. The classes to be described in this chapter can be found in the com.arjuna.ats.txoj and com.arjuna.ats.arjuna packages.

Stand-Alone Transaction Manager

Although Narayana can be embedded in various containers, such as WildFly Application Server, it remains a stand-alone transaction manager as well. There are no dependencies between the core Narayana and any container implementations.

In keeping with the object-oriented view, the mechanisms needed to construct reliable distributed applications are presented to programmers in an object-oriented manner. Some mechanisms need to be inherited, for example, concurrency control and state management. Other mechanisms, such as object storage and transactions, are implemented as ArjunaCore objects that are created and manipulated like any other object.

Note

When the manual talks about using persistence and concurrency control facilities it assumes that the Transactional Objects for Java (TXOJ) classes are being used. If this is not the case then the programmer is responsible for all of these issues.

ArjunaCore exploits object-oriented techniques to present programmers with a toolkit of Java classes from which application classes can inherit to obtain desired properties, such as persistence and concurrency control. These classes form a hierarchy, part of which is shown in Figure 1.1, “ArjunaCore Class Hierarchy” and which will be described later in this document.


Apart from specifying the scopes of transactions, and setting appropriate locks within objects, the application programmer does not have any other responsibilities: ArjunaCore and TXOJ guarantee that transactional objects will be registered with, and be driven by, the appropriate transactions, and crash recovery mechanisms are invoked automatically in the event of failures.

At the root of the class hierarchy is the class StateManager. StateManager is responsible for object activation and deactivation, as well as object recovery. Refer to Example 1.1, “StateManager”, for the simplified signature of the class.
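
In outline, the simplified signature looks something like the following (a sketch only; the modifiers and exact signatures of the shipped class may differ slightly):

public abstract class StateManager
{
    public synchronized boolean activate ();
    public synchronized boolean deactivate ();

    public final Uid get_uid ();    // the object's internal system name

    public boolean destroy ();      // removes the object's state from the object store

    /* to be provided by the class developer */

    public boolean save_state (OutputObjectState os, int objectType);
    public boolean restore_state (InputObjectState os, int objectType);
    public String type ();

    protected StateManager ();
    protected StateManager (int objectType);
    protected StateManager (Uid storeUid);

    protected synchronized boolean modified ();  // called before user-level state changes
};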


Objects are assumed to be of three possible flavors.

Three Flavors of Objects

Recoverable

StateManager attempts to generate and maintain appropriate recovery information for the object. Such objects have lifetimes that do not exceed the application program that creates them.

Recoverable and Persistent

The lifetime of the object is assumed to be greater than that of the creating or accessing application, so that in addition to maintaining recovery information, StateManager attempts to automatically load or unload any existing persistent state for the object by calling the activate or deactivate operation at appropriate times.

Neither Recoverable nor Persistent

No recovery information is ever kept, nor is object activation or deactivation ever automatically attempted.

If an object is recoverable, or recoverable and persistent, then StateManager invokes the operations save_state (while performing deactivate) and restore_state (while performing activate) at various points during the execution of the application. These operations must be implemented by the programmer since StateManager cannot detect user-level state changes. This gives the programmer the ability to decide which parts of an object’s state should be made persistent. For example, for a spreadsheet it may not be necessary to save all entries if some values can simply be recomputed. The save_state implementation for a class Example that has integer member variables called A, B and C might be implemented as in Example 1.2, “save_state Implementation”.
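
A minimal sketch of such a save_state implementation (assuming Example is a user class that ultimately derives from StateManager, with int fields A, B and C) might be:

public boolean save_state (OutputObjectState os, int objectType)
{
    if (!super.save_state(os, objectType))
        return false;

    try
    {
        // pack the user-level state that needs to be recovered or made persistent
        os.packInt(A);
        os.packInt(B);
        os.packInt(C);

        return true;
    }
    catch (IOException e)
    {
        return false;
    }
}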


Note

It is necessary for all save_state and restore_state methods to call super.save_state and super.restore_state. This is to cater for improvements in the crash recovery mechanisms.

The concurrency controller is implemented by the class LockManager, which provides sensible default behavior while allowing the programmer to override it if deemed necessary by the particular semantics of the class being programmed. As with StateManager and persistence, concurrency control implementations are accessed through interfaces. As well as providing access to remote services, the current concurrency control implementations include:

Local disk/database implementation

Locks are made persistent by being written to the local file system or database.

A purely local implementation

Locks are maintained within the memory of the virtual machine which created them. This implementation has better performance than writing locks to the local disk, but objects cannot be shared between virtual machines. Importantly, it is a plain Java implementation with no requirements that can be affected by the SecurityManager.

The primary programmer interface to the concurrency controller is via the setlock operation. By default, the runtime system enforces strict two-phase locking following a multiple reader, single writer policy on a per object basis. However, as shown in Figure 1.1, “ArjunaCore Class Hierarchy” , by inheriting from the Lock class, you can provide your own lock implementations with different lock conflict rules to enable type specific concurrency control.

Lock acquisition is, of necessity, under programmer control, since just as StateManager cannot determine if an operation modifies an object, LockManager cannot determine if an operation requires a read or write lock. Lock release, however, is under control of the system and requires no further intervention by the programmer. This ensures that the two-phase property can be correctly maintained.

public class LockResult
{
    public static final int GRANTED;
    public static final int REFUSED;
    public static final int RELEASED;
};

public class ConflictType
{
    public static final int CONFLICT;
    public static final int COMPATIBLE;
    public static final int PRESENT;
};

public abstract class LockManager extends StateManager
{
    public static final int defaultRetry;
    public static final int defaultTimeout;
    public static final int waitTotalTimeout;

    public final synchronized boolean releaselock (Uid lockUid);
    public final synchronized int setlock (Lock toSet);
    public final synchronized int setlock (Lock toSet, int retry);
    public final synchronized int setlock (Lock toSet, int retry, int sleepTime);
    public void print (PrintStream strm);
    public String type ();
    public boolean save_state (OutputObjectState os, int ObjectType);
    public boolean restore_state (InputObjectState os, int ObjectType);

    protected LockManager ();
    protected LockManager (int ot);
    protected LockManager (int ot, int objectModel);
    protected LockManager (Uid storeUid);
    protected LockManager (Uid storeUid, int ot);
    protected LockManager (Uid storeUid, int ot, int objectModel);

    protected void terminate ();
};

The LockManager class is primarily responsible for managing requests to set a lock on an object or to release a lock as appropriate. However, since it is derived from StateManager , it can also control when some of the inherited facilities are invoked. For example, LockManager assumes that the setting of a write lock implies that the invoking operation must be about to modify the object. This may in turn cause recovery information to be saved if the object is recoverable. In a similar fashion, successful lock acquisition causes activate to be invoked.

Example 1.3, “Example Class”, shows how to try to obtain a write lock on an object.
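
A sketch of such an operation (the value field and method name are illustrative) might look like this:

public boolean setValue (int newValue)
{
    // request a WRITE lock; 0 retries means give up immediately on conflict
    if (setlock(new Lock(LockMode.WRITE), 0) == LockResult.GRANTED)
    {
        value = newValue;   // now safe to modify the object's state

        return true;
    }

    return false;
}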


The transaction protocol engine is represented by the AtomicAction class, which uses StateManager to record sufficient information for crash recovery mechanisms to complete the transaction in the event of failures. It has methods for starting and terminating the transaction, and, for those situations where programmers need to implement their own resources, methods for registering them with the current transaction. Because ArjunaCore supports sub-transactions, if a transaction is begun within the scope of an already executing transaction it will automatically be nested.

You can use ArjunaCore with multi-threaded applications. Each thread within an application can share a transaction or execute within its own transaction. Therefore, all ArjunaCore classes are also thread-safe.


The principal classes which make up the class hierarchy of ArjunaCore are depicted below.

  • StateManager

    • LockManager

      • User-Defined Classes

    • Lock

      • User-Defined Classes

    • AbstractRecord

      • RecoveryRecord

      • LockRecord

      • RecordList

      • Other management record types

  • AtomicAction

    • TopLevelTransaction

  • Input/OutputObjectBuffer

    • Input/OutputObjectState

  • ObjectStore

Programmers of fault-tolerant applications will be primarily concerned with the classes LockManager , Lock , and AtomicAction . Other classes important to a programmer are Uid and ObjectState .

Most ArjunaCore classes are derived from the base class StateManager , which provides primitive facilities necessary for managing persistent and recoverable objects. These facilities include support for the activation and de-activation of objects, and state-based object recovery.

The class LockManager uses the facilities of StateManager and Lock to provide the concurrency control required for implementing the serializability property of atomic actions. The concurrency control consists of two-phase locking in the current implementation. The implementation of atomic action facilities is supported by AtomicAction and TopLevelTransaction .

Consider a simple example. Assume that Example is a user-defined persistent class suitably derived from the LockManager . An application containing an atomic transaction Trans accesses an object called O of type Example , by invoking the operation op1 , which involves state changes to O . The serializability property requires that a write lock must be acquired on O before it is modified. Therefore, the body of op1 should contain a call to the setlock operation of the concurrency controller.
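
A hypothetical sketch of the application side (the names Example, O, op1 and someUid are illustrative only):

AtomicAction Trans = new AtomicAction();
Example O = new Example(someUid);   // an existing persistent object

Trans.begin();

if (O.op1())        // op1 calls setlock(new Lock(LockMode.WRITE)) before modifying O
    Trans.commit(true);
else
    Trans.rollback();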


Procedure 1.1.  Steps followed by the operation setlock

The operation setlock , provided by the LockManager class, performs the following functions in Example 1.5, “Simple Concurrency Control” .

  1. Check write lock compatibility with the currently held locks, and if allowed, continue.

  2. Call the StateManager operation activate . activate will load, if not done already, the latest persistent state of O from the object store, then call the StateManager operation modified , which has the effect of creating an instance of either RecoveryRecord or PersistenceRecord for O , depending upon whether O was persistent or not. The Lock is a WRITE lock so the old state of the object must be retained prior to modification. The record is then inserted into the RecordList of Trans.

  3. Create and insert a LockRecord instance in the RecordList of Trans .

Now suppose that action Trans is aborted sometime after the lock has been acquired. Then the rollback operation of AtomicAction will process the RecordList instance associated with Trans by invoking an appropriate Abort operation on the various records. The implementation of this operation by the LockRecord class will release the WRITE lock while that of RecoveryRecord or PersistenceRecord will restore the prior state of O .

It is important to realize that all of the above work is automatically being performed by ArjunaCore on behalf of the application programmer. The programmer need only start the transaction and set an appropriate lock; ArjunaCore and TXOJ take care of participant registration, persistence, concurrency control and recovery.

This section describes ArjunaCore and Transactional Objects for Java (TXOJ) in more detail, and shows how to use ArjunaCore to construct transactional applications.

Note: in previous releases ArjunaCore was often referred to as TxCore.

ArjunaCore needs to be able to remember the state of an object for several purposes, including recovery (the state represents some past state of the object) and persistence (the state represents the final state of an object at application termination). Since all of these requirements call for common functionality, they are implemented using the same mechanism: the classes Input/OutputObjectState and Input/OutputBuffer.

Example 1.6.  OutputBuffer and InputBuffer

public class OutputBuffer
{
    public OutputBuffer ();

    public final synchronized boolean valid ();
    public synchronized byte[] buffer();
    public synchronized int length ();

    /* pack operations for standard Java types */

    public synchronized void packByte (byte b) throws IOException;
    public synchronized void packBytes (byte[] b) throws IOException;
    public synchronized void packBoolean (boolean b) throws IOException;
    public synchronized void packChar (char c) throws IOException;
    public synchronized void packShort (short s) throws IOException;
    public synchronized void packInt (int i) throws IOException;
    public synchronized void packLong (long l) throws IOException;
    public synchronized void packFloat (float f) throws IOException;
    public synchronized void packDouble (double d) throws IOException;
    public synchronized void packString (String s) throws IOException;
};
public class InputBuffer
{
    public InputBuffer ();

    public final synchronized boolean valid ();
    public synchronized byte[] buffer();
    public synchronized int length ();

    /* unpack operations for standard Java types */

    public synchronized byte unpackByte () throws IOException;
    public synchronized byte[] unpackBytes () throws IOException;
    public synchronized boolean unpackBoolean () throws IOException;
    public synchronized char unpackChar () throws IOException;
    public synchronized short unpackShort () throws IOException;
    public synchronized int unpackInt () throws IOException;
    public synchronized long unpackLong () throws IOException;
    public synchronized float unpackFloat () throws IOException;
    public synchronized double unpackDouble () throws IOException;
    public synchronized String unpackString () throws IOException;
};

The InputBuffer and OutputBuffer classes maintain an internal array into which instances of the standard Java types can be contiguously packed or unpacked, using the pack or unpack operations. This buffer is automatically resized as required should it have insufficient space. The instances are all stored in the buffer in a standard form called network byte order to make them machine independent.
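
As an illustration of the packing model (a sketch that exercises only OutputBuffer; the constructor for initialising an InputBuffer from existing bytes is not shown in the listing above):

OutputBuffer out = new OutputBuffer();

try
{
    // values are packed contiguously and must later be unpacked in the same order
    out.packInt(42);
    out.packString("example");

    byte[] raw = out.buffer();   // machine-independent (network byte order) form
}
catch (IOException e)
{
    // a pack operation failed
}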



The object store provided with ArjunaCore deliberately has a fairly restricted interface so that it can be implemented in a variety of ways. For example, object stores are implemented in shared memory, on the Unix file system (in several different forms), and as a remotely accessible store. More complete information about the object stores available in ArjunaCore can be found in the Appendix.

Note

As with all ArjunaCore classes, the default object stores are pure Java implementations. To access the shared memory and other more complex object store implementations, you need to use native methods.

All of the object stores hold and retrieve instances of the class InputObjectState or OutputObjectState. These instances are named by the Uid and Type of the object that they represent. States are read using the read_committed operation and written by the system using the write_uncommitted operation. Under normal operation, new object states do not overwrite old object states but are written to the store as shadow copies. These shadows replace the original only when the commit_state operation is invoked. Normally all interaction with the object store is performed by ArjunaCore system components as appropriate, thus the existence of any shadow versions of objects in the store is hidden from the programmer.



When a transactional object is committing, it must make certain state changes persistent, so it can recover in the event of a failure and either continue to commit, or rollback. When using TXOJ , ArjunaCore will take care of this automatically. To guarantee ACID properties, these state changes must be flushed to the persistence store implementation before the transaction can proceed to commit. Otherwise, the application may assume that the transaction has committed when in fact the state changes may still reside within an operating system cache, and may be lost by a subsequent machine failure. By default, ArjunaCore ensures that such state changes are flushed. However, doing so can impose a significant performance penalty on the application.

To prevent transactional object state flushes, set the ObjectStoreEnvironmentBean.objectStoreSync variable to OFF .
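
For example, following the same system-property convention used elsewhere in this chapter (how you choose to set Narayana properties may differ in your deployment):

java -DObjectStoreEnvironmentBean.objectStoreSync=OFF myprogram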

ArjunaCore comes with support for several different object store implementations. The Appendix describes these implementations, how to select and configure a given implementation on a per-object basis using the ObjectStoreEnvironmentBean.objectStoreType property variable, and indicates how additional implementations can be provided.

The ArjunaCore class StateManager manages the state of an object and provides all of the basic support mechanisms required by an object for state management purposes. StateManager is responsible for creating and registering appropriate resources concerned with the persistence and recovery of the transactional object. If a transaction is nested, then StateManager will also propagate these resources between child transactions and their parents at commit time.

Objects are assumed to be of three possible flavors.

Three Flavors of Objects

Recoverable

StateManager attempts to generate and maintain appropriate recovery information for the object. Such objects have lifetimes that do not exceed the application program that creates them.

Recoverable and Persistent

The lifetime of the object is assumed to be greater than that of the creating or accessing application, so that in addition to maintaining recovery information, StateManager attempts to automatically load or unload any existing persistent state for the object by calling the activate or deactivate operation at appropriate times.

Neither Recoverable nor Persistent

No recovery information is ever kept, nor is object activation or deactivation ever automatically attempted.

This object property is selected at object construction time and cannot be changed thereafter. Thus an object cannot gain (or lose) recovery capabilities at some arbitrary point during its lifetime.


If an object is recoverable or persistent, StateManager will invoke the operations save_state (while performing deactivation), restore_state (while performing activation), and type at various points during the execution of the application. These operations must be implemented by the programmer since StateManager does not have access to a runtime description of the layout of an arbitrary Java object in memory and thus cannot implement a default policy for converting the in-memory version of the object to its passive form. However, the capabilities provided by InputObjectState and OutputObjectState make the writing of these routines fairly simple. For example, the save_state implementation for a class Example that had member variables called A, B, and C could simply be as shown in Example 1.11, “Example Implementation of Methods for StateManager”.
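
To complement the save_state sketch shown earlier, the corresponding restore_state and type methods for the hypothetical Example class might look like this:

public boolean restore_state (InputObjectState os, int objectType)
{
    if (!super.restore_state(os, objectType))
        return false;

    try
    {
        // unpack in the same order the values were packed by save_state
        A = os.unpackInt();
        B = os.unpackInt();
        C = os.unpackInt();

        return true;
    }
    catch (IOException e)
    {
        return false;
    }
}

public String type ()
{
    return "/StateManager/LockManager/Example";
}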


In order to support crash recovery for persistent objects, all save_state and restore_state methods of user objects must call super.save_state and super.restore_state .

Note

The type method is used to determine the location in the object store where the state of instances of that class will be saved and ultimately restored. This location can actually be any valid string. However, you should avoid using the hash character (#) as this is reserved for special directories that ArjunaCore requires.

The get_uid operation of StateManager provides read-only access to an object’s internal system name for whatever purpose the programmer requires, such as registration of the name in a name server. The value of the internal system name can only be set when an object is initially constructed, either by the provision of an explicit parameter or by generating a new identifier when the object is created.

The destroy method can be used to remove the object’s state from the object store. This is an atomic operation, and therefore will only remove the state if the top-level transaction within which it is invoked eventually commits. The programmer must obtain exclusive access to the object prior to invoking this operation.

Since object recovery and persistence essentially have complementary requirements (the only difference being where state information is stored and for what purpose), StateManager effectively combines the management of these two properties into a single mechanism. It uses instances of the classes InputObjectState and OutputObjectState both for recovery and persistence purposes. An additional argument passed to the save_state and restore_state operations allows the programmer to determine the purpose for which any given invocation is being made. This allows different information to be saved for recovery and persistence purposes.

In summary, the ArjunaCore class StateManager manages the state of an object and provides all of the basic support mechanisms required by an object for state management purposes. Some operations must be defined by the class developer. These operations are: save_state , restore_state , and type .

boolean save_state ( OutputObjectState state , int objectType )

Invoked whenever the state of an object might need to be saved for future use, primarily for recovery or persistence purposes. The objectType parameter indicates the reason that save_state was invoked by ArjunaCore. This enables the programmer to save different pieces of information into the OutputObjectState supplied as the first parameter depending upon whether the state is needed for recovery or persistence purposes. For example, pointers to other ArjunaCore objects might be saved simply as pointers for recovery purposes but as Uid s for persistence purposes. As shown earlier, the OutputObjectState class provides convenient operations to allow the saving of instances of all of the basic types in Java. In order to support crash recovery for persistent objects it is necessary for all save_state methods to call super.save_state .

save_state assumes that an object is internally consistent and that all variables saved have valid values. It is the programmer's responsibility to ensure that this is the case.

boolean restore_state ( InputObjectState state , int objectType )

Invoked whenever the state of an object needs to be restored to the one supplied. Once again the second parameter allows different interpretations of the supplied state. In order to support crash recovery for persistent objects it is necessary for all restore_state methods to call super.restore_state .

String type ()

The ArjunaCore persistence mechanism requires a means of determining the type of an object as a string so that it can save or restore the state of the object into or from the object store. By convention this information indicates the position of the class in the hierarchy. For example, /StateManager/LockManager/Object .

The type method is used to determine the location in the object store where the state of instances of that class will be saved and ultimately restored. This can actually be any valid string. However, you should avoid using the hash character (#) as this is reserved for special directories that ArjunaCore requires.

Consider the following basic Array class derived from the StateManager class. In this example, to illustrate saving and restoring of an object’s state, the highestIndex variable is used to keep track of the highest element of the array that has a non-zero value.
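
A condensed sketch of such a class (deriving directly from StateManager as described above, with bounds checking and error handling omitted) might be:

public class Array extends StateManager
{
    private static final int ARRAY_SIZE = 10;   // illustrative capacity

    private int[] elements = new int[ARRAY_SIZE];
    private int highestIndex = 0;               // highest element with a non-zero value

    public Array ()
    {
        super();
    }

    public boolean save_state (OutputObjectState os, int objectType)
    {
        if (!super.save_state(os, objectType))
            return false;

        try
        {
            // only the entries up to highestIndex need to be packed
            os.packInt(highestIndex);

            for (int i = 0; i <= highestIndex; i++)
                os.packInt(elements[i]);

            return true;
        }
        catch (IOException e)
        {
            return false;
        }
    }

    public boolean restore_state (InputObjectState os, int objectType)
    {
        if (!super.restore_state(os, objectType))
            return false;

        try
        {
            highestIndex = os.unpackInt();

            for (int i = 0; i <= highestIndex; i++)
                elements[i] = os.unpackInt();

            return true;
        }
        catch (IOException e)
        {
            return false;
        }
    }

    public String type ()
    {
        return "/StateManager/Array";
    }
}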


Concurrency control information within ArjunaCore is maintained by locks. Locks which are required to be shared between objects in different processes may be held within a lock store, similar to the object store facility presented previously. The lock store provided with ArjunaCore deliberately has a fairly restricted interface so that it can be implemented in a variety of ways. For example, lock stores are implemented in shared memory, on the Unix file system (in several different forms), and as a remotely accessible store. More information about the object stores available in ArjunaCore can be found in the Appendix.

Note

As with all ArjunaCore classes, the default lock stores are pure Java implementations. To access the shared memory and other more complex lock store implementations it is necessary to use native methods.


ArjunaCore comes with support for several different lock store implementations. If the object model being used is SINGLE, then no lock store is required for maintaining locks, since the information about the object is not exported from it. However, if the MULTIPLE model is used, then different run-time environments (processes, Java virtual machines) may need to share concurrency control information. The implementation type of the lock store to use can be specified for all objects within a given execution environment using the TxojEnvironmentBean.lockStoreType property variable. Currently this can have one of the following values:

BasicLockStore

This is an in-memory implementation which does not, by default, allow sharing of stored information between execution environments. The application programmer is responsible for sharing the store information.

BasicPersistentLockStore

This is the default implementation, and stores locking information within the local file system. Therefore execution environments that share the same file store can share concurrency control information. The root of the file system into which locking information is written is the LockStore directory within the ArjunaCore installation directory. You can override this at runtime by setting the TxojEnvironmentBean.lockStoreDir property variable accordingly, or placing the location within the CLASSPATH .

java -DTxojEnvironmentBean.lockStoreDir=/var/tmp/LockStore myprogram
java -classpath $CLASSPATH:/var/tmp/LockStore myprogram

If neither of these approaches is taken, then the default location will be at the same level as the etc directory of the installation.

The concurrency controller is implemented by the class LockManager , which provides sensible default behavior, while allowing the programmer to override it if deemed necessary by the particular semantics of the class being programmed. The primary programmer interface to the concurrency controller is via the setlock operation. By default, the ArjunaCore runtime system enforces strict two-phase locking following a multiple reader, single writer policy on a per object basis. Lock acquisition is under programmer control, since just as StateManager cannot determine if an operation modifies an object, LockManager cannot determine if an operation requires a read or write lock. Lock release, however, is normally under control of the system and requires no further intervention by the programmer. This ensures that the two-phase property can be correctly maintained.

The LockManager class is primarily responsible for managing requests to set a lock on an object or to release a lock as appropriate. However, since it is derived from StateManager , it can also control when some of the inherited facilities are invoked. For example, if a request to set a write lock is granted, then LockManager invokes modified directly assuming that the setting of a write lock implies that the invoking operation must be about to modify the object. This may in turn cause recovery information to be saved if the object is recoverable. In a similar fashion, successful lock acquisition causes activate to be invoked.

Therefore, LockManager is directly responsible for activating and deactivating persistent objects, as well as registering Resources for managing concurrency control. By driving the StateManager class, it is also responsible for registering Resources for persistent or recoverable state manipulation and object recovery. The application programmer simply sets appropriate locks, starts and ends transactions, and extends the save_state and restore_state methods of StateManager .


The setlock operation must be parametrized with the type of lock required (READ or WRITE), and the number of retries to acquire the lock before giving up. If a lock conflict occurs, one of the following scenarios will take place:

  • If the retry value is equal to LockManager.waitTotalTimeout , then the thread which called setlock will be blocked until the lock is released, or until the total timeout specified has elapsed, in which case REFUSED will be returned.

  • If the lock cannot be obtained initially then LockManager will try for the specified number of retries, waiting for the specified timeout value between each failed attempt. The default is 100 attempts, each attempt being separated by a 0.25 seconds delay. The time between retries is specified in micro-seconds.

  • If a lock conflict occurs the current implementation simply times out lock requests, thereby preventing deadlocks, rather than providing a full deadlock detection scheme. If the requested lock is obtained, the setlock operation will return the value GRANTED, otherwise the value REFUSED is returned. It is the responsibility of the programmer to ensure that the remainder of the code for an operation is only executed if a lock request is granted. Below are examples of the use of the setlock operation.
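
For instance (a sketch; the field name, error value, and the use of the Lock's Uid with releaselock are illustrative assumptions), an operation might retry a read lock a limited number of times and release it explicitly when no atomic action is active:

public int readValue ()
{
    Lock readLock = new Lock(LockMode.READ);

    // try up to 5 times, sleeping between attempts, before giving up
    if (setlock(readLock, 5) == LockResult.GRANTED)
    {
        int result = value;

        // only needed when no atomic action is active; otherwise the
        // transaction releases the lock automatically at the right time
        releaselock(readLock.get_uid());

        return result;
    }

    return -1;   // illustrative error value
}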


The concurrency control mechanism is integrated into the atomic action mechanism, thus ensuring that as locks are granted on an object appropriate information is registered with the currently running atomic action to ensure that the locks are released at the correct time. This frees the programmer from the burden of explicitly freeing any acquired locks if they were acquired within atomic actions. However, if locks are acquired on an object outside of the scope of an atomic action, it is the programmer's responsibility to release the locks when required, using the corresponding releaselock operation.

Unlike many other systems, locks in ArjunaCore are not special system types. Instead they are simply instances of other ArjunaCore objects (the class Lock, which is also derived from StateManager so that locks may be made persistent if required and can also be named in a simple fashion). Furthermore, LockManager deliberately has no knowledge of the semantics of the actual policy by which lock requests are granted. Such information is maintained by the actual Lock class instances, which provide operations (the conflictsWith operation) by which LockManager can determine if two locks conflict or not. This separation is important in that it allows the programmer to derive new lock types from the basic Lock class and, by providing appropriate definitions of the conflict operations, achieve enhanced levels of concurrency.


The Lock class provides a modifiesObject operation which LockManager uses to determine if granting this locking request requires a call on modified. This operation is provided so that locking modes other than simple read and write can be supported. The supplied Lock class supports the traditional multiple reader/single writer policy.

Recall that ArjunaCore objects can be recoverable, recoverable and persistent, or neither. Additionally each object possesses a unique internal name. These attributes can only be set when that object is constructed. Thus LockManager provides several protected constructors for use by derived classes, each of which fulfills a distinct purpose.

Protected Constructors Provided by LockManager

LockManager ()

This constructor allows the creation of new objects, having no prior state.

LockManager ( int objectType , int objectModel)

As above, this constructor allows the creation of new objects having no prior state. The objectType parameter determines whether an object is simply recoverable (indicated by RECOVERABLE ), recoverable and persistent (indicated by ANDPERSISTENT ), or neither (indicated by NEITHER ). If an object is marked as being persistent then the state of the object will be stored in one of the object stores. The objectModel parameter only has meaning if the object is RECOVERABLE . If the object model is SINGLE (the default behavior) then the recoverable state of the object is maintained within the object itself, and has no external representation. Otherwise an in-memory (volatile) object store is used to store the state of the object between atomic actions.

Constructors for new persistent objects should make use of atomic actions within themselves. This will ensure that the state of the object is automatically written to the object store either when the action in the constructor commits or, if an enclosing action exists, when the appropriate top-level action commits. Later examples in this chapter illustrate this point further.

LockManager ( Uid objUid )

This constructor allows access to an existing persistent object, whose internal name is given by the objUid parameter. Objects constructed using this operation will normally have their prior state (identified by objUid ) loaded from an object store automatically by the system.

LockManager ( Uid objUid , int objectModel )

As above, this constructor allows access to an existing persistent object, whose internal name is given by the objUid parameter. Objects constructed using this operation will normally have their prior state (identified by objUid ) loaded from an object store automatically by the system. If the object model is SINGLE (the default behavior), then the object will not be reactivated at the start of each top-level transaction.

The finalizer of a programmer-defined class must invoke the inherited operation terminate to inform the state management mechanism that the object is about to be destroyed. Otherwise, unpredictable results may occur.

Atomic actions (transactions) can be used by both application programmers and class developers. Thus entire operations (or parts of operations) can be made atomic as required by the semantics of a particular operation. This chapter will describe some of the more subtle issues involved with using transactions in general and ArjunaCore in particular.

In some cases it may be necessary to enlist participants that are not two-phase commit aware into a two-phase commit transaction. If there is only a single resource then there is no need for two-phase commit. However, if there are multiple resources in the transaction, the Last Resource Commit Optimization (LRCO) comes into play. It is possible for a single resource that is one-phase aware (i.e., can only commit or roll back, with no prepare), to be enlisted in a transaction with two-phase commit aware resources. This feature is implemented by logging the decision to commit after committing the one-phase aware participant: The coordinator asks each two-phase aware participant if they are able to prepare and if they all vote yes then the one-phase aware participant is asked to commit. If the one-phase aware participant commits successfully then the decision to commit is logged and then commit is called on each two-phase aware participant. A heuristic outcome will occur if the coordinator fails before logging its commit decision but after the one-phase participant has committed since each two-phase aware participant will eventually rollback (as required under presumed abort semantics). This strategy delays the logging of the decision to commit so that in failure scenarios we have avoided a write operation. But this choice does mean that there is no record in the system of the fact that a heuristic outcome has occurred.

In order to utilize the LRCO, your participant must implement the com.arjuna.ats.arjuna.coordinator.OnePhase interface and be registered with the transaction through the BasicAction.add operation. Since this operation expects instances of AbstractRecord , you must create an instance of com.arjuna.ats.arjuna.LastResourceRecord and give your participant as the constructor parameter.
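
A sketch of the registration step (MyOnePhaseParticipant is a hypothetical class implementing the com.arjuna.ats.arjuna.coordinator.OnePhase interface):

AtomicAction txn = new AtomicAction();

txn.begin();

// enlist the one-phase aware participant via the LRCO wrapper record
txn.add(new LastResourceRecord(new MyOnePhaseParticipant()));

// ... enlist any two-phase aware participants and do transactional work ...

txn.commit(true);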


By default, the Transaction Service executes the commit protocol of a top-level transaction in a synchronous manner. All registered resources will be told to prepare in order by a single thread, and then they will be told to commit or rollback. A similar comment applies to the volatile phase of the protocol which provides a synchronization mechanism that allows an interested party to be notified before and after the transaction completes. This has several possible disadvantages:

  • In the case of many registered synchronizations, the beforeSynchronization operation can logically be invoked in parallel on each non-interposed synchronization (and similarly for the interposed synchronizations). The disadvantage is that if an “early” synchronization in the list of registered synchronizations forces a rollback by throwing an unchecked exception, possibly many beforeCompletion operations will have been made needlessly.

  • In the case of many registered resources, the prepare operation can logically be invoked in parallel on each resource. The disadvantage is that if an “early” resource in the list of registered resources forces a rollback during prepare , possibly many prepare operations will have been made needlessly.

  • In the case where heuristic reporting is not required by the application, the second phase of the commit protocol (including any afterCompletion synchronizations) can be done asynchronously, since its success or failure is not important to the outcome of the transaction.

Therefore, Narayana provides runtime options to enable possible threading optimizations. By setting the CoordinatorEnvironmentBean.asyncBeforeSynchronization environment variable to YES , during the beforeSynchronization phase a separate thread will be created for each synchronization registered with the transaction. By setting the CoordinatorEnvironmentBean.asyncPrepare environment variable to YES , during the prepare phase a separate thread will be created for each registered participant within the transaction. By setting CoordinatorEnvironmentBean.asyncCommit to YES , a separate thread will be created to complete the second phase of the transaction, provided knowledge about heuristic outcomes is not required. By setting the CoordinatorEnvironmentBean.asyncAfterSynchronization environment variable to YES , during the afterSynchronization phase a separate thread will be created for each synchronization registered with the transaction, provided knowledge about heuristic outcomes is not required.
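
For example, following the same system-property convention shown earlier for other environment beans (a sketch; how you set Narayana properties may differ in your deployment):

java -DCoordinatorEnvironmentBean.asyncBeforeSynchronization=YES \
     -DCoordinatorEnvironmentBean.asyncPrepare=YES \
     -DCoordinatorEnvironmentBean.asyncCommit=YES \
     -DCoordinatorEnvironmentBean.asyncAfterSynchronization=YES \
     myprogram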

Exercise caution when writing the save_state and restore_state operations to ensure that no atomic actions are started, either explicitly in the operation or implicitly through use of some other operation. This restriction arises due to the fact that ArjunaCore may invoke restore_state as part of its commit processing resulting in the attempt to execute an atomic action during the commit or abort phase of another action. This might violate the atomicity properties of the action being committed or aborted and is thus discouraged.

Example 1.18. 

If we consider Example 1.12, “Array Class”, given previously, the set and get operations could be implemented as shown below.

This is a simplification of the code, ignoring error conditions and exceptions.

public boolean set (int index, int value)
{
   boolean result = false;
   AtomicAction A = new AtomicAction();

   A.begin();

   // We need to set a WRITE lock as we want to modify the state.

   if (setlock(new Lock(LockMode.WRITE), 0) == LockResult.GRANTED)
   {
      elements[index] = value;
      if ((value > 0) && (index > highestIndex))
         highestIndex = index;
      A.commit(true);
      result = true;
   }
   else
      A.rollback();

   return result;
}
public int get (int index)  // assume -1 means error
{
   AtomicAction A = new AtomicAction();

   A.begin();

   // We only need a READ lock as the state is unchanged.

   if (setlock(new Lock(LockMode.READ), 0) == LockResult.GRANTED)
   {
      A.commit(true);

      return elements[index];
   }
   else
      A.rollback();

   return -1;
}

By default, transactions live until they are terminated by the application that created them or a failure occurs. However, it is possible to set a timeout (in seconds) on a per-transaction basis such that if the transaction has not terminated before the timeout expires it will be automatically rolled back.

In ArjunaCore, the timeout value is provided as a parameter to the AtomicAction constructor. If a value of AtomicAction.NO_TIMEOUT is provided (the default) then the transaction will not be automatically timed out. Any other positive value is assumed to be the timeout for the transaction (in seconds). A value of zero is taken to be a global default timeout, which can be provided by the property CoordinatorEnvironmentBean.defaultTimeout , which has a default value of 60 seconds.

Note

Default timeout values for other Narayana components, such as JTS, may be different and you should consult the relevant documentation to be sure.

When a top-level transaction is created with a non-zero timeout, it is subject to being rolled back if it has not completed within the specified number of seconds. Narayana uses a separate reaper thread which monitors all locally created transactions, and forces them to roll back if their timeouts elapse. If the transaction cannot be rolled back at that point, the reaper will force it into a rollback-only state so that it will eventually be rolled back.

By default this thread is dynamically scheduled to awake according to the timeout values for any transactions created, ensuring the most timely termination of transactions. It may alternatively be configured to awake at a fixed interval, which can reduce overhead at the cost of less accurate rollback timing. For periodic operation, change the CoordinatorEnvironmentBean.txReaperMode property from its default value of DYNAMIC to PERIODIC and set the interval between runs, in milliseconds, using the property CoordinatorEnvironmentBean.txReaperTimeout . The default interval in PERIODIC mode is 120000 milliseconds.

Warning

In earlier versions the PERIODIC mode was known as NORMAL and was the default behavior. The use of the configuration value NORMAL is deprecated and PERIODIC should be used instead if the old scheduling behavior is still required.

If a value of 0 is specified for the timeout of a top-level transaction, or no timeout is specified, then Narayana will not impose any timeout on the transaction, and the transaction will be allowed to run indefinitely. This default timeout can be overridden by setting the CoordinatorEnvironmentBean.defaultTimeout property variable to the required timeout value in seconds, when using ArjunaCore, ArjunaJTA or ArjunaJTS.

Note

As of JBoss Transaction Service 4.5, transaction timeouts have been unified across all transaction components and are controlled by ArjunaCore.

Examples throughout this manual use transactions in the implementation of constructors for new persistent objects. This is deliberate because it guarantees correct propagation of the state of the object to the object store. The state of a modified persistent object is only written to the object store when the top-level transaction commits. Thus, if the constructor transaction is top-level and it commits, the newly-created object is written to the store and becomes available immediately. If, however, the constructor transaction commits but is nested because another transaction that was started prior to object creation is running, the state is written only if all of the parent transactions commit.

On the other hand, if the constructor does not use transactions, inconsistencies in the system can arise. For example, if no transaction is active when the object is created, its state is not saved to the store until the next time the object is modified under the control of some transaction.
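
A hypothetical sketch of this situation (the class SimpleObject, its remember operation, and oldUid are illustrative names; the SimpleObject constructor does not use an atomic action):

AtomicAction A = new AtomicAction();

SimpleObject obj1 = new SimpleObject();         // a brand new object
SimpleObject obj2 = new SimpleObject(oldUid);   // an existing persistent object

A.begin();

obj2.remember(obj1.get_uid());   // obj2 is activated and modified within A

A.commit(true);   // obj2's new state (containing obj1's Uid) is written;
                  // obj1's state is never written, since obj1 was not
                  // modified under the control of any action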


The two objects are created outside of the control of the top-level action A . obj1 is a new object. obj2 is an old existing object. When the remember operation of obj2 is invoked, the object will be activated and the Uid of obj1 remembered. Since this action commits, the persistent state of obj2 may now contain the Uid of obj1 . However, the state of obj1 itself has not been saved since it has not been manipulated under the control of any action. In fact, unless it is modified under the control of an action later in the application, it will never be saved. If, however, the constructor had used an atomic action, the state of obj1 would have automatically been saved at the time it was constructed and this inconsistency could not arise.

ArjunaCore may invoke the user-defined save_state operation of an object at any time during the lifetime of an object, including during the execution of the body of the object’s constructor. This is particularly likely if the constructor uses atomic actions. It is important, therefore, that all of the variables saved by save_state are correctly initialized. Exercise caution when writing the save_state and restore_state operations, to ensure that no transactions are started, either explicitly in the operation, or implicitly through use of some other operation. The reason for this restriction is that ArjunaCore may invoke restore_state as part of its commit processing. This would result in the attempt to execute an atomic transaction during the commit or abort phase of another transaction. This might violate the atomicity properties of the transaction being committed or aborted, and is thus discouraged. In order to support crash recovery for persistent objects, all save_state and restore_state methods of user objects must call super.save_state and super.restore_state.

The examples throughout this manual derive user classes from LockManager . There are two important reasons for this.

  1. Firstly, and most importantly, the serializability constraints of atomic actions require it.

  2. It reduces the need for programmer intervention.

However, if you only require access to ArjunaCore's persistence and recovery mechanisms, direct derivation of a user class from StateManager is possible.

Classes derived directly from StateManager must make use of its state management mechanisms explicitly. These interactions are normally undertaken by LockManager . From a programmer's point of view this amounts to making appropriate use of the operations activate , deactivate , and modified , since StateManager 's constructors are effectively identical to those of LockManager .
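
As an illustration only, a class derived directly from StateManager might look like the following sketch; the class and field names are invented, and save_state, restore_state, and type would be supplied as shown in the previous example.

import com.arjuna.ats.arjuna.ObjectType;
import com.arjuna.ats.arjuna.StateManager;
import com.arjuna.ats.arjuna.common.Uid;

public class Counter extends StateManager
{
    private int value = 0;

    public Counter ()                  // create a new persistent object
    {
        super(ObjectType.ANDPERSISTENT);
    }

    public Counter (Uid uid)           // re-activate an existing persistent object
    {
        super(uid);
    }

    public void increment ()
    {
        activate();                    // load the latest committed state if necessary
        modified();                    // mark the state as modified so it will be saved
        value++;
        deactivate();                  // allow the new state to be written to the store
    }

    // save_state, restore_state and type are defined as shown earlier
}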




Development Phases of an ArjunaCore Application

  1. First, develop new classes with characteristics like persistence, recoverability, and concurrency control.

  2. Then develop the applications that make use of the new classes of objects.

Although these two phases may be performed in parallel and by a single person, this guide refers to the first step as the job of the class developer, and the second as the job of the applications developer. The class developer defines appropriate save_state and restore_state operations for the class, sets appropriate locks in operations, and invokes the appropriate ArjunaCore class constructors. The applications developer defines the general structure of the application, particularly with regard to the use of atomic actions.

This chapter outlines a simple application: a FIFO Queue class for integer values. The Queue is implemented as a single object that uses a doubly-linked list structure. This example is used throughout the rest of this manual to illustrate the various mechanisms provided by ArjunaCore. Although this is an unrealistic example application, it illustrates all of the ArjunaCore modifications without requiring in-depth knowledge of the application code.

Note

The application is assumed not to be distributed. To allow for distribution, context information must be propagated either implicitly or explicitly.

Using an existing persistent object requires the use of a special constructor that takes the Uid of the persistent object, as shown in Example 1.25, “Class TransactionalQueue”.


The use of an atomic action within the constructor for a new object follows the guidelines outlined earlier and ensures that the object’s state will be written to the object store when the appropriate top level atomic action commits (which will either be the action A or some enclosing action active when the TransactionalQueue was constructed). The use of atomic actions in a constructor is simple: an action must first be declared and its begin operation invoked; the operation must then set an appropriate lock on the object (in this case a WRITE lock must be acquired), then the main body of the constructor is executed. If this is successful the atomic action can be committed, otherwise it is aborted.
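
A sketch of this pattern for the constructor of a hypothetical TransactionalQueue class is shown below; the field name is an assumption and error handling is omitted.

import com.arjuna.ats.arjuna.AtomicAction;
import com.arjuna.ats.arjuna.ObjectType;
import com.arjuna.ats.txoj.Lock;
import com.arjuna.ats.txoj.LockManager;
import com.arjuna.ats.txoj.LockMode;
import com.arjuna.ats.txoj.LockResult;

public class TransactionalQueue extends LockManager
{
    private int numberOfElements;

    public TransactionalQueue ()
    {
        super(ObjectType.ANDPERSISTENT);

        AtomicAction A = new AtomicAction();

        A.begin();                                      // declare and start the action

        if (setlock(new Lock(LockMode.WRITE), 0) == LockResult.GRANTED)
        {
            numberOfElements = 0;                       // main body of the constructor

            A.commit();                                 // state saved when the top-level action commits
        }
        else
            A.abort();                                  // could not acquire the WRITE lock
    }
}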

The finalizer of the queue class is only required to call the terminate and finalizer operations of LockManager .

public void finalize ()
{
    super.terminate();
    super.finalize();
}     

In this chapter, we cover information on failure recovery that is specific to ArjunaCore, TXOJ, or the use of Narayana outside the scope of a supported application server.

The failure recovery subsystem of Narayana will ensure that results of a transaction are applied consistently to all resources affected by the transaction, even if any of the application processes or the machine hosting them crash or lose network connectivity. In the case of machine (system) crash or network failure, the recovery will not take place until the system or network are restored, but the original application does not need to be restarted – recovery responsibility is delegated to the Recovery Manager process (see below). Recovery after failure requires that information about the transaction and the resources involved survives the failure and is accessible afterward: this information is held in the ActionStore, which is part of the ObjectStore.

Warning

If the ObjectStore is destroyed or modified, recovery may not be possible.

Until the recovery procedures are complete, resources affected by a transaction that was in progress at the time of the failure may be inaccessible. For database resources, this may be reported as tables or rows held by “in-doubt transactions”. For TransactionalObjects for Java resources, an attempt to activate the Transactional Object (as when trying to get a lock) will fail.

The RecoveryManager scans the ObjectStore and other locations of information, looking for transactions and resources that require, or may require recovery. The scans and recovery processing are performed by recovery modules, (instances of classes that implement the com.arjuna.ats.arjuna.recovery.RecoveryModule interface), each with responsibility for a particular category of transaction or resource. The set of recovery modules used are dynamically loaded, using properties found in the RecoveryManager property file.

The interface has two methods: periodicWorkFirstPass and periodicWorkSecondPass. At an interval (defined by property com.arjuna.ats.arjuna.recovery.periodicRecoveryPeriod), the RecoveryManager will call the first pass method on each registered module, then wait for a brief period (defined by property com.arjuna.ats.arjuna.recovery.recoveryBackoffPeriod), then call the second pass of each module. Typically, in the first pass, the module scans (e.g. the relevant part of the ObjectStore) to find transactions or resources that are in-doubt (i.e. are part way through the commitment process). On the second pass, if any of the same items are still in-doubt, it is possible the original application process has crashed and the item is a candidate for recovery.

An attempt, by the RecoveryManager, to recover a transaction that is still progressing in the original process(es) is likely to break the consistency. Accordingly, the recovery modules use a mechanism (implemented in the com.arjuna.ats.arjuna.recovery.TransactionStatusManager package) to check to see if the original process is still alive, and if the transaction is still in progress. The RecoveryManager only proceeds with recovery if the original process has gone, or, if still alive, the transaction is completed. (If a server process or machine crashes, but the transaction-initiating process survives, the transaction will complete, usually generating a warning. Recovery of such a transaction is the RecoveryManager’s responsibility).

It is clearly important to set the interval periods appropriately. The total iteration time will be the sum of the periodicRecoveryPeriod, recoveryBackoffPeriod and the length of time it takes to scan the stores and to attempt recovery of any in-doubt transactions found, for all the recovery modules. The recovery attempt time may include connection timeouts while trying to communicate with processes or machines that have crashed or are inaccessible (which is why there are mechanisms in the recovery system to avoid trying to recover the same transaction for ever). The total iteration time will affect how long a resource will remain inaccessible after a failure – periodicRecoveryPeriod should be set accordingly (default is 120 seconds). The recoveryBackoffPeriod can be comparatively short (default is 10 seconds) – its purpose is mainly to reduce the number of transactions that are candidates for recovery and which thus require a call to the original process to see if they are still in progress.

Note

In previous versions of Narayana there was no contact mechanism, and the backoff period had to be long enough to avoid catching transactions in flight at all. From 3.0, there is no such risk.

Two recovery modules (implementations of the com.arjuna.ats.arjuna.recovery.RecoveryModule interface) are supplied with Narayana, supporting various aspects of transaction recovery including JDBC recovery. It is possible for advanced users to create their own recovery modules and register them with the Recovery Manager. The recovery modules are registered with the RecoveryManager using the RecoveryEnvironmentBean.recoveryModuleClassNames property. These will be invoked on each pass of the periodic recovery in the sort-order of the property names – it is thus possible to predict the ordering (but note that a failure in an application process might occur while a periodic recovery pass is in progress). The default Recovery Extension settings are:
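
For a deployment that uses only ArjunaCore and TXOJ, the defaults typically resemble the following entry; the exact list depends on which Narayana components are in use (compare the JTA and JTS settings shown later in this guide).

<entry key="RecoveryEnvironmentBean.recoveryModuleClassNames">
    com.arjuna.ats.internal.arjuna.recovery.AtomicActionRecoveryModule
    com.arjuna.ats.internal.txoj.recovery.TORecoveryModule
</entry>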


The operation of the recovery subsystem will cause some entries to be made in the ObjectStore that will not be removed in normal progress. The RecoveryManager has a facility for scanning for these and removing items that are very old. Scans and removals are performed by implementations of the com.arjuna.ats.arjuna.recovery.ExpiryScanner interface. Implementations of this interface are loaded by giving the class names as the value of the RecoveryEnvironmentBean.expiryScannerClassNames property. The RecoveryManager calls the scan() method on each loaded Expiry Scanner implementation at an interval determined by the property RecoveryEnvironmentBean.expiryScanInterval. This value is given in hours; the default is 12. An expiryScanInterval value of zero will suppress any expiry scanning. If the value supplied is positive, the first scan is performed when the RecoveryManager starts; if the value is negative, the first scan is delayed until after the first interval (using the absolute value).

The kinds of item that are scanned for expiry are:

TransactionStatusManager items: one of these is created by every application process that uses Narayana – they contain the information that allows the RecoveryManager to determine if the process that initiated the transaction is still alive, and what the transaction status is. The expiry time for these is set by the property com.arjuna.ats.arjuna.recovery.transactionStatusManagerExpiryTime (in hours – default is 12, zero means never expire). The expiry time should be greater than the lifetime of any single Narayana-using process.

The Expiry Scanner properties for these are:
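
A typical setting, matching the one used by the JTA and JTS configurations later in this guide, is:

<entry key="RecoveryEnvironmentBean.expiryScannerClassNames">
    com.arjuna.ats.internal.arjuna.recovery.ExpiredTransactionStatusManagerScanner
</entry>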


To illustrate the behavior of a recovery module, the following pseudo code describes the basic algorithm used for AtomicAction transactions and Transactional Objects for Java.
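
The sketch below is illustrative pseudo code only; the helper names (allInDoubtUids, stillInProgress, replayPhase2) are invented and do not correspond to the actual Narayana sources.

// First pass: remember every AtomicAction or TO record that currently looks in-doubt.
public void periodicWorkFirstPass()
{
    firstPassUids = allInDoubtUids(objectStore);
}

// Second pass: anything still in-doubt, whose originating process is no longer
// running the transaction, is a candidate for recovery.
public void periodicWorkSecondPass()
{
    for (Uid uid : allInDoubtUids(objectStore))
    {
        if (firstPassUids.contains(uid) && !stillInProgress(uid))
            replayPhase2(uid);   // re-activate the record and drive commit or rollback
    }
}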



In order to recover from failure, we have seen that the Recovery Manager contacts recovery modules by periodically invoking the methods periodicWorkFirstPass and periodicWorkSecondPass. Each recovery module is then able to manage recovery according to the type of resources that need to be recovered. Narayana is shipped with a set of recovery modules (TORecoveryModule, XARecoveryModule, and so on), but it is possible for a user to define their own recovery module to fit their application. The following basic example illustrates the steps needed to build such a recovery module.

This basic example does not aim to present a complete process to recover from failure, but mainly to illustrate the way to implement a recovery module.

The application used here creates an atomic transaction, registers a participant within the created transaction, and finally terminates it either by commit or abort. A set of arguments is provided:

  • to decide whether to commit or abort the transaction,

  • to decide whether to generate a crash during the commitment process.

The code of the main class that controls the application is given below.


The registered participant has the following behavior:

  • During the prepare phase, it writes a simple message, “I’m prepared”, to a well-known file on disk.

  • During the commit phase, it writes another message, “I’m committed”, to the same file used during prepare.

  • If it receives an abort message, it removes the file used during prepare from the disk, if present.

  • If a crash has been decided for the test, it crashes during the commit phase, and the file remains on disk with the message “I’m prepared”.

The main portion of the code illustrating such behavior is described hereafter.

Warning

The location of the file given in the variable filename can be changed.


The role of the recovery module in this application is to read the content of the file used to store the status of the participant, determine that status, and print a message indicating whether a recovery action is needed.

Example 1.39. SimpleRecoveryModule.java

package com.arjuna.demo.recoverymodule;

import com.arjuna.ats.arjuna.recovery.RecoveryModule;

public class SimpleRecoveryModule implements RecoveryModule {
	public String filename = "c:/tmp/RecordState";

	public SimpleRecoveryModule() {
		System.out
				.println("The SimpleRecoveryModule is loaded");
	}

	public void periodicWorkFirstPass() {
		try {
			java.io.FileInputStream file = new java.io.FileInputStream(
					filename);
			java.io.InputStreamReader input = new java.io.InputStreamReader(
					file);
			java.io.BufferedReader reader = new java.io.BufferedReader(
					input);
			String stringState = reader.readLine();
			if (stringState.compareTo("I'm prepared") == 0)
				System.out
						.println("The transaction is in the prepared state");
			file.close();
		} catch (java.io.IOException ex) {
			System.out.println("Nothing found on the Disk");
		}
	}

	public void periodicWorkSecondPass() {
		try {
			java.io.FileInputStream file = new java.io.FileInputStream(
					filename);
			java.io.InputStreamReader input = new java.io.InputStreamReader(
					file);
			java.io.BufferedReader reader = new java.io.BufferedReader(
					input);
			String stringState = reader.readLine();
			if (stringState.compareTo("I'm prepared") == 0) {
				System.out
						.println("The record is still in the prepared state");
				System.out.println("– Recovery is needed");
			} else if (stringState
					.compareTo("I'm committed") == 0) {
				System.out
						.println("The transaction has completed and committed");
			}
			file.close();
		} catch (java.io.IOException ex) {
			System.out.println("Nothing found on the Disk");
			System.out
					.println("Either there was no transaction");
			System.out.println("or it as been rolled back");
		}
	}
}

        

The recovery module should now be deployed in order to be called by the Recovery Manager. To do so, we just need to add an entry in the config file for the extension:
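
For example, assuming the SimpleRecoveryModule class shown above, the entry might look like the following; recent Narayana releases use the RecoveryEnvironmentBean.recoveryModuleClassNames property, while older releases used numbered com.arjuna.ats.arjuna.recovery.recoveryExtensionXX properties.

<entry key="RecoveryEnvironmentBean.recoveryModuleClassNames">
    com.arjuna.ats.internal.arjuna.recovery.AtomicActionRecoveryModule
    com.arjuna.ats.internal.txoj.recovery.TORecoveryModule
    com.arjuna.demo.recoverymodule.SimpleRecoveryModule
</entry>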


Once started, the Recovery Manager will automatically load the listed Recovery modules.

Note

The source of the code can be retrieved under the trailmap directory of the Narayana installation.

As mentioned, the basic application presented above does not show the complete process of recovering from failure; it merely describes how to build a recovery module. In the case of the OTS protocol, let us consider how a recovery module that manages recovery of OTS resources can be configured.

To manage recovery in case of failure, the OTS specification defines a recovery protocol. Transaction participants in an in-doubt status can use the RecoveryCoordinator to determine the status of the transaction. According to that transaction status, those participants can take the appropriate decision, either rolling back or committing. Asking the RecoveryCoordinator object to determine the status consists of invoking the replay_completion operation on the RecoveryCoordinator.

For each OTS Resource in an in-doubt status, it is well known which RecoveryCoordinator to invoke to determine the status of the transaction in which the Resource is involved: it is the RecoveryCoordinator returned during the Resource registration process. Retrieving such a RecoveryCoordinator per resource means that it has been stored in addition to the other information describing the resource.

A recovery module dedicated to recovering OTS Resources could have the following behavior. When requested by the Recovery Manager on the first pass, it retrieves from the disk the list of resources that are in an in-doubt status. During the second pass, if the resources retrieved in the first pass still remain on the disk, they are considered candidates for recovery. The recovery module then retrieves, for each candidate, its associated RecoveryCoordinator and invokes the replay_completion operation to determine the status of the transaction. According to the returned status, an appropriate action is taken (for instance, rolling back the resource if the status is aborted or inactive).

Apart from ensuring that the run-time system is executing normally, there is little continuous administration needed for the Narayana software. Refer to Important Points for Administrators for some specific concerns.

Important Points for Administrators

  • The present implementation of the Narayana system provides no security or protection for data. The objects stored in the Narayana object store are (typically) owned by the user who ran the application that created them. The Object Store and Object Manager facilities make no attempt to enforce even the limited form of protection that Unix/Windows provides. There is no checking of user or group IDs on access to objects for either reading or writing.

  • Persistent objects created in the Object Store never go away unless the StateManager.destroy method is invoked on the object or some application program explicitly deletes them. This means that the Object Store gradually accumulates garbage (especially during application development and testing phases). At present we have no automated garbage collection facility. Further, we have not addressed the problem of dangling references. That is, a persistent object, A, may have stored a Uid for another persistent object, B, in its passive representation on disk. There is nothing to prevent an application from deleting B even though A still contains a reference to it. When A is next activated and attempts to access B, a run-time error will occur.

  • There is presently no support for version control of objects or database reconfiguration in the event of class structure changes. This is a complex research area that we have not addressed. At present, if you change the definition of a class of persistent objects, you are entirely responsible for ensuring that existing instances of the object in the Object Store are converted to the new representation. The Narayana software can neither detect nor correct references to old object state by new operation versions or vice versa.

  • Object store management is critically important to the transaction service.

By default the transaction manager starts up in an active state, such that new transactions can be created immediately. If you wish to have more control over this, set the CoordinatorEnvironmentBean.startDisabled configuration option to YES, in which case no transactions can be created until the transaction manager is enabled via a call to TxControl.enable .

It is possible to stop the creation of new transactions at any time by calling method TxControl.disable . Transactions that are currently executing will not be affected. By default recovery will be allowed to continue and the transaction system will still be available to manage recovery requests from other instances in a distributed environment. (See the Failure Recovery Guide for further details). However, if you wish to disable recovery as well as remove any resources it maintains, then you can pass true to method TxControl.disable ; the default is to use false .

If you wish to shut the system down completely then it may also be necessary to terminate the background transaction reaper (see the Programmers Guide for information about what the reaper does.) In order to do this you may want to first prevent the creation of new transactions (if you are not creating transactions with timeouts then this step is not necessary) using method TxControl.disable . Then you should call method TransactionReaper.terminate . This method takes a Boolean parameter: if true then the method will wait for the normal timeout periods associated with any transactions to expire before terminating the transactions; if false then transactions will be forced to terminate (rollback or have their outcome set such that they can only ever rollback) immediately.
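
A minimal sketch of that shutdown sequence, assuming the recovery manager may be restarted later, is:

import com.arjuna.ats.arjuna.coordinator.TransactionReaper;
import com.arjuna.ats.arjuna.coordinator.TxControl;

// stop new transactions from being created; pass true to also disable recovery
TxControl.disable(true);

// terminate the background reaper; false forces remaining transactions to roll back,
// which is required if the recovery manager is to be restarted later
TransactionReaper.terminate(false);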

Note

If you intend to restart the recovery manager later, after having terminated it, then you MUST use the TransactionReaper.terminate method with asynchronous behavior set to false .

The failure recovery subsystem of Narayana will ensure that results of a transaction are applied consistently to all resources affected by the transaction, even if any of the application processes or the machine hosting them crash or lose network connectivity. In the case of machine (system) crash or network failure, the recovery will not take place until the system or network are restored, but the original application does not need to be restarted. Recovery responsibility is delegated to Section 2.1.5.1, “The Recovery Manager” . Recovery after failure requires that information about the transaction and the resources involved survives the failure and is accessible afterward: this information is held in the ActionStore , which is part of the ObjectStore .

Warning

If the ObjectStore is destroyed or modified, recovery may not be possible.

Until the recovery procedures are complete, resources affected by a transaction that was in progress at the time of the failure may be inaccessible. For database resources, this may be reported as tables or rows held by “in-doubt transactions”. For TransactionalObjects for Java resources, an attempt to activate the Transactional Object (as when trying to get a lock) will fail.

The RecoveryManager scans the ObjectStore and other locations of information, looking for transactions and resources that require, or may require recovery. The scans and recovery processing are performed by recovery modules. These recovery modules are instances of classes that implement the com.arjuna.ats.arjuna.recovery.RecoveryModule interface . Each module has responsibility for a particular category of transaction or resource. The set of recovery modules used is dynamically loaded, using properties found in the RecoveryManager property file.

The interface has two methods: periodicWorkFirstPass and periodicWorkSecondPass . At an interval defined by property com.arjuna.ats.arjuna.recovery.periodicRecoveryPeriod , the RecoveryManager calls the first pass method on each registered module, then waits for a brief period, defined by property com.arjuna.ats.arjuna.recovery.recoveryBackoffPeriod . Next, it calls the second pass of each module. Typically, in the first pass, the module scans the relevant part of the ObjectStore to find transactions or resources that are in-doubt. An in-doubt transaction may be part of the way through the commitment process, for instance. On the second pass, if any of the same items are still in-doubt, the original application process may have crashed, and the item is a candidate for recovery.

An attempt by the RecoveryManager to recover a transaction that is still progressing in the original process is likely to break the consistency. Accordingly, the recovery modules use a mechanism, implemented in the com.arjuna.ats.arjuna.recovery.TransactionStatusManager package, to check to see if the original process is still alive, and if the transaction is still in progress. The RecoveryManager only proceeds with recovery if the original process has gone, or, if still alive, the transaction is completed. If a server process or machine crashes, but the transaction-initiating process survives, the transaction completes, usually generating a warning. Recovery of such a transaction is the responsibility of the RecoveryManager.

It is clearly important to set the interval periods appropriately. The total iteration time will be the sum of the periodicRecoveryPeriod and recoveryBackoffPeriod properties, and the length of time it takes to scan the stores and to attempt recovery of any in-doubt transactions found, for all the recovery modules. The recovery attempt time may include connection timeouts while trying to communicate with processes or machines that have crashed or are inaccessible. There are mechanisms in the recovery system to avoid trying to recover the same transaction indefinitely. The total iteration time affects how long a resource will remain inaccessible after a failure, so periodicRecoveryPeriod should be set accordingly. Its default is 120 seconds. The recoveryBackoffPeriod can be comparatively short, and defaults to 10 seconds. Its purpose is mainly to reduce the number of transactions that are candidates for recovery and which thus require a call to the original process to see if they are still in progress.

Note

In previous versions of Narayana , there was no contact mechanism, and the back-off period needed to be long enough to avoid catching transactions in flight at all. From 3.0, there is no such risk.

Two recovery modules, implementations of the com.arjuna.ats.arjuna.recovery.RecoveryModule interface, are supplied with Narayana . These modules support various aspects of transaction recovery, including JDBC recovery. It is possible for advanced users to create their own recovery modules and register them with the Recovery Manager. The recovery modules are registered with the RecoveryManager using RecoveryEnvironmentBean.recoveryModuleClassNames . These will be invoked on each pass of the periodic recovery in the sort-order of the property names – it is thus possible to predict the ordering, but a failure in an application process might occur while a periodic recovery pass is in progress. The default Recovery Extension settings are:

<entry key="RecoveryEnvironmentBean.recoveryModuleClassNames">
    com.arjuna.ats.internal.arjuna.recovery.AtomicActionRecoveryModule
    com.arjuna.ats.internal.txoj.recovery.TORecoveryModule
    com.arjuna.ats.internal.jta.recovery.arjunacore.XARecoveryModule
</entry>

The operation of the recovery subsystem causes some entries to be made in the ObjectStore that are not removed in normal progress. The RecoveryManager has a facility for scanning for these and removing items that are very old. Scans and removals are performed by implementations of the com.arjuna.ats.arjuna.recovery.ExpiryScanner interface. These implementations are loaded by giving the class names as the value of the property RecoveryEnvironmentBean.expiryScannerClassNames . The RecoveryManager calls the scan() method on each loaded Expiry Scanner implementation at an interval determined by the property RecoveryEnvironmentBean.expiryScanInterval . This value is given in hours, and defaults to 12. An expiryScanInterval value of zero suppresses any expiry scanning. If the value supplied is positive, the first scan is performed when RecoveryManager starts. If the value is negative, the first scan is delayed until after the first interval, using the absolute value.

The kinds of item that are scanned for expiry are:

TransactionStatusManager items

One TransactionStatusManager item is created by every application process that uses Narayana . It contains the information that allows the RecoveryManager to determine if the process that initiated the transaction is still alive, and its status. The expiry time for these items is set by the property com.arjuna.ats.arjuna.recovery.transactionStatusManagerExpiryTime , expressed in hours. The default is 12, and 0 (zero) means never to expire. The expiry time should be greater than the lifetime of any single process using Narayana .

The Expiry Scanner properties for these are:

 <entry key="RecoveryEnvironmentBean.expiryScannerClassNames">
    com.arjuna.ats.internal.arjuna.recovery.ExpiredTransactionStatusManagerScanner
</entry>

The approach Narayana takes for incorporating JDBC connections within transactions is to provide transactional JDBC drivers as conduits for all interactions. These drivers intercept all invocations and ensure that they are registered with, and driven by, appropriate transactions. The driver com.arjuna.ats.jdbc.TransactionalDriver handles all JDBC drivers, implementing the java.sql.Driver interface. If the database is not transactional, ACID properties cannot be guaranteed.

Because Narayana provides JDBC connectivity via its own JDBC driver, application code can support transactions with relatively small code changes. Typically, the application programmer only needs to start and terminate transactions.

JDBC connections are created from appropriate DataSources. Connections which participate in distributed transactions are obtained from XADataSources. When using a JDBC driver, Narayana uses the appropriate DataSource whenever a connection to the database is made. It then obtains XAResources and registers them with the transaction via the JTA interfaces. The transaction service uses these XAResources when the transaction terminates in order to drive the database to either commit or roll back the changes made via the JDBC connection.

Narayana JDBC support can obtain XADataSources through the Java Naming and Directory Interface (JNDI) or dynamic class instantiation.

A JDBC driver can use arbitrary DataSources without having to know specific details about their implementations, by using JNDI. A specific DataSource or XADataSource can be created and registered with an appropriate JNDI implementation, and the application, or JDBC driver, can later bind to and use it. Since JNDI only allows the application to see the DataSource or XADataSource as an instance of the interface (e.g., javax.sql.XADataSource) rather than as an instance of the implementation class (e.g., com.mydb.myXADataSource), the application is not tied at build-time to only use a specific implementation.

For the TransactionalDriver class to use a JNDI-registered XADataSource, you need to create the XADataSource instance and store it in an appropriate JNDI implementation. Details of how to do this can be found in the JDBC tutorial available at the Java web site.
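
As an illustration, the following sketch binds a PostgreSQL XADataSource under the JNDI name jdbc/testDB and then obtains a transactional connection through the Narayana driver; the data-source class, JNDI name, and connection details are assumptions for your environment.

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;
import javax.naming.InitialContext;

import org.postgresql.xa.PGXADataSource;

import com.arjuna.ats.jdbc.TransactionalDriver;

public class JndiBindExample
{
    public static Connection getTransactionalConnection () throws Exception
    {
        // 1. Create the XADataSource and register it with JNDI.
        PGXADataSource xads = new PGXADataSource();
        xads.setServerName("localhost");
        xads.setDatabaseName("testdb");
        new InitialContext().rebind("jdbc/testDB", xads);

        // 2. Ask the Narayana TransactionalDriver for a connection, naming the
        //    JNDI entry in the URL and passing credentials as properties.
        Properties props = new Properties();
        props.put(TransactionalDriver.userName, "user");
        props.put(TransactionalDriver.password, "password");

        return DriverManager.getConnection("jdbc:arjuna:jdbc/testDB", props);
    }
}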


Once the connection is established, all operations on the connection are monitored by Narayana. You do not need to use the transactional connection within transactions. If a transaction is not present when the connection is used, then operations are performed directly on the database.

Important

JDBC does not support subtransactions.

You can use transaction timeouts to automatically terminate transactions if a connection is not terminated within an appropriate period.
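
For example, using the JTA interface (a sketch; the 30-second value is arbitrary):

jakarta.transaction.UserTransaction ut =
        com.arjuna.ats.jta.UserTransaction.userTransaction();

ut.setTransactionTimeout(30);   // seconds; applies to transactions begun after this call
ut.begin();
// ... work on the transactional connection ...
ut.commit();                    // rolled back automatically if the timeout fires first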

You can use Narayana connections within multiple transactions simultaneously. An example would be different threads, with different notions of the current transaction. Narayana does connection pooling for each transaction within the JDBC connection. Although multiple threads may use the same instance of the JDBC connection, internally there may be a separate connection for each transaction. With the exception of method close , all operations performed on the connection at the application level are only performed on this transaction-specific connection.

Narayana automatically registers the JDBC driver connection with the transaction via an appropriate resource. When the transaction terminates, this resource either commits or rolls back any changes made to the underlying database via appropriate calls on the JDBC driver.

Once created, the driver and any connection can be used in the same way as any other JDBC driver or connection.


Example 2.7. JDBC example

This simplified example assumes that you are using the transactional JDBC driver provided with Narayana. For details about how to configure and use this driver, see the previous chapter.

public class JDBCTest
{
    public static void main (String[] args)
    {
        /*
         * The database url, and the connections conn and conn2, are assumed to be
         * created by environment-specific setup code, as described in the comment below.
         */
        String url = null;  // replace with your database URL

        Connection conn = null;
        Connection conn2 = null;
        Statement stmt = null;        // non-tx statement
        Statement stmtx = null;  // will be a tx-statement
        Properties dbProperties = new Properties();

        try
            {
                System.out.println("\nCreating connection to database: "+url);

                /*
                 * Create conn and conn2 so that they are bound to the JBossTS
                 * transactional JDBC driver. The details of how to do this will
                 * depend on your environment, the database you wish to use and
                 * whether or not you want to use the Direct or JNDI approach. See
                 * the appropriate chapter in the JTA Programmers Guide.
                 */

                stmt = conn.createStatement();  // non-tx statement

                try
                    {
                        stmt.executeUpdate("DROP TABLE test_table");
                        stmt.executeUpdate("DROP TABLE test_table2");
                    }
                catch (Exception e)
                    {
                        // assume not in database.
                    }

                try
                    {
                        stmt.executeUpdate("CREATE TABLE test_table (a INTEGER,b INTEGER)");
                        stmt.executeUpdate("CREATE TABLE test_table2 (a INTEGER,b INTEGER)");
                    }
                catch (Exception e)
                    {
                    }

                try
                    {
                        System.out.println("Starting top-level transaction.");

                        com.arjuna.ats.jta.UserTransaction.userTransaction().begin();

                        stmtx = conn.createStatement(); // will be a tx-statement

                        System.out.println("\nAdding entries to table 1.");

                        stmtx.executeUpdate("INSERT INTO test_table (a, b) VALUES (1,2)");

                        ResultSet res1 = null;

                        System.out.println("\nInspecting table 1.");

                        res1 = stmtx.executeQuery("SELECT * FROM test_table");
                        while (res1.next())
                            {
                                System.out.println("Column 1: "+res1.getInt(1));
                                System.out.println("Column 2: "+res1.getInt(2));
                            }

                        System.out.println("\nAdding entries to table 2.");

                        stmtx.executeUpdate("INSERT INTO test_table2 (a, b) VALUES (3,4)");
                        res1 = stmtx.executeQuery("SELECT * FROM test_table2");
                        System.out.println("\nInspecting table 2.");

                        while (res1.next())
                            {
                                System.out.println("Column 1: "+res1.getInt(1));
                                System.out.println("Column 2: "+res1.getInt(2));
                            }
                        System.out.print("\nNow attempting to rollback changes.");
                        com.arjuna.ats.jta.UserTransaction.userTransaction().rollback();

                        com.arjuna.ats.jta.UserTransaction.userTransaction().begin();
                        stmtx = conn.createStatement();
                        ResultSet res2 = null;

                        System.out.println("\nNow checking state of table 1.");

                        res2 = stmtx.executeQuery("SELECT * FROM test_table");
                        while (res2.next())
                            {
                                System.out.println("Column 1: "+res2.getInt(1));
                                System.out.println("Column 2: "+res2.getInt(2));
                            }

                        System.out.println("\nNow checking state of table 2.");

                        stmtx = conn.createStatement();
                        res2 = stmtx.executeQuery("SELECT * FROM test_table2");
                        while (res2.next())
                            {
                                System.out.println("Column 1: "+res2.getInt(1));
                                System.out.println("Column 2: "+res2.getInt(2));
                            }

                        com.arjuna.ats.jta.UserTransaction.userTransaction().commit();
                    }
                catch (Exception ex)
                    {
                        ex.printStackTrace();
                        System.exit(0);
                    }
            }
        catch (Exception sysEx)
            {
                sysEx.printStackTrace();
                System.exit(0);
            }
    }
}
This class implements the XAResourceRecovery interface for XAResources. The parameter supplied in setParameters can contain arbitrary information necessary to initialize the class once created. In this example, it contains the name of the property file in which the database connection information is specified, as well as the number of connections that this file contains information on. Each item is separated by a semicolon.

This is only a small example of the sorts of things an XAResourceRecovery implementer could do. This implementation uses a property file that is assumed to contain sufficient information to recreate connections used during the normal run of an application so that recovery can be performed on them. Typically, user-names and passwords should never be presented in raw text on a production system.


Some error-handling code is missing from this example, to make it more readable.

Example 2.9. Failure recovery example with BasicXARecovery

/*
 * Some XAResourceRecovery implementations will do their startup work here,
 * and then do little or nothing in setDetails. Since this one needs to know
 * dynamic class name, the constructor does nothing.
 */

public BasicXARecovery () throws SQLException
{
    numberOfConnections = 1;
    connectionIndex = 0;
    props = null;
}

/**
 * The recovery module will have chopped off this class name already. The
 * parameter should specify a property file from which the url, user name,
 * password, etc. can be read.
 * 
 * @message com.arjuna.ats.internal.jdbc.recovery.initexp An exception
 *          occurred during initialisation.
 */

public boolean initialise (String parameter) throws SQLException
{
    if (parameter == null) 
        return true;

    int breakPosition = parameter.indexOf(BREAKCHARACTER);
    String fileName = parameter;

    if (breakPosition != -1)
        {
            fileName = parameter.substring(0, breakPosition); // everything before the delimiter

            try
                {
                    numberOfConnections = Integer.parseInt(parameter
                                                           .substring(breakPosition + 1));
                }
            catch (NumberFormatException e)
                {
                    return false;
                }
        }

    try
        {
            String uri = com.arjuna.common.util.FileLocator
                .locateFile(fileName);
            jdbcPropertyManager.propertyManager.load(XMLFilePlugin.class
                                                     .getName(), uri);

            props = jdbcPropertyManager.propertyManager.getProperties();
        }
    catch (Exception e)
        {
            return false;
        }

    return true;
}

/**
 * @message com.arjuna.ats.internal.jdbc.recovery.xarec {0} could not find
 *          information for connection!
 */

public synchronized XAResource getXAResource () throws SQLException
{
    JDBC2RecoveryConnection conn = null;

    if (hasMoreResources())
        {
            connectionIndex++;

            conn = getStandardConnection();

            if (conn == null) conn = getJNDIConnection();
        }

    return conn.recoveryConnection().getConnection().getXAResource();
}

public synchronized boolean hasMoreResources ()
{
    if (connectionIndex == numberOfConnections) 
        return false;
    else
        return true;
}

private final JDBC2RecoveryConnection getStandardConnection ()
    throws SQLException
{
    String number = "" + connectionIndex;
    String url = dbTag + number + urlTag;
    String password = dbTag + number + passwordTag;
    String user = dbTag + number + userTag;
    String dynamicClass = dbTag + number + dynamicClassTag;

    Properties dbProperties = new Properties();

    String theUser = props.getProperty(user);
    String thePassword = props.getProperty(password);

    if (theUser != null)
        {
            dbProperties.put(TransactionalDriver.userName, theUser);
            dbProperties.put(TransactionalDriver.password, thePassword);

            String dc = props.getProperty(dynamicClass);

            if (dc != null)
                dbProperties.put(TransactionalDriver.dynamicClass, dc);

            return new JDBC2RecoveryConnection(url, dbProperties);
        }
    else
        return null;
}

private final JDBC2RecoveryConnection getJNDIConnection ()
    throws SQLException
{
    String number = "" + connectionIndex;
    String url = dbTag + jndiTag + number + urlTag;
    String password = dbTag + jndiTag + number + passwordTag;
    String user = dbTag + jndiTag + number + userTag;

    Properties dbProperties = new Properties();

    String theUser = props.getProperty(user);
    String thePassword = props.getProperty(password);

    if (theUser != null)
        {
            dbProperties.put(TransactionalDriver.userName, theUser);
            dbProperties.put(TransactionalDriver.password, thePassword);

            return new JDBC2RecoveryConnection(url, dbProperties);
        }
    else
        return null;
}

private int numberOfConnections;
private int connectionIndex;
private Properties props;
private static final String dbTag = "DB_";
private static final String urlTag = "_DatabaseURL";
private static final String passwordTag = "_DatabasePassword";
private static final String userTag = "_DatabaseUser";
private static final String dynamicClassTag = "_DatabaseDynamicClass";
private static final String jndiTag = "JNDI_";

/*
 * Example:
 * 
 * DB2_DatabaseURL=jdbc\:arjuna\:sequelink\://qa02\:20001
 * DB2_DatabaseUser=tester2 DB2_DatabasePassword=tester
 * DB2_DatabaseDynamicClass=com.arjuna.ats.internal.jdbc.drivers.sequelink_5_1
 * 
 * DB_JNDI_DatabaseURL=jdbc\:arjuna\:jndi DB_JNDI_DatabaseUser=tester1
 * DB_JNDI_DatabasePassword=tester DB_JNDI_DatabaseName=empay
 * DB_JNDI_Host=qa02 DB_JNDI_Port=20000
 */
private static final char BREAKCHARACTER = ';'; // delimiter for parameters

You can use the class com.arjuna.ats.internal.jdbc.recovery.JDBC2RecoveryConnection to create a new connection to the database using the same parameters used to create the initial connection.


WildFly Application Server is discussed here. Refer to the documentation for your application server for differences.

Procedure 2.3. Installing Services in Linux / UNIX

  1. Log into the system with root privileges.

    The installer needs these privileges to create files in /etc .

  2. Change to the JBOSS_HOME/services/installer directory.

    JBOSS_HOME refers to the directory where you extracted Narayana.

  3. Set the JAVA_HOME variable, if necessary.

    Set the JAVA_HOME variable to the base directory of the JVM the service will use. The base directory is the directory above bin/java .

    1. Bash: export JAVA_HOME="/opt/java"

    2. CSH: setenv JAVA_HOME="/opt/java"

  4. Run the installer script.

    ./install_service.sh

  5. The start-up and shut-down scripts are installed.

    Information similar to the output below is displayed.

         Adding $JAVA_HOME (/opt/java) to $PATH in
         /opt/arjuna/ats-3.2/services/bin/solaris/recoverymanagerservice.sh
         Adding $JAVA_HOME (/opt/java) to $PATH in
         /opt/arjuna/ats-3.2/services/bin/solaris/transactionserverservice.sh
         Installing shutdown scripts into /etc/rcS.d:
         K01recoverymanagerservice
         K00transactionserverservice
         Installing shutdown scripts into /etc/rc0.d:
         K01recoverymanagerservice
         K00transactionserverservice
         Installing shutdown scripts into /etc/rc1.d:
         K01recoverymanagerservice
         K00transactionserverservice
         Installing shutdown scripts into /etc/rc2.d:
         K01recoverymanagerservice
         K00transactionserverservice
         Installing startup scripts into /etc/rc3.d:
         S98recoverymanagerservice
         S99transactionserverservice
       

    The start-up and shut-down scripts are installed for each run-level. Depending on your specific operating system, you may need to explicitly enable the services for automatic start-up.

Narayana has been designed to be highly configurable at runtime through the use of various property attributes. Although these attributes can be provided at runtime on the command line, it may be more convenient to specify them through a single properties file or via setter methods on the beans. At runtime, Narayana looks for the file jbossts-properties.xml , in a specific search order.

  1. A location specified by a system property , allowing the normal search path to be overridden.

  2. The directory from which the application was executed.

  3. The home directory of the user that launched Narayana.

  4. java.home

  5. The CLASSPATH , which normally includes the installation's etc/ directory.

  6. A default set of properties embedded in the JAR file.

Where properties are defined in both the system properties by using the -D switch, and in the properties file, the value from the system property takes precedence. This facilitates overriding individual properties easily on the command line.

The properties file uses the java.util.Properties XML format, for example:


    
<entry key="CoordinatorEnvironmentBean.asyncCommit">NO</entry>
<entyr key="ObjectStoreEnvironmentBean.objectStoreDir">/var/ObjectStore</entry>
     
  

You can override the name of the properties file at runtime by specifying a new file using the com.arjuna.ats.arjuna.common.propertiesFile attribute variable.

Note

Unlike earlier releases, there is no longer one properties file name per module. This properties file name key is now global for all components in the JVM.

The Java Transaction API consists of three elements: a high-level application transaction demarcation interface, a high-level transaction manager interface intended for application servers, and a standard Java mapping of the X/Open XA protocol intended for transactional resource managers. All of the JTA classes and interfaces occur within the jakarta.transaction package, and the corresponding Narayana implementations within the com.arjuna.ats.jta package.
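
For example, a stand-alone application demarcates a transaction through the local JTA implementation like this (a sketch), while an application server would typically drive the TransactionManager interface instead:

jakarta.transaction.UserTransaction ut =
        com.arjuna.ats.jta.UserTransaction.userTransaction();

ut.begin();
// ... transactional work ...
ut.commit();    // or ut.rollback()

// the transaction manager interface, intended for application servers:
jakarta.transaction.TransactionManager tm =
        com.arjuna.ats.jta.TransactionManager.transactionManager();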

JTS supports the construction of both local and distributed transactional applications which access databases using the JDBC APIs. The JDBC support provides two-phase commit of transactions, based on the X/Open XA standard. The JDBC support is found in the com.arjuna.ats.jdbc package.

The ArjunaJTS approach to incorporating JDBC connections within transactions is to provide transactional JDBC drivers through which all interactions occur. These drivers intercept all invocations and ensure that they are registered with, and driven by, appropriate transactions. (There is a single type of transactional driver through which any JDBC driver can be driven. This driver is com.arjuna.ats.jdbc.TransactionalDriver, which implements the java.sql.Driver interface.)

Once the connection has been established (for example, using the java.sql.DriverManager.getConnection method), all operations on the connection will be monitored by Narayana. Once created, the driver and any connection can be used in the same way as any other JDBC driver or connection.

Narayana connections can be used within multiple different transactions simultaneously, i.e., different threads, with different notions of the current transaction, may use the same JDBC connection. Narayana does connection pooling for each transaction within the JDBC connection. So, although multiple threads may use the same instance of the JDBC connection, internally this may be using a different connection instance per transaction. With the exception of close, all operations performed on the connection at the application level will only be performed on this transaction-specific connection.

Narayana will automatically register the JDBC driver connection with the transaction via an appropriate resource. When the transaction terminates, this resource will be responsible for either committing or rolling back any changes made to the underlying database via appropriate calls on the JDBC driver.

Since the release of 4.1, the Web Services Transaction product has been merged into Narayana. Narayana is thus a single product that is compliant with all of the major distributed transaction standards and specifications.

Knowledge of Web Services is not required to administer a Narayana installation that only uses the CORBA/J2EE component, nor is knowledge of CORBA required to use the Web Services component. Thus, administrative tasks are separated when they touch only one component or the other.

Apart from ensuring that the run-time system is executing normally, there is little continuous administration needed for the Narayana software. Refer to Important Points for Administrators for some specific concerns.

Important Points for Administrators

  • The present implementation of the Narayana system provides no security or protection for data. The objects stored in the Narayana object store are (typically) owned by the user who ran the application that created them. The Object Store and Object Manager facilities make no attempt to enforce even the limited form of protection that Unix/Windows provides. There is no checking of user or group IDs on access to objects for either reading or writing.

  • Persistent objects created in the Object Store never go away unless the StateManager.destroy method is invoked on the object or some application program explicitly deletes them. This means that the Object Store gradually accumulates garbage (especially during application development and testing phases). At present we have no automated garbage collection facility. Further, we have not addressed the problem of dangling references. That is, a persistent object, A, may have stored a Uid for another persistent object, B, in its passive representation on disk. There is nothing to prevent an application from deleting B even though A still contains a reference to it. When A is next activated and attempts to access B, a run-time error will occur.

  • There is presently no support for version control of objects or database reconfiguration in the event of class structure changes. This is a complex research area that we have not addressed. At present, if you change the definition of a class of persistent objects, you are entirely responsible for ensuring that existing instances of the object in the Object Store are converted to the new representation. The Narayana software can neither detect nor correct references to old object state by new operation versions or vice versa.

  • Object store management is critically important to the transaction service.

By default the transaction manager starts up in an active state, such that new transactions can be created immediately. If you wish to have more control over this, set the CoordinatorEnvironmentBean.startDisabled configuration option to YES, in which case no transactions can be created until the transaction manager is enabled via a call to TxControl.enable .

It is possible to stop the creation of new transactions at any time by calling method TxControl.disable . Transactions that are currently executing will not be affected. By default recovery will be allowed to continue and the transaction system will still be available to manage recovery requests from other instances in a distributed environment. (See the Failure Recovery Guide for further details). However, if you wish to disable recovery as well as remove any resources it maintains, then you can pass true to method TxControl.disable ; the default is to use false .

If you wish to shut the system down completely then it may also be necessary to terminate the background transaction reaper (see the Programmers Guide for information about what the reaper does.) In order to do this you may want to first prevent the creation of new transactions (if you are not creating transactions with timeouts then this step is not necessary) using method TxControl.disable . Then you should call method TransactionReaper.terminate . This method takes a Boolean parameter: if true then the method will wait for the normal timeout periods associated with any transactions to expire before terminating the transactions; if false then transactions will be forced to terminate (rollback or have their outcome set such that they can only ever rollback) immediately.

Note

If you intend to restart the recovery manager later, after having terminated it, then you MUST use the TransactionReaper.terminate method with asynchronous behavior set to false .

The run-time support consists of run-time packages and the OTS transaction manager server. By default, Narayana does not use a separate transaction manager server. Instead, transaction managers are co-located with each application process, to improve performance and application fault-tolerance by reducing the application's dependency on other services.

When running applications which require a separate transaction manager, set the JTSEnvironmentBean.transactionManager environment variable to value YES . The system locates the transaction manager server in a manner specific to the ORB being used. This method may be any of:

  • Being registered with a name server.

  • Being added to the ORB’s initial references.

  • Via a specific references file.

  • By the ORB’s specific location mechanism (if applicable).

You override the default registration mechanism by using the OrbPortabilityEnvironmentBean.resolveService environment variable, which takes the following values:


The failure recovery subsystem of Narayana will ensure that results of a transaction are applied consistently to all resources affected by the transaction, even if any of the application processes or the machine hosting them crash or lose network connectivity. In the case of machine (system) crash or network failure, the recovery will not take place until the system or network are restored, but the original application does not need to be restarted. Recovery responsibility is delegated to Section 2.1.5.1, “The Recovery Manager” . Recovery after failure requires that information about the transaction and the resources involved survives the failure and is accessible afterward: this information is held in the ActionStore , which is part of the ObjectStore .

Warning

If the ObjectStore is destroyed or modified, recovery may not be possible.

Until the recovery procedures are complete, resources affected by a transaction that was in progress at the time of the failure may be inaccessible. For database resources, this may be reported as tables or rows held by “in-doubt transactions”. For TransactionalObjects for Java resources, an attempt to activate the Transactional Object (as when trying to get a lock) will fail.

The RecoveryManager scans the ObjectStore and other locations of information, looking for transactions and resources that require, or may require recovery. The scans and recovery processing are performed by recovery modules. These recovery modules are instances of classes that implement the com.arjuna.ats.arjuna.recovery.RecoveryModule interface . Each module has responsibility for a particular category of transaction or resource. The set of recovery modules used is dynamically loaded, using properties found in the RecoveryManager property file.

The interface has two methods: periodicWorkFirstPass and periodicWorkSecondPass . At an interval defined by property com.arjuna.ats.arjuna.recovery.periodicRecoveryPeriod , the RecoveryManager calls the first pass method on each registered module, then waits for a brief period, defined by property com.arjuna.ats.arjuna.recovery.recoveryBackoffPeriod . Next, it calls the second pass of each module. Typically, in the first pass, the module scans the relevant part of the ObjectStore to find transactions or resources that are in-doubt. An in-doubt transaction may be part of the way through the commitment process, for instance. On the second pass, if any of the same items are still in-doubt, the original application process may have crashed, and the item is a candidate for recovery.

An attempt by the RecoveryManager to recover a transaction that is still progressing in the original process is likely to break the consistency. Accordingly, the recovery modules use a mechanism, implemented in the com.arjuna.ats.arjuna.recovery.TransactionStatusManager package, to check to see if the original process is still alive, and if the transaction is still in progress. The RecoveryManager only proceeds with recovery if the original process has gone, or, if still alive, the transaction is completed. If a server process or machine crashes, but the transaction-initiating process survives, the transaction completes, usually generating a warning. Recovery of such a transaction is the responsibility of the RecoveryManager.

It is clearly important to set the interval periods appropriately. The total iteration time will be the sum of the periodicRecoveryPeriod and recoveryBackoffPeriod properties, and the length of time it takes to scan the stores and to attempt recovery of any in-doubt transactions found, for all the recovery modules. The recovery attempt time may include connection timeouts while trying to communicate with processes or machines that have crashed or are inaccessible. There are mechanisms in the recovery system to avoid trying to recover the same transaction indefinitely. The total iteration time affects how long a resource will remain inaccessible after a failure, so periodicRecoveryPeriod should be set accordingly. Its default is 120 seconds. The recoveryBackoffPeriod can be comparatively short, and defaults to 10 seconds. Its purpose is mainly to reduce the number of transactions that are candidates for recovery and which thus require a call to the original process to see if they are still in progress.

Note

In previous versions of Narayana , there was no contact mechanism, and the back-off period needed to be long enough to avoid catching transactions in flight at all. From 3.0, there is no such risk.

Two recovery modules, implementations of the com.arjuna.ats.arjuna.recovery.RecoveryModule interface, are supplied with Narayana. These modules support various aspects of transaction recovery, including JDBC recovery. It is possible for advanced users to create their own recovery modules and register them with the Recovery Manager. The recovery modules are registered with the RecoveryManager using RecoveryEnvironmentBean.recoveryModuleClassNames . They are invoked on each pass of the periodic recovery in the sort-order of the property names, so the ordering is predictable, although a failure in an application process might occur while a periodic recovery pass is in progress. The default Recovery Extension settings are:

<entry key="RecoveryEnvironmentBean.recoveryModuleClassNames">
  com.arjuna.ats.internal.arjuna.recovery.AtomicActionRecoveryModule
  com.arjuna.ats.internal.txoj.recovery.TORecoveryModule
  com.arjuna.ats.internal.jts.recovery.transactions.TopLevelTransactionRecoveryModule
  com.arjuna.ats.internal.jts.recovery.transactions.ServerTransactionRecoveryModule
  com.arjuna.ats.internal.jta.recovery.jts.XARecoveryModule
</entry>

The operation of the recovery subsystem causes some entries to be made in the ObjectStore that are not removed in normal progress. The RecoveryManager has a facility for scanning for these and removing items that are very old. Scans and removals are performed by implementations of the com.arjuna.ats.arjuna.recovery.ExpiryScanner interface. These implementations are loaded by giving the class names as the value of the property RecoveryEnvironmentBean.expiryScannerClassNames . The RecoveryManager calls the scan() method on each loaded Expiry Scanner implementation at an interval determined by the property RecoveryEnvironmentBean.expiryScanInterval . This value is given in hours, and defaults to 12 hours. An expiryScanInterval value of zero suppresses any expiry scanning. If the value supplied is positive, the first scan is performed when the RecoveryManager starts. If the value is negative, the first scan is delayed until after the first interval, using the absolute value.
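For example, the scan interval can be configured using the same <entry> format shown elsewhere in this chapter; the value below simply restates the default of 12 hours:

<entry key="RecoveryEnvironmentBean.expiryScanInterval">12</entry>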

The kinds of item that are scanned for expiry are:

TransactionStatusManager items

One TransactionStatusManager item is created by every application process that uses Narayana. It contains the information that allows the RecoveryManager to determine if the process that initiated the transaction is still alive, and what its status is. The expiry time for these items is set by the property com.arjuna.ats.arjuna.recovery.transactionStatusManagerExpiryTime , expressed in hours. The default is 12, and 0 (zero) means never to expire. The expiry time should be greater than the lifetime of any single process using Narayana.

The Expiry Scanner properties for these are:

 <entry key="RecoveryEnvironmentBean.expiryScannerClassNames">
    com.arjuna.ats.internal.arjuna.recovery.ExpiredTransactionStatusManagerScanner
</entry>

For JacORB to function correctly it needs a valid jacorb.properties or .jacorb_properties file in one of the following places, searched in this order:

  1. The classpath

  2. The home directory of the user running the Service. The home directory is retrieved using System.getProperty("user.home");

  3. The current directory

  4. The lib/ directory of the JDK used to run your application. This is retrieved using System.getProperty("java.home");

Note

A template jacorb.properties file is located in the JacORB installation directory.

Within the JacORB properties file there are two important properties which must be tailored to suit your application.

  • jacorb.poa.thread_pool_max

  • jacorb.poa.thread_pool_min

These properties specify the minimum and maximum number of request processing threads that JacORB uses in its thread pool. If no threads are available, a request may block until a thread becomes available. For more information on configuring JacORB, refer to the JacORB documentation.

Important

JacORB includes its own implementation of the classes defined in the CosTransactions.idl file. Unfortunately, these are incompatible with the version shipped with Narayana. Therefore, the Narayana jar files must appear in the CLASSPATH before any JacORB jars.

When running the recovery manager, it should always use the same well-known port for each machine on which it runs. Do not use the OAPort property provided by JacORB unless the recovery manager has its own jacorb.properties file or the property is provided on the command line when starting the recovery manager. If the recovery manager and other Narayana components share the same jacorb.properties file, use the JTSEnvironmentBean.recoveryManagerPort and JTSEnvironmentBean.recoveryManagerAddress properties.
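For example, a fixed port and address for the recovery manager might be configured with entries such as the following; the port number and host name shown are illustrative only:

<entry key="JTSEnvironmentBean.recoveryManagerPort">4712</entry>
<entry key="JTSEnvironmentBean.recoveryManagerAddress">recovery.example.com</entry>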

A transaction is a unit of work that encapsulates multiple database actions such that either all the encapsulated actions fail or all succeed.

Transactions ensure data integrity when an application interacts with multiple datasources.

Practical Example.  If you subscribe to a newspaper using a credit card, you are using a transactional system. Multiple systems are involved, and each of the systems needs the ability to roll back its work, and cause the entire transaction to roll back if necessary. For instance, if the newspaper's subscription system goes offline halfway through your transaction, you don't want your credit card to be charged. If the credit card is over its limit, the newspaper doesn't want your subscription to go through. In either of these cases, the entire transaction should fail if any part of it fails. Neither you as the customer, nor the newspaper, nor the credit card processor, wants an unpredictable (indeterminate) outcome to the transaction.

This ability to roll back an operation if any part of it fails is what Narayana is all about. This guide assists you in writing transactional applications to protect your data.

"Transactions" in this guide refers to atomic transactions, and embody the "all-or-nothing" concept outlined above. Transactions are used to guarantee the consistency of data in the presence of failures. Transactions fulfill the requirements of ACID: Atomicity, Consistency, Isolation, Durability.

ACID Properties

Atomicity

Either the transaction completes successfully (commits), or it fails (aborts) and all of its effects are undone (rolled back).

Consistency

Transactions produce consistent results and preserve application specific invariants.

Isolation

Intermediate states produced while a transaction is executing are not visible to others. Furthermore transactions appear to execute serially, even if they are actually executed concurrently.

Durability

The effects of a committed transaction are never lost (except by a catastrophic failure).

A transaction can be terminated in two ways: committed or aborted (rolled back). When a transaction is committed, all changes made within it are made durable (forced on to stable storage, e.g., disk). When a transaction is aborted, all of the changes are undone. Atomic actions can also be nested; the effects of a nested action are provisional upon the commit/abort of the outermost (top-level) atomic action.

A two-phase commit protocol guarantees that all of the transaction participants either commit or abort any changes made. Figure 3.1, “Two-Phase Commit” illustrates the main aspects of the commit protocol.


Given a system that provides transactions for certain operations, you can combine them to form another operation, which is also required to be a transaction. The resulting transaction’s effects are a combination of the effects of its constituent transactions. This paradigm creates the concept of nested subtransactions, and the resulting combined transaction is called the enclosing transaction. The enclosing transaction is sometimes referred to as the parent of a nested (or child) transaction. It can also be viewed as a hierarchical relationship, with a top-level transaction consisting of several subordinate transactions.

An important difference exists between nested and top-level transactions.

The effect of a nested transaction is provisional upon the commit/roll back of its enclosing transactions. The effects are recovered if the enclosing transaction aborts, even if the nested transaction has committed.

Subtransactions are a useful mechanism for two reasons:

fault-isolation

If a subtransaction rolls back, perhaps because an object it is using fails, the enclosing transaction does not need to roll back.

modularity

If a transaction is already associated with a call when a new transaction begins, the new transaction is nested within it. Therefore, if you know that an object requires transactions, you can use them within the object. If the object’s methods are invoked without a client transaction, then the object’s transactions are top-level. Otherwise, they are nested within the scope of the client's transactions. Likewise, a client does not need to know whether an object is transactional. It can begin its own transaction.

The CORBA architecture, as defined by the OMG, is a standard which promotes the construction of interoperable applications that are based upon the concepts of distributed objects. The architecture principally contains the following components:

Object Request Broker (ORB)

Enables objects to transparently send and receive requests in a distributed, heterogeneous environment. This component is the core of the OMG reference model.

Object Services

A collection of services that support functions for using and implementing objects. Such services are necessary for the construction of any distributed application. The Object Transaction Service (OTS) is the most relevant to Narayana.

Common Facilities

Other useful services that applications may need, but which are not considered to be fundamental. Desktop management and help facilities fit this category.

The CORBA architecture allows both implementation and integration of a wide variety of object systems. In particular, applications are independent of the location of an object and the language in which an object is implemented, unless the interface the object explicitly supports reveals such details. As defined in the OMG CORBA Services documentation, object services are a collection of services (interfaces and objects) that support the basic functions for using and implementing objects. These services are necessary to construct distributed applications, and are always independent of an application domain. The standards specify several core services including naming, event management, persistence, concurrency control and transactions.

Note

The OTS specification allows, but does not require, nested transactions. Narayana is a fully compliant implementation of the OTS version 1.1 draft 5, and supports nested transactions.

The transaction service provides interfaces that allow multiple distributed objects to cooperate in a transaction, committing or rolling back their changes as a group. However, the OTS does not require all objects to have transactional behavior. An object's support for transactions can range from none at all, through support for some operations only, to full support. Transaction information may be propagated between client and server explicitly, or implicitly. You have fine-grained control over an object's support of transactions. If your object supports partial or complete transactional behavior, it needs an interface derived from the TransactionalObject interface.

The Transaction Service specification also distinguishes between recoverable objects and transactional objects. Recoverable objects are those that contain the actual state that may be changed by a transaction and must therefore be informed when the transaction commits or aborts to ensure the consistency of the state changes. This is achieved by registering appropriate objects that support the Resource interface (or the derived SubtransactionAwareResource interface) with the current transaction. Recoverable objects are also by definition transactional objects.

In contrast, a simple transactional object does not necessarily need to be recoverable if its state is actually implemented using other recoverable objects. A simple transactional object does not need to participate in the commit protocol used to determine the outcome of the transaction, since it maintains no state information of its own.

The OTS is a protocol engine that guarantees obedience to transactional behavior. It does not directly support all of the transaction properties, but relies on some cooperating services:

Persistence/Recovery Service

Supports properties of atomicity and durability.

Concurrency Control Service

Supports the isolation properties.

You are responsible for using the appropriate services to ensure that transactional objects have the necessary ACID properties.

Narayana is based upon the original Arjuna system developed at the University of Newcastle between 1986 and 1995. Arjuna predates the OTS specification and includes many features not found in the OTS. Narayana is a superset of the OTS. Applications written using the standard OTS interfaces are portable across OTS implementations.

Narayana features in terms of the OTS specification

  • full draft 5 compliance, with support for Synchronization objects and PropagationContexts.

  • support for subtransactions.

  • implicit context propagation where support from the ORB is available.

  • support for multi-threaded applications.

  • fully distributed transaction managers, i.e., there is no central transaction manager, and the creator of a top-level transaction is responsible for its termination. Separate transaction manager support is also available, however.

  • transaction interposition.

  • X/Open compliance, including checked transactions. This checking can optionally be disabled. Note: checked transactions are disabled by default, i.e., any thread can terminate a transaction.

  • JDBC support.

  • Full Jakarta Transactions support.

You can use Narayana at three different levels, which correspond to the sections in this chapter, and are each explored in their own chapters as well.

Because of differences in ORB implementations, Narayana uses a separate ORB Portability library which acts as an abstraction layer. Many of the examples used throughout this manual use this library. Refer to the ORB Portability Manual for more details.

The OTS does not provide any Resource implementations. You are responsible for implementing these interfaces. The interfaces defined within the OTS specification are too low-level for most application programmers. Therefore, Narayana includes Transactional Objects for Java (TXOJ), which makes use of the raw Common Object Services interfaces but provides a higher-level API for building transactional applications and frameworks. This API automates much of the activity concerned with participating in an OTS transaction, freeing you to concentrate on application development, rather than transactions.

The architecture of the system is shown in Figure 2. The API interacts with the concurrency control and persistence services, and automatically registers appropriate resources for transactional objects. These resources may also use the persistence and concurrency services.

Narayana exploits object-oriented techniques to provide you with a toolkit of Java classes which are inheritable by application classes, to obtain transactional properties. These classes form a hierarchy, illustrated in Figure 3.2, “Narayana class hierarchy”.


Your main responsibilities are specifying the scope of transactions and setting appropriate locks within objects. Narayana guarantees that transactional objects will be registered with, and be driven by, the appropriate transactions. Crash recovery mechanisms are invoked automatically in the event of failures. When using the provided interfaces, you do not need to create or register Resource objects or call services controlling persistence or recovery. If a transaction is nested, resources are automatically propagated to the transaction’s parent upon commit.

The design and implementation goal of Narayana was to provide a programming system for constructing fault-tolerant distributed applications. Three system properties were considered highly important:

Integration of Mechanisms

Fault-tolerant distributed systems require a variety of system functions for naming, locating and invoking operations upon objects, as well as for concurrency control, error detection and recovery from failures. These mechanisms are integrated in a way that is easy for you to use.

Flexibility

Mechanisms must be flexible, permitting implementation of application-specific enhancements, such as type-specific concurrency and recovery control, using system defaults.

Portability

You need to be able to run Narayana on any ORB.

Narayana is implemented in Java and extensively uses the type-inheritance facilities provided by the language to provide user-defined objects with characteristics such as persistence and recoverability.

The OTS specification is written with flexibility in mind, to cope with different application requirements for transactions. Narayana supports all optional parts of the OTS specification. In addition, if the specification allows functionality to be implemented in a variety of different ways, Narayana supports all possible implementations.

Table 3.2.  Narayana implementation of the OTS specification

OTS specification / Narayana default implementation

If the transaction service chooses to restrict the availability of the transaction context, then it should raise the Unavailable exception.

Narayana does not restrict the availability of the transaction context.

An implementation of the transaction service need not initialize the transaction context for every request.

Narayana only initializes the transaction context if the interface supported by the target object extends the TransactionalObject interface.

An implementation of the transaction service may restrict the ability for the Coordinator , Terminator , and Control objects to be transmitted or used in other execution environments to enable it to guarantee transaction integrity.

Narayana does not impose restrictions on the propagation of these objects.

The transaction service may restrict the termination of a transaction to the client that started it.

Narayana allows the termination of a transaction by any client that uses the Terminator interface. In addition, Narayana does not impose restrictions when clients use the Current interface.

A TransactionFactory is located using the FactoryFinder interface of the life-cycle service.

Narayana provides multiple ways in which the TransactionFactory can be located.

A transaction service implementation may use the Event Service to report heuristic decisions.

Narayana does not use the Event Service to report heuristic decisions.

An implementation of the transaction service does not need to support nested transactions.

Narayana supports nested transactions.

Synchronization objects must be called whenever the transaction commits.

Narayana allows Synchronizations to be called no matter what state the transaction terminates with.

A transaction service implementation is not required to support interposition.

Narayana supports various types of interposition.


Basic programming involves using the OTS interfaces provided in the CosTransactions module, which is specified in CosTransactions.idl . This chapter is based on the OTS Specification, and deals specifically with the aspects of OTS that are valuable for developing OTS applications using Narayana. Where relevant, each section describes implementation decisions and runtime choices available to you. These choices are also summarized at the end of this chapter. Subsequent chapters illustrate using these interfaces to construct transactional applications.

A client application program can manage a transaction using direct or indirect context management.

  • Indirect context management means that an application uses the pseudo-object Current , provided by the Transaction Service, to associate the transaction context with the application thread of control.

  • For direct context management , an application manipulates the Control object and the other objects associated with the transaction.

An object may require transactions to be either explicitly or implicitly propagated to its operations.

  • Explicit propagation means that an application propagates a transaction context by passing objects defined by the Transaction Service as explicit parameters. Typically the object is the PropagationContext structure.

  • Implicit propagation means that requests are implicitly associated with the client’s transaction, by sharing the client's transaction context. The context is transmitted to the objects without direct client intervention. Implicit propagation depends on indirect context management, since it propagates the transaction context associated with the Current pseudo-object. An object that supports implicit propagation should not receive any Transaction Service object as an explicit parameter.

A client may use one or both forms of context management, and may communicate with objects that use either method of transaction propagation. This results in four ways in which client applications may communicate with transactional objects:

Direct Context Management/Explicit Propagation

The client application directly accesses the Control object, and the other objects which describe the state of the transaction. To propagate the transaction to an object, the client must include the appropriate Transaction Service object as an explicit parameter of an operation. Typically, the object is the PropagationContext structure.

Indirect Context Management/Implicit Propagation

The client application uses operations on the Current pseudo-object to create and control its transactions. When it issues requests on transactional objects, the transaction context associated with the current thread is implicitly propagated to the object.

Indirect Context Management/Explicit Propagation

For an application using the implicit model to use explicit propagation, it can gain access to the Control using the get_control operation on the Current pseudo-object. It can then use a Transaction Service object as an explicit parameter to a transactional object; for efficiency reasons this should be the PropagationContext structure, obtained by calling get_txcontext on the appropriate Coordinator reference. This is explicit propagation.

Direct Context Management/Implicit Propagation

A client that accesses the Transaction Service objects directly can use the resume pseudo-object operation to set the implicit transaction context associated with its thread. This way, the client can invoke operations of an object that requires implicit propagation of the transaction context.

The main difference between direct and indirect context management is the effect on the invoking thread’s transaction context. Indirect context management causes the thread’s transaction context to be modified automatically by the OTS. For instance, if method begin is called, the thread’s notion of the current transaction is modified to the newly-created transaction. When the transaction is terminated, the transaction previously associated with the thread, if one existed, is restored as the thread’s context. This assumes that subtransactions are supported by the OTS implementation.

If you use direct management, no changes to the thread's transaction context are made by the OTS, leaving the responsibility to you.

The TransactionFactory interface allows the transaction originator to begin a top-level transaction. Subtransactions must be created using the begin method of Current , or the create_subtransaction method of the parent’s Coordinator . Operations on the factory and Coordinator to create new transactions use direct context management, and therefore do not modify the calling thread’s transaction context.

The create operation creates a new top-level transaction and returns its Control object, which you can use to manage or control participation in the new transaction. Method create takes a parameter that is an application-specific timeout value, in seconds. If the transaction does not complete before this timeout elapses, it is rolled back. If the parameter is 0 , no application-specific timeout is established.
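A minimal sketch of direct context management using these operations, written against the standard IDL-to-Java mapping of CosTransactions, is shown below. Obtaining the TransactionFactory reference is ORB-specific and assumed to have happened already; exception handling is reduced to a throws clause, and the class name is hypothetical.

import org.omg.CosTransactions.Control;
import org.omg.CosTransactions.Terminator;
import org.omg.CosTransactions.TransactionFactory;

public class DirectExample
{
    // 'factory' is assumed to have been resolved in an ORB-specific way,
    // for example through the name service.
    public static void doWork(TransactionFactory factory) throws Exception
    {
        Control control = factory.create(0); // 0 = no application-specific timeout

        // ... invoke transactional operations, passing 'control' (or its
        // PropagationContext) as an explicit parameter ...

        Terminator terminator = control.get_terminator();
        terminator.commit(true); // true = report heuristics
    }
}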

Note

Subtransactions do not have a timeout associated with them.

The Transaction Service implementation allows the TransactionFactory to be a separate server from the application, shared by transactional clients, which manages transactions on their behalf. However, the specification also allows the TransactionFactory to be implemented by an object within each transactional client. This is the default implementation used by Narayana, because it removes the need for a separate service to be available in order for transactional applications to execute, and therefore reduces a point of failure.

If your applications require a separate transaction manager, set the OTS_TRANSACTION_MANAGER environment variable to the value YES . The system locates the transaction manager server in a manner specific to the ORB being used. The server can be located in a number of ways.

  • Registration with a name server.

  • Addition to the ORB’s initial references, using a specific references file.

  • The ORB’s specific location mechanism, if applicable.

Transaction contexts are fundamental to the OTS architecture. Each thread is associated with a context in one of three ways.

Null

The thread has no associated transaction.

A transaction ID

The thread is associated with a transaction.

Contexts may be shared across multiple threads. In the presence of nested transactions, a context remembers the stack of transactions started within the environment, so that the context of the thread can be restored to the state before the nested transaction started, when the nested transaction ends. Threads most commonly use object Current to manipulate transactional information, which is represented by Control objects. Current is the broker between a transaction and Control objects.

Your application can manage transaction contexts either directly or indirectly. In the direct approach, the transaction originator issues a request to a TransactionFactory to begin a new top-level transaction. The factory returns a Control object that enables both a Terminator interface and a Coordinator interface. Terminator ends a transaction. Coordinator associates a thread with a transaction, or begins a nested transaction. You need to pass each interface as an explicit parameter in invocations of operations, because creating a transaction with them does not change a thread's current context. If you use the factory, and need to set the current context for a thread to the context which its control object returns, use the resume method of interface Current .


When the factory creates a transaction, you can specify a timeout value in seconds. If the transaction times out, it is subject to possible roll-back. Set the timeout to 0 to disable application-specific timeout.

The Current interface handles implicit context management. Implicit context management provides simplified transaction management functionality, and automatically creates nested transactions as required. Transactions created using Current do not alter a thread’s current transaction context.
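A corresponding sketch of indirect context management through Current follows, again using the standard IDL-to-Java mapping. How the Current reference is obtained is ORB- or Narayana-specific (see the discussion of get_current later in this chapter); exception handling is reduced to a throws clause and the class name is hypothetical.

import org.omg.CosTransactions.Current;

public class ImplicitExample
{
    // 'current' is assumed to have been obtained in an ORB- or
    // Narayana-specific way.
    public static void doWork(Current current) throws Exception
    {
        current.begin(); // becomes the calling thread's transaction context

        // ... invoke operations on transactional objects; the context is
        // propagated implicitly with each request ...

        current.commit(true); // true = report heuristics
    }
}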


Subtransactions are a useful mechanism for two reasons:

fault-tolerance

If a subtransaction rolls back, the enclosing transaction does not also need to roll back. This preserves as much as possible of the work done so far.

modularity

Indirect transaction management does not require special syntax for creating subtransactions. Begin a transaction, and if another transaction is associated with the calling thread, the new transaction is nested within the existing one. If you know that an object requires transactions, you can use them within the object. If the object's methods are invoked without a client transaction, the object's transaction is top-level. Otherwise, it is nested within the client's transaction. A client does not need to know whether an object is transactional.

The outermost transaction of the hierarchy formed by nested transactions is called the top-level transaction. The inner components are called subtransactions. Unlike top-level transactions, the commits of subtransactions depend upon the commit/rollback of the enclosing transactions. Resources acquired within a subtransaction should be inherited by parent transactions when the top-level transaction completes. If a subtransaction rolls back, it can release its resources and undo any changes to its inherited resources.

In the OTS, subtransactions behave differently from top-level transactions at commit time. Top-level transactions undergo a two-phase commit protocol, but nested transactions do not actually perform a commit protocol themselves. When a program commits a nested transaction, it only informs registered resources of its outcome. If a resource cannot commit, an exception is thrown, and the OTS implementation can ignore the exception or roll back the subtransaction. You cannot roll back a subtransaction if any resources have been informed that the transaction committed.

The OTS supports both implicit and explicit propagation of transactional behavior.

  • Implicit propagation means that an operation signature specifies no transactional behavior, and each invocation automatically sends the transaction context associated with the calling thread.

  • Explicit propagation means that applications must define their own mechanism for propagating transactions. This has the following features:

    • A client can control whether its transaction is propagated with any operation invocation.

    • A client can invoke operations on both transactional and non-transactional objects within a transaction.

Transaction context management and transaction propagation are different things that may be controlled independently of each other. Mixing of direct and indirect context management with implicit and explicit transaction propagation is supported. Using implicit propagation requires cooperation from the ORB. The client must send the current context associated with the thread with each operation invocation, and the server must extract it before calling the targeted operation.

If you need implicit context propagation, ensure that Narayana is correctly initialized before you create objects. Both client and server must agree to use implicit propagation. To use implicit context propagation, your ORB needs to support filters or interceptors, or the CosTSPortability interface.

Implicit context propagation

Property variable OTS_CONTEXT_PROP_MODE set to CONTEXT .

Interposition

Property variable OTS_CONTEXT_PROP_MODE set to INTERPOSITION .

Important

Interposition is required to use the Advanced API.


The next example rewrites the same program to use indirect context management and implicit propagation. This example is considerably simpler, because the application only needs to start and either commit or abort actions.


The last example illustrates the flexibility of OTS by using both direct and indirect context management in conjunction with explicit and implicit transaction propagation.


The Control interface allows a program to explicitly manage or propagate a transaction context. An object supporting the Control interface is associated with one specific transaction. The Control interface supports two operations: get_terminator and get_coordinator . get_terminator returns an instance of the Terminator interface. get_coordinator returns an instance of the Coordinator interface. Both of these methods throw the Unavailable exception if the Control cannot provide the requested object. The OTS implementation can restrict the ability to use the Terminator and Coordinator in other execution environments or threads. At a minimum, the creator must be able to use them.

Obtain the Control object for a transaction when it is created, either by using the TransactionFactory or by using the create_subtransaction method defined by the Coordinator interface. Obtain a Control for the transaction associated with the current thread using the get_control or suspend methods defined by the Current interface.

The Terminator interface supports commit and rollback operations. Typically, the transaction originator uses these operations. Each object supporting the Terminator interface is associated with a single transaction. Direct context management via the Terminator interface does not change the client thread’s notion of the current transaction.

The commit operation attempts to commit the transaction. To successfully commit, the transaction must not be marked rollback only , and all of its participants must agree to commit. Otherwise, the TRANSACTION_ROLLEDBACK exception is thrown. If the report_heuristics parameter is true , the Transaction Service reports inconsistent results using the HeuristicMixed and HeuristicHazard exceptions.

When a transaction is committed, the coordinator drives any registered Resources using their prepare or commit methods. These Resources are responsible for ensuring that any state changes to recoverable objects are made permanent, to guarantee the ACID properties.

When rollback is called, the registered Resources need to guarantee that all changes to recoverable objects made within the scope of the transaction, and its descendants, are undone. All resources locked by the transaction are made available to other transactions, as appropriate to the degree of isolation the resources enforce.

See Section 3.2.3.7.1, “Narayana specifics” for how long Terminator references remain valid after a transaction terminates.

When a transaction is committing, it must make certain state changes persistent, so that it can recover if a failure occurs, and continue to commit, or roll back. To guarantee ACID properties, flush these state changes to the persistence store implementation before the transaction proceeds to commit. Otherwise, the application may assume that the transaction has committed, when the state changes may still be in volatile storage, and may be lost by a subsequent hardware failure. By default, Narayana makes sure that such state changes are flushed. However, these flushes can impose a significant performance penalty on the application. To prevent transaction state flushes, set the TRANSACTION_SYNC variable to OFF . Obviously, do this at your own risk.

When a transaction commits, if only a single resource is registered, the transaction manager does not need to perform the two-phase protocol. A single-phase commit is possible, and the outcome of the transaction is determined by the resource. In a distributed environment, this optimization represents a significant performance improvement. As such, Narayana defaults to performing a single-phase commit in this situation. Override this behavior at runtime by setting the COMMIT_ONE_PHASE property variable to NO .

The Coordinator interface is returned by the get_coordinator method of the Control interface. It supports the operations resources need to participate in a transaction. These participants are usually either recoverable objects or agents of recoverable objects, such as subordinate coordinators. Each object supporting the Coordinator interface is associated with a single transaction. Direct context management via the Coordinator interface does not change the client thread’s notion of the current transaction. You can terminate a transaction directly, through the Terminator interface. In that case, trying to terminate the transaction a second time using Current causes an exception to be thrown for the second termination attempt.

The operations supported by the Coordinator interface of interest to application programmers are:

Table 3.4.  Operations supported by the Coordinator interface

get_status

get_parent_status

get_top_level_status

Return the status of the associated transaction. At any given time a transaction can have one of the following status values representing its progress:

StatusActive

The transaction is currently running, and has not been asked to prepare or marked for rollback.

StatusMarkedRollback

The transaction is marked for rollback.

StatusPrepared

The transaction has been prepared, which means that all subordinates have responded VoteCommit .

StatusCommitted

The transaction has committed. It is likely that heuristics exist. Otherwise, the transaction would have been destroyed and StatusNoTransaction returned.

StatusRolledBack

The transaction has rolled back. It is likely that heuristics exist. Otherwise, the transaction would have been destroyed and StatusNoTransaction returned.

StatusUnknown

The Transaction Service cannot determine the current status of the transaction. This is a transient condition, and a subsequent invocation should return a different status.

StatusNoTransaction

No transaction is currently associated with the target object. This occurs after a transaction completes.

StatusPreparing

The transaction is in the process of preparing and the final outcome is not known.

StatusCommitting

The transaction is in the process of committing.

StatusRollingBack

The transaction is in the process of rolling back.

is_same_transaction and others

You can use these operations for transaction comparison. Resources may use these various operations to guarantee that they are registered only once with a specific transaction.

hash_transaction

hash_top_level_tran

Returns a hash code for the specified transaction.

register_resource

Registers the specified Resource as a participant in the transaction. The Inactive exception is raised if the transaction is already prepared. The TRANSACTION_ROLLEDBACK exception is raised if the transaction is marked rollback only . If the Resource is a SubtransactionAwareResource and the transaction is a subtransaction, this operation registers the resource with this transaction and indirectly with the top-level transaction when the subtransaction’s ancestors commit. Otherwise, the resource is only registered with the current transaction. This operation returns a RecoveryCoordinator which this Resource can use during recovery. No ordering of registered Resources is implied by this operation. If A is registered after B , the OTS can operate on them in any order when the transaction terminates. Therefore, do not assume such an ordering exists in your implementation.

register_subtran_aware

Registers the specified subtransaction-aware resource with the current transaction, so that it knows when the subtransaction commits or rolls back. This method cannot register the resource as a participant in the top-level transaction. The NotSubtransaction exception is raised if the current transaction is not a subtransaction. As with register_resource , no ordering is implied by this operation.

register_synchronization

Registers the Synchronization object with the transaction so that it will be invoked before prepare and after the transaction completes. Synchronizations can only be associated with top-level transactions, and the SynchronizationsUnavailable exception is raised if you try to register a Synchronization with a subtransaction. As with register_resource , no ordering is implied by this operation.

rollback_only

Marks the transaction so that the only possible outcome is for it to roll back. The Inactive exception is raised if the transaction has already been prepared/completed.

create_subtransaction

A new subtransaction is created. Its parent is the current transaction. The Inactive exception is raised if the current transaction has already been prepared or completed. If you configure the Transaction Service without subtransaction support, the SubtransactionsUnavailable exception is raised.


See Section 3.2.3.7.1, “ specifics ” to control how long Coordinator references remain valid after a transaction terminates.

Note

To disable subtransactions, set the OTS_SUPPORT_SUBTRANSACTIONS property variable to NO .

The Current interface defines operations that allow a client to explicitly manage the association between threads and transactions, using indirect context management. It defines operations that simplify the use of the Transaction Service.

Table 3.6.  Methods of Current

begin

Creates a new transaction and associates it with the current thread. If the client thread is currently associated with a transaction, and the OTS implementation supports nested transactions, the new transaction becomes a subtransaction of that transaction. Otherwise, the new transaction is a top-level transaction. If the OTS implementation does not support nested transactions, the SubtransactionsUnavailable exception is thrown. The thread’s notion of the current context is modified to be this transaction.

commit

Commits the transaction. If the client thread does not have permission to commit the transaction, the standard exception NO_PERMISSION is raised. The effect is the same as performing the commit operation on the corresponding Terminator object. The client thread's transaction context is returned to its state before the begin request was initiated.

rollback

Rolls back the transaction. If the client thread does not have permission to terminate the transaction, the standard exception NO_PERMISSION is raised. The effect is the same as performing the rollback operation on the corresponding Terminator object. The client thread's transaction context is returned to its state before the begin request was initiated.

rollback_only

Limits the transaction's outcome to rollback only. If the transaction has already been terminated, or is in the process of terminating, an appropriate exception is thrown.

get_status

Returns the status of the current transaction, or exception StatusNoTransaction if no transaction is associated with the thread.

set_timeout

Modifies the timeout associated with top-level transactions for subsequent begin requests, for this thread only. Subsequent transactions are subject to being rolled back if they do not complete before the specified number of seconds elapses. Default timeout values for transactions without explicitly-set timeouts are implementation-dependent. Narayana uses a value of 0, which results in transactions never timing out. There is no interface in the OTS for obtaining the current timeout associated with a thread. However, Narayana provides additional support for this. See Section 3.2.3.11.1, “Narayana specifics”.

get_control

Obtains a Control object representing the current transaction. If the client thread is not associated with a transaction, a null object reference is returned. The operation is not dependent on the state of the transaction. It does not raise the TRANSACTION_ROLLEDBACK exception.

suspend

Obtains an object representing a transaction's context. If the client thread is not associated with a transaction, a null object reference is returned. You can pass this object to the resume operation to re-establish this context in a thread. The operation is not dependent on the state of the transaction. It does not raise the TRANSACTION_ROLLEDBACK exception. When this call returns, the current thread has no transaction context associated with it.

resume

Associates the client thread with a transaction. If the parameter is a null object reference, the client thread becomes associated with no transaction. The thread loses association with any previous transactions.




Ideally, you should obtain Current by using the life-cycle service factory finder. However, very few ORBs support this. Narayana therefore provides a get_current method for this purpose, which hides any ORB-specific mechanisms required for obtaining Current .

If no timeout value is associated with Current , Narayana associates no timeout with the transaction. The current OTS specification does not provide a means whereby the timeout associated with transaction creation can be obtained. However, the Narayana implementation of Current supports a get_timeout method.

By default, the implementation of Current does not use a separate TransactionFactory server when creating new top-level transactions. Each transactional client has a TransactionFactory co-located with it. Override this by setting the OTS_TRANSACTION_MANAGER variable to YES .

The transaction factory is located in the bin/ directory of the distribution. Start it by executing the OTS script. Current locates the factory in a manner specific to the ORB: using the name service, through resolve_initial_references , or via the CosServices.cfg file. The CosServices.cfg file is similar to resolve_initial_references, and is automatically updated when the transaction factory is started on a particular machine. Copy the file to each instance which needs to share the same transaction factory.

If you do not need subtransaction support, set the OTS_SUPPORT_SUBTRANSACTIONS property variable to NO . The setCheckedAction method overrides the CheckedAction implementation associated with each transaction created by the thread.

The Transaction Service uses a two-phase commit protocol to complete a top-level transaction with each registered resource.


The Resource interface defines the operations invoked by the transaction service. Each Resource object is implicitly associated with a single top-level transaction. Do not register a Resource with the same transaction more than once. When you tell a Resource to prepare, commit, or abort, it must do so on behalf of a specific transaction. However, the Resource methods do not specify the transaction identity. It is implicit, since a Resource can only be registered with a single transaction.

Transactional objects must use the register_resource method to register objects supporting the Resource interface with the current transaction. An object supporting the Coordinator interface is either passed as a parameter in the case of explicit propagation, or retrieved using operations on the Current interface in the case of implicit propagation. If the transaction is nested, the Resource is not informed of the subtransaction’s completion, and is registered with its parent upon commit.

This example assumes that transactions are only nested two levels deep, for simplicity.


Do not register a given Resource with the same transaction more than once, or it will receive multiple termination calls. When a Resource is directed to prepare, commit, or abort, it needs to link these actions to a specific transaction. Because Resource methods do not specify the transaction identity, and a Resource can only be associated with a single transaction, the identity can be inferred.

A single Resource or group of Resources guarantees the ACID properties for the recoverable object they represent. A Resource's work depends on the phase of its transaction.

prepare

If none of the persistent data associated with the resource is modified by the transaction, the Resource can return VoteReadOnly and forget about the transaction. It does not need to know the outcome of the second phase of the commit protocol, since it hasn't made any changes.

If the resource can write, or has already written, all the data needed to commit the transaction to stable storage, as well as an indication that it has prepared the transaction, it can return VoteCommit . After receiving this response, the Transaction Service either commits or rolls back. To support recovery, the resource should store the RecoveryCoordinator reference in stable storage.

The resource can return VoteRollback under any circumstances. After returning this response, the resource can forget the transaction.

The Resource reports inconsistent outcomes using the HeuristicMixed and HeuristicHazard exceptions. One example is that a Resource reports that it can commit and later decides to roll back. Heuristic decisions must be made persistent and remembered by the Resource until the transaction coordinator issues the forget method. This method tells the Resource that the heuristic decision has been noted, and possibly resolved.

rollback

The resource should undo any changes made as part of the transaction. Heuristic exceptions can be used to report heuristic decisions related to the resource. If a heuristic exception is raised, the resource must remember this outcome until the forget operation is performed so that it can return the same outcome in case rollback is performed again. Otherwise, the resource can forget the transaction.

commit

If necessary, the resource should commit all changes made as part of this transaction. As with rollback , it can raise heuristic exceptions. The NotPrepared exception is raised if the resource has not been prepared.

commit_one_phase

Since there can be only a single resource, the HeuristicHazard exception reports heuristic decisions related to that resource.

forget

Performed after the resource raises a heuristic exception. After the coordinator determines that the heuristic situation is addressed, it issues forget on the resource. The resource can forget all knowledge of the transaction.
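A skeleton participant that reflects these phases, written against the standard IDL-to-Java mapping of the Resource interface, might look like the following. It is a sketch only: the class name is hypothetical and the actual state handling is left as comments. An instance of such a class would be activated with the ORB/POA and the resulting object reference passed to register_resource on the appropriate Coordinator.

import org.omg.CosTransactions.HeuristicCommit;
import org.omg.CosTransactions.HeuristicHazard;
import org.omg.CosTransactions.HeuristicMixed;
import org.omg.CosTransactions.HeuristicRollback;
import org.omg.CosTransactions.NotPrepared;
import org.omg.CosTransactions.ResourcePOA;
import org.omg.CosTransactions.Vote;

public class ExampleResource extends ResourcePOA
{
    public Vote prepare() throws HeuristicMixed, HeuristicHazard
    {
        // return Vote.VoteReadOnly if nothing was modified,
        // Vote.VoteRollback if the work cannot be committed,
        // or Vote.VoteCommit once the prepared state is on stable storage
        return Vote.VoteCommit;
    }

    public void rollback() throws HeuristicCommit, HeuristicMixed, HeuristicHazard
    {
        // undo any changes made on behalf of the transaction
    }

    public void commit() throws NotPrepared, HeuristicRollback, HeuristicMixed, HeuristicHazard
    {
        // make the prepared changes permanent
    }

    public void commit_one_phase() throws HeuristicHazard
    {
        // sole participant: decide the outcome locally
    }

    public void forget()
    {
        // discard any recorded heuristic decision
    }
}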

Recoverable objects that need to participate within a nested transaction may support the SubtransactionAwareResource interface, a specialization of the Resource interface.


A recoverable object is only informed of the completion of a nested transaction if it registers a SubtransactionAwareResource . Register the object with either the register_resource or the register_subtran_aware method of the Coordinator interface. A recoverable object registers Resources to participate within the completion of top-level transactions, and SubtransactionAwareResources keep track of the completion of subtransactions. The commit_subtransaction method uses a reference to the parent transaction to allow subtransaction resources to register with these transactions.

SubtransactionAwareResources find out about the completion of a transaction after it terminates. They cannot affect the outcome of the transaction. Different OTS implementations deal with exceptions raised by SubtransactionAwareResources in implementation-specific ways.

Use either the register_resource or the register_subtran_aware method to register a SubtransactionAwareResource with a transaction.

register_resource

If the transaction is a subtransaction, the resource is informed of its completion, and automatically registered with the subtransaction’s parent if the parent commits.

register_subtran_aware

If the transaction is not a subtransaction, an exception is thrown. Otherwise, the resource is informed when the subtransaction completes. Unlike register_resource , the resource is not propagated to the subtransaction’s parent if the transaction commits. If you need this propagation, re-register using the supplied parent parameter.



In either case, the resource cannot affect the outcome of the transaction completion. It can only act on the transaction's decision, after the decision is made. However, if the resource cannot respond appropriately, it can raise an exception. The OTS handles these exceptions in an implementation-specific way.
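For illustration, a sketch of the subtransaction-specific operations follows, using the standard IDL-to-Java mapping. The class name is hypothetical and is declared abstract so only those operations appear; a concrete implementation would also provide the inherited Resource operations (prepare, commit, rollback, commit_one_phase, forget).

import org.omg.CosTransactions.Coordinator;
import org.omg.CosTransactions.SubtransactionAwareResourcePOA;

public abstract class ExampleSubtranAwareResource extends SubtransactionAwareResourcePOA
{
    public void commit_subtransaction(Coordinator parent)
    {
        // the enclosing transaction's Coordinator is supplied, so the
        // resource may re-register with the parent if it also needs to
        // take part in the parent's completion
    }

    public void rollback_subtransaction()
    {
        // undo any provisional work done on behalf of the subtransaction
    }
}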

If an object needs notification before a transaction commits, it can register an object that implements the Synchronization interface, using the register_synchronization operation of the Coordinator interface. Synchronizations flush volatile state data to a recoverable object or database before the transaction commits. You can only associate Synchronizations with top-level transactions. If you try to associate a Synchronization with a nested transaction, an exception is thrown. Each object supporting the Synchronization interface is associated with a single top-level transaction.


The method before_completion is called before the two-phase commit protocol starts, and after_completion is called after the protocol completes. The final status of the transaction is given as a parameter to after_completion . If before_completion raises an exception, the transaction rolls back. Any exceptions thrown by after_completion do not affect the transaction outcome.

The OTS only requires Synchronizations to be invoked if the transaction commits. If it rolls back, registered Synchronizations are not informed.
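A minimal Synchronization sketch, using the standard IDL-to-Java mapping with a hypothetical class name and the actual flushing logic left as comments, might look like this:

import org.omg.CosTransactions.Status;
import org.omg.CosTransactions.SynchronizationPOA;

public class ExampleSynchronization extends SynchronizationPOA
{
    public void before_completion()
    {
        // flush cached, volatile state to the recoverable object or database;
        // raising an exception here causes the transaction to roll back
    }

    public void after_completion(Status status)
    {
        // release caches and inspect the final transaction status; exceptions
        // thrown here do not affect the transaction outcome
    }
}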

Given the previous description of Control , Resource , SubtransactionAwareResource , and Synchronization, the following UML relationship diagram can be drawn:


Synchronizations must be called before the top-level transaction commit protocol starts, and after it completes. By default, if the transaction is instructed to roll back, the Synchronizations associated with the transaction are not contacted. To override this, and call Synchronizations regardless of the transaction's outcome, set the OTS_SUPPORT_ROLLBACK_SYNC property variable to YES .

If you use distributed transactions and interposition, a local proxy for the top-level transaction coordinator is created for any recipient of the transaction context. The proxy looks like a Resource or SubtransactionAwareResource , and registers itself as such with the actual top-level transaction coordinator. The local recipient uses it to register Resources and Synchronizations locally.

The local proxy can affect how Synchronizations are invoked during top-level transaction commit. Without the proxy, all Synchronizations are invoked before any Resource or SubtransactionAwareResource objects are processed. However, with interposition, only those Synchronizations registered locally to the transaction coordinator are called. Synchronizations registered with remote participants are only called when the interposed proxy is invoked. The local proxy may only be invoked after locally-registered Resource or SubtransactionAwareResource objects are invoked. With the OTS_SUPPORT_INTERPOSED_SYNCHRONIZATION property variable set to YES , all Synchronizations are invoked before any Resource or SubtransactionAwareResource, no matter where they are registered.


In Figure 3.11, “Subtransaction commit” , a subtransaction with both Resource and SubtransactionAwareResource objects commits. The SubtransactionAwareResources were registered using register_subtran_aware . The Resources do not know the subtransaction terminated, but the SubtransactionAwareResources do. Only the Resources are automatically propagated to the parent transaction.


Figure 3.12, “Subtransaction rollback” illustrates the impact of a subtransaction rolling back. Any registered resources are discarded, and all SubtransactionAwareResources are informed of the transaction outcome.


Figure 3.13, “Top-level commit” shows the activity diagram for committing a top-level transaction. Subtransactions within the top-level transaction which have also successfully committed propagate SubtransactionAwareResources to the top-level transaction. These SubtransactionAwareResources then participate within the two-phase commit protocol. Any registered Synchronizations are contacted before prepare is called. Because of indirect context management, when the transaction commits, the transaction service changes the invoking thread’s transaction context.



The TransactionalObject interface indicates to an object that it is transactional. By supporting this interface, an object indicates that it wants to associate the transaction context associated with the client thread with all operations on its interface. The TransactionalObject interface defines no operations.

OTS specifications do not require an OTS to initialize the transaction context of every request handler. It is only a requirement if the interface supported by the target object is derived from TransactionalObject . Otherwise, the initial transaction context of the thread is undefined. A transaction service implementation can raise the TRANSACTION_REQUIRED exception if a TransactionalObject is invoked outside the scope of a transaction.

In a single-address space application, transaction contexts are implicitly shared between clients and objects, regardless of whether or not the objects support the TransactionalObject interface. To preserve distribution transparency, where implicit transaction propagation is supported, you can direct Narayana to always propagate transaction contexts to objects. The default is only to propagate if the object is a TransactionalObject . Set the OTS_ALWAYS_PROPAGATE_CONTEXT property variable to NO to override this behavior.

By default, Narayana does not require objects which support the TransactionalObject interface to be invoked within the scope of a transaction. The object determines whether it should be invoked within a transaction. If so, it must throw the TransactionRequired exception. Override this default by setting the OTS_NEED_TRAN_CONTEXT shell environment variable to YES .

Important

Make sure that the settings for OTS_ALWAYS_PROPAGATE_CONTEXT and OTS_NEED_TRAN_CONTEXT are identical at the client and the server. If they are not identical at both ends, your application may terminate abnormally.

OTS objects supporting interfaces such as the Control interface are standard CORBA objects. When an interface is passed as a parameter in an operation call to a remote server, only an object reference is passed. This ensures that any operations that the remote server performs on the interface are correctly performed on the real object. However, this can have substantial penalties for the application, because of the overhead of remote invocation. For example, when the server registers a Resource with the current transaction, the invocation might be remote to the originator of the transaction.

To avoid this overhead, your OTS may support interposition. This permits a server to create a local control object which acts as a local coordinator, and fields registration requests that would normally be passed back to the originator. This coordinator must register itself with the original coordinator, so that it can correctly participate in the commit protocol. Interposed coordinators form a tree structure with their parent coordinators.

To use interposition, ensure that Narayana is correctly initialized before creating objects. Also, the client and server must both use interposition. Your ORB must support filters or interceptors, or the CosTSPortability interface, since interposition requires the use of implicit transaction propagation. To use interposition, set the OTS_CONTEXT_PROP_MODE property variable to INTERPOSITION .

Note

Interposition is not required if you use the advanced API.

The OTS supports both checked and unchecked transaction behavior.

Integrity constraints of checked transactions

  • A transaction will not commit until all transactional objects involved in the transaction have completed their transactional requests.

  • Only the transaction originator can commit the transaction

Checked transactional behavior is typical transaction behavior, and is widely implemented. Checked behavior requires implicit propagation, because explicit propagation prevents the OTS from tracking which objects are involved in the transaction.

Unchecked behavior allows you to implement relaxed models of atomicity. Any use of explicit propagation implies the possibility of unchecked behavior, since you as the programmer are in control of the behavior. Even if you use implicit propagation, a server may unilaterally abort or commit the transaction using the Current interface, causing unchecked behavior.

Some OTS implementations enforce checked behavior for the transactions they support, to provide an extra level of transaction integrity. The checks ensure that all transactional requests made by the application complete their processing before the transaction is committed. A checked Transaction Service guarantees that commit fails unless all transactional objects involved in the transaction complete the processing of their transactional requests. Rolling back the transaction does not require such a check, since all outstanding transactional activities will eventually roll back if they are not directed to commit.

There are many possible implementations of checking in a Transaction Service. One provides equivalent function to that provided by the request and response inter-process communication models defined by X/Open. The X/Open Transaction Service model of checking is widely implemented. It describes the transaction integrity guarantees provided by many existing transaction systems. These transaction systems provide the same level of transaction integrity for object-based applications, by providing a Transaction Service interface that implements the X/Open checks.

In X/Open, completion of the processing of a request means that the object has completed execution of its method and replied to the request. The level of transaction integrity provided by a Transaction Service implementing the X/Open model provides equivalent function to that provided by the XATMI and TxRPC interfaces defined by X/Open for transactional applications. X/Open DTP Transaction Managers are examples of transaction management functions that implement checked transaction behavior.

This implementation of checked behavior depends on implicit transaction propagation. When implicit propagation is used, the objects involved in a transaction at any given time form a tree, called the request tree for the transaction. The beginner of the transaction is the root of the tree. Requests add nodes to the tree, and replies remove the replying node from the tree. Synchronous requests, or the checks described below for deferred synchronous requests, ensure that the tree collapses to a single node before commit is issued.

If a transaction uses explicit propagation, the Transaction Service has no way to know which objects are or will be involved in the transaction. Therefore, the use of explicit propagation is not permitted by a Transaction Service implementation that enforces X/Open-style checked behavior.

Applications that use synchronous requests exhibit checked behavior. If your application uses deferred synchronous requests, all clients and objects need to be under the control of a checking Transaction Service. In that case, the Transaction Service can enforce checked behavior, by applying a reply check and a commit check. The Transaction Service must also apply a resume check, so that the transaction is only resumed by applications in the correct part of the request tree.

reply check

Before an object replies to a transactional request, a check is made to ensure that the object has received replies to all the deferred synchronous requests that propagated the transaction in the original request. If this condition is not met, an exception is raised and the transaction is marked as rollback-only. A Transaction Service may check that a reply is issued within the context of the transaction associated with the request.

commit check

Before a commit can proceed, a check is made to ensure that the commit request for the transaction is being issued from the same execution environment that created the transaction, and that the client issuing commit has received replies to all the deferred synchronous requests it made that propagated the transaction.

resume check

Before a client or object associates a transaction context with its thread of control, a check is made to ensure that this transaction context was previously associated with the execution environment of the thread. This association would exist if the thread either created the transaction or received it in a transactional operation.

Where support from the ORB is available, Narayana supports X/Open checked transaction behavior. However, unless the OTS_CHECKED_TRANSACTIONS property variable is set to YES , checked transactions are disabled. This is the default setting.

Note

Checked transactions are only possible with a co-located transaction manager.

In a multi-threaded application, multiple threads may be associated with a transaction during its lifetime, sharing the context. In addition, if one thread terminates a transaction, other threads may still be active within it. In a distributed environment, it can be difficult to guarantee that all threads have finished with a transaction when it terminates. By default, Narayana issues a warning if a thread terminates a transaction when other threads are still active within it, but allows the transaction termination to continue. You can choose to block the thread which is terminating the transaction until all other threads have disassociated themselves from its context, or use other methods to solve the problem. Narayana provides the com.arjuna.ats.arjuna.coordinator.CheckedAction class, which allows you to override the thread and transaction termination policy. Each transaction has an instance of this class associated with it, and you can implement the class on a per-transaction basis.


When a thread attempts to terminate the transaction and active threads still exist within it, the system invokes the check method on the transaction’s CheckedAction object. The parameters to the check method are:

isCommit

Indicates whether the transaction is in the process of committing or rolling back.

actUid

The transaction identifier.

list

A list of all of the threads currently marked as active within this transaction.

When check returns, the transaction termination continues. Obviously the state of the transaction at this point may be different from that when check was called.

Set the CheckedAction instance associated with a given transaction with the setCheckedAction method of Current .
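
The following is a minimal sketch of a CheckedAction subclass, assuming the check signature matches the parameters described above; the class name and logging policy are illustrative.

import java.util.Hashtable;

import com.arjuna.ats.arjuna.common.Uid;
import com.arjuna.ats.arjuna.coordinator.CheckedAction;

// Illustrative CheckedAction: report the still-active threads rather than
// relying on the default warning. The policy applied here is an assumption.
public class LoggingCheckedAction extends CheckedAction
{
    public synchronized void check (boolean isCommit, Uid actUid, Hashtable list)
    {
        System.err.println((isCommit ? "Commit" : "Rollback") + " of transaction "
                + actUid + " attempted with " + list.size() + " thread(s) still active.");

        // An application-specific policy could go here, for example waiting
        // for the threads in 'list' to complete before returning.
    }
}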

  • Any execution environment (thread, process) can use a transaction Control.

  • Controls, Coordinators, and Terminators are valid for use for the duration of the transaction if implicit transaction control is used, via Current. If you use explicit control, via the TransactionFactory and Terminator, then use the destroyControl method of the OTS class in com.arjuna.CosTransactions to signal when the information can be garbage collected.

  • You can propagate Coordinators and Terminators between execution environments.

  • If you try to commit a transaction when there are still active subtransactions within it, Narayana rolls back the parent and the subtransactions.

  • Narayana includes full support for nested transactions. However, if a resource raises an exception to the commitment of a subtransaction after other resources have previously been told that the transaction committed, Narayana forces the enclosing transaction to abort. This guarantees that all resources used within the subtransaction are returned to a consistent state. You can disable support for subtransactions by setting the OTS_SUPPORT_SUBTRANSACTIONS variable to NO .

  • Obtain Current from the get_current method of the OTS (see the sketch following this list).

  • A timeout value of zero seconds is assumed for a transaction if none is specified using set_timeout .

  • By default, Current does not use a separate transaction manager server. Override this behavior by setting the OTS_TRANSACTION_MANAGER environment variable. The location of the transaction manager is ORB-specific.

  • Checked transactions are disabled by default. To enable them, set the OTS_CHECKED_TRANSACTIONS property to YES .
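
The following fragment sketches the points above: obtaining Current, setting a timeout, and demarcating a transaction. It is only a sketch; the OTS helper class and package follow the references in the list, exact imports may differ between releases, and exception handling is omitted.

org.omg.CosTransactions.Current current = com.arjuna.CosTransactions.OTS.get_current();

current.set_timeout(30);   // seconds; a value of 0 (the default) means no timeout
current.begin();
// ... invoke operations on transactional objects ...
current.commit(true);      // report_heuristics = true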

Steps to participate in an OTS transaction

  • Create Resource and SubtransactionAwareResource objects for each object which will participate within the transaction or subtransaction. These resources manage the persistence, concurrency control, and recovery for the object. The OTS invokes these objects during the prepare, commit, or abort phase of the transaction or subtransaction, and the Resources perform the work of the application.

  • Register Resource and SubtransactionAwareResource objects at the correct time within the transaction, and ensure that the object is only registered once within a given transaction. As part of registration, a Resource receives a reference to a RecoveryCoordinator . This reference must be made persistent, so that the transaction can recover in the event of a failure.

  • Correctly propagate resources such as locks to parent transactions and SubtransactionAwareResource objects.

  • Drive the crash recovery for each resource which was participating within the transaction, in the event of a failure.

The OTS does not provide any Resource implementations. You need to provide these implementations. The interfaces defined within the OTS specification are too low-level for most situations. Narayana is designed to make use of the raw Common Object Services (COS) interfaces, but provides a higher-level API for building transactional applications and frameworks. This API automates much of the work involved with participating in an OTS transaction.

If you use implicit transaction propagation, ensure that appropriate objects support the TransactionalObject interface. Otherwise, you need to pass the transaction contexts as parameters to the relevant operations.

A Recoverable Server includes at least one transactional object and one resource object, each of which have distinct responsibilities.

Example 3.14. Reliable server

/* 
  BankAccount1 is an object with internal resources. It inherits from both the TransactionalObject and the Resource interfaces:
*/
interface BankAccount1:
                    CosTransactions::TransactionalObject, CosTransactions::Resource
{
    ...
    void makeDeposit (in float amt);
    ...
};
/* The corresponding Java class is: */
public class BankAccount1
{
public void makeDeposit(float amt);
    ...
};
/*
  Upon entering, the context of the transaction is implicitly associated with the object’s thread. The pseudo object
  supporting the Current interface is used to retrieve the Coordinator object associated with the transaction.
*/
void makeDeposit (float amt)
{
    org.omg.CosTransactions.Control c;
    org.omg.CosTransactions.Coordinator co;
    c = txn_crt.get_control();
    co = c.get_coordinator();
    ...
/*
  Before registering the resource the object should check whether it has already been registered for the same
  transaction. This is done using the hash_transaction and is_same_transaction operations. Note that this object registers
  itself as a resource. This imposes the restriction that the object may only be involved in one transaction at a
  time. This is not the recommended way for recoverable objects to participate within transactions, and is only used as an
  example.  If more parallelism is required, separate resource objects should be registered for involvement in the same
  transaction.
*/
    RecoveryCoordinator r;
    r = co.register_resource(this);

    // performs some transactional activity locally
    balance = balance + amt;
    num_transactions++;
    ...
    // end of transactional operation
};


The Transaction Service provides atomic outcomes for transactions in the presence of application, system or communication failures. From the viewpoint of each user object role, two types of failure are relevant:

  • A local failure, which affects the object itself.

  • An external failure, such as failure of another object or failure in the communication with an object.

The transaction originator and transactional server handle these failures in different ways.

Local failure

If a Transaction originator fails before the originator issues commit , the transaction is rolled back. If the originator fails after issuing commit and before the outcome is reported, the transaction can either commit or roll back, depending on timing. In this case, the transaction completes without regard to the failure of the originator.

External failure

Any external failure which affects the transaction before the originator issues commit causes the transaction to roll back. The standard exception TransactionRolledBack is raised in the originator when it issues commit .

If a failure occurs after commit and before the outcome is reported, the client may not be informed of the outcome of the transaction. This depends on the nature of the failure, and the use of the report_heuristics option of commit . For example, the transaction outcome is not reported to the client if communication between the client and the Coordinator fails.

A client can determine the outcome of the transaction by using method get_status on the Coordinator . However, this is not reliable because it may return the status NoTransaction , which is ambiguous. The transaction could have committed and been forgotten, or it could have rolled back and been forgotten.

An originator is only guaranteed to know the transaction outcome in one of two ways:

  • if its implementation includes a Resource object, so that it can participate in the two-phase commit procedure, or

  • if the originator and Coordinator are located in the same failure domain.

This chapter describes the classes you can use to extend the OTS interfaces. These advanced interfaces are all built on top of the basic OTS engine described previously, and applications which use them can run on other OTS implementations, only without the added functionality.

Features

AtomicTransaction

Provides a more manageable interface to the OTS transaction than CosTransactions::Current . It automatically keeps track of transaction scope, and allows you to create nested top-level transactions in a more natural manner than the one provided by the OTS.

Advanced subtransaction-Resource classes

Allow nested transactions to use a two-phase commit protocol. These Resources can also be ordered within Narayana, enabling you to control the order in which Resources are called during the commit or abort protocol.

Implicit context propagation between client and server

Where available, Narayana uses implicit context propagation between client and server. Otherwise, it provides an explicit interposition class, which simplifies the work involved in interposition. The API, Transactional Objects for Java (TXOJ), requires either explicit or implicit interposition. This is true even in a stand-alone mode when using a separate transaction manager. TXOJ is fully described in the ArjunaCore Development Guide.

Note

The extensions to CosTransactions.idl are located in the com.arjuna.ArjunaOTS package and the ArjunaOTS.idl file.

The OTS implementation of nested transactions is extremely limited, and can lead to the generation of inconsistent results. One example is a scenario in which a subtransaction coordinator discovers part of the way through committing that a resource cannot commit. It may not be able to tell the committed resources to abort.

In most transactional systems which support subtransactions, the subtransaction commit protocol is the same as a top-level transaction’s. There are two phases, a prepare phase and a commit or abort phase. Using a multi-phase commit protocol avoids the above problem of discovering that one resource cannot commit after others have already been told to commit. The prepare phase generates consensus on the commit outcome, and the commit or abort phase enforces the outcome.

Narayana supports the strict OTS implementation of subtransactions for those resources derived from CosTransactions::SubtransactionAwareResource . However, if a resource is derived from ArjunaOTS::ArjunaSubtranAwareResource , it is driven by a two-phase commit protocol whenever a nested transaction commits.


During the first phase of the commit protocol the prepare_subtransaction method is called, and the resource behaves as though it were being driven by a top-level transaction, making any state changes provisional upon the second phase of the protocol. Any changes to persistent state must still be provisional upon the second phase of the top-level transaction as well. Based on the votes of all registered resources, Narayana then calls either commit_subtransaction or rollback_subtransaction .

Note

This scheme only works successfully if all resources registered within a given subtransaction are instances of the ArjunaSubtranAwareResource interface, and if, after a resource tells the coordinator it can prepare, it does not change its mind.
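
A minimal sketch of such a resource follows. It assumes the standard IDL-to-Java POA mapping for the ArjunaOTS module (an ArjunaSubtranAwareResourcePOA base class); the class name, field, and method bodies are illustrative, and heuristic exception handling is omitted.

import org.omg.CosTransactions.Coordinator;
import org.omg.CosTransactions.Vote;

import com.arjuna.ArjunaOTS.ArjunaSubtranAwareResourcePOA;

// Illustrative nested-aware resource; the state management shown is an assumption.
public class ProvisionalDepositResource extends ArjunaSubtranAwareResourcePOA
{
    private boolean provisional = false;

    // Nested (subtransaction) two-phase protocol
    public Vote prepare_subtransaction ()
    {
        provisional = true;               // make the state change provisional
        return Vote.VoteCommit;
    }

    public void commit_subtransaction (Coordinator parent)
    {
        // keep the provisional change; it remains subject to the top-level outcome
    }

    public void rollback_subtransaction ()
    {
        provisional = false;              // undo the provisional change
    }

    // Top-level two-phase protocol inherited from Resource
    public Vote prepare ()
    {
        return provisional ? Vote.VoteCommit : Vote.VoteReadOnly;
    }

    public void commit ()          { /* make the provisional change durable */ }
    public void rollback ()        { provisional = false; }
    public void commit_one_phase () { commit(); }
    public void forget ()          { }
}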

When resources are registered with a transaction, the transaction maintains them within a list, called the intentions list. At termination time, the transaction uses the intentions list to drive each resource appropriately, to commit or abort. However, you have no control over the order in which resources are called, or whether previously-registered resources should be replaced with newly registered resources. The interface ArjunaOTS::OTSAbstractRecord gives you this level of control.


typeId

returns the record type of the instance. This is one of the values of the enumerated type Record_type .

uid

a stringified Uid for this record.

propagateOnAbort

by default, instances of OTSAbstractRecord should not be propagated to the parent transaction if the current transaction rolls back. By returning TRUE , the instance will be propagated.

propagateOnCommit

returning TRUE from this method causes the instance to be propagated to the parent transaction if the current transaction commits. Returning FALSE disables the propagation.

saveRecord

returning TRUE from this method causes Narayana to try to save sufficient information about the record to persistent state during commit, so that crash recovery mechanisms can replay the transaction termination in the event of a failure. If FALSE is returned, no information is saved.

merge

used when two records need to merge together.

alter

used when a record should be altered.

shouldAdd

returns true if the record should be added to the list, false if it should be discarded.

shouldMerge

returns true if the two records should be merged into a single record, false otherwise.

shouldReplace

returns true if the record should replace an existing one, false otherwise.

When inserting a new record into the transaction’s intentions list, Narayana uses the following algorithm:

  1. If a record with the same type and uid has already been inserted, then shouldAdd and the related methods are invoked to determine whether this record should also be added.

  2. If no such match occurs, then the record is inserted in the intentions list based on the type field, and ordered according to the uid. All of the records with the same type appear ordered in the intentions list.

OTSAbstractRecord is derived from ArjunaSubtranAwareResource . Therefore, all instances of OTSAbstractRecord inherit the benefits of this interface.

In terms of the OTS, AtomicTransaction is the preferred interface to the OTS protocol engine. It is equivalent to CosTransactions::Current , but with more emphasis on easing application development. For example, if an instance of AtomicTransaction goes out of scope before it terminates, the transaction automatically rolls back. CosTransactions::Current cannot provide this functionality. When building applications using Narayana, use AtomicTransaction for the added benefits it provides. It is located in the com.arjuna.ats.jts.extensions.ArjunaOTS package.
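
A brief usage sketch follows, assuming the com.arjuna.ats.jts.extensions location and the begin/commit signatures described in this chapter; exception handling is omitted.

import com.arjuna.ats.jts.extensions.AtomicTransaction;

AtomicTransaction A = new AtomicTransaction();

A.begin();
// ... transactional work, possibly including nested AtomicTransactions ...
A.commit(true);   // true = report heuristic outcomes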



Transaction nesting is determined dynamically. Any transaction started within the scope of another running transaction is nested.

The TopLevelTransaction class, which is derived from AtomicTransaction , allows creation of nested top-level transactions. Such transactions allow non-serializable and potentially non-recoverable side effects to be initiated from within a transaction, so use them with caution. You can create nested top-level transactions with a combination of the CosTransactions::TransactionFactory and the suspend and resume methods of CosTransactions::Current . However, the TopLevelTransaction class provides a more user-friendly interface.

AtomicTransaction and TopLevelTransaction are completely compatible with CosTransactions::Current . You can use the two transaction mechanisms interchangeably within the same application or object.

AtomicTransaction and TopLevelTransaction are similar to CosTransactions::Current . They both simplify the interface between you and the OTS. However, you gain two advantages by using AtomicTransaction or TopLevelTransaction .

  • The ability to create nested top-level transactions which are automatically associated with the current thread. When the transaction ends, the previous transaction associated with the thread, if any, becomes the thread’s current transaction.

  • Instances of AtomicTransaction track scope, and if such an instance goes out of scope before it is terminated, it is automatically aborted, along with its children.

When using TXOJ in a distributed manner, Narayana requires you to use interposition between the client and object. This requirement also exists if the application is local, but the transaction manager is remote. In the case of implicit context propagation, where the application object is derived from CosTransactions::TransactionalObject, you do not need to do anything further. Narayana automatically provides interposition. However, where implicit propagation is not supported by the ORB, or your application does not use it, you must take additional action to enable interposition.

The class com.arjuna.ats.jts.ExplicitInterposition allows an application to create a local control object which acts as a local coordinator, fielding registration requests that would normally be passed back to the originator. This surrogate registers itself with the original coordinator, so that it can correctly participate in the commit protocol. The application thread context becomes the surrogate transaction hierarchy. Any transaction context currently associated with the thread is lost. The interposition lasts for the lifetime of the explicit interposition object, at which point the application thread is no longer associated with a transaction context. Instead, it is set to null .

Explicit interposition is intended only for those situations where the transactional object and the transaction occur within different processes, rather than being co-located. If the transaction is created locally to the client, do not use the explicit interposition class. The transaction is implicitly associated with the transactional object because it resides within the same process.


A transaction context can be propagated between client and server in two ways: either as a reference to the client’s transaction Control, or explicitly sent by the client. Therefore, there are two ways in which the interposed transaction hierarchy can be created and registered. For example, consider the class Example which is derived from LockManager and has a method increment:
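
The example code does not appear in this rendering; the following is a minimal sketch of what such a method might look like, assuming the registerTransaction and unregisterTransaction operations referred to in the surrounding text. The Example class shown here, its state, and the omission of locking are illustrative simplifications.

import org.omg.CosTransactions.Control;

import com.arjuna.ats.jts.ExplicitInterposition;
import com.arjuna.ats.jts.extensions.AtomicTransaction;

// Illustrative version of the Example class; the LockManager base class and
// the WRITE lock that would normally protect the state are omitted.
public class Example
{
    private int value = 0;

    public boolean increment (Control control)
    {
        ExplicitInterposition inter = new ExplicitInterposition();

        try
        {
            inter.registerTransaction(control);   // interpose the received hierarchy

            AtomicTransaction A = new AtomicTransaction();

            A.begin();
            value++;                              // would normally be done under a WRITE lock
            A.commit(true);

            return true;
        }
        catch (Exception e)
        {
            return false;
        }
        finally
        {
            try { inter.unregisterTransaction(); }          // disassociate the thread
            catch (Exception e) { /* nothing interposed */ }
        }
    }
}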


If the Control passed to the register operation of ExplicitInterposition is null , no exception is thrown. The system assumes that the client did not send a transaction context to the server. A transaction created within the object will thus be a top-level transaction.

When the application returns, or when it finishes with the interposed hierarchy, the program should call unregisterTransaction to disassociate the thread of control from the hierarchy. This occurs automatically when the ExplicitInterposition object is garbage collected. However, since this may be after the transaction terminates, Narayana assumes the thread is still associated with the transaction and issues a warning about trying to terminate a transaction while threads are still active within it.

This example illustrates the concepts and the implementation details for a simple client/server example using implicit context propagation and indirect context management.

This example only includes a single unit of work within the scope of the transaction. Consequently, only a one-phase commit is needed.

The client and server processes are both invoked using the implicit propagation and interposition command-line options.

For the purposes of this worked example, a single method implements the DemoInterface interface. This method is used in the DemoClient program.


This section deals with the pieces needed to implement the example interface.

First, you need to initialize the ORB and the POA. Lines 10 through 14 accomplish these tasks.

The servant class DemoImplementation contains the implementation code for the DemoInterface interface. The servant services a particular client request. Line 16 instantiates a servant object for the subsequent servicing of client requests.

Once a servant is instantiated, connect the servant to the POA, so that it can recognize the invocations on it, and pass the invocations to the correct servant. Line 18 performs this task.

Lines 20 through 21 register the service through the default naming mechanism. More information about the options available can be found in the ORB Portability Guide.

If this registration is successful, line 23 outputs a sanity check message.

Finally, line 25 places the server process into a state where it can begin to accept requests from client processes.


After the server compiles, you can use the command line options defined below to start a server process. By specifying the usage of a filter on the command line, you can override settings in the TransactionService.properties file.

Note

If you specify the interposition filter, you also imply the use of implicit context propagation.

These settings are defaults, and you can override them at run-time by using property variables, or in the properties file in the etc/ directory of the installation.

  • Unless a CORBA object is derived from CosTransactions::TransactionalObject, you do not need to propagate any context. In order to preserve distribution transparency, Narayana defaults to always propagating a transaction context when calling remote objects, regardless of whether they are marked as transactional objects. You can override this by setting the com.arjuna.ats.jts.alwaysPropagateContext property variable to NO .

  • If an object is derived from CosTransactions::TransactionalObject, and no client context is present when an invocation is made, Narayana transmits a null context. Subsequent transactions begun by the object are top-level. If a context is required, then set the com.arjuna.ats.jts.needTranContext property variable to YES, in which case Narayana raises the TransactionRequired exception.

  • Narayana needs a persistent object store, so that it can record information about transactions in the event of failures. If all transactions complete successfully, this object store has no entries. The default location for this must be set using the ObjectStoreEnvironmentBean.objectStoreDir variable in the properties file.

  • If you use a separate transaction manager for Current , its location is obtained from the CosServices.cfg file. CosServices.cfg is located at runtime by the OrbPortabilityEnvironmentBean properties initialReferencesRoot and initialReferencesFile . The former is a directory, defaulting to the current working directory. The latter is a file name, relative to the directory. The default value is CosServices.cfg .

  • Checked transactions are not enabled by default. This means that threads other than the transaction creator may terminate the transaction, and no check is made to ensure all outstanding requests have finished prior to transaction termination. To override this, set the JTSEnvironmentBean.checkedTransactions property variable to YES .

  • Note

    As of 4.5, transaction timeouts are unified across all transaction components and are controlled by ArjunaCore. The old JTS configuration property com.arjuna.ats.jts.defaultTimeout still remains but is deprecated.

    If a value of 0 is specified for the timeout of a top-level transaction, or no timeout is specified, Narayana does not impose any timeout on the transaction. To override this default timeout, set the CoordinatorEnvironmentBean.defaultTimeout property variable to the required timeout value in seconds (an example of supplying these properties appears after this list).
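
For example, assuming these bean properties can be supplied as JVM system properties at start-up (the server class name and the values shown are placeholders, not defaults):

java -DObjectStoreEnvironmentBean.objectStoreDir=/var/narayana/ObjectStore \
     -DJTSEnvironmentBean.checkedTransactions=YES \
     -DCoordinatorEnvironmentBean.defaultTimeout=60 \
     com.example.MyTransactionalServer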

Narayana assures complete, accurate business transactions for any Java-based application, including those written for the Jakarta EE and EJB frameworks.

Narayana is a 100% Java implementation of a distributed transaction management system based on the Jakarta EE Java Transaction Service (JTS) standard. Our implementation of the JTS utilizes the Object Management Group's (OMG) Object Transaction Service (OTS) model for transaction interoperability, as recommended in the Jakarta EE and EJB standards. Although any JTS-compliant product will allow Java objects to participate in transactions, one of the key features of Narayana is its 100% Java implementation. This allows Narayana to support fully distributed transactions that can be coordinated by distributed parties.

Narayana can be run as an embedded distributed service of an application server (e.g. WildFly Application Server), affording the user the added benefits of the application server environment, such as real-time load balancing, linear scalability, and fault tolerance, helping you deliver an always-on solution to your customers. It is also available as a free-standing Java Transaction Service.

In addition to providing full compliance with the latest version of the JTS specification, Narayana provides many advanced features, such as fully distributed transactions and ORB portability with POA support.

Narayana is tested on HP-UX 11i, Red Hat Linux, Windows Server 2003, and Sun Solaris 10, using Sun's JDK 5. It should, however, work on any system with JDK 5 or 6.

Java Transaction API support in Narayana comes in two flavours:

  • a purely local implementation, that does not require an ORB, but obviously requires all coordinated resources to reside within the same JVM.
  • a fully distributed implementation.

Key features

  • full compliance with the Jakarta Transactions specification:
    • Purely local (ORB-less) JTA offers the fastest JTA performance
    • JDBC support
    • XA compliance
    • JDBC drivers for database access with full transaction support
    • Automatic crash recovery for XAResources
  • compliance with the JTS specification and OTS 1.2 specification from the OMG
    • Distributed JTA implementation
    • support for distributed transactions (utilizing two-phase commit)
    • POA ORB support
    • interposition
    • transaction heuristics
    • distributed transaction manager (co-located with the transaction initiator) or transaction manager server
    • checked/unchecked transaction behaviour
    • supports both flat and nested transaction models, with nested-aware resources and resource adapters
    • independent concurrency control system with support for type-specific concurrency control
    • support for CosTransactions::Current
    • direct and indirect transaction management
    • synchronization interface
    • explicit and implicit transaction context propagation
    • automatic crash recovery
    • multi-thread aware
  • transactional objects (TO) for Java
  • ORB independence via the ORB portability layer

This trail map will help you get started with running the product. It is structured as follows:

  • 1. Installation Content: This trail describes the content installed by the distribution
  • 2. The Sample Application: This trail describes, via a set of examples, how Narayana is used to build transactional applications
  • 3. Deploying and testing the Sample Application: This trail describes how to deploy and to test the sample application
  • 4. Making the Sample Application Persistent: This trail describes tools that allow you to build a persistent application
  • 5. Recovery from Failure: This trail describes, via a simple scenario, how Narayana manages recovery from failure.
  • 6. Where Next?: This trail indicates where to find additional information

In addition to the trails listed above, the section "Additional Trails" lists a set of trails giving more explanation of concepts around transaction processing and standards, as well as quick access to sections explaining how to configure Narayana.

Note: When running the local JTS transactions part of the trailmap, you will need to start the recovery manager: java com.arjuna.ats.arjuna.recovery.RecoveryManager -test

There are six interfaces between software components in the X/Open DTP model.

  • AP-RM. The AP-RM interfaces give the AP access to resources. X/Open interfaces, such as SQL and ISAM, provide AP portability. The X/Open DTP model imposes few constraints on native RM APIs. The constraints involve only those native RM interfaces that define transactions.
  • AP-TM. The AP-TM interface (the TX interface) provides the AP with an Application Programming Interface (API) by which the AP coordinates global transaction management with the TM. For example, when the AP calls tx_begin(), the TM informs the participating RMs of the start of a global transaction. After each request is completed, the TM provides a return value to the AP reporting back the success or otherwise of the TX call.
  • TM-RM. The TM-RM interface (the XA interface) lets the TM structure the work of RMs into global transactions and coordinate completion or recovery. The XA interface is the bidirectional interface between the TM and RM. The functions that each RM provides for the TM are called the xa_*() functions. For example, the TM calls xa_start() in each participating RM to start an RM-internal transaction as part of a new global transaction. Later, the TM may call in sequence xa_end(), xa_prepare(), and xa_commit() to coordinate a (in this case successful) two-phase commit protocol. The functions that the TM provides for each RM are called the ax_*() functions. For example, an RM calls ax_reg() to register dynamically with the TM.
  • TM-CRM. The TM-CRM interface (the XA+ interface) supports global transaction information flow across TM Domains. In particular, TMs can instruct CRMs, by use of xa_*() function calls, to suspend or complete transaction branches, and to propagate global transaction commitment protocols to other transaction branches. CRMs pass information to TMs in subordinate branches by use of ax_*() function calls. CRMs also use ax_*() function calls to request the TM to create subordinate transaction branches, to save and retrieve recovery information, and to inform the TM of the start and end of blocking conditions.
  • AP-CRM. X/Open provides portable APIs for DTP communication between APs within a global transaction. The API chosen can significantly influence (and may indeed be fundamental to) the whole architecture of the application. For this reason, these APIs are frequently referred to in this specification and elsewhere as communication paradigms. In practice, each paradigm has unique strengths, so X/Open offers the following popular paradigms:
    • the TxRPC interface (see the referenced TxRPC specification)
    • the XATMI interface (see the referenced XATMI specification)
    • the CPI-C interface (see the referenced CPI-C specification)
  • CRM-OSI TP. This interface (the XAP-TP interface) provides a programming interface between a CRM and Open Systems Interconnection Distributed Transaction Processing (OSI TP) services. XAP-TP interfaces with the OSI TP Service and the Presentation Layer of the seven-layer OSI model. X/Open has defined this interface to support portable implementations of application-specific OSI services. The use of OSI TP is mandatory for communication between heterogeneous TM domains. For details of this interface, see the referenced XAP-TP specification and the OSI TP standards.
Although the aim of the Open Group was to provide portable interfaces, only the XA interface appears to be accepted and implemented by a wide range of vendors.

XA is a bidirectional interface between resource managers and transaction managers. This interface specifies two sets of functions. The first set, called the xa_*() functions, is implemented by resource managers for use by the transaction manager.

Table 1 - XA Interface of X/Open DTP Model for the transaction manager

Function Purpose
xa_start Directs a resource manager to associate the subsequent requests by application programs to a transaction identified by the supplied identifier.
xa_end Ends the association of a resource manager with the transaction.
xa_prepare Prepares the resource manager for the commit operation. Issued by the transaction manager in the first phase of the two-phase commit operation.
xa_commit Commits the transactional operations. Issued by the transaction manager in the second phase of the two-phase commit operation.
xa_recover Retrieves a list of prepared and heuristically committed or heuristically rolled back transactions
xa_forget Forgets the heuristic transaction associated with the given transaction identifier

The second set of functions, called the ax_*() functions, is implemented by the transaction manager for use by resource managers.

Table 2 - XA Interface of X/Open DTP Model for resource managers

Function Purpose
ax_reg() Dynamically enlists with the transaction manager.
ax_unreg() Dynamically delists from the transaction manager.
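
For Java-based resource managers, the javax.transaction.xa.XAResource interface plays the role of the xa_*() entry points listed in Table 1. The helper method below is purely illustrative of that correspondence and is not how a transaction manager is actually implemented.

import javax.transaction.xa.XAResource;
import javax.transaction.xa.Xid;

// Illustrative helper showing how the xa_*() calls map onto XAResource.
public final class XaCorrespondence
{
    public static void twoPhase(XAResource xares, Xid xid) throws Exception
    {
        xares.start(xid, XAResource.TMNOFLAGS);      // xa_start
        // ... work is performed through the resource's own API ...
        xares.end(xid, XAResource.TMSUCCESS);        // xa_end

        int vote = xares.prepare(xid);               // xa_prepare
        if (vote == XAResource.XA_OK)
            xares.commit(xid, false);                // xa_commit (second phase)
        // XA_RDONLY means the branch needs no second phase
    }
}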

Transaction management is one of the most crucial requirements for enterprise application development. Most of the large enterprise applications in the domains of finance, banking and electronic commerce rely on transaction processing for delivering their business functionality.

Enterprise applications often require concurrent access to distributed data shared amongst multiple components, to perform operations on data. Such applications should maintain integrity of data (as defined by the business rules of the application) under the following circumstances:

  • distributed access to a single resource of data, and
  • access to distributed resources from a single application component.

In such cases, it may be required that a group of operations on (distributed) resources be treated as one unit of work. In a unit of work, all the participating operations should either succeed or fail and recover together. This problem is more complicated when

  • a unit of work is implemented across a group of distributed components operating on data from multiple resources, and/or
  • the participating operations are executed sequentially or in parallel threads requiring coordination and/or synchronization.

In either case, it is required that success or failure of a unit of work be maintained by the application. In case of a failure, all the resources should bring back the state of the data to the previous state ( i.e., the state prior to the commencement of the unit of work).

From the programmer's perspective a transaction is a scoping mechanism for a collection of actions which must complete as a unit. It provides a simplified model for exception handling since only two outcomes are possible:

  • success - meaning that all actions involved within a transaction are completed
  • failure - no actions complete

To illustrate the reliability expected by the application, let’s consider the funds transfer example, which is familiar to all of us.

A money transfer involves two operations: a deposit and a withdrawal.

The complexity of the implementation doesn't matter; money moves from one place to another. For instance, the accounts involved may be located in the same relational table within a database, or in different databases.

A simple transfer consists of moving money from savings to checking, while a complex transfer might be performed at the end of day according to a reconciliation between international banks.

The concept of a transaction, and a transaction manager (or a transaction processing service) simplifies construction of such enterprise level distributed applications while maintaining integrity of data in a unit of work.

A transaction is a unit of work that has the following properties:

  • Atomicity – either the whole transaction completes or nothing completes; partial completion is not permitted.
  • Consistency – a transaction transforms the system from one consistent state to another. In other words, on completion of a successful transaction, the data should be in a consistent state. For example, in the case of relational databases, a consistent transaction should preserve all the integrity constraints defined on the data.
  • Isolation – each transaction should appear to execute independently of other transactions that may be executing concurrently in the same environment. The effect of executing a set of transactions serially should be the same as that of running them concurrently. This requires two things:
    • During the course of a transaction, the intermediate (possibly inconsistent) state of the data should not be exposed to other transactions.
    • Two concurrent transactions should not be able to operate on the same data. Database management systems usually implement this feature using locking.
  • Durability – the effects of a completed transaction should always be persistent.

These properties, called the ACID properties, guarantee that a transaction is never incomplete, the data is never left inconsistent, concurrent transactions are independent, and the effects of a transaction are persistent.

A collection of actions is said to be transactional if they possess the ACID properties. These properties are assumed to be ensured, even in the presence of failures, if the actions involved within the transaction are performed by a transactional system. A transaction system includes a set of components, each with a particular role. The main components are described below.

Application Programs are clients for the transactional resources. These are the programs with which the application developer implements business transactions. With the help of the transaction manager, these components create global transactions and operate on the transactional resources within the scope of these transactions. These components are not responsible for implementing mechanisms for preserving the ACID properties of transactions. However, as part of the application logic, these components generally make the decision whether to commit or roll back transactions.

Application responsibilities could be summarized as follows:

  • Create and demarcate transactions
  • Operate on data via resource managers

A resource manager is, in general, a component that manages a persistent and stable data storage system, and participates in the two-phase commit and recovery protocols with the transaction manager.

A resource manager is typically a driver that provides two sets of interfaces: one set for the application components to get connections and operate on data, and the other set for participating in the two-phase commit and recovery protocols coordinated by a transaction manager. This component may also, directly or indirectly, register resources with the transaction manager so that the transaction manager can keep track of all the resources participating in a transaction. This process is called resource enlistment.

Resource Manager responsibilities could be summarized as follows:

  • Enlist resources with the transaction manager
  • Participate in two-phase commit and recovery protocol

The transaction manager is the core component of a transaction processing environment. Its main responsibilities are to create transactions when requested by application components, allow resource enlistment and delistment, and to manage the two-phase commit or recovery protocol with the resource managers.

A typical transactional application begins a transaction by issuing a request to a transaction manager to initiate a transaction. In response, the transaction manager starts a transaction and associates it with the calling thread. The transaction manager also establishes a transaction context. All application components and/or threads participating in the transaction share the transaction context. The thread that initially issued the request for beginning the transaction, or, if the transaction manager allows, any other thread may eventually terminate the transaction by issuing a commit or rollback request.

Before a transaction is terminated, any number of components and/or threads may perform transactional operations on any number of transactional resources known to the transaction manager. If allowed by the transaction manager, a transaction may be suspended or resumed before finally completing the transaction.

Once the application issues the commit request, the transaction manager prepares all the resources for a commit operation, and based on whether all resources are ready for a commit or not, issues a commit or rollback request to all the resources.

Transaction Manager responsibilities could be summarized as follows:

  • Establish and maintain transaction context
  • Maintain association between a transaction and the participating resources.
  • Initiate and conduct two-phase commit and recovery protocol with the resource managers.
  • Make synchronization calls to the application components before the beginning and after the end of the two-phase commit and recovery process

Basically, recovery is the mechanism which preserves transaction atomicity in the presence of failures. The basic technique for implementing transactions in the presence of failures is based on the use of logs. That is, a transaction system has to record enough information to ensure that it can return to a previous state in case of failure, or to ensure that changes committed by a transaction are properly stored.

In addition to being able to store appropriate information, all participants within a distributed transaction must log similar information, which allows them to take the same decision: either to set their data to its final state or to its initial state.

Two techniques are in general used to ensure transaction atomicity. The first technique focuses on the manipulated data, such as the Do/Undo/Redo protocol (considered a recovery mechanism in a centralized system), which allows a participant to set its data to its final values or to restore it to its initial values. The second technique relies on a distributed protocol named two-phase commit, ensuring that all participants involved within a distributed transaction set their data either to their final values or to their initial values. In other words, all participants must commit or all must roll back.

In addition to the failures we refer to as centralized, such as system crashes, communication failures (due, for instance, to network outages or message loss) have to be considered during the recovery process of a distributed transaction.

In order to provide an efficient and optimized mechanism to deal with failure, modern transactional systems typically adopt a “presumed abort” strategy, which simplifies transaction management.

The presumed abort strategy can be stated as «when in doubt, abort». With this strategy, when the recovery mechanism has no information about the transaction, it presumes that the transaction has been aborted.

A particularity of the presumed-abort assumption is that it allows a coordinator not to log anything before the commit decision, and the participants not to log anything before they prepare. Thus, any failure which occurs before the two-phase commit starts leads to the transaction being aborted. Furthermore, from a coordinator's point of view, any communication failure detected by a timeout, or an exception raised on sending prepare, is considered a negative vote, which leads to the transaction being aborted. So, within a distributed transaction, a coordinator or a participant may fail in two ways: either it crashes, or it times out waiting for a message it was expecting. When a coordinator or a participant crashes and then restarts, it uses information on stable storage to determine how to perform recovery. As we will see, the presumed-abort strategy enables an optimized behavior for recovery.
Saying that a distributed transaction can involve several distributed participants means that these participants must be integrated within a global transaction manager, which has the responsibility to ensure that all participants take a common decision to commit or roll back the distributed transaction. The key to such integration is the existence of a common transactional interface which is understood by all participants: the transaction manager and resource managers such as databases.

The importance of common interfaces between participants, as well as the complexity of their implementation, becomes obvious in an open systems environment. To this end, various distributed transaction processing standards have been developed by international standards organizations. Among these organizations, we list three whose standards are mainly considered in the product:

  • The X/Open model and its successful XA interface
  • The OMG with its CORBA infrastructure and the Object Transaction Service, and finally
  • The Jakarta Transactions specification process
Basically, these standards have proposed logical models which divide transaction processing into several functions:
  • those assigned to the application, which ties resources together in application-specific operations
  • those assigned to the resource manager, which physically accesses the data stores
  • functions performed by the Transaction Manager, which manages transactions, and finally
  • those assigned to Communication Resource Managers, which allow the exchange of information with other transactional domains.
Object Transaction Service (OTS) is a distributed transaction processing service specified by the Object Management Group (OMG). This specification extends the CORBA model and defines a set of interfaces to perform transaction processing across multiple CORBA objects.

OTS is based on the Open Group's DTP model and is designed so that it can be implemented using a common kernel for both the OTS and Open Group APIs. In addition to the functions defined by DTP, OTS contains enhancements specifically designed to support the object environment. Nested transactions and explicit propagation are two examples.

The CORBA model also makes some of the functions in DTP unnecessary so these have been consciously omitted. Static registration and the communications resource manager are unnecessary in the CORBA environment.

A key feature of OTS is its ability to share a common transaction with XA compliant resource managers. This permits the incremental addition of objects into an environment of existing procedural applications.

Figure 1 - OTS Architecture

The OTS architecture, shown in Figure 1, consists of the following components:

  • Transaction Client: A program or object that invokes operations on transactional objects.
  • Transactional Object : A CORBA object that encapsulates or refers to persistent data, and whose behavior depends on whether or not its operations are invoked during a transaction.
  • Recoverable Object : A transactional object that directly maintains persistent data, and participates in transaction protocols.
  • Transactional Server : A collection of one or more transactional objects.
  • Recoverable Server: A collection of objects, at least one of which is recoverable.
  • Resource Object : A resource object is an object in the transaction service that is registered for participation in the two-phase commit and recovery protocol.
In addition to the usual transactional semantics, the CORBA OTS provides for the following features:
  • Nested Transactions : This allows an application to create a transaction that is embedded in an existing transaction. In this model, multiple subtransactions can be embedded recursively in a transaction. Subtransactions can be committed or rolled back without committing or rolling back the parent transaction. However, the results of a commit operation are contingent upon the commitment of all the transaction's ancestors. The main advantage of this model is that transactional operations can be controlled at a finer granularity. The application will have an opportunity to correct or compensate for failures at the subtransaction level, without actually attempting to commit the complete parent transaction.
  • Application Synchronization : Using the OTS synchronization protocol, certain objects can be registered with the transaction service for notification before the start of and the completion of the two-phase commit process. This enables such application objects to synchronize transient state and data stored in persistent storage.
A client application program may use direct or indirect context management to manage a transaction. With indirect context management, an application uses the pseudo object called Current, provided by the Transaction Service , to associate the transaction context with the application thread of control. In direct context management, an application manipulates the Control object and the other objects associated with the transaction.

An object may require transactions to be either explicitly or implicitly propagated to its operations.

  • Explicit propagation means that an application propagates a transaction context by passing objects defined by the Transaction Service as explicit parameters. This should typically be the PropagationContext structure.
  • Implicit propagation means that requests are implicitly associated with the client's transaction; they share the client's transaction context. It is transmitted implicitly to the objects, without direct client intervention. Implicit propagation depends on indirect context management, since it propagates the transaction context associated with the Current pseudo object. An object that supports implicit propagation would not typically expect to receive any Transaction Service object as an explicit parameter.
A client may use one or both forms of context management, and may communicate with objects that use either method of transaction propagation. (Details of how to enable implicit propagation are described elsewhere in this document.) This results in four ways in which client applications may communicate with transactional objects:
  • Direct Context Management/Explicit Propagation: the client application directly accesses the Control object, and the other objects which describe the state of the transaction. To propagate the transaction to an object, the client must include the appropriate Transaction Service object as an explicit parameter of an operation; typically this should be the PropagationContext structure.
  • Indirect Context Management/Implicit Propagation: the client application uses operations on the Current pseudo object to create and control its transactions. When it issues requests on transactional objects, the transaction context associated with the current thread is implicitly propagated to the object.
  • Indirect Context Management/Explicit Propagation: for an implicit model application to use explicit propagation, it can get access to the Control using the get_control operation on the Current pseudo object. It can then use a Transaction Service object as an explicit parameter to a transactional object; for efficiency reasons this should be the PropagationContext structure, obtained by calling get_txcontext on the appropriate Coordinator reference. This is explicit propagation.
  • Direct Context Management/Implicit Propagation: a client that accesses the Transaction Service objects directly can use the resume pseudo object operation to set the implicit transaction context associated with its thread. This allows the client to invoke operations of an object that requires implicit propagation of the transaction context.
  • Indirect and Implicit

    In the code fragments below, a transaction originator uses indirect context management and implicit transaction propagation; txn_crt is an example of an object supporting the Current interface. The client uses the begin operation to start the transaction, which becomes implicitly associated with the originator's thread of control.

    ...
    txn_crt.begin();
    // should test the exceptions that might be raised
    ...
    // the client issues requests, some of which involve
    // transactional objects;
    BankAccount.makeDeposit(deposit);
    ...
    txn_crt.commit(false);
    

    The program commits the transaction associated with the client thread. The report_heuristics argument is set to false so no report will be made by the Transaction Service about possible heuristic decisions.

  • Direct and Explicit

    In the following example, a transaction originator uses direct context management and explicit transaction propagation. The client uses a factory object supporting the CosTransactions::TransactionFactory interface to create a new transaction and uses the returned Control object to retrieve the Terminator and Coordinator objects.

    ...
    CosTransactions::Control ctrl;
    CosTransactions::Terminator ter;
    CosTransactions::Coordinator coo;
    ctrl = TFactory.create(0);       // create a new top-level transaction
    ter = ctrl.get_terminator();     // used to commit or roll back the transaction
    coo = ctrl.get_coordinator();    // used for registration, status queries, etc.
    ...
    transactional_object.do_operation(arg, ctrl);  // Control passed as an explicit parameter
    ...
    ter.commit(false);
    

    The client issues requests, some of which involve transactional objects; in this case explicit propagation of the context is used. The Control object reference is passed as an explicit parameter of the request; it is declared in the OMG IDL of the interface. The transaction originator uses the Terminator object to commit the transaction; the report_heuristics argument is set to false, so no report will be made by the Transaction Service about possible heuristic decisions.

The main difference between direct and indirect context management is the effect on the invoking thread's transaction context. If using indirect management (i.e., invoking operations through the Current pseudo object), the thread's transaction context is modified automatically by the OTS. For example, if begin is called, the thread's notion of the current transaction is changed to the newly created transaction; when that transaction is terminated, the transaction previously associated with the thread (if any) is restored as the thread's context (assuming subtransactions are supported by the OTS implementation). However, if using direct management, no changes to the thread's transaction context are performed by the OTS: the application programmer assumes responsibility for this.

Figure 2 describes the principal interfaces in the CORBA OTS specification and their interactions, while Table 1 below provides more details for each interface.

Figure 2 - OTS interfaces and their interactions

Table 1 - OTS Interfaces and their role.

Interface Role and operations
Current
  • Transaction demarcation ( begin, commit, rollback, rollback_only, set_timeout )
  • Status of the transaction ( get_status )
  • Name of the transaction ( get_transaction_name )
  • Transaction context ( get_control )
TransactionFactory Explicit transaction creation
  • create a transaction with its associated coordinator ( create )
  • create an interposed coordinator as a subordinate in the transaction tree ( recreate )
Control Explicit transaction context management
  • access to the transaction coordinator ( get_coordinator )
  • access to the transaction's terminator ( get_terminator )
Terminator Commit (commit) or rollback (rollback) a transaction in a direct transaction management mode
Coordinator
  • Status of the transaction ( get_status, get_parent_status, get_top_level_status )
  • Transaction information ( is_same_transaction, is_related_transaction, is_ancestor_transaction, is_descendant_transaction, is_top_level_transaction, hash_transaction, hash_top_level_transaction, get_transaction_name, get_txcontext )
  • Resource enlistment ( register_resource, register_subtrans_aware )
  • Registration of synchronization objects ( register_synchronization )
  • Set the transaction for rollback ( rollback_only )
  • Create subtransactions ( create_subtransaction )
RecoveryCoordinator Allows recovery to be coordinated in case of failure ( replay_completion )
Resource Participation in two-phase commit and recovery protocol ( prepare, rollback, commit, commit_one_phase, forget )
Synchronization Application synchronization before beginning and after completion of two-phase commit ( before_completion, after_completion )
SubtransactionAwareResource Commit or rollback a subtransaction ( commit_subtransaction, rollback_subtransaction)
TransactionalObject A marker interface to be implemented by all transactional objects (no operation defined)
The Java transaction initiative consists of two specifications: Java Transaction Service (JTS) and Jakarta Transactions API (also known as JTA).

JTS specifies the implementation of a Java transaction manager. This transaction manager supports the JTA, using which application servers can be built to support transactional Java applications. Internally the JTS implements the Java mapping of the OMG OTS specifications.

The JTA specifies an architecture for building transactional application servers and defines a set of interfaces for the various components of this architecture. The components are: the application, the resource managers, and the application server, as shown in the figure below.

The JTS thus provides a new architecture for transactional application servers and applications, while complying with the OMG OTS 1.1 interfaces internally. This allows JTA-compliant applications to interoperate with other OTS 1.1 compliant applications through standard IIOP.

As shown in Figure 1, in the Java transaction model, Java application components can conduct transactional operations on JTA-compliant resources via the JTS. The JTS acts as a layer over the OTS. Applications can therefore initiate global transactions that include other OTS transaction managers, or participate in global transactions initiated by other OTS-compliant transaction managers.

Figure 1 - The JTA/JTS transaction model

The Java Transaction Service is architected around an application server and a transaction manager. The architecture is shown in Figure 2.

Figure 2 - The JTA/JTS Architecture

The JTS architecture consists of the following components:

  • Transaction Manager : The transaction manager is the core component of this architecture and is provided by an implementation of the JTS. It provides interfaces to create transactions (including transaction demarcation and propagation of transaction context), allows enlistment and delistment of resources, provides interfaces for registering components for application synchronization, implements the synchronization protocol, and initiates and directs the two phase commit and recovery protocol with the resource managers.
  • Application Server : One of the key features of the JTS architecture is that it allows an application server to be built on top of the transaction service and the resources. Application developers can develop and deploy application components onto the application server for initiating and managing transactions. The application server can therefore abstract all transactional semantics from the application programs.
  • Application Components : These are the clients for the transactional resources and implement business transactions. They are deployed on the application server. Depending on the architecture of the application server, these components can directly or indirectly create transactions and operate on the transactional resources. For example, a Jakarta Enterprise Beans (EJB) server allows declarative transaction demarcation, in which case the EJB components need not directly implement the transactions. However, a Java implementation of a CORBA OTS requires the CORBA object to demarcate transactions explicitly.
  • Resource Manager : A resource manager is an X/Open XA compliant component that manages a persistent and stable storage system, and participates in the two-phase commit and recovery protocol with the transaction manager. The resource manager also provides interfaces for the application server and the application components to operate on the data it manages.
  • Communication Resource Manager : This allows the transaction manager to participate in transactions initiated by other transaction managers. However, the JTS specification does not specify any protocol for this communication and assumes that an implementation of the communication resource manager supports the CORBA OTS and GIOP specifications.
The Jakarta Transactions specification may be classified into three categories of interface, as shown in Figure 3. The Jakarta Transactions API consists of three elements: a high-level application transaction demarcation interface, a high-level transaction manager interface intended for application servers, and a standard Java mapping of the X/Open XA protocol intended for transactional resource managers.

Figure 3 - JTA Interfaces
  • jakarta.transaction.Status: Defines the following flags for the status of a transaction:
Flag Purpose
STATUS_ACTIVE Transaction is active (started but not prepared)
STATUS_COMMITTED Transaction is committed
STATUS_COMMITTING Transaction is in the process of committing.
STATUS_MARKED_ROLLBACK Transaction is marked for rollback.
STATUS_NO_TRANSACTION There is no transaction associated with the current Transaction, UserTransaction or TransactionManager objects.
STATUS_PREPARED Voting phase of the two phase commit is over and the transaction is prepared.
STATUS_PREPARING Transaction is in the process of preparing.
STATUS_ROLLEDBACK Outcome of the transaction has been determined as rollback. It is likely that heuristics exist.
STATUS_ROLLING_BACK Transaction is in the process of rolling back.
STATUS_UNKNOWN A transaction exists but its current status cannot be determined. This is a transient condition.

Table 1: Transaction Status Flags

The jakarta.transaction.Transaction, jakarta.transaction.TransactionManager, and jakarta.transaction.UserTransaction interfaces provide a getStatus method that returns one of the above status flags.

  • jakarta.transaction.Transaction: An object of this type is created for each global transaction. This interface provides methods for transaction completion (commit and rollback), resource enlistment (enlistResource) and delistment (delistResource), registration of synchronization objects (registerSynchronization), and querying the status of the transaction (getStatus).
  • jakarta.transaction.TransactionManager: This interface is implemented by the JTS and allows an application server to communicate with the transaction manager to demarcate transactions (begin, commit, rollback), suspend and resume transactions (suspend and resume; a minimal sketch follows this list), mark the transaction for rollback (setRollbackOnly), get the associated Transaction object (getTransaction), set the transaction timeout interval (setTransactionTimeout) and query the status of the transaction (getStatus).
  • jakarta.transaction.UserTransaction: This interface provides methods to begin and end transactions (begin, commit, and rollback), mark the transaction for rollback (setRollbackOnly), set the transaction timeout interval (setTransactionTimeout), and get the status of the transaction (getStatus). Nested transactions are not supported, and begin throws NotSupportedException when the calling thread is already associated with a transaction. UserTransaction automatically associates newly created transactions with the invoking thread.
  • javax.transaction.xa.Xid: This interface is a Java mapping of the X/Open transaction identifier xid structure. The transaction manager uses an object of this type to associate a resource manager with a transaction.
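
As a brief illustration of the TransactionManager operations listed above, the following minimal sketch suspends the transaction associated with the calling thread, runs some non-transactional work, and then resumes it. The helper class, its method and the Runnable parameter are illustrative only and not part of the JTA specification.

import jakarta.transaction.Transaction;
import jakarta.transaction.TransactionManager;

// Hypothetical helper: runs a piece of work outside the caller's transaction.
public class SuspendResumeExample {
    public static void runOutsideTransaction(TransactionManager tm, Runnable work)
            throws Exception {
        // Detach the transaction currently associated with this thread (may be null).
        Transaction suspended = tm.suspend();
        try {
            work.run(); // executes with no transaction context on this thread
        } finally {
            if (suspended != null) {
                tm.resume(suspended); // re-associate the original transaction
            }
        }
    }
}
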
This section describes the usage of the JTA for implementing various transaction semantics. The purpose of this section is to provide conceptual guidelines only.
Transactional resources such as database connections are typically managed by the application server in conjunction with some resource adapter and optionally with connection pooling optimisation. In order for an external transaction manager to co-ordinate transactional work performed by the resource managers, the application server must enlist and de-list the resources used in the transaction. These resources (participants) are enlisted with the transaction so that they can be informed when the transaction terminates, e.g., are driven through the two-phase commit protocol.

Jakarta Transactions is much more closely integrated with the XA concept of resources than with arbitrary objects. For each resource in use by the application, the application server invokes the enlistResource method with an XAResource object which identifies the resource in use.

The enlistment request results in the transaction manager informing the resource manager to start associating the transaction with the work performed through the corresponding resource. The transaction manager is responsible for passing the appropriate flag in its XAResource.start method call to the resource manager.

The delistResource method is used to disassociate the specified resource from the transaction context in the target object. The application server invokes the method with the two parameters: the XAResource object that represents the resource, and a flag to indicate whether the operation is due to the transaction being suspended (TMSUSPEND), a portion of the work has failed (TMFAIL), or a normal resource release by the application (TMSUCCESS).

The de-list request results in the transaction manager informing the resource manager to end the association of the transaction with the target XAResource. The flag value allows the application server to indicate whether it intends to come back to the same resource whereby the resource states must be kept intact. The transaction manager passes the appropriate flag value in its XAResource.end method call to the underlying resource manager.

The application server can enlist and delist resource managers with the transaction manager using the jakarta.transaction.Transaction interface.

Usage

Resource enlistment is in general done by the application server when an application requests a connection to a transactional resource.


// ... inside an implementation of the application server
// Get a reference to the underlying TransactionManager object.
...
// Get the current Transaction object from the TransactionManager.
transaction = transactionManager.getTransaction();
// Get an XAResource object from the transactional resource.
...
// Enlist the resource with the transaction.
transaction.enlistResource(xaResource);
...
// Return the connection to the application.
...

Resource delistment is done similarly after the application closes connections to transactional resources.
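
The corresponding delistment step might look like the following minimal sketch, which assumes the application server has kept references to the Transaction and XAResource used during enlistment and signals normal completion of the work with the TMSUCCESS flag.

import jakarta.transaction.Transaction;
import javax.transaction.xa.XAResource;

// Hypothetical helper invoked when the application releases its connection.
public class DelistExample {
    public static void releaseResource(Transaction transaction, XAResource xaResource)
            throws Exception {
        // TMSUCCESS tells the resource manager that this portion of work completed
        // normally; TMSUSPEND or TMFAIL could be passed instead, as described above.
        transaction.delistResource(xaResource, XAResource.TMSUCCESS);
    }
}
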

Jakarta Enterprise Beans (EJB) is a technology specification that defines a framework for building component-based distributed applications. As an application server framework, EJB servers address transaction processing, resource pooling, security, threading, persistence, remote access, life cycle, and so on.

The EJB framework specifies the construction, deployment and invocation of components called enterprise beans. The EJB specification classifies enterprise beans into two categories: entity beans and session beans. While entity beans abstract persistent domain data, session beans provide session-specific application logic. Both types of beans are maintained by EJB-compliant servers in what are called containers. A container provides the run-time environment for an enterprise bean. Figure 4 shows a simplified architecture of transaction management in EJB-compliant application servers.

Figure 4 - EJB and Transactions

An enterprise bean is specified by two interfaces: the home interface and the remote interface. The home interface specifies how a bean can be created or found. With the help of this interface, a client or another bean can obtain a reference to a bean residing in a container on an EJB server. The remote interface specifies application-specific methods that are relevant to entity or session beans.

Clients obtain references to home interfaces of enterprise beans via the Java Naming and Directory Interface (JNDI) mechanism. An EJB server should provide a JNDI implementation for any naming and directory server. Using this reference to the home interface, a client can obtain a reference to the remote interface. The client can then access methods specified in the remote interface. The EJB specification specifies the Java Remote Method Invocation (RMI) as the application level protocol for remote method invocation. However, an implementation can use IIOP as the wire-level protocol.

In Figure 5, the client first obtains a reference to the home interface, and then a reference to an instance of Bean A via the home interface. The same procedure applies when an instance of Bean A obtains a reference to, and invokes methods on, an instance of Bean B.

The EJB framework does not specify any specific transaction service (such as the JTS) or protocol for transaction management. However, the specification requires that the jakarta.transaction.UserTransaction interface of the JTS be exposed to enterprise beans. This interface is required for programmatic transaction demarcation as discussed in the next section.

The EJB framework allows both programmatic and declarative demarcation of transactions. Declarative demarcation is needed for all enterprise beans deployed on the EJB server. In addition, EJB clients can also initiate and end transactions programmatically.

The container performs automatic demarcation depending on the transaction attributes specified at the time an enterprise bean is deployed in a container. The following attributes determine how transactions are created (an illustrative sketch follows the list).

  • NotSupported : The container invokes the bean without a global transaction context.
  • Required : The container invokes the bean within a global transaction context. If the invoking thread already has a transaction context associated, the container invokes the bean in the same context. Otherwise, the container creates a new transaction and invokes the bean within the transaction context.
  • Supports : The bean is transaction-ready. If the client invokes the bean within a transaction, the bean is also invoked within the same transaction. Otherwise, the bean is invoked without a transaction context.
  • RequiresNew : The container invokes the bean within a new transaction irrespective of whether the client is associated with a transaction or not.
  • Mandatory : The container must invoke the bean within a transaction. The caller should always start a transaction before invoking any method on the bean.
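
As an illustration of these attributes, the following sketch uses the later Jakarta EE annotation style (rather than a deployment descriptor) to request the RequiresNew behaviour for a single method; the bean and method names are purely illustrative.

import jakarta.ejb.Stateless;
import jakarta.ejb.TransactionAttribute;
import jakarta.ejb.TransactionAttributeType;

// Illustrative bean: the container always starts a new global transaction
// for record(), regardless of whether the caller already has one.
@Stateless
public class AuditBean {

    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public void record(String message) {
        // transactional work performed here runs in its own transaction
    }
}
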

Java Database Connectivity (JDBC) provides Java programs with a way to connect to and use relational databases. The JDBC API lets you invoke SQL commands from Java programming language methods. In simplest terms, JDBC allows you to do three things:

  • Establish a connection with a database
  • Send SQL statements
  • Process the results

The following code fragment gives a simple example of these three steps:


Connection con = DriverManager.getConnection(
  "jdbc:myDriver:wombat", "myLogin", "myPassword");
Statement stmt = con.createStatement();
ResultSet rs = stmt.executeQuery("SELECT a, b, c FROM Table1");
while (rs.next()) {
  int x = rs.getInt("a");
  String s = rs.getString("b");
  float f = rs.getFloat("c");
}

Before version 2.0 of JDBC, only local transactions controlled by the transaction manager of the DBMS were possible. To code a JDBC transaction, you invoke the commit and rollback methods of the java.sql.Connection interface. The beginning of a transaction is implicit: a transaction begins with the first SQL statement that follows the most recent commit, rollback, or connect statement. (This rule is generally true, but may vary with the DBMS vendor.) The following example illustrates how transactions are managed by the JDBC API.


public void withdraw(double amount) throws Exception {
  try {
    // A JDBC connection is in auto-commit mode by default, meaning each
    // SQL statement is committed as soon as it completes.
    // Setting auto-commit to false disables this behaviour.
    connection.setAutoCommit(false);
    // perform an SQL update to withdraw money from the account
    connection.commit();
  } catch (Exception ex) {
    try {
      connection.rollback();
    } catch (Exception sqx) {
      throw new Exception("Rollback failed: " + sqx.getMessage());
    }
    throw new Exception("Transaction failed: " + ex.getMessage());
  }
}

From version 2.0 onwards, a JDBC driver can be involved in a distributed transaction, since it supports the XAResource interface that allows it to participate in the two-phase commit protocol. An application that needs to include more than one database can create a JTA transaction. To demarcate a JTA transaction, the application program invokes the begin, commit, and rollback methods of the jakarta.transaction.UserTransaction interface. The following code, which can be applied to a bean-managed transaction, demonstrates the UserTransaction methods. The begin and commit invocations delimit the updates to the database. If the updates fail, the code invokes the rollback method and throws an Exception.


public void transfer(double amount) throws Exception {
  UserTransaction ut = context.getUserTransaction();

  try {
     ut.begin();
     // Perform SQL command to debit account 1
     // Perform SQL command to debit account 2
     ut.commit();
   } catch (Exception ex) {
        try {
          ut.rollback();
        } catch (Exception ex1) {
             throw new Exception ("Rollback failed: " + ex1.getMessage());
        }
        throw new Exception ("Transaction failed: " + ex.getMessage());
   }
}

This trail provides information on how to configure the environment variables needed to define the behaviour of transactional applications managed with Narayana. Basically, the behaviour of the product is configurable through property attributes. Although these property attributes may be specified as command-line arguments, it is more convenient to organise and initialise them through properties files.

The properties file named jbossts-properties.xml and located under the <ats_installation_directory>/etc directory is organised as a collection of property names.


<property name="a_name" value="a_value"/>

Some properties must be specified by the developer, while others need not be defined and can be used with their default values. Basically, jbossts-properties.xml is the properties file in which not all properties have default values.

The following table describes some properties in the jbossts-properties.xml, where:

  • Name : indicates the name of the property
  • Description : explains the aim of the property
  • Possible Value : indicates the possible values the property can have
  • Default Value : shows the default value, if any, assigned to the property
Name Description Possible Value Default Value
com.arjuna.ats.arjuna.objectstore.localOSRoot By default, all object states are stored within the "defaultStore" subdirectory of the object store root. This subdirectory can be changed by setting the localOSRoot property variable accordingly Directory name defaultStore
com.arjuna.ats.arjuna.objectstore.objectStoreDir Specifies the location of the ObjectStore Directory name PutObjectStoreDirHere
com.arjuna.ats.arjuna.common.varDir Narayana needs to be able to write temporary files to a well-known location during execution. By default this location is var; it can be overridden by setting the varDir property variable. Directory name var/tmp

Sometimes it is desirable, mainly for debugging, to have some form of output during execution in order to trace the internal actions performed. Narayana uses the logging and tracing mechanism provided by the Arjuna Common Logging Framework (CLF) version 2.4, which provides a high-level interface that hides the differences between logging APIs such as Jakarta log4j, the JDK 1.4 logging API or the .NET logging API.

With the CLF, applications make logging calls on commonLogger objects. These commonLogger objects pass log messages to Handlers for publication. Both commonLoggers and Handlers may use logging Levels to decide if they are interested in a particular log message. Each log message has an associated log Level, which indicates the importance and urgency of the message. The set of possible log Levels is DEBUG, INFO, WARN, ERROR and FATAL. Defined Levels are ordered according to their integer values as follows: DEBUG < INFO < WARN < ERROR < FATAL.

The CLF provides an extension to filter logging messages according to a finer granularity that an application may define. That is, when a log message is provided to the commonLogger with the DEBUG level, additional conditions can be specified to determine whether the log message is enabled or not.

Note : These conditions are applied if and only if the DEBUG level is enabled and the log request performed by the application specifies debugging granularity.

When enabled, Debugging is filtered conditionally on three variables:

  • Debugging level: this is where the log request with the DEBUG Level is generated from, e.g., constructors or basic methods.
  • Visibility level: the visibility of the constructor, method, etc. that generates the debugging.
  • Facility code: for instance the package or sub-module within which debugging is generated, e.g., the object store.

According to these variables, the Common Logging Framework defines three interfaces. A particular product may implement its own classes according to its own finer granularity. Narayana uses the default Debugging level and the default Visibility level provided by the CLF, but it defines its own Facility Code. Narayana uses the default level assigned to its commonLogger objects (DEBUG). However, it uses the finer debugging features to disable or enable debug messages. The finer values used by Narayana are defined below:

  • Debugging level – Narayana uses the default values defined in the class com.arjuna.common.util.logging.CommonDebugLevel
Debug Level Value Description
NO_DEBUGGING 0x0000 A commonLogger object assigned this value discards all debug requests
CONSTRUCTORS 0x0001 Diagnostics from constructors
DESTRUCTORS 0x0002 Diagnostics from finalizers.
CONSTRUCT_AND_DESTRUCT CONSTRUCTORS | DESTRUCTORS Diagnostics from constructors and finalizers
FUNCTIONS 0x0010 Diagnostics from functions
OPERATORS 0x0020 Diagnostics from operators, such as equals
FUNCS_AND_OPS FUNCTIONS | OPERATORS Diagnostics from functions and operations.
ALL_NON_TRIVIAL CONSTRUCT_AND_DESTRUCT | FUNCTIONS | OPERATORS Diagnostics from all non-trivial operations
TRIVIAL_FUNCS 0x0100 Diagnostics from trivial functions.
TRIVIAL_OPERATORS 0x0200 Diagnostics from trivial operations and operators.
ALL_TRIVIAL TRIVIAL_FUNCS | TRIVIAL_OPERATORS Diagnostics from all trivial operations
FULL_DEBUGGING 0xffff Full diagnostics.
  • Visibility level – Narayana uses the default values defined in the class com.arjuna.common.util.logging.CommonVisibilityLevel
Visibility Level Value Description
VIS_NONE 0x0000 No Diagnostic
VIS_PRIVATE 0x0001 only from private methods.
VIS_PROTECTED 0x0002 only from protected methods.
VIS_PUBLIC 0x0004 only from public methods.
VIS_PACKAGE 0x0008 only from package methods.
VIS_ALL 0xffff Full Diagnostic
  • Facility Code – Narayana uses the following values
Facility Code Level Value Description
FAC_ATOMIC_ACTION 0x00000001 atomic action core module
FAC_BUFFER_MAN 0x00000004 state management (buffer) classes
FAC_ABSTRACT_REC 0x00000008 abstract records
FAC_OBJECT_STORE 0x00000010 object store implementations
FAC_STATE_MAN 0x00000020 state management (StateManager) classes
FAC_SHMEM 0x00000040 shared memory implementation classes
FAC_GENERAL 0x00000080 general classes
FAC_CRASH_RECOVERY 0x00000800 detailed trace of crash recovery module and classes
FAC_THREADING 0x00002000 threading classes
FAC_JDBC 0x00008000 JDBC 1.0 and 2.0 support
FAC_RECOVERY_NORMAL 0x00040000 normal output for crash recovery manager

To ensure appropriate output, it is necessary to set some of the finer debug properties explicitly as follows:

 <properties>
   <!-- CLF 2.4 properties -->
   <property
     name="com.arjuna.common.util.logging.DebugLevel"
     value="0x00000000"/>
   <property
     name="com.arjuna.common.util.logging.FacilityLevel"
     value="0xffffffff"/>
   <property
     name="com.arjuna.common.util.logging.VisibilityLevel"
     value="0xffffffff"/>
   <property
     name="com.arjuna.common.util.logger"
     value="log4j"/>
 </properties>

By default, debugging messages are not enabled, since the DebugLevel is set to NO_DEBUGGING (0x00000000). You can enable debugging by providing one of the appropriate values listed above. For instance, if you wish to see all internal actions performed by the RecoveryManager to recover transactions after a failure, set the DebugLevel to FULL_DEBUGGING (0xffffffff) and the FacilityCode Level to FAC_CRASH_RECOVERY.

Note : To enable finer debug messages, the logging level should be set to the DEBUG level as described below.

From the programming point of view, the same API is used whatever the underlying logging mechanism, but from a configuration point of view the user is entirely responsible for the configuration of the underlying logging system. Hence, the properties of the underlying log system are configured in a manner specific to that log system; for example, a log4j.properties file is used when log4j logging is selected. To set the logging level to the DEBUG value, the log4j.properties file can be edited accordingly.

The property com.arjuna.common.util.logger selects the underlying logging system. Possible values are listed in the following table.

Property Value Description
log4j Log4j logging (log4j classes must be available in the classpath); configuration through the log4j.properties file, which is picked up from the CLASSPATH or given through a System property: log4j.configuration
jdk14 JDK 1.4 logging API (only supported on JVMs of version 1.4 or higher). Configuration is done through a file logging.properties in the jre/lib directory.
simple Selects the simple JDK 1.1 compatible console-based logger provided by Jakarta Commons Logging
csf Selects CSF-based logging (CSF embeddor must be available)
jakarta Uses the default log system selection algorithm of the Jakarta Commons Logging framework
dotnet Selects a .NET logging implementation. Since a .NET logger is not currently implemented, this is currently identical to simple, which is a purely JDK 1.1 console-based log implementation.

avalon Uses the Avalon Logkit implementation
noop Disables all logging

The ORB class, provided in the com.arjuna.orbportability package and shown below, provides a uniform way of using the ORB. There are methods for obtaining a reference to the ORB, and for placing the application into a mode where it listens for incoming connections. There are also methods for registering application-specific classes to be invoked before or after ORB initialisation.


public class ORB
{
   public static ORB getInstance(String uniqueId);
   // Given the various parameters, these methods initialise the ORB and
   // retain a reference to it within the ORB class.
   public synchronized void initORB () throws SystemException;
   public synchronized void initORB (Applet a, Properties p)
        throws SystemException;
   public synchronized void initORB (String[] s, Properties p)
        throws SystemException;

  //The orb method returns a reference to the ORB.
  //After shutdown is called this reference may be null.
   public synchronized org.omg.CORBA.ORB orb ();
   public synchronized boolean setOrb (org.omg.CORBA.ORB theORB);
   // If supported, this method cleanly shuts down the ORB.
   // Any pre- and post- ORB shutdown classes which
   //have been registered will also be called.
   public synchronized void shutdown ();

  public synchronized boolean addAttribute (Attribute p);
  public synchronized void addPreShutdown (PreShutdown c);
  public synchronized void addPostShutdown (PostShutdown c);

  public synchronized void destroy () throws SystemException;
  //these methods place the ORB into a listening mode,
  //where it waits for incoming invocations.
   public void run ();
   public void run (String name);
};

Note that some of the methods are not supported on all ORBs; in that situation, a suitable exception will be thrown. The ORB class is a factory class which has no public constructor. To create an instance of an ORB you must call the getInstance method, passing a unique name as a parameter. If this unique name has not been passed in a previous call to getInstance, a new ORB instance will be returned. Two invocations of getInstance made with the same unique name, within the same JVM, will return the same ORB instance.

The OA classes shown below provide a uniform way of using Object Adapters (OA). There are methods for obtaining a reference to the OA, and for registering application-specific classes to be invoked before or after OA initialisation. Note that some of the methods are not supported on all ORBs; in that situation, a suitable exception will be thrown. The OA class is an abstract class that provides the basic interface to an Object Adapter. It has two sub-classes, RootOA and ChildOA, which expose the interfaces specific to the root Object Adapter and a child Object Adapter respectively. You can obtain a reference to the RootOA for a given ORB by using the static method getRootOA. To create a ChildOA instance, use the createPOA method on the RootOA.

As described below, the OA class and its sub-classes provide most operations provided by the POA as specified in the POA specification.


public abstract class OA
{
  public synchronized static RootOA getRootOA(ORB associatedORB);
  public synchronized void initPOA () throws SystemException;
  public synchronized void initPOA (String[] args) throws SystemException;
  public synchronized void initOA () throws SystemException;
  public synchronized void initOA (String[] args) throws SystemException;
  public synchronized ChildOA createPOA (String adapterName,
      PolicyList policies) throws AdapterAlreadyExists, InvalidPolicy;
  public synchronized org.omg.PortableServer.POA rootPoa ();
  public synchronized boolean setPoa (org.omg.PortableServer.POA thePOA);
  public synchronized org.omg.PortableServer.POA poa (String adapterName);
  public synchronized boolean setPoa (String adapterName,
     org.omg.PortableServer.POA thePOA);
  ...
};

public class RootOA extends OA
{
  public synchronized void destroy() throws SystemException;
  public org.omg.CORBA.Object corbaReference (Servant obj);
  public boolean objectIsReady (Servant obj, byte[] id);
  public boolean objectIsReady (Servant obj);
  public boolean shutdownObject (org.omg.CORBA.Object obj);
  public boolean shutdownObject (Servant obj);
};

public class ChildOA extends OA
{
  public synchronized boolean setRootPoa (POA thePOA);
  public synchronized void destroy() throws SystemException;
  public org.omg.CORBA.Object corbaReference (Servant obj);
  public boolean objectIsReady (Servant obj, byte[] id)
      throws SystemException;
  public boolean objectIsReady (Servant obj) throws SystemException;
  public boolean shutdownObject (org.omg.CORBA.Object obj);
  public boolean shutdownObject (Servant obj);
};
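
To show how these classes fit together, here is a minimal bootstrap sketch based only on the methods listed above; the unique name "TrailMapServer" and the use of a null Properties argument are illustrative assumptions.

import com.arjuna.orbportability.OA;
import com.arjuna.orbportability.ORB;
import com.arjuna.orbportability.RootOA;

public class OrbBootstrap {
    public static void main(String[] args) throws Exception {
        // Obtain (or create) the ORB instance identified by a unique name.
        ORB myORB = ORB.getInstance("TrailMapServer");
        myORB.initORB(args, null);

        // Obtain and initialise the root object adapter tied to this ORB.
        RootOA rootOA = OA.getRootOA(myORB);
        rootOA.initOA();

        // ... servants would be registered here with rootOA.objectIsReady(...) ...

        // Clean shutdown in reverse order of initialisation.
        rootOA.destroy();
        myORB.shutdown();
    }
}
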
The Recovery Manager is a daemon process responsible for performing crash recovery. Only one Recovery Manager runs per node. The Object Store provides persistent data storage for transactions to log data. During normal transaction processing each transaction logs the persistent data needed for the commit phase to the Object Store. On successfully committing a transaction this data is removed; however, if the transaction fails, this data remains within the Object Store.

The Recovery Manager functions by:

  • Periodically scanning the Object Store for transactions that may have failed. Failed transactions are indicated by the presence of log data after a period of time in which the transaction would normally have been expected to finish.
  • Checking with the application process which originated the transaction whether the transaction is still in progress or not.
  • Recovering the transaction by re-activating the transaction and then replaying phase two of the commit protocol.
To start the Recovery Manager issue the following command:

java com.arjuna.ats.arjuna.recovery.RecoveryManager
If the -test flag is used with the Recovery Manager then it will display a "Ready" message when initialised, i.e.,

java com.arjuna.ats.arjuna.recovery.RecoveryManager -test
On initialization the Recovery Manager first loads in configuration information via a properties file. This configuration includes a number of recovery activators and recovery modules, which are then dynamically loaded.

Each recovery activator, which implements the com.arjuna.ats.arjuna.recovery.RecoveryActivator interface, is used to instantiate a recovery class related to the underlying communication protocol. Indeed, since version 3.0 of Narayana, the Recovery Manager is not specifically tied to an Object Request Broker (ORB); the RecoveryActivator interface is provided so that a recovery instance able to manage a specific transaction protocol, such as the OTS recovery protocol, can be specified. For instance, when used with the OTS, the RecoveryActivator has the responsibility to create a RecoveryCoordinator object able to respond to the replay_completion operation.

All RecoveryActivator instances inherit the same interface. They are loaded via the following recovery extension property:


<property
  name="com.arjuna.ats.arjuna.recovery.recoveryActivator_<number>"
  value="RecoveryClass"/> 

For instance, the RecoveryActivator provided in the JTS/OTS distribution, which should not be commented out, is as follows:


<property
  name="com.arjuna.ats.arjuna.recovery.recoveryActivator_1"
  value="com.arjuna.ats.internal.jts.
     orbspecific.recovery.RecoveryEnablement"/> 
Each recovery module, which implements the com.arjuna.ats.arjuna.recovery.RecoveryModule interface, is used to recover a different type of transaction/resource; however, each recovery module inherits the same basic behaviour.

Recovery consists of two separate passes/phases separated by two timeout periods. The first pass examines the object store for potentially failed transactions; the second pass performs crash recovery on failed transactions. The timeout between the first and second pass is known as the backoff period. The timeout between the end of the second pass and the start of the first pass is the recovery period. The recovery period is larger than the backoff period.

The Recovery Manager invokes the first pass upon each recovery module, applies the backoff period timeout, invokes the second pass upon each recovery module and finally applies the recovery period timeout before restarting the first pass again.
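
As an illustration of this two-pass contract, the following skeletal sketch assumes the RecoveryModule interface declares one method per pass (periodicWorkFirstPass and periodicWorkSecondPass); the class name and comments are illustrative only.

import com.arjuna.ats.arjuna.recovery.RecoveryModule;

public class SketchRecoveryModule implements RecoveryModule {
    // First pass: scan the object store and remember candidate transactions
    // that look as if they may have failed.
    public void periodicWorkFirstPass() {
        // build the candidate list here
    }

    // Second pass (after the backoff period): anything still unresolved is
    // assumed to have failed and is driven through phase two of the commit protocol.
    public void periodicWorkSecondPass() {
        // recover the remaining candidates here
    }
}
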

The recovery modules are loaded via the following recovery extension property:


com.arjuna.ats.arjuna.recovery.recoveryExtension<number>=<RecoveryClass> 
The default RecoveryExtension settings are:

<property name="com.arjuna.ats.arjuna.recovery.recoveryExtension1"
  value="com.arjuna.ats.internal.
     arjuna.recovery.AtomicActionRecoveryModule"/>
<property name="com.arjuna.ats.arjuna.recovery.recoveryExtension2"
  value="com.arjuna.ats.internal.
     txoj.recovery.TORecoveryModule"/>
<property name="com.arjuna.ats.arjuna.recovery.recoveryExtension3"
  value="com.arjuna.ats.internal.
     jts.recovery.transactions.TopLevelTransactionRecoveryModule"/>
<property  name="com.arjuna.ats.arjuna.recovery.recoveryExtension4"
  value="com.arjuna.ats.internal.
     jts.recovery.transactions.ServerTransactionRecoveryModule"/> 
The operation of the recovery subsystem causes some entries to be made in the ObjectStore that are not removed in the course of normal processing. The RecoveryManager has a facility for scanning for these and removing items that are very old. Scans and removals are performed by implementations of com.arjuna.ats.arjuna.recovery.ExpiryScanner. Implementations of this interface are loaded by giving the class name as the value of a property whose name begins with "com.arjuna.ats.arjuna.recovery.expiryScanner".

The RecoveryManager calls the scan() method on each loaded ExpiryScanner implementation at an interval determined by the property com.arjuna.ats.arjuna.recovery.expiryScanInterval. This value is given in hours; the default is 12. An expiryScanInterval value of zero suppresses any expiry scanning. If the supplied value is positive, the first scan is performed when the RecoveryManager starts; if the value is negative, the first scan is delayed until after the first interval (using the absolute value).

The default ExpiryScanner is:


<property
  name="com.arjuna.ats.arjuna.recovery.
        expiryScannerTransactionStatusManager"
  value="com.arjuna.ats.internal.arjuna.recovery.
       ExpiredTransactionStatusManagerScanner"/> 
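
For completeness, a skeletal expiry scanner might look like the following sketch. It assumes that, in addition to the scan method described above, the ExpiryScanner interface declares a toBeUsed method that lets an implementation opt out; the class name is illustrative.

import com.arjuna.ats.arjuna.recovery.ExpiryScanner;

public class SketchExpiryScanner implements ExpiryScanner {
    // Called by the RecoveryManager at each expiry scan interval.
    public void scan() {
        // examine old ObjectStore entries and remove those past their expiry age
    }

    // Assumed opt-in hook: returning false would disable this scanner.
    public boolean toBeUsed() {
        return true;
    }
}
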

The following table summarizes the properties used by the Recovery Manager. These properties are defined by default in the properties file named RecoveryManager-properties.xml.

Name Description Possible Value Default Value
com.arjuna.ats.arjuna.recovery.periodicRecoveryPeriod Interval in seconds between initiating the periodic recovery modules Value in seconds 120
com.arjuna.ats.arjuna.recovery.recoveryBackoffPeriod Interval in seconds between first and second pass of periodic recovery Value in seconds 10
com.arjuna.ats.arjuna.recovery.recoveryExtensionX Indicates a periodic recovery module to use. X is the occurrence number of the recovery module among a set of recovery modules. These modules are invoked in sort-order of names The class name of the periodic recovery module Narayana provides a set of classes given in the RecoveryManager-properties.xml file
com.arjuna.ats.arjuna.recovery.recoveryActivator_X Indicates a recovery activator to use. X is the occurrence number of the recovery activator among a set of recovery activators. The class name of the recovery activator Narayana provides one class that manages the recovery protocol specified by the OTS specification
com.arjuna.ats.arjuna.recovery.expiryScannerXXX Expiry scanners to use (order of invocation is random). Names must begin with "com.arjuna.ats.arjuna.recovery.expiryScanner" Class name Narayana provides one class given in the RecoveryManager-properties.xml file
com.arjuna.ats.arjuna.recovery.expiryScanInterval Interval, in hours, between running the expiry scanners. This can be quite long. The absolute value determines the interval - if the value is negative, the scan will NOT be run until after one interval has elapsed. If positive the first scan will be immediately after startup. Zero will prevent any scanning. Value in hours 12
com.arjuna.ats.arjuna.recovery.transactionStatusManagerExpiryTime Age, in hours, for removal of transaction status manager item. This should be longer than any ts-using process will remain running. Zero = Never removed. Value in Hours 12
com.arjuna.ats.arjuna.recovery.transactionStatusManagerPort Use this to fix the port on which the TransactionStatusManager listens Port number (short) use a free port

To ensure that your installation is fully operational, we will run the simple demo.

Please follow these steps before running the transactional applications:

  • Ensure you have the Ant build system installed. Ant is a Java build tool, similar to make. It is available for free from http://ant.apache.org/. The sample application requires version 1.5.1 or later.
  • The PATH and CLASSPATH environment variables need to be set appropriately to use Narayana. To make this easier, we provide a shell script setup_env.sh (and for Windows a batch file setup_env.bat) in the directory <jbossts_install_root>/bin/
  • From a command prompt, cd to the directory containing the build.xml file (<jbossts_install_root>/trail_map) and type 'ant'. This will compile a set of source files located under <jbossts_install_root>/trail_map/src and then create an application jar file named jbossts-demo.jar under the directory <jbossts_install_root>/trail_map/lib
  • Add the generated jar file to the CLASSPATH environment variable.
  • Ensure that JacORB is added to your CLASSPATH. Use only the patched version that ships with Narayana.

    Ensure that the Narayana jar files appear before the JacORB jar files.

  • Start the server (HelloServer.java, located under src/com/arjuna/demo/simple). ( Note: The source code for the trailmap is fully documented and can often contain very useful tips and information that may not be reflected elsewhere in the trailmap. )

java com.arjuna.demo.simple.HelloServer

  • Open another command prompt, go to the same /trail_map directory and start the client (HelloClient.java, located under src/com/arjuna/demo/simple). Be sure that the CLASSPATH environment variable is set to the same value as explained above.

java com.arjuna.demo.simple.HelloClient

In the client window you should see the following lines:


     Creating a transaction !
     Call the Hello Server !
     Commit transaction
     Done

In the server, which must be stopped by hand, you should see:


     Hello - called within a scope of a transaction

Transaction management is one of the most crucial requirements for enterprise application development. Most of the large enterprise applications in the domains of finance, banking and electronic commerce rely on transaction processing for delivering their business functionality.

Enterprise applications often require concurrent access to distributed data shared amongst multiple components, to perform operations on data. Such applications should maintain integrity of data (as defined by the business rules of the application) under the following circumstances:

  • distributed access to a single resource of data, and
  • access to distributed resources from a single application component.

In such cases, it may be required that a group of operations on (distributed) resources be treated as one unit of work. In a unit of work, all the participating operations should either succeed or fail and recover together. This problem is more complicated when

  • a unit of work is implemented across a group of distributed components operating on data from multiple resources, and/or
  • the participating operations are executed sequentially or in parallel threads requiring coordination and/or synchronization.

In either case, it is required that success or failure of a unit of work be maintained by the application. In case of a failure, all the resources should bring back the state of the data to the previous state ( i.e., the state prior to the commencement of the unit of work).

From the programmer's perspective a transaction is a scoping mechanism for a collection of actions which must complete as a unit. It provides a simplified model for exception handling since only two outcomes are possible:

  • success - meaning that all actions involved within a transaction are completed
  • failure - no actions complete

To illustrate the reliability expected by the application let’s consider the funds transfer example which is familiar to all of us.

The Money transfer involves two operations: Deposit and Withdrawal

The complexity of the implementation does not matter; money moves from one place to another. For instance, the accounts involved may be located in the same relational table within a database, or on different databases.

A simple transfer consists of moving money from savings to checking, while a complex transfer might be performed at the end of day according to a reconciliation between international banks.

The concept of a transaction, and a transaction manager (or a transaction processing service) simplifies construction of such enterprise level distributed applications while maintaining integrity of data in a unit of work.

A transaction is a unit of work that has the following properties:

  • Atomicity – either the whole transaction completes or nothing completes; partial completion is not permitted.
  • Consistency – a transaction transforms the system from one consistent state to another. In other words, on completion of a successful transaction, the data should be in a consistent state. For example, in the case of relational databases, a consistent transaction should preserve all the integrity constraints defined on the data.
  • Isolation – each transaction should appear to execute independently of other transactions that may be executing concurrently in the same environment. The effect of executing a set of transactions serially should be the same as that of running them concurrently. This requires two things:
    • During the course of a transaction, the intermediate (possibly inconsistent) state of the data should not be exposed to other transactions.
    • Two concurrent transactions should not be able to operate on the same data. Database management systems usually implement this feature using locking.
  • Durability – the effects of a completed transaction should always be persistent.

These properties, known as the ACID properties, guarantee that a transaction is never incomplete, the data is never inconsistent, concurrent transactions are independent, and the effects of a transaction are persistent.

The transaction manager is the core component of a transaction processing environment. Its main responsibilities are to create transactions when requested by application components, allow resource enlistment and delistment, and to manage the two-phase commit or recovery protocol with the resource managers.

A typical transactional application begins a transaction by issuing a request to a transaction manager to initiate a transaction. In response, the transaction manager starts a transaction and associates it with the calling thread. The transaction manager also establishes a transaction context. All application components and/or threads participating in the transaction share the transaction context. The thread that initially issued the request for beginning the transaction, or, if the transaction manager allows, any other thread may eventually terminate the transaction by issuing a commit or rollback request.

Before a transaction is terminated, any number of components and/or threads may perform transactional operations on any number of transactional resources known to the transaction manager. If allowed by the transaction manager, a transaction may be suspended or resumed before finally completing the transaction.

Once the application issues the commit request, the transaction manager prepares all the resources for a commit operation, and based on whether all resources are ready for a commit or not, issues a commit or rollback request to all the resources.

Transaction Manager responsibilities can be summarized as follows:

  • Establish and maintain transaction context
  • Maintain association between a transaction and the participating resources.
  • Initiate and conduct two-phase commit and recovery protocol with the resource managers.
  • Make synchronization calls to the application components before the beginning and after the end of the two-phase commit and recovery process

Basically, recovery is the mechanism which preserves transaction atomicity in the presence of failures. The basic technique for implementing transactions in the presence of failures is based on the use of logs. That is, a transaction system has to record enough information to ensure that it can return to a previous state in case of failure, or to ensure that changes committed by a transaction are properly stored.

In addition to being able to store appropriate information, all participants within a distributed transaction must log similar information, which allows them to take the same decision, either to set the data to its final state or to its initial state.

Two techniques are generally used to ensure a transaction's atomicity. The first technique focuses on the manipulated data, such as the Do/Undo/Redo protocol (considered a recovery mechanism in a centralized system), which allows a participant to set its data to its final values or to restore its initial values. The second technique relies on a distributed protocol named two-phase commit, which ensures that all participants involved in a distributed transaction set their data either to their final values or to their initial values. In other words, all participants must commit or all must roll back.

In addition to the failures we refer to as centralized, such as system crashes, communication failures (due, for instance, to network outages or message loss) have to be considered during the recovery process of a distributed transaction.

In order to provide an efficient and optimized mechanism to deal with failures, modern transactional systems typically adopt a "presumed abort" strategy, which simplifies transaction management.

The presumed abort strategy can be stated as «when in doubt, abort». With this strategy, when the recovery mechanism has no information about the transaction, it presumes that the transaction has been aborted.

A particularity of the presumed-abort assumption is that it allows a coordinator not to log anything before the commit decision, and the participants not to log anything before they prepare. Thus, any failure which occurs before the two-phase commit starts leads to the transaction being aborted. Furthermore, from the coordinator's point of view, any communication failure detected by a timeout or an exception raised when sending prepare is considered a negative vote, which also leads to the transaction being aborted. Within a distributed transaction, a coordinator or a participant may therefore fail in two ways: either it crashes or it times out waiting for a message it was expecting. When a coordinator or a participant crashes and then restarts, it uses information on stable storage to determine how to perform recovery. As we will see, the presumed-abort strategy enables an optimized behaviour for recovery.
Saying that a distributed transaction can involve several distributed participants means that these participants must be integrated with a global transaction manager, which has the responsibility to ensure that all participants take a common decision to commit or roll back the distributed transaction. The key to such integration is the existence of a common transactional interface understood by all participants: the transaction manager and resource managers such as databases.

The importance of common interfaces between participants, as well as the complexity of their implementation, becomes obvious in an open systems environment. To this end, various distributed transaction processing standards have been developed by international standards organizations. Among these organizations, we list three which are mainly considered in the product:

  • The X/Open model and its successful XA interface
  • The OMG with its CORBA infrastructure and the Object Transaction Service and finally
  • The Java Community Process led by Sun with its JTA/JTS specifications
Basically these standards propose logical models which divide transaction processing into several functions:
  • those assigned to the application, which ties resources together in application-specific operations
  • those assigned to the Resource Manager, which physically accesses the data stores
  • those performed by the Transaction Manager, which manages transactions, and finally
  • those of the Communication Resource Managers, which allow information to be exchanged with other transactional domains.

Narayana assures complete, accurate business transactions for any Java based application, including those written for the Jakarta EE and EJB frameworks.

Narayana is a 100% Java implementation of a distributed transaction management system based on the Jakarta EE Java Transaction Service (JTS) standard. Our implementation of the JTS utilizes the Object Management Group's (OMG) Object Transaction Service (OTS) model for transaction interoperability, as recommended in the Jakarta EE and EJB standards. Although any JTS-compliant product will allow Java objects to participate in transactions, one of the key features of Narayana is its 100% Java implementation. This allows it to support fully distributed transactions that can be coordinated by distributed parties.

Narayana can be run as an embedded distributed service of an application server (e.g. WildFly Application Server), affording the user all the added benefits of the application server environment such as real-time load balancing, linear scalability and fault tolerance, allowing you to deliver an always-on solution to your customers. It is also available as a free-standing Java Transaction Service.

In addition to providing full compliance with the latest version of the JTS specification, Narayana leads the market in providing many advanced features such as fully distributed transactions and ORB portability with POA support.

Narayana works on a number of operating systems, including Red Hat Linux, Sun Solaris and Microsoft Windows XP. It requires a Java 5 or later environment.

The Java Transaction API support in Narayana comes in two flavours:

  • a purely local implementation, that does not require an ORB, but obviously requires all coordinated resources to reside within the same JVM.
  • a fully distributed implementation.

The sample application is a banking application in which a bank manages accounts on behalf of clients. Clients can obtain information on accounts and perform operations such as crediting, withdrawing, and transferring money from one account to another.

Figure 1 - The Banking Applications

  • The client application:
    • Initializes the banking object.
    • Chooses an operation to be performed on the banking object. Possible operations are:
      • Create Account: this operation asks the bank to create a new account and credit it with the initial amount provided in the request. The creation consists of creating an Account object and then crediting it with that amount.
      • Get Balance: this operation invokes the bank to obtain the balance of an account:
        • the account is first returned by the bank, then
        • the account is asked to return its balance.
      • Withdraw: this operation is invoked to withdraw money from an account. If the resulting balance would be negative, the withdrawal is refused and the associated transaction is aborted.
      • Credit: this operation is performed to credit an account.
      • Transfer: this operation is used to transfer money from one account to another. If the transfer would leave the debited account with a negative balance, the transfer is refused and the associated transaction is aborted.
      • Exit: this operation terminates the client.
    • Waits for a response.
  • The Bank object:
    • Creates Account objects by name.
    • Maintains the list of created Accounts.
    • Returns, when asked, the Account object requested by the client. If the Account does not exist, an exception is returned to the client.
  • An Account object performs the operations requested by the client:
    • credit,
    • withdraw (debit), and
    • return the current balance.

Each operation provided to the client leads to the creation of a transaction; therefore, in order to commit or roll back the changes made on an account, a resource is associated with the account to participate in the transaction commitment protocol. According to the final transaction decision, the resource sets the Account either to its initial state (in case of rollback) or to its final state (in case of commit). From the transactional point of view, Figure 2 depicts the transactional components.

Figure 2 - The Banking Application and the transactional Component

Assuming that the product has been installed, this trail provides a set of examples that show how to build transactional applications. Two types of transactional applications are presented: those using the JTA interface and those using the JTS (OTS) interfaces.

Please follow these steps before running the transactional applications:

  • Ensure you have the Ant build system installed. Ant is a Java build tool, similar to make. It is available for free from http://ant.apache.org/ . The sample application requires version 1.5.1 or later.
  • The PATH and CLASSPATH environment variables need to be set appropriately to use Narayana. To make this easier, we provide a shell script setup_env.sh (and for Windows a batch file setup_env.bat) in the directory <jbossts_install_root>/bin/
  • From a command prompt, cd to the directory containing the build.xml file (<jbossts_install_root>/trail_map) and type 'ant', unless this was already done in the installation section. This compiles the source files located under <jbossts_install_root>/trail_map/src and creates an application jar file named jbossts-demo.jar under the directory <jbossts_install_root>/trail_map/lib
  • Add the generated jar file to the CLASSPATH environment variable.
  • The demo application is provided in several versions, accessing persistent data or not. When JDBC is used as a means to access a database, Oracle 9i is used. To this end, the appropriate Oracle libraries (classes12.zip) should be added to the CLASSPATH environment variable.

To illustrate the programming interfaces provided by Narayana, the banking application is provided in several versions: one that uses the JTA API and a second that uses the JTS/OTS interfaces.

This trail focuses on understanding the concepts related to the creation of transactions and the behaviour of the commitment protocol, while the next trail illustrates a similar application with persistent data.

  • Testing the Banking application with JTA
  • Testing the Banking application with JTS
The Banking sample using JTA creates local transactions; ensure that JTA is configured for local transactions as explained above.

To launch the JTA version of the Banking application, which creates only local transactions, execute the following java program:


java com.arjuna.demo.jta.localbank.BankClient

Once the program given above is launched, the following lines are displayed:


-------------------------------------------------
  Bank client
-------------------------------------------------
Select an option :
   0. Quit
   1. Create a new account.
   2. Get an account information.
   3. Make a transfer.
   4. Credit an account.
   5. Withdraw from an account

Your choice :

After entering your choice, the appropriate operation is performed by the Bank object, to get the requested account, and by the account, to execute the credit or withdrawal or to return the current balance. Let's consider the following execution.

Enter the number 1 as your choice, then give the name "Foo" as the account name and "1000" as an initial value of the account to create. You should get the following lines:

Your choice : 1
- Create a new account -
------------------------
Name : Foo
Initial balance : 1000
Beginning a User transaction to create account
XA_START[]
Attempt to commit the account creation transaction
XA_END[]
XA_COMMIT (ONE_PHASE)[]
  • The XA_START line indicates that the AccountResource object, which implements the XAResource interface and is enlisted as a participant in the account creation transaction, has been informed by the Transaction Manager that the transaction has started.
  • The XA_END line indicates that the association between the calling thread and the AccountResource object is ended so that the transaction can complete, as recommended by the X/Open specification.
  • Since only one AccountResource, and therefore only one XAResource, is involved in the account creation transaction, the two phases needed to reach consensus in the 2PC protocol are not required. The one-phase commit optimization, indicated by "XA_COMMIT (ONE_PHASE)", is applied (sketched just after this list).
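
The decision can be sketched as follows (illustrative code only, not Narayana's coordinator; vote handling and rollback on a negative vote are omitted for brevity):

import java.util.List;
import javax.transaction.xa.XAResource;
import javax.transaction.xa.Xid;

// Sketch of the completion decision taken by a coordinator.
public class CommitSketch
{
   public static void complete(List<XAResource> resources, Xid xid) throws Exception
   {
     if (resources.size() == 1)
      {
        // Only one participant: no consensus is needed, so the prepare phase is skipped.
        resources.get(0).commit(xid, true);          // XA_COMMIT (ONE_PHASE)
        return;
      }
     for (XAResource r : resources)
        r.prepare(xid);                              // phase 1: XA_PREPARE
     for (XAResource r : resources)
        r.commit(xid, false);                        // phase 2: XA_COMMIT
   }
}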

In the same way create a second account with the name "Bar" and the initial balance set to 500.

As a choice now, enter "3" to make a transfer (300) from "Foo" to "Bar".

Your choice : 3
- Make a transfer -
-------------------
Take money from : Foo
Put money to : Bar
Transfert amount : 300
Beginning a User transaction to get balance
XA_START[]
XA_START[]
XA_END[]
XA_PREPARE[]
XA_END[]
XA_PREPARE[]
XA_COMMIT[]
XA_COMMIT[]
  • Now two AccountResource objects, and therefore two XAResource objects, are enlisted with the transaction. The displayed lines show that the two phases, prepare and commit, are applied.

Any attempt to manipulate an account that does not exist causes the NotExistingAccount exception to be thrown and the transaction in progress to be rolled back. For instance, let's withdraw money from an account FooBar that was not previously created.

Your choice : 5
- Withdraw from an Account -
----------------------------
Give the Account name : FooBar
Amount to withdraw : 200
Beginning a User transaction to
withdraw from an account
The requested account does not exist!
ERROR - jakarta.transaction.RollbackException

From the JTA architectural point of view, the bank client is an application program that manages transactions via the jakarta.transaction.UserTransaction interface. The following portion of code illustrates how a JTA transaction is started and terminated when the client asks to transfer money from one account to another. It also shows which packages are needed to obtain the appropriate object instances (such as UserTransaction).

Note: The code below is a simplified view of the BankClient.java program. Only the transfer operation is illustrated; other operations manage transactions in the same way (see src/com/arjuna/demo/jta/localbank/BankClient.java for details).


package com.arjuna.demo.jta.localbank;
public class BankClient
{
   private Bank _bank;
   // This operation is used to make a transfer
   //from an account to another account
   private void makeTransfer()
   {
     System.out.print("Take money from : ");
     String name_supplier = input();

     System.out.print("Put money to : ");
     String name_consumer = input();

     System.out.print("Transfer amount : ");
     String amount = input();

     float famount = 0;
     try
      {
        famount = new Float( amount ).floatValue();
      }
     catch ( java.lang.Exception ex )
      {
        System.out.println("Invalid float number, abort operation...");
        return;
      }

     try
      {
       //the following instruction asks a specific 
       //class to obtain a UserTransaction instance
       jakarta.transaction.UserTransaction userTran =
                     com.arjuna.ats.jta.UserTransaction.userTransaction();
       System.out.println("Beginning a User transaction to get balance");
       userTran.begin();

       Account supplier = _bank.get_account( name_supplier );
       Account consumer = _bank.get_account( name_consumer );
       supplier.debit( famount );
       consumer.credit( famount );

       userTran.commit( );
      }
     catch (Exception e)
      {
       System.err.println("ERROR - "+e);
      }
   }
   ......
}

The Bank object has mainly two operations: creating an account, which is added to the account list, and returning an Account object. No transactional instruction is performed by the Bank object.


package com.arjuna.demo.jta.localbank;
public class Bank {
   private java.util.Hashtable _accounts;

   public Bank()
   {
     _accounts = new java.util.Hashtable();
   }

   public Account create_account( String name )
   {
     Account acc = new Account(name);
     _accounts.put( name, acc );
      return acc;
   }

   public Account get_account(String name)
   throws NotExistingAccount
   {
     Account acc = ( Account ) _accounts.get( name );
     if ( acc == null )
       throw new NotExistingAccount("The Account requested does not exist");
     return acc;
   }
}

The Account object provides mainly three methods: balance, credit and withdraw (debit). However, in order to provide the transactional behaviour, rather than modifying the account directly on a credit or withdrawal, this task is delegated to an AccountResource object which, according to the transaction outcome, sets the account value either to its initial state or to its final state.

The AccountResource object is in fact an object that implements javax.transaction.xa.XAResource and is therefore able to participate in the transaction commitment. To do so, the Account object has to register (enlist) the AccountResource object as a participant, after obtaining the reference of the jakarta.transaction.Transaction object via the jakarta.transaction.TransactionManager object.


package com.arjuna.demo.jta.localbank;

import javax.transaction.xa.XAResource;

public class Account
{
   String _name;
   float _balance;
   AccountResource accRes = null;

   public Account(String name)
   {
     _name = name;
     _balance = 0;
   }

   public float balance()
   {
     return getXAResource().balance();
   }

   public void credit( float value )
   {
     getXAResource().credit( value );
   }

   public void debit( float value )
   {
     getXAResource().debit( value );
   }

   public AccountResource getXAResource()
   {

     try
     {
       jakarta.transaction.TransactionManager transactionManager =
         com.arjuna.ats.jta.TransactionManager.transactionManager();
       jakarta.transaction.Transaction currentTrans =
          transactionManager.getTransaction();

       if (accRes == null) {
         currentTrans.enlistResource(
            accRes = new AccountResource(this, _name) );
       }

       currentTrans.delistResource( accRes, XAResource.TMSUCCESS );

     }
     catch (Exception e)
     {
       System.err.println("ERROR - "+e);
     }
     return accRes;
   }
   ...
}

The AccountResource class, which implements the javax.transaction.xa.XAResource interface, provides methods similar to those of the Account class (credit, withdraw and balance) as well as all the methods specified by javax.transaction.xa.XAResource. The following portion of code describes how the methods prepare, commit and rollback are implemented.


public class AccountResource implements XAResource
{
   public AccountResource(Account account, String name )
   {
     _name = name;
     _account = account;
     _initial_balance = account._balance;
     _current_balance = _initial_balance;
   }

   public float balance()
   {
     return _current_balance;
   }

   public void credit( float value )
   {
     _current_balance += value;
   }

   public void debit( float value )
   {
     _current_balance -= value;
   }

   public void commit(Xid id, boolean onePhase) throws XAException
   {
     //The value of the associated Account object is modified
     _account._balance = _current_balance;
   }

   public int prepare(Xid xid) throws XAException
   {
     if ( _initial_balance == _current_balance ) //account not modified
        return (XA_RDONLY);
     if ( _current_balance < 0 )
        throw new XAException(XAException.XA_RBINTEGRITY);
        //If the integrity of the account is corrupted then vote rollback
     return (XA_OK); //return OK
   }
   
   public void rollback(Xid xid) throws XAException
   {
     //Nothing is done: the account is only modified at commit time
   }

   // remaining XAResource methods (start, end, forget, recover, ...) are omitted

   private String _name;
   private float _initial_balance;
   private float _current_balance;
   private Account _account;
}

The JTS version of the Banking application uses the Object Request Broker. The distribution is set up to work with the bundled JacORB version.

To describe the possibilities provided by Narayana to build a transactional application according to the programming models defined by the OTS specification, the Banking Application is programmed in different ways.

  • Local transactions: the Bank Client and the Bank Server are collocated in the same process.
  • Distributed transactions: the Bank Client and the Bank Server are located in different processes. To participate in a client's transaction, Account objects need access to the transactional context. We describe the two kinds of context propagation (sketched just after this list):
    • implicit context propagation, and
    • explicit context propagation.
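
The difference shows up in the client call itself. The fragments below sketch it with the demo's credit operation (illustrative call fragments only; the exact signatures are given by the Bank.idl files later in this trail):

// Implicit propagation: the transaction context is transmitted transparently
// with the request, so the operation keeps its business-only signature.
account.credit( 100.0f );

// Explicit propagation: the client passes the CosTransactions::Control of the
// current transaction as an additional argument.
account.credit( current.get_control(), 100.0f );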

JTS Local Transactions

JTS Distributed Transactions

The JTS version of the Banking application uses the Object Request Broker. The distribution is set up to work with the bundled JacORB version.

Note: Ensure that the JacORB jar files are added to your CLASSPATH.

To launch the JTS version of the Banking application, execute the following java program:

java com.arjuna.demo.jts.localbank.BankClient

Once the program given above is launched, the following lines are displayed:


-------------------------------------------------
   Bank client
-------------------------------------------------
Select an option :
   0. Quit
   1. Create a new account.
   2. Get an account information.
   3. Make a transfer.
   4. Credit an account.
   5. Withdraw from an account

Your choice :

After entering your choice, the appropriate operation is performed by the Bank object, to get the requested account, and by the account, to execute the credit or withdrawal or to return the current balance. Let's consider the following execution.

Enter the number 1 as your choice, then give the name "Foo" as the account name and "1000" as an initial value of the account to create. You should get the following lines:


Your choice : 1
- Create a new account -
------------------------
Name : Foo
Initial balance : 1000
Beginning a User transaction to create account
[ Connected to 192.168.0.2:4799 from local port 4924 ]
Attempt to commit the account creation transaction
[ Resource for Foo : Commit one phase ]
  • Since only one AccountResource, and therefore only one CosTransactions.Resource, is involved in the account creation transaction, the two phases needed to reach consensus in the 2PC protocol are not required. The one-phase commit optimization, indicated by "Commit one phase", is applied.

In the same way create a second account with the name "Bar" and the initial balance set to 500.

As a choice now, enter "3" to make a transfer (300) from "Foo" to "Bar".


Your choice : 3
- Make a transfer -
-------------------

Take money from : Foo
Put money to : Bar
Transfer amount : 300
Beginning a User transaction to Transfer money
[ Resource for Foo : Prepare ]
[ Resource for Bar : Prepare ]
[ Resource for Foo : Commit ]
[ Resource for Bar : Commit ]
  • Now two AccountResource objects, and therefore two CosTransactions.Resource objects, are enlisted with the transaction. The displayed lines show that the two phases, prepare and commit, are applied.

Any attempt to manipulate an account that does not exist causes the NotExistingAccount exception to be thrown and the transaction in progress to be rolled back. For instance, let's withdraw money from an account FooBar that was not previously created.


Your choice : 5
- Withdraw from an Account -
----------------------------
Give the Account name : FooBar
Amount to withdraw : 200
Beginning a User transaction to withdraw from an account
The requested account does not exist!
ERROR - org.omg.CORBA.TRANSACTION_ROLLEDBACK:
minor code: 50001  completed: No

The JTS version of the Banking application uses the Object Request Broker. The distribution is set up to work with the bundled JacORB version.

Note: Ensure that the JacORB jar files are added to your CLASSPATH.

  • In a separate window, launch the Recovery Manager, as follows.
java com.arjuna.ats.arjuna.recovery.RecoveryManager
  • Testing the distributed transaction with Implicit Propagation Context
  • Start the Server
java com.arjuna.demo.jts.remotebank.BankServer
  • In a separate window, start the client
java com.arjuna.demo.jts.remotebank.BankClient
  • Testing the distributed transaction with Explicit Propagation Context
  • Start the Server
java com.arjuna.demo.jts.explicitremotebank.BankServer
  • In a separate window, start the client
java com.arjuna.demo.jts.explicitremotebank.BankClient

In both cases (implicit and explicit), the Bank Server, which can be stopped by hand, displays the following lines:

The bank server is now ready...

In both cases (implicit and explicit), the Bank Client window displays the following lines:

-------------------------------------------------
   Bank client
-------------------------------------------------
Select an option :
   0. Quit
   1. Create a new account.
   2. Get an account information.
   3. Make a transfer.
   4. Credit an account.
   5. Withdraw from an account

Your choice :

After entering your choice, the appropriate operation is performed by the remote Bank object, to get the requested account, and by the account to execute the credit or withdraw or to return the current balance. Let's consider the following execution.

Enter the number 1 as your choice, then give the name "Foo" as the account name and "1000" as an initial value of the account to create. You should get in the server window a result that terminates with the following line

[ Resource for Foo : Commit one phase ]
  • Since only one AccountResource, and therefore only one CosTransactions.Resource, is involved in the account creation transaction, the two phases needed to reach consensus in the 2PC protocol are not required. The one-phase commit optimization, indicated by "Commit one phase", is applied.

In the same way create a second account with the name "Bar" and the initial balance set to 500.

As a choice now, enter in the client window "3" to make a transfer (300) from "Foo" to "Bar".

Your choice : 3
- Make a transfer -
-------------------
Take money from : Foo
Put money to : Bar
Transfer amount : 300

In the Server window you should see a result with the following lines

[ Resource for Foo : Prepare ]
[ Resource for Bar : Prepare ]
[ Resource for Foo : Commit ]
[ Resource for Bar : Commit ]
  • Now two AccountResource objects, and therefore two CosTransactions.Resource objects, are enlisted with the transaction. The displayed lines show that the two phases, prepare and commit, are applied.

Any attempt to manipulate an account that does not exist causes the NotExistingAccount exception to be thrown and the transaction in progress to be rolled back. For instance, let's withdraw money from an account FooBar that was not previously created.

Your choice : 5
- Withdraw from an Account -
----------------------------
Amount to withdraw : 200
Beginning a User transaction to withdraw from an account
The requested account does not exist!
ERROR - org.omg.CORBA.TRANSACTION_ROLLEDBACK: minor code: 50001 completed: No

It is possible to run the Transaction Service and recovery manager processes on a different machine and have clients access these centralized services in a hub-and-spoke style architecture.

All that must be done is to provide the clients with enough information to contact the transaction service (such as the ORB's NameService). However, configuring the ORB is beyond the remit of this trail map, so we opt for a simpler mechanism whereby the transaction service's IOR is shared through a common file.

This trail map stage assumes that the transaction service has been appropriately installed and configured (the setenv.[bat|sh] script has been run) on two hosts (for the purpose of explanation we shall refer to these hosts as host1 and host2).

  • Start the transaction service and recovery manager on host1:
    • Start the recovery manager in one command prompt terminal:
      java com.arjuna.ats.arjuna.recovery.RecoveryManager [-test]
    • Start the transaction service in a second command prompt terminal:
      java com.arjuna.ats.jts.TransactionServer [-test]

  • Share the transaction service IOR on host1 with host2: open a command prompt on host2 and copy the CosServices.cfg file from the <narayana-jts_install_root>/etc directory on host1. For example, using the popular scp package, open a shell prompt and issue the following command:

scp user@host1:<ats_root>/etc/CosServices.cfg <host2_ats_root>/etc/

NOTE: See the section above entitled "Using a stand-alone Transaction Server" for more information on how to configure these applications to use a remote transaction service.

  • Start the Bank Server and Bank Client applications on host2:
    • Testing the distributed transaction with Implicit Propagation Context:
      • Start the Server:
        java com.arjuna.demo.jts.remotebank.BankServer
      • In a separate window, start the client:
        java com.arjuna.demo.jts.remotebank.BankClient
    • Testing the distributed transaction with Explicit Propagation Context:
      • Start the Server:
        java com.arjuna.demo.jts.explicitremotebank.BankServer
      • In a separate window, start the client:
        java com.arjuna.demo.jts.explicitremotebank.BankClient

From the JTS architectural point of view, the bank client is an application program that can manage transactions either in the direct or in the indirect management mode: respectively with the interfaces org.omg.CosTransactions.TransactionFactory and org.omg.CosTransactions.Terminator, or with the org.omg.CosTransactions.Current interface. Transactions created by the client in the Banking application use the indirect mode.
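
The two modes can be sketched as follows (illustrative code only; the trail map code below uses the indirect mode, and the way the TransactionFactory is obtained is left out):

import org.omg.CosTransactions.Control;
import org.omg.CosTransactions.Current;
import org.omg.CosTransactions.TransactionFactory;

public class ManagementModesSketch
{
   // Indirect mode: demarcation through the Current pseudo-object.
   public static void indirect(Current current) throws Exception
   {
     current.begin();
     // ... transactional work ...
     current.commit(true);                      // true = report heuristics
   }

   // Direct mode: the client creates a Control itself and completes the
   // transaction through its Terminator.
   public static void direct(TransactionFactory factory) throws Exception
   {
     Control control = factory.create(0);       // 0 = no timeout
     // ... transactional work, passing 'control' around explicitly ...
     control.get_terminator().commit(true);
   }
}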

The following portion of code illustrates how a JTS transaction is started and terminated when the client asks to transfer money from one account to another. It also shows which packages are needed to obtain the appropriate object instances (such as Current).

Note: The code below is a simplified view of the BankClient.java program. Only the transfer operation is illustrated; other operations manage transactions in the same way (see ../src/com/arjuna/demo/jts/localbank/BankClient.java for details).

package com.arjuna.demo.jts.localbank;
import com.arjuna.ats.jts.OTSManager;
import com.arjuna.ats.internal.jts.ORBManager;

public class BankClient
{
   private Bank _bank; //Initialised on BankClient initializations
   ....
   // This operation is used to make a transfer from an account to another account
   private void makeTransfer()
   {
     System.out.print("Take money from : ");
     String name_supplier = input();

     System.out.print("Put money to : ");
     String name_consumer = input();

     System.out.print("Transfert amount : ");
     String amount = input();

     float famount = 0;
     try
      {
        famount = new Float( amount ).floatValue();
      }
     catch ( java.lang.Exception ex )
      {
        System.out.println("Invalid float number, abort operation...");
        return;
      }

     try
      {
       //the following instruction asks a specific  class to obtain a Current instance
       Current current = OTSManager.get_current(); 
       System.out.println("Beginning a User transaction to get balance");
       current.begin();

       Account supplier = _bank.get_account( name_supplier );
       Account consumer = _bank.get_account( name_consumer );
       supplier.debit( famount );
       consumer.credit( famount );

       current.commit( );
      }
     catch (Exception e)
      {
       System.err.println("ERROR - "+e);
      }
   }

Since JTS is used, invocations against an ORB are needed, such as ORB and Object Adapter instantiation and initialisation. To ensure better portability, the ORB Portability API provides a set of methods that can be used as described below.

public static void main( String [] args )
{  
    try {
     myORB = ORB.getInstance("test");// Create an ORB instance
     myOA = OA.getRootOA(myORB); //Obtain the Root POA
     myORB.initORB(args, null); //Initialise the ORB
     myOA.initOA(); //Initialise the POA

     // The ORBManager is a class provided by Narayana to facilitate the association
     // of the ORB/POA with the transaction service
     ORBManager.setORB(myORB);
     ORBManager.setPOA(myOA);
     ....
   }
   catch(Exception e)
   {
     e.printStackTrace(System.err);
   }
}

The Bank object has mainly two operations: creating an account, which is added to the account list, and returning an Account object. No transactional instruction is performed by the Bank object.

package com.arjuna.demo.jts.localbank;
public class Bank {
   private java.util.Hashtable _accounts;

   public Bank()
   {
     _accounts = new java.util.Hashtable();
   }

   public Account create_account( String name )
   {
     Account acc = new Account(name);
     _accounts.put( name, acc );
      return acc;
   }

   public Account get_account(String name)
   throws NotExistingAccount
   {
     Account acc = ( Account ) _accounts.get( name );
     if ( acc == null )
       throw new NotExistingAccount("The Account requested does not exist");
     return acc;
   }
}

The Account object provides mainly three methods: balance, credit and withdraw (debit). However, in order to provide the transactional behaviour, rather than modifying the account directly on a credit or withdrawal, this task is delegated to an AccountResource object which, according to the transaction outcome, sets the account value either to its initial state or to its final state.

The AccountResource object is in fact an object that implements org.omg.CosTransactions.Resource and is therefore able to participate in the transaction commitment. To do so, the Account object has to register the AccountResource object as a participant, after obtaining the reference of the org.omg.CosTransactions.Coordinator object, itself obtained via the org.omg.CosTransactions.Control object.

package com.arjuna.demo.jts.localbank;


public class Account
{
   String _name;
   float _balance;
   AccountResource accRes = null;

   public Account(String name )
   {
     _name = name;
     _balance = 0;
   }

   public float balance()
   {
     return getResource().balance();
   }

   public void credit( float value )
   {
     getResource().credit( value );
   }

   public void debit( float value )
   {
     getResource().debit( value );
   }


   public AccountResource getResource()
    {
    try {
    if (accRes == null) {
         accRes = new AccountResource(this, _name) ;
         Resource ref = org.omg.CosTransactions.ResourceHelper.narrow(ORBManager.getPOA().corbaReference(accRes));
         // Note above the possibilities provided by the ORBManager to access the POA then to obtain
         // the CORBA reference of the created AccountResource object

         RecoveryCoordinator recoverycoordinator = OTSManager.get_current().get_control().
                                               get_coordinator().register_resource(ref);
						
        }
      }
      catch (Exception e)
      {
        System.err.println("ERROR - "+e);
      }

      return accRes;
   }
   ...
}

To be considered an org.omg.CosTransactions.Resource, the AccountResource class extends the class org.omg.CosTransactions.ResourcePOA generated by the CORBA IDL compiler. The AccountResource provides methods similar to those of the Account class (credit, withdraw and balance) along with the appropriate methods to participate in the 2PC protocol. The following portion of code describes how the methods prepare, commit and rollback are implemented.

public class AccountResource extends org.omg.CosTransactions.ResourcePOA
{
   public AccountResource(Account account, String name )
   {
     _name = name;
     _account = account;
     _initial_balance = account._balance;
     _current_balance = _initial_balance;
   }

   public float balance()
   {
     return _current_balance;
   }

   public void credit( float value )
   {
     _current_balance += value;
   }

   public void debit( float value )
   {
     _current_balance -= value;
   }

   public org.omg.CosTransactions.Vote prepare()
	   throws org.omg.CosTransactions.HeuristicMixed, org.omg.CosTransactions.HeuristicHazard
    {
	  if ( _initial_balance == _current_balance )
       return org.omg.CosTransactions.Vote.VoteReadOnly;
     if ( _current_balance < 0 )
       return org.omg.CosTransactions.Vote.VoteRollback;
     return org.omg.CosTransactions.Vote.VoteCommit;
    }

   public void rollback()
     throws org.omg.CosTransactions.HeuristicCommit, org.omg.CosTransactions.HeuristicMixed,
                            org.omg.CosTransactions.HeuristicHazard
   {
     //Nothing to do
   }

   public void commit()
     throws org.omg.CosTransactions.NotPrepared, org.omg.CosTransactions.HeuristicRollback,
                      org.omg.CosTransactions.HeuristicMixed, org.omg.CosTransactions.HeuristicHazard
   {
      _account._balance = _current_balance;
   }

   public void commit_one_phase()
     throws org.omg.CosTransactions.HeuristicHazard
   {
     _account._balance = _current_balance;
   }

   .....
   private String _name;
   private float _initial_balance;
   private float _current_balance;
   private Account _account;

   }
 

The bank client is an application program that can manage transactions either in the direct or in the indirect management mode: respectively with the interfaces org.omg.CosTransactions.TransactionFactory and org.omg.CosTransactions.Terminator, or with the org.omg.CosTransactions.Current interface. Transactions created by the client in the Banking application use the indirect mode.

Invoking a remote object within a CORBA environment means that the remote object implements a CORBA interface defined in a CORBA IDL file. The following Bank.idl describes the interfaces, and therefore the possible kinds of distributed CORBA objects, involved in the banking application. No interface inherits the CosTransactions::TransactionalObject interface, which means that the transactional context is normally not propagated on remote invocations. However, since the Account object may have to register Resource objects that participate in transaction completion, a context is needed. In the following Bank.idl file, the operations defined in the Account interface explicitly include a CosTransactions::Control argument in their signature, meaning that it is passed explicitly by the caller, in this case the Bank Client program.


module arjuna {
   module demo {
     module jts {
      module explicitremotebank {

        interface Account
        {
          float balance(in CosTransactions::Control ctrl);
          void credit( in CosTransactions::Control ctrl, in float value );
          void debit( in CosTransactions::Control ctrl, in float value );
        };

        exception NotExistingAccount
        { };

        interface Bank
        {
          Account create_account( in string name );
          Account get_account( in string name )
            raises( NotExistingAccount );
        };
       };
      };
     };
   };
   

The following portion of code illustrates how a JTS transaction is started and terminated when the client asks to transfer money from one account to another. It also shows which packages are needed to obtain the appropriate object instances (such as Current).

Note: The code below is a simplified view of the BankClient.java program. Only the transfer operation is illustrated; other operations manage transactions in the same way (see src/com/arjuna/demo/jts/explicitremotebank/BankClient.java for details).

package com.arjuna.demo.jts.remotebank;
import com.arjuna.ats.jts.OTSManager;
public class BankClient
{
   private Bank _bank;
   ....
   // This operation is used to make a transfer
   //from an account to another account
   private void makeTransfer()
   {
     //get the name of the supplier(name_supplier) and
     // the consumer(name_consumer)
     // get the amount to transfer (famount)
     ...
     try
      {
       //the following instruction asks a specific
       // class to obtain a Current instance
       Current current = OTSManager.get_current(); 
       System.out.println("Beginning a User transaction to get balance");
       current.begin();

       Account supplier = _bank.get_account( name_supplier );
       Account consumer = _bank.get_account( name_consumer );
       supplier.debit( current.get_control(), famount );
       //The Control is explicitly propagated
       consumer.credit( current.get_control(), famount );
       current.commit( );
      }
     catch (Exception e)
      {
       ...
      }
   }

Since JTS is used, invocations against an ORB are needed, such as ORB and Object Adapter instantiation and initialisation. To ensure better portability, the ORB Portability API provides a set of methods that can be used as described below.

public static void main( String [] args )
{
  ....
  myORB = ORB.getInstance("test");// Create an ORB instance
  myORB.initORB(args, null); //Initialise the ORB
  
  org.omg.CORBA.Object obj = null;
  try
  {
     //Read the reference string from a file then convert to Object
     ....
      obj = myORB.orb().string_to_object(stringTarget);
  }
  catch ( java.io.IOException ex )
  {
     ...
  }
  Bank bank = BankHelper.narrow(obj);
   ....
}

The Bank object has mainly two operations: creating an account, which is added to the account list, and returning an Account object. No transactional instruction is performed by the Bank object. The following lines describe the implementation of the Bank CORBA object.

public class BankImpl extends BankPOA {
     public BankImpl(OA oa)
     {
       _accounts = new java.util.Hashtable();
       _oa = oa;
     }

     public Account create_account( String name )
     {
         AccountImpl acc = new AccountImpl(name);
         _accounts.put( name, acc );
          return com.arjuna.demo.jts.remotebank.AccountHelper.
               narrow(_oa.corbaReference(acc));
     }

     public Account get_account(String name)
          throws NotExistingAccount
     {
      AccountImpl acc = ( AccountImpl ) _accounts.get( name );
      if ( acc == null )
       throw new NotExistingAccount("The Account requested does not exist");
      return com.arjuna.demo.jts.remotebank.AccountHelper.
           narrow(_oa.corbaReference(acc));
     }
     private java.util.Hashtable _accounts;// Accounts created by the Bank
     private OA _oa;
}

Having defined an implementation of the Bank object, we should now create an instance and make it available for client requests. This is the role of the Bank Server, which has the responsibility to create the ORB and the Object Adapter instances, and then the Bank CORBA object whose object reference is stored in a file well known by the bank client. The following lines describe how the Bank server is implemented.

public class BankServer
{
      public static void main( String [] args )
      {
       ORB myORB = null;
       RootOA myOA = null;
       try
       {
        myORB = ORB.getInstance("ServerSide");
        myOA = OA.getRootOA(myORB);
        myORB.initORB(args, null);
        myOA.initOA();
        ....
        BankImpl bank = new BankImpl(myOA);

        String reference = myORB.orb().
             object_to_string(myOA.corbaReference(bank));
        //Store the Object reference in the file
        ...

        System.out.println("The bank server is now ready...");
        myOA.run();
        }
        catch (Exception e)
        {
          e.printStackTrace(System.err);
        }
       }
}

The Account object provides mainly three methods: balance, credit and withdraw (debit). However, in order to provide the transactional behaviour, rather than modifying the account directly on a credit or withdrawal, this task is delegated to an AccountResource object which, according to the transaction outcome, sets the account value either to its initial state or to its final state.

The AccountResource object is in fact an object that implements org.omg.CosTransactions.Resource and is therefore able to participate in the transaction commitment. To do so, the Account object has to register the AccountResource object as a participant, after obtaining the reference of the org.omg.CosTransactions.Coordinator object, itself obtained via the org.omg.CosTransactions.Control object.


package com.arjuna.demo.jts.remotebank;

import org.omg.CosTransactions.*;
import ....

public class AccountImpl extends AccountPOA
{
   String _name;
   float _balance;
   AccountResource accRes = null;

   public AccountImpl(String name )
   {
     _name = name;
     _balance = 0;
   }

   public float balance(Control ctrl)
   {
     return getResource(ctrl).balance();
   }

   public void credit(Control ctrl, float value )
   {
     getResource(ctrl).credit( value );
   }

   public void debit(Control ctrl, float value )
   {
     getResource(ctrl).debit( value );
   }

   public AccountResource getResource(Control control)
   {
      try
      {
         if (accRes == null) {
            accRes = new AccountResource(this, _name) ;
           
           //The invocation on the ORB illustrates the fact that the same
           //ORB instance created by the Bank Server is returned.
           Resource ref = org.omg.CosTransactions.ResourceHelper.
              narrow(OA.getRootOA(ORB.getInstance("ServerSide")).
              corbaReference(accRes));
           RecoveryCoordinator recoverycoordinator =
              control.get_coordinator().register_resource(ref);
         }
      }
      catch (Exception e){...}
      return accRes;
       }
   ...
}

To be considered an org.omg.CosTransactions.Resource, the AccountResource class extends the class org.omg.CosTransactions.ResourcePOA generated by the CORBA IDL compiler. The AccountResource provides methods similar to those of the Account class (credit, withdraw and balance) along with the appropriate methods to participate in the 2PC protocol. The following portion of code describes how the methods prepare, commit and rollback are implemented.

public class AccountResource extends org.omg.CosTransactions.ResourcePOA
{
   public AccountResource(Account account, String name )
   {
     _name = name;
     _account = account;
     _initial_balance = account._balance;
     _current_balance = _initial_balance;
   }

   public float balance()
   {
     return _current_balance;
   }

   public void credit( float value )
   {
     _current_balance += value;
   }

   public void debit( float value )
   {
     _current_balance -= value;
   }

   public org.omg.CosTransactions.Vote prepare()
	   throws org.omg.CosTransactions.HeuristicMixed,
	   org.omg.CosTransactions.HeuristicHazard
  {
    if ( _initial_balance == _current_balance )
       return org.omg.CosTransactions.Vote.VoteReadOnly;
    if ( _current_balance < 0 )
       return org.omg.CosTransactions.Vote.VoteRollback;
    return org.omg.CosTransactions.Vote.VoteCommit;
  }

   public void rollback()
     throws org.omg.CosTransactions.HeuristicCommit,
     org.omg.CosTransactions.HeuristicMixed,
     org.omg.CosTransactions.HeuristicHazard
   {
     //Nothing to do
   }

   public void commit()
     throws org.omg.CosTransactions.NotPrepared,
     org.omg.CosTransactions.HeuristicRollback,
     org.omg.CosTransactions.HeuristicMixed,
     org.omg.CosTransactions.HeuristicHazard
   {
      _account._balance = _current_balance;
   }

   public void commit_one_phase()
     throws org.omg.CosTransactions.HeuristicHazard
   {
     _account._balance = _current_balance;
   }

   .....
   private String _name;
   private float _initial_balance;
   private float _current_balance;
   private Account _account;

   }
 

The bank client is an application program that can manage transactions either in the direct or in the indirect management mode: respectively with the interfaces org.omg.CosTransactions.TransactionFactory and org.omg.CosTransactions.Terminator, or with the org.omg.CosTransactions.Current interface. Transactions created by the client in the Banking application use the indirect mode.

Invoking a remote object within a CORBA environment means that the remote object implements a CORBA interface defined in a CORBA IDL file. The following Bank.idl describes the interfaces, and therefore the possible kinds of distributed CORBA objects, involved in the banking application. Only the Account interface inherits the CosTransactions::TransactionalObject interface; this means that an Account CORBA object is expected to be invoked within the scope of a transaction and that the transactional context is implicitly propagated.


module arjuna {
   module demo {
     module jts {
      module remotebank {

        interface Account : CosTransactions::TransactionalObject
        {
          float balance();
          void credit( in float value );
          void debit( in float value );
        };

        exception NotExistingAccount
        { };

        interface Bank
        {
          Account create_account( in string name );
          Account get_account( in string name )
            raises( NotExistingAccount );
        };
       };
      };
     };
   };

The following portion of code illustrates how a JTS transaction is started and terminated when the client asks to transfer money from one account to another. It also shows which packages are needed to obtain the appropriate standard JTS API object instances (such as Current).

Note: The code below is a simplified view of the BankClient.java program. Only the transfer operation is illustrated; other operations manage transactions in the same way (see src/com/arjuna/demo/jts/localbank/BankClient.java for details).


package com.arjuna.demo.jts.remotebank;
import com.arjuna.ats.jts.OTSManager;

public class BankClient
{
   private Bank _bank;
   ....
   // This operation is used to make a transfer
   // from an account to another account
   private void makeTransfer()
   {
     //get the name of the supplier(name_supplier)
     // and the consumer(name_consumer)
     // get the amount to transfer (famount)
     ...

     try
      {
       //the following instruction asks a
       // specific  class
       // to obtain a Current instance
       Current current = OTSManager.get_current(); 
       System.out.println("Beginning a User transaction to get balance");
       current.begin();

       Account supplier = _bank.get_account( name_supplier );
       Account consumer = _bank.get_account( name_consumer );
       supplier.debit( famount );
       consumer.credit( famount );

       current.commit( );
      }
     catch (Exception e)
      {
       ...
      }
   }

Since JTS is used, invocations against an ORB are needed, such as ORB and Object Adapter instantiation and initialisation. To ensure better portability, the ORB Portability API provides a set of methods that can be used as described below.

public static void main( String [] args )
{  ....
  myORB = ORB.getInstance("test");
     myORB.initORB(args, null); //Initialise the ORB

     org.omg.CORBA.Object obj = null;
     try
      {
        //Read the reference string from
        // a file then convert to Object
        ....
        obj = myORB.orb().string_to_object(stringTarget);
      }
     catch ( java.io.IOException ex )
     {
       ...
     }
     Bank bank = BankHelper.narrow(obj);
    ....
}

The Bank object has mainly two operations: creating an account, which is added to the account list, and returning an Account object. No transactional instruction is performed by the Bank object. The following lines describe the implementation of the Bank CORBA object.

public class BankImpl extends BankPOA {
     public BankImpl(OA oa)
     {
       _accounts = new java.util.Hashtable();
       _oa = oa;
     }

     public Account create_account( String name )
     {
         AccountImpl acc = new AccountImpl(name);
         _accounts.put( name, acc );
          return com.arjuna.demo.jts.remotebank.AccountHelper.
               narrow(_oa.corbaReference(acc));
     }

     public Account get_account(String name)
          throws NotExistingAccount
     {
        AccountImpl acc = ( AccountImpl ) _accounts.get( name );
        if ( acc == null )
          throw new NotExistingAccount("The Account requested does not exist");
        return com.arjuna.demo.jts.remotebank.AccountHelper.
             narrow(_oa.corbaReference(acc));
     }
     private java.util.Hashtable _accounts;
        // Accounts created by the Bank
     private OA _oa;
}

Having defined an implementation of the Bank object, we should now create an instance and make it available for client requests. This is the role of the Bank Server, which has the responsibility to create the ORB and the Object Adapter instances, and then the Bank CORBA object whose object reference is stored in a file well known by the bank client. The following lines describe how the Bank server is implemented.

public class BankServer
{
      public static void main( String [] args )
      {
       ORB myORB = null;
       RootOA myOA = null;
       try
       {
        myORB = ORB.getInstance("ServerSide");
        myOA = OA.getRootOA(myORB);
        myORB.initORB(args, null);
        myOA.initOA();
        ....
        BankImpl bank = new BankImpl(myOA);

        String reference = myORB.orb().
               object_to_string(myOA.corbaReference(bank));
        //Store the Object reference in the file
        ...
        System.out.println("The bank server is now ready...");
        myOA.run();
        }
        catch (Exception e)
        {
          e.printStackTrace(System.err);
        }
       }
}

The Account object provides mainly three methods: balance, credit and withdraw (debit). However, in order to provide the transactional behaviour, rather than modifying the account directly on a credit or withdrawal, this task is delegated to an AccountResource object which, according to the transaction outcome, sets the account value either to its initial state or to its final state.

The AccountResource object is in fact an object that implements org.omg.CosTransactions.Resource and is therefore able to participate in the transaction commitment. To do so, the Account object has to register the AccountResource object as a participant, after obtaining the reference of the org.omg.CosTransactions.Coordinator object, itself obtained via the org.omg.CosTransactions.Control object.

package com.arjuna.demo.jts.remotebank;
import ....

public class AccountImpl extends AccountPOA
{
   String _name;
   float _balance;
   AccountResource accRes = null;

   public AccountImpl(String name )
   {
     _name = name;
     _balance = 0;
   }

   public float balance()
   {
     return getResource().balance();
   }

   public void credit( float value )
   {
     getResource().credit( value );
   }

   public void debit( float value )
   {
     getResource().debit( value );
   }


   public AccountResource getResource()
   {
     try
     {
      if (accRes == null) {
        accRes = new AccountResource(this, _name) ;
        //The invocation on the ORB illustrates the
        // fact that the same ORB instance created
        // by the Bank Server is returned.
        Resource ref = org.omg.CosTransactions.ResourceHelper.
             narrow(OA.getRootOA(ORB.getInstance("ServerSide")).
             corbaReference(accRes));
        RecoveryCoordinator recoverycoordinator = OTSManager.get_current().
	     get_control().get_coordinator().register_resource(ref);
       
      }
    }
    catch (Exception e)
    {....}
      return accRes;
   }
   ...
}

To be considered an org.omg.CosTransactions.Resource, the AccountResource class extends the class org.omg.CosTransactions.ResourcePOA generated by the CORBA IDL compiler. The AccountResource provides methods similar to those of the Account class (credit, withdraw and balance) along with the appropriate methods to participate in the 2PC protocol. The following portion of code describes how the methods prepare, commit and rollback are implemented.


public class AccountResource 
      extends org.omg.CosTransactions.ResourcePOA
{
   public AccountResource(Account account, String name )
   {
     _name = name;
     _account = account;
     _initial_balance = account._balance;
     _current_balance = _initial_balance;
   }

   public float balance()
   {
     return _current_balance;
   }

   public void credit( float value )
   {
     _current_balance += value;
   }

   public void debit( float value )
   {
     _current_balance -= value;
   }

   public org.omg.CosTransactions.Vote prepare()
     throws org.omg.CosTransactions.HeuristicMixed,
     org.omg.CosTransactions.HeuristicHazard
   {
     if ( _initial_balance == _current_balance )
       return org.omg.CosTransactions.Vote.VoteReadOnly;
     if ( _current_balance < 0 )
       return org.omg.CosTransactions.Vote.VoteRollback;
     return org.omg.CosTransactions.Vote.VoteCommit;
   }

   public void rollback()
     throws org.omg.CosTransactions.HeuristicCommit,
     org.omg.CosTransactions.HeuristicMixed,
     org.omg.CosTransactions.HeuristicHazard
   {
     //Nothing to do
   }

   public void commit()
     throws org.omg.CosTransactions.NotPrepared,
     org.omg.CosTransactions.HeuristicRollback,
     org.omg.CosTransactions.HeuristicMixed,
     org.omg.CosTransactions.HeuristicHazard
   {
      _account._balance = _current_balance;
   }

   public void commit_one_phase()
     throws org.omg.CosTransactions.HeuristicHazard
   {
     _account._balance = _current_balance;
   }

   ....
   private String _name;
   private float _initial_balance;
   private float _current_balance;
   private Account _account;

   }
 

From the JTS architectural point of view, the bank client is an application program that can manage transactions either in the direct or in the indirect management mode: respectively with the interfaces org.omg.CosTransactions.TransactionFactory and org.omg.CosTransactions.Terminator, or with the org.omg.CosTransactions.Current interface. Transactions created by the client in the Banking application use the indirect mode.

The following portion of code illustrates how a JTS transaction is started and terminated when the client asks to transfer money from one account to another. It also shows which packages are needed to obtain the appropriate object instances (such as Current).

Note: The code below is a simplified view of the BankClient.java program. Only the transfer operation is illustrated; other operations manage transactions in the same way (see src/com/arjuna/demo/jts/localbank/BankClient.java for details).


package com.arjuna.demo.jts.localbank;
import com.arjuna.ats.jts.OTSManager;

public class BankClient
{
   private Bank _bank;
    ....
   // This operation is used to make
   //a transfer from an account to another account
   private void makeTransfer()
   {
     System.out.print("Take money from : ");
     String name_supplier = input();

     System.out.print("Put money to : ");
     String name_consumer = input();

     System.out.print("Transfert amount : ");
     String amount = input();

     float famount = 0;
     try
      {
        famount = new Float( amount ).floatValue();
      }
     catch ( java.lang.Exception ex )
      {
        System.out.println("Invalid float number, abort operation...");
        return;
      }

     try
      {
       //the following instruction asks a specific
       //  class to obtain a Current instance
       Current current = OTSManager.get_current(); 
       System.out.println("Beginning a User transaction to get balance");
       current.begin();

       Account supplier = _bank.get_account( name_supplier );
       Account consumer = _bank.get_account( name_consumer );
       supplier.debit( famount );
       consumer.credit( famount );

       current.commit( );
      }
     catch (Exception e)
      {
       System.err.println("ERROR - "+e);
      }
   }
}

Since JTS is used, invocations against an ORB are needed, such as ORB and Object Adapter instantiation and initialisation. To ensure better portability, the ORB Portability API provides a set of methods that can be used as described below.


public static void main( String [] args )
{
  try
   { 
    // Create an ORB instance
    myORB = ORB.getInstance("test");
    //Obtain the Root POA
    myOA = OA.getRootOA(myORB);
    //Initialise the ORB
    myORB.initORB(args, null);
    //Initialise the POA
    myOA.initOA();
     ....
     
   }
   catch(Exception e)
   { ....}
}

The Bank object has mainly two operations: creating an account, which is added to the account list, and returning an Account object. No transactional instruction is performed by the Bank object.

package com.arjuna.demo.jts.localbank;
public class Bank {
   private java.util.Hashtable _accounts;

   public Bank()
   {
     _accounts = new java.util.Hashtable();
   }

   public Account create_account( String name )
   {
     Account acc = new Account(name);
     _accounts.put( name, acc );
      return acc;
   }

   public Account get_account(String name)
   throws NotExistingAccount
   {
     Account acc = ( Account ) _accounts.get( name );
     if ( acc == null )
       throw new NotExistingAccount("The Account
                      requested does not exist");
     return acc;
   }
}

The Account object provides mainly three methods: balance, credit and debit. However, in order to provide the transactional behaviour, rather than modifying the account directly (on credit or debit), this task is delegated to an AccountResource object which is able, according to the transaction outcome, to set the account value either to its initial state or to its final state.

The AccountResource object is in fact an object that implements org.omg.CosTransactions.Resource and is therefore able to participate in the transaction commitment. For this purpose, the Account object has to register the AccountResource object as a participant, after having obtained the reference of the org.omg.CosTransactions.Coordinator object, itself obtained via the org.omg.CosTransactions.Control object.


package com.arjuna.demo.jts.localbank;

public class Account
{
 float _balance;
 String _name;
 AccountResource accRes = null;

 public Account(String name )
 {
   _name = name;
   _balance = 0;
 }

 public float balance()
 {
   return getResource().balance();
 }

 public void credit( float value )
 {
   getResource().credit( value );
 }

 public void debit( float value )
 {
   getResource().debit( value );
 }

 public AccountResource getResource()
 {
   try
   {
    if (accRes == null) {
     accRes = new AccountResource(this, _name) ;
     Resource  ref = org.omg.CosTransactions.ResourceHelper.
      narrow(OA.getRootOA(ORB.getInstance("test")).corbaReference(accRes));
     RecoveryCoordinator recoverycoordinator = OTSManager.get_current().
      get_control().get_coordinator().register_resource(ref);
    }
  }
  catch (Exception e)
   {...}
   return accRes;
 }
  ...
}

To be considered as an org.omg.CosTransactions.Resource, the AccountResource class must extend the class org.omg.CosTransactions.ResourcePOA generated by the CORBA IDL compiler. The AccountResource provides methods similar to those of the Account class (balance, credit and debit) together with the methods needed to participate in the 2PC protocol. The following portion of code describes how the methods prepare, commit and rollback are implemented.


public class AccountResource extends org.omg.CosTransactions.ResourcePOA
{
   public AccountResource(Account account, String name )
   {
     _name = name;
     _account = account;
     _initial_balance = account._balance;
     _current_balance = _initial_balance;
   }

   public float balance()
   {
     return _current_balance;
   }

   public void credit( float value )
   {
     _current_balance += value;
   }

   public void debit( float value )
   {
     _current_balance -= value;
   }

   public org.omg.CosTransactions.Vote prepare()
     throws org.omg.CosTransactions.HeuristicMixed,
     org.omg.CosTransactions.HeuristicHazard
   {
     if ( _initial_balance == _current_balance )
       return org.omg.CosTransactions.Vote.VoteReadOnly;
     if ( _current_balance < 0 )
       return org.omg.CosTransactions.Vote.VoteRollback;
     return org.omg.CosTransactions.Vote.VoteCommit;
   }

   public void rollback()
     throws org.omg.CosTransactions.HeuristicCommit,
     org.omg.CosTransactions.HeuristicMixed,
     org.omg.CosTransactions.HeuristicHazard
   {
     //Nothing to do
   }

   public void commit()
     throws org.omg.CosTransactions.NotPrepared,
     org.omg.CosTransactions.HeuristicRollback,
     org.omg.CosTransactions.HeuristicMixed,
     org.omg.CosTransactions.HeuristicHazard
   {
      _account._balance = _current_balance;
   }

   public void commit_one_phase()
     throws org.omg.CosTransactions.HeuristicHazard
   {
     _account._balance = _current_balance;
   }
   .....
   private float _initial_balance;
   private float _current_balance;
   private Account _account;

   }
 

ArjunaCore exploits object-oriented techniques to present programmers with a toolkit of Java classes from which application classes can inherit to obtain desired properties, such as persistence and concurrency control. These classes form a hierarchy, part of which is shown below.

Figure 1 - ArjunaCore class hierarchy.

Apart from specifying the scopes of transactions, and setting appropriate locks within objects, the application programmer does not have any other responsibilities: ArjunaCore and Transactional Objects for Java (TXOJ) guarantee that transactional objects will be registered with, and be driven by, the appropriate transactions, and crash recovery mechanisms are invoked automatically in the event of failures.

Making an object persistent and recoverable means that we are able to store its final state, or to retrieve its initial state, according to the final status of a transaction, even in the presence of failures. ArjunaCore provides a set of techniques to save object states to, and retrieve them from, the object store. All objects made persistent with these ArjunaCore mechanisms are assigned unique identifiers (instances of the Uid class) when they are created, which identify them within the object store. Because several applications require the same functionality for persistence and recovery, objects are stored in and retrieved from the object store using a common mechanism: the classes OutputObjectState and InputObjectState.

At the root of the class hierarchy, given in Figure 1, is the class StateManager. This class is responsible for object activation and deactivation and object recovery. The simplified signature of the class is:

public abstract class StateManager
{
   public boolean activate ();
   public boolean deactivate (boolean commit);
   public Uid get_uid (); // object’s identifier.

   // methods to be provided by a derived class
   public boolean restore_state (InputObjectState os);
   public boolean save_state (OutputObjectState os);

   protected StateManager ();
   protected StateManager (Uid id);
};

Objects are assumed to be of three possible flavours. They may simply be recoverable, in which case StateManager will attempt to generate and maintain appropriate recovery information for the object. Such objects have lifetimes that do not exceed the application program that creates them. Objects may be recoverable and persistent, in which case the lifetime of the object is assumed to be greater than that of the creating or accessing application, so that in addition to maintaining recovery information StateManager will attempt to automatically load (unload) any existing persistent state for the object by calling the activate (deactivate) operation at appropriate times. Finally, objects may possess none of these capabilities, in which case no recovery information is ever kept nor is object activation/deactivation ever automatically attempted.
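
Which flavour an object has is normally chosen when it is constructed, through the value passed up to the StateManager (or LockManager) constructor. The fragment below is only a sketch: it assumes the ObjectType constants RECOVERABLE, ANDPERSISTENT and NEITHER provided by ArjunaCore, and it omits the save_state, restore_state and type methods shown later.

import com.arjuna.ats.arjuna.ObjectType;
import com.arjuna.ats.txoj.LockManager;

public class Counter extends LockManager
{
   // Recoverable and persistent: the state outlives the creating
   // application and is written to / read from the object store.
   public Counter()
   {
     super(ObjectType.ANDPERSISTENT);
     _value = 0;
   }

   // Recoverable only (or neither): recovery information, if requested,
   // is kept only for the lifetime of the creating application.
   public Counter(boolean recoverable)
   {
     super(recoverable ? ObjectType.RECOVERABLE : ObjectType.NEITHER);
     _value = 0;
   }

   // save_state, restore_state and type would be implemented as
   // illustrated in the following sections.

   private int _value;
}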

According to its activation or deactivation, a Transactional Object for Java moves from a passive state to an active state and vice versa. The fundamental life cycle of a persistent object in TXOJ is shown in Figure 2.

Figure 2 - The life cycle of a persistent object.

  • The object is initially passive, and is stored in the object store as an instance of the class OutputObjectState.
  • When required by an application the object is automatically activated by reading it from the store using a read_committed operation and is then converted from an InputObjectState instance into a fully-fledged object by the restore_state operation of the object.
  • When the application has finished with the object it is deactivated by converting it back into an OutputObjectState instance using the save_state operation, and is then stored back into the object store as a shadow copy using write_uncommitted. This shadow copy can be committed, overwriting the previous version, using the commit_state operation. The existence of shadow copies is normally hidden from the programmer by the transaction system. Object de-activation normally only occurs when the top-level transaction within which the object was activated commits.

While deactivating and activating a Transactional Object for Java, the operations save_state and restore_state are respectively invoked. These operations must be implemented by the programmer, since StateManager cannot detect user-level state changes. This gives the programmer the ability to decide which parts of an object's state should be made persistent. For example, for a spreadsheet it may not be necessary to save all entries if some values can simply be recomputed. The save_state implementation for a class Example that has two integer member variables called A and B and one String member variable called C could simply be:

public boolean save_state(OutputObjectState o)
{
   if (!super.save_state(o))
      return false;
   try
   {
     o.packInt(A);
     o.packInt(B);
     o.packString(C);
   }
   catch (Exception e)
   {
     return false;
   }
   return true;
}

while the corresponding restore_state implementation that retrieves those values is:

public boolean restore_state(InputObjectState o)
{
   if (!super.restore_state(o))
      return false;
   try
   {
     A = o.unpackInt();
     B = o.unpackInt();
     C = o.unpackString();
   }
   catch (Exception e)
   {
     return false;
   }
   return true;
}

The classes OutputObjectState and InputObjectState respectively provide operations to pack and unpack instances of standard Java data types. In other words, for a standard Java data type such as Long or Short there are corresponding pack and unpack methods, i.e., packLong or packShort and unpackLong or unpackShort.
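
As a small illustrative sketch (the member names counter and enabled are invented for this example), a long and a boolean field would be packed with the correspondingly named operations, and unpacked in the same order in restore_state:

public boolean save_state(OutputObjectState o)
{
   if (!super.save_state(o))
      return false;
   try
   {
     o.packLong(counter);      // a long member variable
     o.packBoolean(enabled);   // a boolean member variable
   }
   catch (Exception e)
   {
     return false;
   }
   return true;
}
// restore_state mirrors this with counter = o.unpackLong();
// and enabled = o.unpackBoolean();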

Note: it is necessary for all save_state and restore_state methods to call super.save_state and super.restore_state. This is to cater for improvements in the crash recovery mechanisms.

The concurrency controller is implemented by the class LockManager which provides sensible default behaviour while allowing the programmer to override it if deemed necessary by the particular semantics of the class being programmed. The primary programmer interface to the concurrency controller is via the setlock operation. By default, the runtime system enforces strict two-phase locking following a multiple reader, single writer policy on a per object basis. However, as shown in Figure 1, by inheriting from the Lock class it is possible for programmers to provide their own lock implementations with different lock conflict rules to enable type specific concurrency control.
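
As a sketch of what such type-specific concurrency control could look like, the fragment below assumes that Lock exposes overridable conflictsWith and modifiesObject operations, as described in the ArjunaCore API; an "increment" lock for a commutative counter could then be defined so that increments do not conflict with one another:

import com.arjuna.ats.txoj.Lock;
import com.arjuna.ats.txoj.LockMode;

// Assumption: Lock provides overridable conflictsWith(Lock) and
// modifiesObject() operations for type-specific conflict rules.
public class IncrementLock extends Lock
{
   public IncrementLock()
   {
     super(LockMode.WRITE);
   }

   public boolean conflictsWith(Lock otherLock)
   {
     // Two increments commute, so they do not conflict with each other;
     // defer to the default rules for any other lock type.
     if (otherLock instanceof IncrementLock)
       return false;

     return super.conflictsWith(otherLock);
   }

   public boolean modifiesObject()
   {
     // Increments change the object's state, so recovery information
     // must still be saved.
     return true;
   }
}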

Lock acquisition is (of necessity) under programmer control, since just as StateManager cannot determine if an operation modifies an object, LockManager cannot determine if an operation requires a read or write lock. Lock release, however, is under control of the system and requires no further intervention by the programmer. This ensures that the two-phase property can be correctly maintained.

public abstract class LockManager extends StateManager
{
   public LockResult setlock (Lock toSet, int retry, int timeout);
};

The LockManager class is primarily responsible for managing requests to set a lock on an object or to release a lock as appropriate. However, since it is derived from StateManager, it can also control when some of the inherited facilities are invoked. For example, LockManager assumes that the setting of a write lock implies that the invoking operation must be about to modify the object. This may in turn cause recovery information to be saved if the object is recoverable. In a similar fashion, successful lock acquisition causes activate to be invoked.

The code below shows how we may try to obtain a write lock on an object:

public class Example extends LockManager
{
   public boolean foobar ()
   {
     AtomicAction A = new AtomicAction();
     /*
      * The ArjunaCore AtomicAction class is used here to create
      * a transaction. Any interface provided by the JTA or
      * JTS APIs that allows transactions to be created can
      * be used in association with the locking mechanisms
      * described in this trail.
      */
     boolean result = false;

     A.begin();
     if (setlock(new Lock(LockMode.WRITE), 0) == LockResult.GRANTED)
     {
       /*
       * Do some work, and TXOJ will
       * guarantee ACID properties.
       */
       // automatically aborts if fails
       if (A.commit() == AtomicAction.COMMITTED)
       {
         result = true;
       }
     }
    else
       A.rollback();

    return result;
   }
}

The banking application consists of a Bank object that contains a list of Account objects, which in turn have a String (the name) and a float (the value) as member variables. From the persistence point of view, an Account object needs to store its name and its current balance, while the Bank object needs to store the list of accounts that it manages.

To benefit from the persistence and locking mechanisms provided by ArjunaCore, a user class can inherit from the appropriate class (StateManager for recovery, or LockManager for recovery and concurrency control). The AccountImpl class that implements the Account interface inherits from LockManager and implements the AccountOperations interface generated by the CORBA IDL compiler. Since multiple inheritance is not allowed in Java, inheriting the AccountPOA class, as done in the simple JTS remote version, in addition to LockManager is not possible. For that reason, this version uses the CORBA TIE mechanism to associate a servant with a CORBA object reference.

The Java interface definition of the AccountImpl class is given below:

public class AccountImpl extends LockManager implements AccountOperations
{
  float _balance;
  String _name;
  public AccountImpl(String name );
  public AccountImpl(Uid uid);
  public void finalize ();
  public float balance();
  public void credit( float value );
  public void debit( float value );
  public boolean save_state (OutputObjectState os, int ObjectType);
  public boolean restore_state (InputObjectState os, int ObjectType);
  public String type();
}

The finalize and type operations of AccountImpl are implemented as follows:

public void finalize ()
{
  super.terminate();
}
public String type ()
{
  return "/StateManager/LockManager/BankingAccounts";
}
  • Constructors and Destructor

    To use an existing persistent object, a special constructor is required that takes the Uid of the persistent object; the implementation of such a constructor is given below:

    public AccountImpl(Uid uid)
    {
      super(uid);
      // Invoking super will lead to invoke the
      //restore_state method of this AccountImpl class
    }

    No particular behaviour is added by the constructor that takes the Uid parameter. The following constructor is used to create a new Account:

    
    public AccountImpl(String name )
    {
      super(ObjectType.ANDPERSISTENT);
      _name = name;
      _balance = 0;
    }
    

    The finalizer of the AccountImpl class is only required to call the terminate operation of LockManager, as shown above.

  • save_state, restore_state and type

    The implementations of save_state and restore_state are relatively simple for this example:

    public boolean save_state (OutputObjectState os, int ObjectType)
    {
       if (!super.save_state(os, ObjectType))
          return false;
    
       try
       {
          os.packString(_name);
          os.packFloat(_balance);
          return true;
       }
       catch (Exception e)
       {
          return false;
       }
    }
    public boolean restore_state (InputObjectState os, int ObjectType)
    {
       if (!super.restore_state(os, ObjectType))
          return false;
    
       try
       {
         _name = os.unpackString();
         _balance = os.unpackFloat();
          return true;
       }
       catch (Exception e)
       {
          return false;
       }
    } 

    Because the AccountImpl class is derived from the LockManager class, the type operation returns "/StateManager/LockManager/BankingAccounts", as shown above.

  • account management operations
    public float balance()
    {
      float result = 0;
      if (setlock(new Lock(LockMode.READ), 0) == LockResult.GRANTED)
      {
        result = _balance;
      }
      ...
    
      return result;
    }

    Since the balance operation only reads the current balance, acquiring a lock in READ mode is enough. This is not the case for the credit and debit methods, which need to modify the current balance, so a WRITE mode lock is needed.

    
    public void credit( float value )
    {
      if (setlock(new Lock(LockMode.WRITE), 0) == LockResult.GRANTED)
      {
        _balance += value;
      }
      ...
    }
    
    public void debit( float value )
    {
      if (setlock(new Lock(LockMode.WRITE), 0) == LockResult.GRANTED)
      {
        _balance -= value;
      }
      ...
    }

To benefit from the persistence and locking mechanisms provided by ArjunaCore, a user class can inherit from the appropriate class (StateManager for recovery, or LockManager for recovery and concurrency control). The BankImpl class that implements the Bank interface inherits from LockManager and implements the BankOperations interface generated by the CORBA IDL compiler. Since multiple inheritance is not allowed in Java, inheriting the BankPOA class, as done in the simple JTS remote version, in addition to LockManager is not possible. For that reason, this version uses the CORBA TIE mechanism to associate a servant with a CORBA object reference.

The Java interface definition of the BankImpl class is given below:

public class BankImpl extends LockManager implements BankOperations
{
  public BankImpl(OA oa);
  public BankImpl(Uid uid, OA oa);
  public BankImpl(Uid uid);
  public Account create_account( String name );
  public Account get_account( String name );
  public boolean save_state (OutputObjectState os, int ObjectType);
  public boolean restore_state (InputObjectState os, int ObjectType);
  public String type();

  public static final int ACCOUNT_SIZE = 10;
  // ACCOUNT_SIZE is the maximum number of accounts
  private String [] accounts;
  private int numberOfAccounts;
  private ORB _orb;
  private OA _oa;
  private java.util.Hashtable _accounts; //The list of accounts

}
  • Constructors and Destructor

    To use an existing persistent object, a special constructor is required that takes the Uid of the persistent object; the implementation of such a constructor is given below:

    public BankImpl(Uid uid)
    {
      super(uid);
      _accounts = new java.util.Hashtable();
      numberOfAccounts = 0;
      accounts = new String[ACCOUNT_SIZE];
    }

    The following constructor is invoked during the first creation of the Bank Object.

    
    public BankImpl(OA oa)
    { super(ObjectType.ANDPERSISTENT);
      _accounts = new java.util.Hashtable();
      _oa = oa;
      numberOfAccounts = 0;
      accounts = new String[ACCOUNT_SIZE];
    }

    The following constructor is invoked on subsequent BankServer restarts, when a bank already exists and should be recreated. Invoking super (the constructor of the inherited class) leads to the execution of the restore_state method of the BankImpl class, described below, which rebuilds the list of previously created accounts, if any.

    
    public BankImpl(Uid uid, OA oa)
    { super(uid);
      _accounts = new java.util.Hashtable();
      _oa = oa;
      numberOfAccounts = 0;
      accounts = new String[ACCOUNT_SIZE];
    }
    

    The finalizer of the BankImpl class is only required to call the terminate operation of LockManager:

public void finalize ()
{
  super.terminate();
}
  • account management operations
    public Account create_account( String name )
    {
      AccountImpl acc;
      AccountPOA account = null;
      //Attempt to obtain the lock for change
      if (setlock(new Lock(LockMode.WRITE), 0) == LockResult.GRANTED)
      {
        //Check if the maximum number of accounts is not reached
        if (numberOfAccounts < ACCOUNT_SIZE)
        {
          acc = new AccountImpl(name); //Create a new account
          //Use the TIE mechanism to create a CORBA object
          account = new AccountPOATie(acc);
          //Add the account to the hashtable so it can be retrieved by name
          _accounts.put( name, acc);
          //The Uid of the created account is put in the array
          accounts[numberOfAccounts] = acc.get_uid().toString();
          numberOfAccounts++;
        }
      }
      return com.arjuna.demo.jts.txojbank.
           AccountHelper.narrow(_oa.corbaReference(account));
    }
    
    public Account get_account(String name)
      throws NotExistingAccount
    {
      // Only the hashtable list is used to retrieve the account
      AccountImpl acc = ( AccountImpl ) _accounts.get( name );
      if ( acc == null )
         throw new NotExistingAccount("The Account requested does not exist");
      AccountPOA account = new AccountPOATie(acc);
      return com.arjuna.demo.jts.txojbank.
        AccountHelper.narrow(_oa.corbaReference(account));
    }
  • save_state, restore_state and type
    public boolean save_state (OutputObjectState os, int ObjectType)
    {
       if (!super.save_state(os, ObjectType))
         return false;
    
       try
       {
         os.packInt(numberOfAccounts);
         if (numberOfAccounts > 0)
         {
          // All Uid located in the array will be saved
          for (int i = 0; i < numberOfAccounts; i++)
            os.packString(accounts[i]);
         }
         return true;
       }
       catch (Exception e)
       {
         return false;
       }
    }
    public boolean restore_state (InputObjectState os, int ObjectType)
    {
       if (!super.restore_state(os, ObjectType))
       {
         return false;
       }
       try
       {
          numberOfAccounts = os.unpackInt();
    
          if (numberOfAccounts > 0)
          {
            for (int i = 0; i < numberOfAccounts; i++)
            {
              accounts[i] = os.unpackString();
              //each stored Uid is re-used to recreate
              //a stored account object
              AccountImpl acc = new AccountImpl(new Uid(accounts[i]));
              acc.activate();
              //Once recreated the account object
              //is activated and added to the list.
             _accounts.put( acc.getName(), acc);
            }
          }
          return true;
       }
       catch (Exception e)
       {
          return false;
       }
    } 
    public String type ()
    {
       return "/StateManager/LockManager/BankServer";
    }

The role of the BankServer class is mainly to initialise the ORB and the Object Adapter and to create the default Bank object responsible for creating banking accounts.

Globally, the BankServer has the following structure:

...
myORB = ORB.getInstance("ServerSide");
myOA = OA.getRootOA(myORB);
myORB.initORB(args, null);
myOA.initOA();
...
  • Initialise the ORB

    This is done using the ORB Portability API.

  • Create the BankImpl object, an instance that implements the Bank interface. Two ways are provided to build the Bank object, depending on whether it is being created for the first time. This is determined by the existence of the file named "UidBankFile":

    • If the file exists, the Uid stored in it is read and used to recreate and activate the existing BankImpl object:
    ...
    java.io.FileInputStream file = new java.io.FileInputStream("UidBankFile");
    java.io.InputStreamReader input = new java.io.InputStreamReader(file);
    java.io.BufferedReader reader = new java.io.BufferedReader(input);
    String stringUid = reader.readLine();
    file.close();
    _bank = new BankImpl(new Uid(stringUid), myOA);
    boolean result =_bank.activate();
    ...
    
    • If the file does not exist, a new BankImpl object is created, and the Uid of the created object is stored in the file named "UidBankFile":
    ...
    _bank = new BankImpl(myOA);
    java.io.FileOutputStream file = new java.io.FileOutputStream("UidBankFile");
    java.io.PrintStream pfile=new java.io.PrintStream(file);
    pfile.println(_bank.get_uid().toString());
    file.close();
    ...
  • Store the CORBA object reference of the BankImpl object in a file, in such a way that the client can retrieve it from that file.

JTS supports the construction of both local and distributed transactional applications which access databases using the JDBC APIs. JDBC supports two-phase commit of transactions, and is similar to the XA X/Open standard. The JDBC support is found in the com.arjuna.ats.jdbc package.

The JTS approach to incorporating JDBC connections within transactions is to provide transactional JDBC drivers through which all interactions occur. These drivers intercept all invocations and ensure that they are registered with, and driven by, appropriate transactions. There is a single type of transactional driver through which any JDBC driver can be driven; obviously if the database is not transactional then ACID properties cannot be guaranteed. This driver is com.arjuna.ats.jdbc.TransactionalDriver, which implements the java.sql.Driver interface.

The driver may be directly instantiated and used within an application. For example:

 TransactionalDriver arjunaJDBC2Driver = new TransactionalDriver(); 

It can be registered with the JDBC driver manager (java.sql.DriverManager) by adding it to the Java system properties. The jdbc.drivers property contains a list of driver class names, separated by colons, that are loaded by the JDBC driver manager when it is initialised, for instance:

jdbc.drivers=foo.bar.Driver:mydata.sql.Driver:bar.test.myDriver

On running an application, it is the DriverManager's responsibility to load all the drivers found in the system property jdbc.drivers. For example, this is where the driver for the Oracle database may be defined. When opening a connection to a database, it is the DriverManager's role to choose the most appropriate driver from the previously loaded drivers.
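
For instance, the transactional driver and the illustrative my.sql.Driver used below could be added to that list from within the application, provided this is done before the DriverManager is first used (a sketch only):

// Must run before any JDBC call initialises the DriverManager.
System.setProperty("jdbc.drivers",
    "com.arjuna.ats.jdbc.TransactionalDriver:my.sql.Driver");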

A program can also explicitly load JDBC drivers at any time. For example, the my.sql.Driver is loaded with the following statement:

Class.forName("my.sql.Driver"); 

Calling Class.forName() will automatically register the driver with the JDBC driver manager. It is also possible to explicitly register an instance of the JDBC driver using the registerDriver method of the DriverManager. This is the case, for instance, for the TransactionalDriver, which can be registered as follows:

TransactionalDriver arjunaJDBC2Driver = new TransactionalDriver();
DriverManager.registerDriver(arjunaJDBC2Driver);

When you have loaded a driver, it is available for making a connection with a DBMS.

Once a driver is loaded and ready for a connection to be made, an instance of the Connection class can be created using the getConnection method on the DriverManager, as follows:

Connection con = DriverManager.getConnection(url, username, password);

Since version 2.0, the JDBC API has provided an additional way to obtain instances of the Connection class, through the DataSource and XADataSource interfaces, the latter creating transactional connections. When a JDBC 2.0 driver is used, Narayana will use the appropriate DataSource whenever a connection to the database is made. It will then obtain XAResources and register them with the transaction via the JTA interfaces. It is these XAResources which the transaction service will use when the transaction terminates in order to drive the database to either commit or rollback the changes made via the JDBC connection.
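
To show how these pieces fit together, the fragment below is a sketch (not part of the demo sources) that demarcates a JTA transaction around work done through a connection obtained from the TransactionalDriver. It assumes an XADataSource has been bound under the JNDI name jdbc/foo, as in the JNDI example that follows, that the local JTA implementation is reachable via com.arjuna.ats.jta.UserTransaction, and that an accounts table exists; the table and SQL are illustrative only.

import java.sql.Connection;
import java.sql.Statement;
import java.util.Properties;

import com.arjuna.ats.jdbc.TransactionalDriver;

public class TransferSketch
{
   public void doTransfer() throws Exception
   {
     // Local JTA UserTransaction shipped with Narayana (assumed accessor).
     jakarta.transaction.UserTransaction tx =
         com.arjuna.ats.jta.UserTransaction.userTransaction();

     Properties dbProps = new Properties();
     dbProps.setProperty(TransactionalDriver.userName, "user");
     dbProps.setProperty(TransactionalDriver.password, "password");

     TransactionalDriver driver = new TransactionalDriver();

     tx.begin();
     try (Connection connection = driver.connect("jdbc:arjuna:jdbc/foo", dbProps);
          Statement stmt = connection.createStatement())
      {
        // The XAResource behind this connection is enlisted with the
        // transaction, so the update commits or rolls back with it.
        stmt.executeUpdate("UPDATE accounts SET value = value - 10 WHERE name = 'A'");
        tx.commit();
      }
     catch (Exception e)
      {
        // Something went wrong before or during commit: abandon the work.
        if (tx.getStatus() == jakarta.transaction.Status.STATUS_ACTIVE)
          tx.rollback();
        throw e;
      }
   }
}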

There are two ways in which the JDBC 2.0 support can obtain XADataSources. These will be explained in the following sections. Note, for simplicity we shall assume that the JDBC 2.0 driver is instantiated directly by the application.

  • Java Naming and Directory Interface (JNDI)

    To get the ArjunaJDBC2Driver class to use a JNDI-registered XADataSource, it is first necessary to create the XADataSource instance and store it in an appropriate JNDI implementation. Details of how to do this can be found in the JDBC 2.0 tutorial available at JavaSoft. An example is shown below:

    XADataSource ds = new MyXADataSource();
    Hashtable env = new Hashtable();
    String initialCtx = PropertyManager.
      getProperty("Context.INITIAL_CONTEXT_FACTORY");
    env.put(Context.INITIAL_CONTEXT_FACTORY, initialCtx);
    InitialContext ctx = new InitialContext(env);
    ctx.bind("jdbc/foo", ds);

    Where the Context.INITIAL_CONTEXT_FACTORY property is the JNDI way of specifying the type of JNDI implementation to use.

    Then the application must pass an appropriate connection URL to the JDBC 2.0 driver:

    Properties dbProps = new Properties();
    dbProps.setProperty(TransactionalDriver.userName, "user");
    dbProps.setProperty(TransactionalDriver.password, "password");
    TransactionalDriver arjunaJDBC2Driver = new TransactionalDriver();
    Connection connection = arjunaJDBC2Driver.
      connect("jdbc:arjuna:jdbc/foo", dbProps);

    The JNDI URL must be pre-pended with jdbc:arjuna: in order for the ArjunaJDBC2Driver to recognise that the DataSource must participate within transactions and be driven accordingly.

  • Dynamic class instantiation

    Many JDBC implementations provide proprietary implementations of XADataSources with non-standard extensions to the specification. In order to allow the application to remain isolated from the actual JDBC 2.0 implementation it is using, and yet continue to be able to use these extensions, Narayana hides the details of these proprietary implementations using dynamic class instantiation. In addition, the use of JNDI is not required when using this mechanism, because the actual implementation of the XADataSource is instantiated directly, albeit in a manner which does not tie an application or driver to a specific implementation. Narayana therefore has several classes which are specific to particular JDBC implementations, and these can be selected at runtime by the application setting the dynamicClass property appropriately:

Database Type    dynamicClass value
Cloudscape 3.6   com.arjuna.ats.internal.jdbc.drivers.cloudscape_3_6
Sequelink 5.1    com.arjuna.ats.internal.jdbc.drivers.sequelink_5_1
Oracle 8.1.6     com.arjuna.ats.internal.jdbc.drivers.oracle_8_1_6
SQL Server 2000  com.arjuna.ats.internal.jdbc.drivers.sqlserver_2_2

The application code must specify which dynamic class the TransactionalDriver should instantiate when setting up the connection:

Properties dbProps = new Properties();
dbProps.setProperty(TransactionalDriver.userName, "user");
dbProps.setProperty(TransactionalDriver.password, "password");
dbProps.setProperty(TransactionalDriver.dynamicClass,
    "com.arjuna.ats.internal.jdbc.drivers.sequelink_5_1");
TransactionalDriver arjunaJDBC2Driver = new TransactionalDriver();
Connection connection = arjunaJDBC2Driver.connect(
    "jdbc:arjuna:sequelink://host:port;databaseName=foo", dbProps);

    Note on properties used by the com.arjuna.ats.jdbc.TransactionalDriver class

    • userName : the user name to use when attempting to connect to the database.
    • password : the password to use when attempting to connect to the database.
    • createDb : if set to true, the driver will attempt to create the database when it connects. This may not be supported by all JDBC 2.0 implementations.
    • dynamicClass : this specifies a class to instantiate to connect to the database, rather than using JNDI.

    The following Banking application illustrates some methods that use the JDBC API. In this application, a JDBC connection is created via an XADataSource obtained with JNDI operations, as explained in the previous JDBC introduction trail. The BankClient class instantiates an XADataSource and binds it to a JNDI name so that it can be retrieved to create transactional connections. The portion of code below illustrates how this is done against Oracle (tested on version 9i). Similar code could be tested against another database by providing the appropriate XADataSource implementation. Details of the BankClient class can be found in the file src/com/arjuna/demo/jta/jdbcbank/BankClient.java.

    
      package com.arjuna.demo.jta.jdbcbank;
    
      import javax.naming.*;
      import java.util.Hashtable;
      import oracle.jdbc.xa.client.OracleXADataSource;
      import com.arjuna.ats.jdbc.common.jdbcPropertyManager;
    
      public class BankClient
      {
       .....
       public static void main(String[] args)
        {
          //Provide the appropriate information to access the database
          for (int i = 0; i < args.length; i++)
          {
              if (args[i].compareTo("-host") == 0)
                  host = args[i + 1];
              if (args[i].compareTo("-port") == 0)
                  port = args[i + 1];
              if (args[i].compareTo("-username") == 0)
                  user = args[i + 1];
              if (args[i].compareTo("-password") == 0)
                  password = args[i + 1];
              if (args[i].compareTo("-dbName") == 0)
                  dbName = args[i + 1];
              ....
          }
    
         try
         {
           // create DataSource
           OracleXADataSource ds = new OracleXADataSource();
           ds.setURL("jdbc:oracle:thin:@"+host+":"+port+":"+dbName);
    
           // now stick it into JNDI
           Hashtable env = new Hashtable();
           env.put (Context.INITIAL_CONTEXT_FACTORY,
             "com.sun.jndi.fscontext.RefFSContextFactory");
           env.put (Context.PROVIDER_URL, "file:/tmp/JNDI");
           InitialContext ctx = new InitialContext(env);
           ctx.rebind("jdbc/DB", ds);
         }
         catch (Exception ex)
         { }
         //Set the JNDI information to be used by the Arjuna JDBC Property Manager
         jdbcPropertyManager.propertyManager.setProperty("Context.INITIAL_CONTEXT_FACTORY",
           "com.sun.jndi.fscontext.RefFSContextFactory");
         jdbcPropertyManager.propertyManager.setProperty("Context.PROVIDER_URL",
           "file:/tmp/JNDI");
    
    	 Bank bank = new Bank();
         BankClient client = new BankClient(bank);
    
       }
      }

    While the BankClient class is responsible for obtaining the information needed to access the database, creating the XADataSource and binding it to JNDI, and also for taking orders from a user (create_account, debit, transfer, ...), the Bank class is responsible for creating JDBC connections to perform the user's requests. The Bank class is illustrated below. Not all methods are shown here, but they have a similar behaviour; they can be found in detail in the src/com/arjuna/demo/jta/jdbcbank/Bank.java program. Note that, for simplicity, much error-checking code has been removed.

    public Bank()
    {
      try
      {
        DriverManager.registerDriver(new TransactionalDriver());
        dbProperties = new Properties();
        dbProperties.put(TransactionalDriver.userName, user);
        dbProperties.put(TransactionalDriver.password, password);
        arjunaJDBC2Driver = new TransactionalDriver();
        create_table();
      }
       catch (Exception e)
       {
       e.printStackTrace();
       System.exit(0);
       }
    
       _accounts = new java.util.Hashtable();
       reuseConnection = true;
       }
    
       public void create_account( String _name, float _value )
       {
        try
        {
          Connection conne = arjunaJDBC2Driver.connect("jdbc:arjuna:jdbc/DB", dbProperties);
          Statement stmtx = conne.createStatement(); // tx statement
          stmtx.executeUpdate
            ("INSERT INTO accounts (name, value) VALUES ('"+_name+"',"+_value+")");
        }
        catch (SQLException e)
        {
          e.printStackTrace();
        }
       }
    
      public float get_balance(String _name)
         throws NotExistingAccount
      {
        float theBalance = 0;
        try
        {
          Connection conne = arjunaJDBC2Driver.connect("jdbc:arjuna:jdbc/DB", dbProperties);
          Statement stmtx = conne.createStatement(); // tx statement
          ResultSet rs = stmtx.executeQuery
             ("SELECT value from accounts WHERE name = '"+_name+"'");
          while (rs.next()) {
            theBalance = rs.getFloat("value");
          }
        }
        catch (SQLException e)
        {
          e.printStackTrace();
          throw new NotExistingAccount("The Account requested does not exist");
        }
        return theBalance;
      }
    
     ...
    }
    

    The recovery manager provides support for recovering XAResources whether or not they are Serializable. XAResources that do implement the Serializable interface are handled without requiring additional programmer defined classes. For those XAResources that need to recover but which cannot implement Serializable, it is possible to provide a small class which is used to help recover them.

    This example shows the recovery manager recovering a Serializable XAResource and a non-Serializable XAResource.

    When recovering from failures, Narayana requires the ability to reconnect to the resource managers that were in use prior to the failures, in order to resolve any outstanding transactions. In order to recreate those connections for non-Serializable XAResources, it is necessary to provide implementations of the com.arjuna.ats.jta.recovery.XAResourceRecovery interface.

    To inform the recovery system about each of the XAResourceRecovery instances, it is necessary to specify their class names through property variables in the jbossts-properties.xml file. Any property variable which starts with the name XAResourceRecovery will be assumed to represent one of these instances, and its value should be the class name.

    When running XA transaction recovery, it is necessary to tell Narayana which types of Xid it can recover. Each Xid that Narayana creates has a unique node identifier encoded within it, and Narayana will only recover transactions and states that match a specified node identifier. The node identifier to use should be provided via a property that starts with the name com.arjuna.ats.jta.xaRecoveryNode (multiple values may be provided). A value of * will force Narayana to recover (and possibly rollback) all transactions irrespective of their node identifier, and should be used with caution.

    The recovery module for the non-Serializable XAResource must be deployed in order to support recovery of that XAResource. If this step were missed, the Serializable XAResource would recover correctly, but the recovery system would have no knowledge of the non-Serializable XAResource and so could not recover it. To register the XAResourceRecovery module for the non-Serializable XAResource, add an entry to jbossts-properties.xml.

    Under the element <properties depends="jts" name="jta">, add:

    <property name="com.arjuna.ats.jta.recovery.XAResourceRecovery1"
              value="com.arjuna.demo.recovery.xaresource.NonSerializableExampleXAResourceRecovery"/>
    <property name="com.arjuna.ats.jta.xaRecoveryNode" value="*"/>

    WARNING: Implementing a RecoveryModule and AbstractRecord is a very advanced feature of the transaction service. It should only be undertaken by users familiar with all the concepts used in the product. Please see the ArjunaCore guide for more information about RecoveryModules and AbstractRecords.

    The following sample gives an overview of how the Recovery Manager invokes a module to recover from a failure. This basic sample does not aim to present a complete recovery process, but mainly to illustrate the way to implement a recovery module. More details can be found in the "Failure Recovery Guide".

    The application used here creates an atomic transaction, registers a participant within the created transaction and finally terminates it, either by commit or by abort. A set of arguments is provided:
    • to decide whether to commit or abort the transaction,
    • to decide whether to generate a crash during the commitment process.

    The failure recovery subsystem of Narayana ensures that the results of a transaction are applied consistently to all resources affected by the transaction, even if any of the application processes or the hardware hosting them crash or lose network connectivity. In the case of hardware crashes or network failures, the recovery does not take place until the system or network is restored, but the original application does not need to be restarted. Recovery is handled by the Recovery Manager process. For recovery to take place, information about the transaction and the resources involved needs to survive the failure and be accessible afterward. This information is held in the ActionStore, which is part of the ObjectStore. If the ObjectStore is destroyed or modified, recovery may not be possible.

    Until the recovery procedures are complete, resources affected by a transaction which was in progress at the time of the failure may be inaccessible. Database resources may report this as tables or rows held by in-doubt transactions. For TXOJ resources, an attempt to activate the Transactional Object, such as when trying to get a lock, fails.

    Recovery of XA resources accessed via JDBC is handled by the XARecoveryModule. This module includes both transaction-initiated and resource-initiated recovery.

    • Transaction-initiated recovery is possible where the particular transaction branch progressed far enough for a JTA_ResourceRecord to be written in the ObjectStore. The record contains the information needed to link the transaction, as known to the rest of Narayana, to the database.

    • Resource-initiated recovery is necessary for branches where a failure occurred after the database made a persistent record of the transaction, but before the JTA_ResourceRecord was written. Resource-initiated recovery is also necessary for datasources for which it is impossible to hold information in the JTA_ResourceRecord that allows the recreation in the RecoveryManager of the XAConnection or XAResource used in the original application.

    Transaction-initiated recovery is automatic. The XARecoveryModule finds the JTA_ResourceRecord which needs recovery, using the two-pass mechanism described above. It then uses the normal recovery mechanisms to find the status of the transaction the resource was involved in, by running replay_completion on the RecoveryCoordinator for the transaction branch. Next, it creates or recreates the appropriate XAResource and issues commit or rollback on it as appropriate. The XAResource creation uses the same database name, username, password, and other information as the application.

    Resource-initiated recovery must be specifically configured, by supplying the RecoveryManager with the appropriate information for it to interrogate all the XADataSources accessed by any application. The access to each XADataSource is handled by a class that implements the com.arjuna.ats.jta.recovery.XAResourceRecovery interface. Instances of this class are dynamically loaded, as controlled by the property JTAEnvironmentBean.xaResourceRecoveryInstances.
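
    A configuration sketch only, assuming the JTAEnvironmentBean is reachable through com.arjuna.ats.jta.common.jtaPropertyManager and follows normal bean-property naming: the recovery instances could be supplied programmatically before the RecoveryManager starts, each entry being a class name optionally followed by its initialisation string.

    import java.util.Arrays;

    import com.arjuna.ats.jta.common.jtaPropertyManager;

    public class RecoveryConfigSketch
    {
        public static void configure()
        {
            // Equivalent to XAResourceRecovery entries in jbossts-properties.xml;
            // everything after the first ';' is passed to the instance's
            // initialise method (assumed accessor and setter names).
            jtaPropertyManager.getJTAEnvironmentBean().setXaResourceRecoveryInstances(
                Arrays.asList(
                    "com.arjuna.ats.internal.jdbc.recovery.BasicXARecovery;2;OraRecoveryInfo"));
        }
    }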

    The XARecoveryModule uses the XAResourceRecovery implementation to get an XAResource to the target datasource. On each invocation of periodicWorkSecondPass, the recovery module issues an XAResource.recover request. This request returns a list of the transaction identifiers that are known to the datasource and are in an in-doubt state. The list of these in-doubt Xids is compared across successive invocations of periodicWorkSecondPass. Any Xid that appears in both lists, and for which no JTA_ResourceRecord is found by the intervening transaction-initiated recovery, is assumed to belong to a transaction that was involved in a crash before any JTA_ResourceRecord was written, and a rollback is issued for that transaction on the XAResource.

    This double-scan mechanism is used because it is possible the Xid was obtained from the datasource just as the original application process was about to create the corresponding JTA_ResourceRecord. The interval between the scans should allow time for the record to be written unless the application crashes (and if it does, rollback is the right answer).

    An XAResourceRecovery implementation class can contain all the information needed to perform recovery to a specific datasource. Alternatively, a single class can handle multiple datasources which have some similar features. The constructor of the implementation class must have an empty parameter list, because it is loaded dynamically. The interface includes an initialise method, which passes in further information as a string. The content of the string is taken from the property value that provides the class name: everything after the first semi-colon is passed as the value of the string. The XAResourceRecovery implementation class determines how to use the string.

    An XAResourceRecovery implementation class, com.arjuna.ats.internal.jdbc.recovery.BasicXARecovery, supports resource-initiated recovery for any XADataSource. For this class, the string received in the initialise method is assumed to contain the number of connections to recover and the name of the properties file containing the dynamic class name, the database username, the database password and the database connection URL. The following example is for an Oracle 8.1.6 database accessed via the Sequelink 5.1 driver:

    XAConnectionRecoveryEmpay=com.arjuna.ats.internal.jdbc.recovery.BasicXARecovery;2;OraRecoveryInfo
          

    This implementation is only meant as an example, because it relies upon usernames and passwords appearing in plain-text properties files. You can create your own implementations of XAConnectionRecovery. See the javadocs and the example com.arjuna.ats.internal.jdbc.recovery.BasicXARecovery.

    Example 3.29. XAConnectionRecovery implementation

    package com.arjuna.ats.internal.jdbc.recovery;
    
    import com.arjuna.ats.jdbc.TransactionalDriver;
    import com.arjuna.ats.jdbc.common.jdbcPropertyManager;
    import com.arjuna.ats.jdbc.logging.jdbcLogger;
    
    import com.arjuna.ats.internal.jdbc.*;
    import com.arjuna.ats.jta.recovery.XAConnectionRecovery;
    import com.arjuna.ats.arjuna.common.*;
    import com.arjuna.common.util.logging.*;
    
    import java.sql.*;
    import javax.sql.*;
    import jakarta.transaction.*;
    import javax.transaction.xa.*;
    import java.util.*;
    
    import java.lang.NumberFormatException;
    
    /**
     * This class implements the XAConnectionRecovery interface for XAResources.
     * The parameter supplied in setParameters can contain arbitrary information
     * necessary to initialise the class once created. In this instance it contains
     * the name of the property file in which the db connection information is
     * specified, as well as the number of connections that this file contains
     * information on (separated by ;).
     *
     * IMPORTANT: this is only an *example* of the sorts of things an
     * XAConnectionRecovery implementor could do. This implementation uses
     * a property file which is assumed to contain sufficient information to
     * recreate connections used during the normal run of an application so that
     * we can perform recovery on them. It is not recommended that information such
     * as user name and password appear in such a raw text format as it opens up
     * a potential security hole.
     *
     * The db parameters specified in the property file are assumed to be
     * in the format:
     *
     * DB_x_DatabaseURL=
     * DB_x_DatabaseUser=
     * DB_x_DatabasePassword=
     * DB_x_DatabaseDynamicClass=
     *
     * DB_JNDI_x_DatabaseURL= 
     * DB_JNDI_x_DatabaseUser= 
     * DB_JNDI_x_DatabasePassword= 
     *
     * where x is the number of the connection information.
     *
     * @since JTS 2.1.
     */
    
    public class BasicXARecovery implements XAConnectionRecovery
    {    
        /*
         * Some XAConnectionRecovery implementations will do their startup work
         * here, and then do little or nothing in setDetails. Since this one needs
         * to know dynamic class name, the constructor does nothing.
         */
        public BasicXARecovery () throws SQLException
        {
            numberOfConnections = 1;
            connectionIndex = 0;
            props = null;
        }
    
        /**
         * The recovery module will have chopped off this class name already.
         * The parameter should specify a property file from which the url,
         * user name, password, etc. can be read.
         */
    
        public boolean initialise (String parameter) throws SQLException
        {
            int breakPosition = parameter.indexOf(BREAKCHARACTER);
            String fileName = parameter;
    
            if (breakPosition != -1)
                {
                    fileName = parameter.substring(0, breakPosition);
    
                    try
                        {
                            numberOfConnections = Integer.parseInt(parameter.substring(breakPosition +1));
                        }
                    catch (NumberFormatException e)
                        {
                            //Produce a Warning Message
                            return false;
                        }
                }
    
            PropertyManager.addPropertiesFile(fileName);
    
            try
                {
                    PropertyManager.loadProperties(true);
    
                    props = PropertyManager.getProperties();
                }
            catch (Exception e)
                {
                    //Produce a Warning Message 
    
                    return false;
                }  
    
            return true;
        }    
    
        public synchronized XAConnection getConnection () throws SQLException
        {
            JDBC2RecoveryConnection conn = null;
    
            if (hasMoreConnections())
                {
                    connectionIndex++;
    
                    conn = getStandardConnection();
    
                    if (conn == null)
                        conn = getJNDIConnection();
    
                    if (conn == null)
                        {
                            // Produce a warning message
                        }
                }
    
            return conn;
        }
    
        public synchronized boolean hasMoreConnections ()
        {
            if (connectionIndex == numberOfConnections)
                return false;
            else
                return true;
        }
    
        private final JDBC2RecoveryConnection getStandardConnection () throws SQLException
        {
            String number = new String(""+connectionIndex);
            String url = new String(dbTag+number+urlTag);
            String password = new String(dbTag+number+passwordTag);
            String user = new String(dbTag+number+userTag);
            String dynamicClass = new String(dbTag+number+dynamicClassTag);
            Properties dbProperties = new Properties();
            String theUser = props.getProperty(user);
            String thePassword = props.getProperty(password);
    
            if (theUser != null)
                {
                    dbProperties.put(ArjunaJDBC2Driver.userName, theUser);
                    dbProperties.put(ArjunaJDBC2Driver.password, thePassword);
    
                    String dc = props.getProperty(dynamicClass);
    
                    if (dc != null)
                        dbProperties.put(ArjunaJDBC2Driver.dynamicClass, dc);
    
                    return new JDBC2RecoveryConnection(url, dbProperties);
                }
            else
                return null;
        }
    
        private final JDBC2RecoveryConnection getJNDIConnection () throws SQLException
        {
            String number = new String(""+connectionIndex);
            String url = new String(dbTag+jndiTag+number+urlTag);
            String password = new String(dbTag+jndiTag+number+passwordTag);
            String user = new String(dbTag+jndiTag+number+userTag);
            Properties dbProperties = new Properties();
            String theUser = props.getProperty(user);
            String thePassword = props.getProperty(password);
    
            if (theUser != null)
                {
                    dbProperties.put(ArjunaJDBC2Driver.userName, theUser);
                    dbProperties.put(ArjunaJDBC2Driver.password, thePassword);    
                    return new JDBC2RecoveryConnection(url, dbProperties);
                }
            else
                return null;
        }
        private int        numberOfConnections;
        private int        connectionIndex;
        private Properties props;   
        private static final String dbTag = "DB_";
        private static final String urlTag = "_DatabaseURL";
        private static final String passwordTag = "_DatabasePassword";
        private static final String userTag = "_DatabaseUser";
        private static final String dynamicClassTag = "_DatabaseDynamicClass";
        private static final String jndiTag = "JNDI_";
    
        /*
         * Example:
         *
         * DB2_DatabaseURL=jdbc\:arjuna\:sequelink\://qa02\:20001
         * DB2_DatabaseUser=tester2
         * DB2_DatabasePassword=tester
         * DB2_DatabaseDynamicClass=
         *      com.arjuna.ats.internal.jdbc.drivers.sequelink_5_1 
         *
         * DB_JNDI_DatabaseURL=jdbc\:arjuna\:jndi
         * DB_JNDI_DatabaseUser=tester1
         * DB_JNDI_DatabasePassword=tester
         * DB_JNDI_DatabaseName=empay
         * DB_JNDI_Host=qa02
         * DB_JNDI_Port=20000
         */
    
        private static final char BREAKCHARACTER = ';';  // delimiter for parameters
    }
    

    Multiple recovery domains and resource-initiated recovery

    XAResource.recover returns the list of all transactions that are in-doubt within the datasource. If multiple recovery domains are used with a single datasource, resource-initiated recovery sees transactions from other domains. Since it does not have a JTA_ResourceRecord available, it rolls back the transaction in the database if the Xid appears in successive recover calls. To suppress resource-initiated recovery, do not supply an XAConnectionRecovery property, or confine it to one recovery domain.

    Property OTS_ISSUE_RECOVERY_ROLLBACK controls whether the RecoveryManager explicitly issues a rollback request when replay_completion asks for the status of a transaction that is unknown. According to the presume-abort mechanism used by OTS and JTS, the transaction can be assumed to have rolled back, and this is the response that is returned to the Resource, including a subordinate coordinator, in this case. The Resource should then apply that result to the underlying resources. However, it is also legitimate for the superior to issue a rollback, if OTS_ISSUE_RECOVERY_ROLLBACK is set to YES.

    The OTS transaction identification mechanism makes it possible for a transaction coordinator to hold a Resource reference that will never be usable. This can occur in two cases:

    • The process holding the Resource crashes before receiving the commit or rollback request from the coordinator.

    • The Resource receives the commit or rollback, and responds. However, the message is lost or the coordinator process has crashed.

    In the first case, the RecoveryManager for the Resource ObjectStore eventually reconstructs a new Resource (with a different CORBA object reference, or IOR) and issues a replay_completion request containing the new Resource IOR. The RecoveryManager for the coordinator substitutes this in place of the original, useless one, and issues commit to the new reconstructed Resource. The Resource has to have been in a commit state, or there would be no transaction intention list. Until the replay_completion is received, the RecoveryManager tries to send commit to its original Resource reference. This will fail with a CORBA System Exception; which exception depends on the ORB and other details.

    In the second case, the Resource no longer exists. The RecoveryManager at the coordinator will never get through, and will receive System Exceptions forever.

    The RecoveryManager cannot distinguish these two cases by any protocol mechanism. There is a perceptible cost in repeatedly attempting to send the commit to an inaccessible Resource . In particular, the timeouts involved will extend the recovery iteration time, and thus potentially leave resources inaccessible for longer.

    To avoid this, the RecoveryManager only attempts to send commit to a Resource a limited number of times. After that, it considers the transaction assumed complete. It retains the information about the transaction by changing the object type in the ActionStore, and if the Resource eventually does wake up and a replay_completion request is received, the RecoveryManager activates the transaction and issues the commit request to the new Resource IOR. The number of times the RecoveryManager attempts to issue commit as part of periodic recovery is controlled by the property variable COMMITTED_TRANSACTION_RETRY_LIMIT, and