Persistent Data

Under normal circumstances, record data ceases to exist as soon as the model instance ceases to exist. If you want to persist your data, use shared records.

Shared records and their relationships are reflected in the database as tables: when you create an instance of a shared record, a database entry is created, and any reading, modification, or deletion of the shared record instance is reflected in its database entry. If it is necessary to postpone persisting changes to a shared record, you can use a change proxy object.
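
For example, creating and changing a shared record instance is reflected in its table (a minimal sketch; the Customer record and its name field are hypothetical):

def Customer c := new Customer(name -> "Anne");  // inserts a row into the Customer table
c.name := "Anne Smith";                          // the change is propagated to the corresponding database row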

The data type model is applied to the database when the module is uploaded to the server. The way it is applied is defined by the database schema update strategy (see Connecting PDS to an LSPS Server), which is similar to the hbm2ddl Hibernate configuration.

Mind that if the data type model is already used in production and cannot be dropped and recreated because it contains business data, you will need to migrate the database and potentially update running model instances before using the new data type model.

Important: When you start using shared records, consider using a database-versioning tool, such as Flyway: this will allow you to track how you change the shared records and their relationships. Failing to use a database-versioning tool might cause non-trivial effort when you later update the data model that is in production (for details, refer to the migration instructions).

The persistence mechanism of shared Records relies on Hibernate: based on the data models, the system generates the respective tables and a single common Hibernate settings file. As a consequence, if you upload multiple versions of a data type hierarchy in multiple modules, only the last data type model is applied.

Note: Variables of a shared record type must be fetched anew in each new transaction: This might cause performance issues. For further details on model transactions, refer to the Modeling Language Guide.

Defining Data Model Properties

Each data type definition can specify properties that define the database where the tables are persisted. To change the properties of data types in a definition file, such as the target database, table names, foreign key names, and index name prefixes, do the following:

  1. Open the datatypes definition for editing.
  2. In the Outline view, select the root Data Types item.
  3. In the Properties view, change the settings:
    • Database: JNDI name of the data source used for the database

      This allows you to store the instances of shared Records in another database accessible to your application server.

    • Table name prefix: prefix used in the names of database tables created based on this data type definition

      It is good practice to use a prefix so you can easily find your tables. You can check the prefixed table name of individual records and relationships in their Properties view.

    • Foreign key name prefix: prefix for the names of the foreign key columns created based on this data type definition
    • Index name prefix: prefix of the index column name created based on this data type definition
    • Table name suffix: suffix used in the name of the database tables created based on this data type definition
    • Foreign key name suffix: suffix for the names of the foreign key columns created based on this data type definition
    • Index name suffix: suffix of the index column name created based on this data type definition
databaseProperties.png
Defining database properties for a data type hierarchy

Extracting Affixes from Data Model

If a data type definition contains shared Records with a common prefix or suffix in their table, foreign key, or index names (shown on the DB Mapping tab of the Properties view), you can extract these affixes so that they are set for the entire data type definition.

To extract the affixes from a data type definition with shared records, do the following:

  1. In GO-BPMN Explorer, click the data types file.
  2. In Outline view, click the Data Types root node.
  3. In the Properties view, open the Detail tab.
  4. Click the Extract button.
  5. Modify the affixes as required and click Apply.

Generating a Data Model from a Database Schema

To generate a data type model from a database schema, do the following:

  1. Connect to the server with the data source.
  2. In the GO-BPMN Explorer view, right-click the GO-BPMN module.
  3. Click Generate Types from DB Schema.
  4. In the Generate Types from DB Schema dialog box, select the data source and click Next.

    If the data source is not listed, click New and define its properties.

  5. Select the database schema and click Next.
  6. Select the tables and columns to be included in the data type model.

    generatingTypesFromDB.png
    Selection of Database Entities
  7. Select the target location, enter the resource name, and click Finish.

Note the following limitations:

  • The generated data type definition does not include any Diagram views of the shared Records: drag the shared Records from the GO-BPMN Explorer onto the data type canvas if required.
  • The tool fails to detect that a shared record for a table already exists in the module and generates a new record.

    Consequently, if a record that is being generated has a relationship to an already existing record, the relationship is not generated either.

  • The fields of generated records are ordered randomly.
addingRecordToDiagram.png
Adding a depiction of a generated shared record onto a data type diagram

Creating a Shared Record

Shared records are defined just like common records, with the additional shared flag (equivalent to @Entity in Hibernate) and the following persistence-related properties:

  • Database Mapping:
    • Table name: target database table
    • Schema: target database schema

      Important: If multiple shared records have the same target table and schema, the shared records will be mapped to the same table. This might result in issues due to schema incompatibilities. To prevent such issues, consider setting a Table name prefix for your data type definition.

    • Catalog: target database catalog
    • Batch size: the number of entities fetched at once when the record is accessed from its related record
    • Cache Region: cache region of the shared record
    • O-R inheritance mapping: available only if the shared record is the supertype of others.
      • Each record to own table: maps each shared record in the hierarchy to its own table

        If the record is a child of another record, the inherited fields are stored in the table of the parent.

      • Single table per hierarchy: a single table that unifies the properties of all sub-records is used to store the hierarchy. This option is convenient if you plan to query the database based on the Record type. For this option, you can specify the name of the DB column that discriminates the type (the default value is TYPE_ID).
    • Inheritance FK name: name of the Foreign Key used on inheritance
    • Update schema: if true, the existing table schema is updated using the current schema
  • Indexes: indexes on the related table of the Records

Important: When working with shared Records that are the target or source end of a relationship, make sure to define indexes for the foreign keys on the records and their relationships to prevent potential performance issues. Note that you can generate the indexes in the data types file automatically: right-click the canvas of your Record diagram and select Generate Indexes.

Creating a Shared Record Field

To create a Record field in a shared Record, do the following:

  1. Select the shared Record or its field and press the Insert key to add a new Field.
  2. Select the field and define its generic properties on the Detail tab of its Properties view.

    The type of a shared field should be set to a simple data type. If such a field is of another data type, consider creating a related shared Record to prevent performance issues due to frequent serialization and deserialization.

  3. Define the database-related attributes on the DB Mapping tab:
    • Column name: the name of the mapped database column (The target database table is defined in the parent shared Record.)
    • Text length: the maximum length of the database entry for the String fields
    • Precision: maximum number of digits for the Decimal and Integer fields
    • Scale: number of digits after the decimal point for Fields of the Decimal data type

      Note: When writing values into the fields, Decimal fields behave like Java's BigDecimal.

    • BLOB length: space reserved for the field value in bytes

      The property applies to fields of types other than simple types, which need to be serialized when stored in the database, such as Records, Collections, etc.

    • Not null: if true, the field must not be null: attempting to persist a record instance with this field set to null fails.
    • Unique: if true, the field is translated into a column with unique values when the database schema of the data model is generated.
    • Version: if true, the field stores the version number, which is incremented whenever the record changes, making the record subject to optimistic locking.
    • Exclude from optimistic lock: if selected, a change of the field never results in a locking conflict and transaction rollback.
    • Primary key: if true, the field is considered the primary key in the database table.
    • Auto generated: if true, the field value is generated automatically when the instance of the shared record is created.

      The attribute is available only for fields that are simple primary keys with integer values. Depending on the target database, either a sequence is generated, or auto-incrementation is used.

      Important: When creating a new record over an H2 database with an explicit value for a field that is set as auto-generated, the system silently ignores the specified value and uses the auto-generated value. For example, if a shared Record Book defines the auto-generated field id and you instantiate a new Book as new Book(id -> 1), the id value 1 is ignored and the auto-generated id is used instead. On other databases, such code causes an exception.

    • Sequence name: the database sequence used for the auto-generated field (if it does not exist, it is created)
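
For example, reusing the Book record from the note above, where id is an auto-generated primary-key field (a minimal sketch):

def Book b := new Book(title -> "Catch-22"); // id is generated automatically (a sequence or auto-increment, depending on the database)
b.id;                                        // returns the generated value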

Optimistic Locking on a Shared Record

The optimistic locking mechanism prevents changes to shared record instances if the record changed since it was loaded.

For example, consider a to-do that works with a shared record: if another user changes the record from another to-do and you then attempt to submit the to-do with your changes, the action fails with a Conflict on entity exception and your transaction is rolled back, unless the exception is handled.

With optimistic locking, the record instances store their version in a dedicated field: the version field is updated whenever the given record instance changes. If you are changing a record instance and the stored version has changed in the meantime, the server throws an exception when you try to apply your changes.

To allow some of the record fields to be changed freely without ever causing a conflict, exclude them explicitly from optimistic locking: a change on such fields does not cause an update of the record version nor does it trigger the version check.
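
The following sketch illustrates the behavior; the Order record, its note field, and the getOrderByNumber() query function are hypothetical:

// Transaction B loads an Order instance; its stored version is, say, 1.
def Order o := getOrderByNumber("A-1");
// Meanwhile, transaction A changes the same record and commits: the stored version becomes 2.
// Transaction B now changes its stale copy and tries to apply its changes:
o.note := "updated";
// The version check fails (1 != 2): a Conflict on entity exception is thrown
// and transaction B is rolled back unless the exception is handled.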

Setting up Optimistic Locking on a Shared Record

To set up optimistic locking on a shared Record, create a version field on the record:

  1. Add an Integer or Date field to the record; it is recommended to use the Integer type.
  2. On the DB Mapping tab of its Properties view, select Version.
    versionField.png
    The Version field of a Shared Record
  3. Optionally, exclude record fields which can be changed between their load and save:

    On the DB Mapping tab of the fields' Properties view, select Exclude from optimistic lock.

Important: For record relationships with optimistic locking, make sure to name both ends of the relationship so that either end can be used by Hibernate as the owner of the relationship.

Setting up Relationships Between Shared Records

Relationships between shared records establish a relationship between the record tables. Unlike in JPA, the relationship is symmetrical (set on both ends of the relationship) and it is not necessary to make one of the tables the Owner.

To define a relationship between one or two shared records, do the following:

  1. Create and define the generic relationship as for non-shared records.
  2. On the relationship end pointing to a shared record, define the properties related to the database mapping:
    • on the Source/Target tab of an end:
      • for Set multiplicity, define how the related records should be ordered in the Order by value
        orderByOnSharedRecEnd.png
      • for List multiplicity, define the index column
    • on the Source/Target DB Mapping tab:
      • Excluded from auditing: if true, the relationship end is excluded from the revision history of the shared Records.
      • Cache Region: name of the cache region of the relationship
      • BLOB length (on ends pointing from a shared record to a non-shared record): the space reserved for the non-shared record value, since the entire tree of the non-shared record is serialized and stored.
  3. On relationships between two shared records:
    • On the DB Mapping tab, define whether the underlying database structure should be updated when the relationship structure is modified.
    • On the Foreign keys tab, adapt the foreign key mapping if required. Make sure the types of the primary key and of the foreign key match.
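
Once defined and named, relationship ends can be assigned and navigated in expressions like record fields. A minimal sketch, assuming hypothetical shared records Author and Book whose relationship end pointing from Book to Author is named author:

def Author a := new Author(name -> "Joseph Heller");
def Book b := new Book(title -> "Catch-22", author -> a); // assigns the relationship end
b.author.name;                                            // navigates the relationship to the related Author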

Defining Indexes

To allow quick look-up of shared records in relationships, create indexes on the foreign keys of the underlying database tables: you can do so directly in the database, or you can define the indexes in the data type definition as described below.

Important: The absence of indexes on your shared records can cause performance issues. It is recommended to define indexes to prevent slow search on your database data.

To define indexes for a table of a shared record, do the following:

  1. Display the properties of the record in the Properties view: click the record either in the GO-BPMN Explorer or in the record diagram.
  2. In the Properties view, open the Indexes tab.
  3. Click the Add button on the right.
  4. In the Database Index dialog, select the column that should be indexed and click > to add it to the indexed columns.
    indexCreating.png
    Defining index
  5. Upload the Module.

Generating Indexes

To generate indexes on foreign keys for all shared records in the data types file that are related to another record, right-click the file in the GO-BPMN Explorer and click the Generate Indexes button in the Properties view; alternatively, right-click the canvas in a Record diagram and select Generate Indexes.

Defining a Shared Field with a Foreign Key of a Related Record

To allow for more efficient retrieval of the IDs of related shared records, you can map a field of the record to the foreign-key column of the relationship end:

  1. Name the relationship end targeted to the related record.
  2. On the target record, define a primary key field.
  3. On the Foreign Keys tab of the Relationship properties, check and possibly modify the foreign key of the primary key field.
    fieldwithforeignkey_relationshipfk.png
    Foreign key on the relationship end
  4. On the source record, create a read-only field that will be mapped to the foreign key.
    fieldwithforeignkey_fieldmapping.png
    DB mapping of the field
  5. On the DB Mapping tab of the field Properties view, insert the name of the foreign key column.

Now you can access the primary key of the related shared record using the read-only field.

Such foreign key fields, if set as primary keys, are set automatically when the related record is assigned. For example, if Parent has a relationship to Child and one of the Child's primary-key fields is mapped to the primary key of Parent, the field is automatically filled with the Parent id:

def MyParent p1 := new MyParent();
def MyChild c1 := new MyChild(id -> 1, parent -> p1);

Note that you need to establish the relationship via the parent object: do not assign the foreign key directly, for example, as new MyChild(id -> 1, parentId -> p1.id);
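
Continuing the example, and assuming the read-only field mapped to the foreign-key column is named parentId, the ID of the related record can then be read without navigating to it:

def MyParent p1 := new MyParent();
def MyChild c1 := new MyChild(id -> 1, parent -> p1);
c1.parentId; // returns the primary key of p1, read from the foreign-key column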

Auditing: Shared Record Versioning

The auditing of shared records refers to storing all versions of the record instances as they change, thus providing a version-control mechanism.

When you change instances of an audited Record in a transaction, the auditing mechanism creates a revision entity with "snapshots" of the changed records in the auditing table of the record. Optionally, it enters the Record name and revision ID for each changed record into the entity name table so you can look up any other audited records that were changed in that transaction as well.

Example: Book is an audited record with the field title and you create a new book and edit an existing one:

> new Book(title -> "Something Happened"); // record id is 2
> getBookByTitle("Catch 22").title := "Catch-22"; //record id is 1

Auditing will perform the following:

  1. Create a revision entity of the Revision Entity type, for example, with the ID 1.
  2. Record the changes on the Book instances: auditing adds two entries, one for the new book and another one for the changed book with the following details:
    • record ID
    • revision ID set for both to 1
    • type of change
    • title of the book after the change
  3. Optionally, the entity name table records a new entry with the record type for each change: hence two entries with the record name BOOK and the revision ID 1 are created.

Setting up Auditing

To set up auditing, do the following:

  1. Create the Revision Entity record that reflects the table with revisions:
    1. Create a RevisionEntity shared record.
    2. On its Auditing tab in the Properties view, select the Revision entity option.
    3. Typically, you will also add a timestamp field to the record to be able to request the timestamps of revisions: add a field of type Integer or Date and set it as Revision timestamp on the Auditing tab in its Properties view.
      auditrevisionentity.png
      Revision entity shared record

      Note: By default the Revision Entity uses the LSPS implementation of the Revision Listener to enter revision data into the database table. The listener enters the id of the revision and optionally the timestamp into the database table. If you want the system to enter further data about the revision, you need to implement your own Revision Listener that will extend the LSPSRevisionListener class (refer to Customizing Entity Auditing).

  2. Optionally, you can store the names of the changed entities together with their revision IDs so you can efficiently look up revisions based on the entities they changed:
    1. Create the EntityName shared record related to your RevisionEntity record.
    2. In its Properties view on the Auditing tab, select the Modified entity name property.
    3. On the relationship end pointing to the EntityName record, set the multiplicity to Set.
    4. On the EntityName record, add a String field that will hold the name of the changed entity: On the Auditing tab of its Properties view, select its Entity name property.
      auditentityname.png
      Shared record of the modified entities of a revision
  3. Upload the module.

    Important: Only one revision entity record and one modified entity name record can exist on an LSPS Server.

  4. Now you can set your shared records as audited: open the record's properties and on the Auditing tab select the Audited option and upload the Module.

Auditing a Shared Record

To enable auditing of a shared record, do the following:

  1. Open the Record's Properties view.
  2. On the Auditing tab:
    1. Select the Audited option.
    2. Optionally, in Audit table name, enter the name of the table that will hold the Record revisions.
  3. If the record has a relationship to another audited record with one-to-many multiplicity, name both ends of the relationship unless you excluded the relationship from auditing.
  4. Upload the Module with the data model.

Note: Make sure that you have uploaded the Revision Entity shared Record to your server: the LSPS database will contain the ENTITY_REVISION table.

auditedrecord.png
Audited shared Record

Excluding a Shared Field or Record from Auditing

If you want to exclude a Record Field from auditing, open its Properties and on the Auditing tab, select the Excluded from auditing option.

auditingexcludedfield.png
Field excluded from auditing

Including and Excluding a Relationship End from Auditing

By default relationships between audited records are audited as well.

To exclude a relationship end from auditing, open the relationship Properties and on the tab for the relationship end, either the Source tab or the Target tab, select the Excluded from auditing option.

Important: When auditing a record relationship with one-to-many multiplicity, both ends of the relationship must be named.

Caching a Shared Record

The caching mechanism for shared records reduces the load on the underlying databases. It ensures that shared records that might be required by the same model instance, the same user, as well as by other users and transactions, are kept in memory.

LSPS applies first-level caching within individual transactions, that is, any data is cached within a transaction. Cache regions implement second-level caching, which is applied to shared records: this cache exists regardless of the transaction or model instance life.

The caching is defined by cache regions, which are added to the LSPS Server cache on module upload.

Defining Cache Regions

To define a cache region, do the following:

  1. In the GO-BPMN Explorer view, double-click the respective cache region definition. The Cache Region Editor is opened in the editor area.
  2. In the Cache Regions area, click Add.
  3. In the Cache Region Details area, define the cache region attributes.
    • Name: name of the cache region
    • Database: JNDI database name on which the cache region is applied (for example, jdbc/my_database)

      If undefined, the LSPS system database is used.

    • Max elements in memory: maximum number of objects to keep in the memory cache
    • Eternal: if true, the cached objects are not scheduled for discarding
      • Time to idle: time in seconds an object remains cached if not accessed
      • Time to live: seconds an object remains cached regardless of the accesses
    • Overflow to disk: if true, the file system is used to store cached objects.
      • Disk persistent: if true, the cache remains unchanged after the restart of the process engine.
      • Disk expiry thread interval: interval for the cleaning of expired cached business objects in seconds
      • Max elements on disk: maximum number of objects to keep in the disk cache
    • Memory store eviction policy: if the memory store has a limited size, objects are evicted from memory when the store exceeds this limit. The following strategies are available:
      • LFU: The least frequently used objects are evicted.
      • LRU: The least recently used objects are evicted.
      • FIFO: Objects are evicted in the same order in which they were cached.
    • Description: free text area to describe the cache region.

Disabling Cache Regions

To disable the LSPS system cache regions, define the disabled cache regions in the <YOUR_CUSTOM_APP>-ejb/src/main/resources/cache-regions.properties file of your custom LSPS application.