OperatorFabric Getting Started

1. Prerequisites

To use OperatorFabric, you need a Linux OS with the following:

  • Docker installed, with 4 GB of free disk space

  • 4 GB of free RAM

2. Install and run server

To start OperatorFabric, you first need to clone the getting-started git repository:

git clone https://github.com/opfab/operatorfabric-getting-started.git

Launch the startserver.sh script in the server directory. Wait for all the services to start (it usually takes about one minute); startup is complete when the console prompt is available again.

Test the connection to the UI: open localhost:2002/ in a browser and use operator1_fr as login and test as password.

After logging in, you should see the following screen:

empty opfab screenshot

To stop the server, run the following in the server directory:

docker-compose down

3. Examples

For each example, relevant files and scripts are in the directory client/exampleX.

All examples assume you connect to the server from localhost (otherwise, adapt the provided scripts).

3.1. Example 1: Send and update a basic card

Go to the directory client/example1.

To receive the test cards, you need to configure a perimeter; perimeters are covered in more detail in Example 5.

Configure the required perimeter by executing the provided script:

./setupPerimeter.sh perimeter.json

Send a card using the provided script:

./sendCard.sh card.json

The result should be a 201 HTTP status.

See the result in the UI: you should see a card; if you click on it, you’ll see the detail.

detail card screenshot

3.1.1. Anatomy of the card

A card contains information regarding the publisher, the recipients, the process, the data to show…

More information can be found in the Card Structure section of the reference documentation.

{
        "publisher" : "message-publisher",
        "processVersion" : "1",
        "process" :"defaultProcess",
        "processInstanceId" : "hello-world-1",
        "state" : "messageState",
        "groupRecipients": ["Dispatcher"],
        "severity" : "INFORMATION",
        "startDate" : 1553186770681,
        "summary" : {"key" : "defaultProcess.summary"},
        "title" : {"key" : "defaultProcess.title"},
        "data" : {"message" :"Hello World !!! That's my first message"}
}
If you open the card’s json file, you will see '${current_date_in_milliseconds_from_epoch}' for the field 'startDate'. We use this placeholder so that the date of the card is the current day (or the next day in some other examples): the shell script that sends the card creates an environment variable with the current date, which is then injected into the json file.
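As an illustration, here is a minimal sketch of that substitution (the actual sendCard.sh may do it differently; the card.json path and output file name are assumptions):

```shell
# Current time in milliseconds since epoch (GNU date, as on the required Linux OS)
current_date_in_milliseconds_from_epoch=$(date +%s%3N)

# Replace the placeholder in the card template with the computed value
sed "s/\${current_date_in_milliseconds_from_epoch}/$current_date_in_milliseconds_from_epoch/" \
    card.json > card_to_send.json
```

The resulting card_to_send.json then contains a numeric startDate and can be posted to the server.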

3.1.2. Update the card

We can send a new version of the card (cardUpdate.json), changing:

  • the message (field data.message in the JSON file)

  • the severity (field severity in the JSON file)

{
        "publisher" : "message-publisher",
        "processVersion" : "1",
        "process"  :"defaultProcess",
        "processInstanceId" : "hello-world-1",
        "state" : "messageState",
        "groupRecipients": ["Dispatcher"],
        "severity" : "ALARM",
        "startDate" : 1553186770681,
        "summary" : {"key" : "defaultProcess.summary"},
        "title" : {"key" : "defaultProcess.title"},
        "data" : {"message" :":That's my second message"}
}

You can send the updated card with:

./sendCard.sh cardUpdate.json

The card should be updated on the UI.

3.1.3. Delete the card

You can delete the card with an HTTP DELETE request referencing the process and processInstanceId:

curl -s -X DELETE http://localhost:2102/cards/defaultProcess.hello-world-1

or use the provided script:

./deleteCard.sh

3.2. Example 2: Publish a new bundle

The way the card is displayed on the UI is defined by a Bundle containing templates and a process description.

The bundle structure is the following:

├── config.json : process description and global configuration
├── css : stylesheet files
├── i18n.json : internationalization file
└── template : handlebars templates for detail card rendering

The bundle is provided in the bundle directory of example2. It contains a new version of the bundle used in example1.

We just change the template and the stylesheet. Instead of displaying:

Message :  The message

we display:

You received the following message

The message

If you look at the template file (template/template.handlebars):

<h2> You received the following message </h2>

{{card.data.message}}

In the stylesheet css/style.css we just change the color value to red (#ff0000):

h2{
        color:#ff0000;
        font-weight: bold;
}

The global configuration is defined in config.json:

{
        "id":"defaultProcess",
        "version":"2",
        "states":{
                "messageState" : {
                        "templateName" : "template",
                        "styles" : [ "style" ]
                }
        }
}

To keep the old bundle, we create a new version by setting version to 2.

3.2.1. Package your bundle

Your bundle needs to be packaged in a tar.gz file; a script is available:

./packageBundle.sh

A file named bundle.tar.gz will be created.
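For reference, the packaging boils down to a single tar command with all files at the archive root (a sketch of what packageBundle.sh presumably runs; the exact script may differ):

```shell
# Package the bundle; config.json and the folders must sit at the root of the archive,
# otherwise the server will reject it (see the Troubleshooting section)
tar -czf bundle.tar.gz config.json i18n.json template/ css/
```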

3.2.2. Get a Token

To send the bundle you need to be authenticated. To get a token you can source the provided script:

source ../getToken.sh

This will run the following command:

curl -s -X POST -d "username=admin&password=test&grant_type=password&client_id=opfab-client" http://localhost:2002/auth/token

This should return a JSON response like this:

{"access_token":"eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICJSbXFOVTNLN0x4ck5SRmtIVTJxcTZZcTEya1RDaXNtRkw5U2NwbkNPeDBjIn0.eyJqdGkiOiIzZDhlODY3MS1jMDhjLTQ3NDktOTQyOC1hZTdhOTE5OWRmNjIiLCJleHAiOjE1NzU1ODQ0NTYsIm5iZiI6MCwiaWF0IjoxNTc1NTQ4NDU2LCJpc3MiOiJodHRwOi8va2V5Y2xvYWs6ODA4MC9hdXRoL3JlYWxtcy9kZXYiLCJhdWQiOiJhY2NvdW50Iiwic3ViIjoiYTNhM2IxYTYtMWVlYi00NDI5LWE2OGItNWQ1YWI1YjNhMTI5IiwidHlwIjoiQmVhcmVyIiwiYXpwIjoib3BmYWItY2xpZW50IiwiYXV0aF90aW1lIjowLCJzZXNzaW9uX3N0YXRlIjoiODc3NzZjOTktYjA1MC00NmQxLTg5YjYtNDljYzIxNTQyMDBhIiwiYWNyIjoiMSIsInJlYWxtX2FjY2VzcyI6eyJyb2xlcyI6WyJvZmZsaW5lX2FjY2VzcyIsInVtYV9hdXRob3JpemF0aW9uIl19LCJyZXNvdXJjZV9hY2Nlc3MiOnsiYWNjb3VudCI6eyJyb2xlcyI6WyJtYW5hZ2UtYWNjb3VudCIsIm1hbmFnZS1hY2NvdW50LWxpbmtzIiwidmlldy1wcm9maWxlIl19fSwic2NvcGUiOiJlbWFpbCBwcm9maWxlIiwic3ViIjoiYWRtaW4iLCJlbWFpbF92ZXJpZmllZCI6ZmFsc2UsInByZWZlcnJlZF91c2VybmFtZSI6ImFkbWluIn0.XMLjdOJV-A-iZrtq7sobcvU9XtJVmKKv9Tnv921PjtvJ85CnHP-qXp2hYf5D8TXnn32lILVD3g8F9iXs0otMAbpA9j9Re2QPadwRnGNLIzmD5pLzjJ7c18PWZUVscbaqdP5PfVFA67-j-YmQBwxiys8psF8keJFvmg-ExOGh66lCayClceQaUUdxpeuKFDxOSkFVEJcVxdelFtrEbpoq0KNPtYk7vtoG74zO3KjNGrzLkSE_e4wR6MHVFrZVJwG9cEPd_dLGS-GmkYjB6lorXPyJJ9WYvig56CKDaFry3Vn8AjX_SFSgTB28WkWHYZknTwm9EKeRCsBQlU6MLe4Sng","expires_in":36000,"refresh_expires_in":1800,"refresh_token":"eyJhbGciOiJIUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICIzZjdkZTM0OC05N2Q5LTRiOTUtYjViNi04MjExYTI3YjdlNzYifQ.eyJqdGkiOiJhZDY4ODQ4NS1hZGE0LTQwNWEtYjQ4MS1hNmNkMTM2YWY0YWYiLCJleHAiOjE1NzU1NTAyNTYsIm5iZiI6MCwiaWF0IjoxNTc1NTQ4NDU2LCJpc3MiOiJodHRwOi8va2V5Y2xvYWs6ODA4MC9hdXRoL3JlYWxtcy9kZXYiLCJhdWQiOiJodHRwOi8va2V5Y2xvYWs6ODA4MC9hdXRoL3JlYWxtcy9kZXYiLCJzdWIiOiJhM2EzYjFhNi0xZWViLTQ0MjktYTY4Yi01ZDVhYjViM2ExMjkiLCJ0eXAiOiJSZWZyZXNoIiwiYXpwIjoib3BmYWItY2xpZW50IiwiYXV0aF90aW1lIjowLCJzZXNzaW9uX3N0YXRlIjoiODc3NzZjOTktYjA1MC00NmQxLTg5YjYtNDljYzIxNTQyMDBhIiwicmVhbG1fYWNjZXNzIjp7InJvbGVzIjpbIm9mZmxpbmVfYWNjZXNzIiwidW1hX2F1dGhvcml6YXRpb24iXX0sInJlc291cmNlX2FjY2VzcyI6eyJhY2NvdW50Ijp7InJvbGVzIjpbIm1hbmFnZS1hY2NvdW50IiwibWFuYWdlLWFjY2
91bnQtbGlua3MiLCJ2aWV3LXByb2ZpbGUiXX19LCJzY29wZSI6ImVtYWlsIHByb2ZpbGUifQ.sHskPtatqlU9Z8Sfq6yvzUP_L6y-Rv26oPpykyPgzmk","token_type":"bearer","not-before-policy":0,"session_state":"87776c99-b050-46d1-89b6-49cc2154200a","scope":"email profile"}

Your token is the access_token value in the JSON, which the script will export to a $token environment variable.

The sendBundle.sh script below will use this variable.

The token is valid for 10 hours; after that, you will need to request a new one.
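Under the hood, all the script has to do is extract the access_token field from the JSON response and export it; a minimal sketch using only sed (the actual getToken.sh may differ):

```shell
# Request a token and keep the raw JSON response
response=$(curl -s -X POST -d "username=admin&password=test&grant_type=password&client_id=opfab-client" http://localhost:2002/auth/token)

# Extract the value of the access_token field and export it for later scripts
token=$(echo "$response" | sed 's/.*"access_token":"\([^"]*\)".*/\1/')
export token
```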

3.2.3. Send the bundle

You can now execute the sendBundle.sh script; it will send the bundle:

./sendBundle.sh

You should receive the following JSON in response, describing your bundle.

{"id":"defaultProcess","name":"process.name","version":"2","states":{"messageState":{"responseData":null,"acknowledgmentAllowed":"Always","color":null,"name":null,"description":null,"showDetailCardHeader":null,"userCard":null,"templateName":"template","styles":["style"],"type":null,"response":null}},"uiVisibility":null}

3.2.4. Send a card

You can send the following card to test your new bundle:

{
        "publisher" : "message-publisher",
        "processVersion" : "2",
        "process"  :"defaultProcess",
        "processInstanceId" : "hello-world-1",
        "state": "messageState",
        "groupRecipients": ["Dispatcher"],
        "severity" : "INFORMATION",
        "startDate" : 1553186770681,
        "summary" : {"key" : "defaultProcess.summary"},
        "title" : {"key" : "defaultProcess.title"},
        "data" : {"message":"Hello world in new version"}
}

To use the new bundle, we set processVersion to "2".

To send the card:

./sendCard.sh

You should see in the UI the detail card with the new template.

3.3. Example 3: Process with state

For this example, we will model the following process:

  • Step 1: A critical situation arises on the High Voltage grid

  • Step 2: The critical situation evolves

  • Step 3: The critical situation ends

To model this process in OperatorFabric, we will use a "Process" with "States", defined in the config.json of the bundle:

{
        "id":"criticalSituation",
        "name": "Critical situation process",
        "version":"1",
        "states":{
                "criticalSituation-begin" : {
                        "templateName" : "criticalSituationTemplate",
                        "styles" : [ "style" ],
                        "acknowledgmentAllowed": "Always"
                },
                "criticalSituation-update" : {
                        "templateName" : "criticalSituationTemplate",
                        "styles" : [ "style" ],
                        "acknowledgmentAllowed": "Always"
                },
                "criticalSituation-end" : {
                        "templateName" : "endCriticalSituationTemplate",
                        "styles" : [ "style" ],
                        "acknowledgmentAllowed": "Always"
                }
        }
}

You can see in the JSON that we define a process named "criticalSituation" with 3 states: criticalSituation-begin, criticalSituation-update and criticalSituation-end. For each state, we define the template and the stylesheet to use.

The title is a key which refers to the i18n.json file:

{
        "criticalSituation-begin":{
                "title":"CRITICAL SITUATION",
                "summary":" CRITICAL SITUATION ON THE GRID , SEE DETAIL FOR INSTRUCTION"
        },
        "criticalSituation-update":{
                "title":"CRITICAL SITUATION - UPDATE",
                "summary":" CRITICAL SITUATION ON THE GRID , SEE DETAIL FOR INSTRUCTION"
        },
        "criticalSituation-end":{
                "title":"CRITICAL SITUATION - END",
                "summary":" CRITICAL SITUATION ENDED"
        }

}

The templates can be found in the template directory.

Before sending cards it is necessary to configure the required perimeter by executing the provided script:

./setupPerimeter.sh perimeter.json

We can now send cards to simulate the process. First, we send a card at the beginning of the critical situation:

{
        "publisher" : "alert-publisher",
        "processVersion" : "1",
        "process"  :"criticalSituation",
        "processInstanceId" : "alert1",
        "state": "criticalSituation-begin",
        "groupRecipients": ["Dispatcher"],
        "severity" : "ALARM",
        "startDate" : 1553186770681,
        "summary" : {"key" : "criticalSituation-begin.summary"},
        "title" : {"key" : "criticalSituation-begin.title"},
        "data" : {"instruction":"Critical situation on the grid : stop immediatly all maintenance on the grid"}
}

The card refers to the process "criticalSituation" as defined in the config.json, and its state attribute is set to "criticalSituation-begin", the first step of the process, again as defined in the config.json. The card can be sent via the provided script:

./sendCard.sh card.json

Two other cards are provided to continue the process:

  • cardUpdate.json: the state is criticalSituation-update

  • cardEnd.json: the state is criticalSituation-end and the severity is set to "COMPLIANT"

You can send these cards:

./sendCard.sh cardUpdate.json
./sendCard.sh cardEnd.json

3.4. Example 4: Time Line

To view the card in the timeline, you need to set times in the card using the timeSpans attribute, as in the following card:

 {
        "publisher" : "scheduledMaintenance-publisher",
        "processVersion" : "1",
        "process"  :"maintenanceProcess",
        "processInstanceId" : "maintenance-1",
        "state": "planned",
        "groupRecipients": ["Dispatcher"],
        "severity" : "INFORMATION",
        "startDate" : 1553186770681,
        "summary" : {"key" : "maintenanceProcess.summary"},
        "title" : {"key" : "maintenanceProcess.title"},
        "data" : {
                "operationDescription":"Maintenance operation on the International France England (IFA) High Voltage line ",
                "operationResponsible":"RTE",
                "contactPoint":"By Phone : +33 1 23 45 67 89 ",
                "operationStartingTime":"Wed 11 dec 2019 8pm",
                "operationEndTime":"Thu 12 dec 2019 10am",
                "comment":"Operation has no impact on service"
                },
        "timeSpans" : [
        {"start" : 1576080876779},
        {"start" : 1576104912066}
            ]
}

For this example, we use a new publisher called "scheduledMaintenance-publisher". You won’t need to post the corresponding bundle to the businessconfig service, as it has been loaded in advance (only for this getting-started setup). If you want to take a look at its content, you can find it under server/businessconfig-storage/scheduledMaintenance-publisher/1.

Before sending the provided cards, you need to set the correct time values as epoch milliseconds in the json. For each value you set, you will have a dot on the timeline. In our example, the first dot represents the beginning of the maintenance operation, and the second its end.

For the example cards, the dates are calculated automatically in the provided sendCard.sh script.

It is possible to change the date values by editing the card’s json file. To get the dates in epoch milliseconds, you can use the following commands:

For the first date:

date -d "+ 60 minutes" +%s%N | cut -b1-13

And for the second

date -d "+ 120 minutes" +%s%N | cut -b1-13
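Putting the two commands together, a sketch that computes both values and prints the corresponding timeSpans fragment (variable names are illustrative):

```shell
# First dot: one hour from now, in epoch milliseconds (GNU date)
start1=$(date -d "+ 60 minutes" +%s%N | cut -b1-13)
# Second dot: two hours from now
start2=$(date -d "+ 120 minutes" +%s%N | cut -b1-13)

echo "\"timeSpans\" : [ {\"start\" : $start1}, {\"start\" : $start2} ]"
```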

Before sending cards it is necessary to configure the required perimeter by executing the provided script:

./setupPerimeter.sh perimeter.json

To send the card, use the provided script in the example4 directory:

./sendCard.sh card.json

A second card (card2.json) is provided as an example; as before, you can optionally change the time values in the json file and then send it:

./sendCard.sh card2.json

This time the severity of the card is ALARM: you should see the dot in red on the timeline.

example 4 screenshot

3.5. Example 5: Card routing mechanism

3.5.1. Card sent to a group

As we saw previously, if a card is sent to a group, then you need to be a member of the group and have the process / state of the card within the group’s perimeter to receive it.

3.5.2. Card sent to an entity

If a card is sent to an entity, then you must be a member of this entity and have the process / state of the card within the user’s perimeter. As the perimeters are attached to groups, the user must therefore be a member of a group attached to this perimeter.

Let’s send this card:

{
    "publisher" : "message-publisher",
    "processVersion" : "1",
    "entityRecipients" : ["ENTITY1_FR"],
    "process" :"defaultProcess",
    "processInstanceId" : "cardExample5",
    "state" : "messageState1",
    "severity" : "INFORMATION",
    "startDate" : 1553186770681,
    "summary" : {"key" : "defaultProcess.summary"},
    "title" : {"key" : "defaultProcess.title"},
    "data" : {"message" : "Hello World !!! Here is a message for ENTITY1_FR"}
}

Use the provided script:

./sendCard.sh cardSentToEntity.json

The result should be a 201 HTTP status.

Look at the result in the UI, you should not be able to see the card.

To receive the card, you need to create a perimeter, and to do that you need to be authenticated. To get a token, you can source the provided script:

source ../getToken.sh

Now let’s create this perimeter:

{
  "id" : "getting-startedPerimeter",
  "process" : "defaultProcess",
  "stateRights" : [
    {
      "state" : "messageState1",
      "right" : "Receive"
    }
  ]
}

You can use this command line:

curl -X POST http://localhost:2103/perimeters -H "Content-type:application/json" -H "Authorization:Bearer $token" --data @perimeter.json

or use the provided script:

./createPerimeter.sh perimeter.json

The result should be a 201 HTTP status, and a JSON object such as:

{"id":"getting-startedPerimeter","process":"defaultProcess","stateRights":[{"state":"messageState1","right":"Receive"}]}

Now let’s attach this perimeter to the Dispatcher group. You can use this command line:

curl -X PUT http://localhost:2103/perimeters/getting-startedPerimeter/groups -H "Content-type:application/json" -H "Authorization:Bearer $token" --data "[\"Dispatcher\"]"

or use the provided script:

./putPerimeterForGroup.sh

The result should be a 200 HTTP status.

Now, if you refresh the UI or send the card again, you should see the card.

3.5.3. Card sent to a group and an entity

If a card is sent to a group and an entity, then to receive it the user must be a member of both the entity and the group, and have the process / state of the card within the group’s perimeter.

Let’s send this card (for ENTITY1_FR and the Dispatcher group, with process/state not in the user’s perimeter):

{
        "publisher" : "message-publisher",
        "processVersion" : "1",
        "entityRecipients" : ["ENTITY1_FR"],
        "process"  :"defaultProcess",
        "processInstanceId" : "cardExample5_1",
        "state": "messageState2",
        "groupRecipients": ["Dispatcher"],
        "severity" : "INFORMATION",
        "startDate" : 1553186770681,
        "summary" : {"key" : "defaultProcess.summary"},
        "title" : {"key" : "defaultProcess.title"},
        "data" : {"message" : "Hello World !!! Here is a message for ENTITY1_FR and group Dispatcher - process/state not in operator1_fr perimeter "}
}

Use the provided script:

./sendCard.sh cardSentToEntityAndGroup_1.json

The result should be a 201 HTTP status.

Look at the result in the UI, you should not be able to see the card.

Now let’s send this card (for ENTITY1_FR and the Dispatcher group, with process/state in the user’s perimeter):

{
        "publisher" : "message-publisher",
        "processVersion" : "1",
        "entityRecipients" : ["ENTITY1_FR"],
        "process"  :"defaultProcess",
        "processInstanceId" : "cardExample5_2",
        "state": "messageState",
        "groupRecipients": ["Dispatcher"],
        "severity" : "INFORMATION",
        "startDate" : 1553186770681,
        "summary" : {"key" : "defaultProcess.summary"},
        "title" : {"key" : "defaultProcess.title"},
        "data" : {"message" : "Hello World !!! Here is a message for ENTITY1_FR and group Planner - process/state in operator1_fr perimeter "}
}

Use the provided script:

./sendCard.sh cardSentToEntityAndGroup_2.json

The result should be a 201 HTTP status.

Look at the result in the UI, you should be able to see the card.

3.6. Example 6: Sending cards from the UI

3.6.1. Updating the perimeter

In the top right corner, to the left of your username, there is the button to send a new card.

example 6 screen shot

If you click on it now, you shouldn’t be able to send a card, because none of the groups you’re part of has the right to send one. So first we need to set up a new perimeter with the correct rights and associate it with one of your groups.

{
        "id" : "example6-Perimeter",
        "process" : "defaultProcess",
        "stateRights" : [
                {
                        "state" : "messageState",
                        "right" : "ReceiveAndWrite"
                }
        ]
}

Use the provided script:

./setupPerimeter.sh perimeter.json

3.6.2. Creating the template to send a card

Now you need to update the bundle to include a template for the card sending screen. In the config.json file of the bundle, it is necessary to define the userCard object.

{
        "id":"defaultProcess",
        "name": "Message process",
        "version":"2",
        "states": {
                "messageState": {
                  "name": "Message",
                  "description": "Message",
                  "userCard" : {
                        "template" : "usercard_message",
                        "startDateVisible": false,
                        "endDateVisible" : false,
                        "lttdVisible" : false,
                        "expirationDateVisible" : false
                  },
                  "templateName": "message"
                }
        }
}

The bundle folder now contains usercard_message.handlebars:

<div class="opfab-textarea">
        <label> MESSAGE </label>
        <textarea id="message" name="message" placeholder="Write something.."
                style="width:100%">{{card.data.message}}</textarea>
</div>


<script>
    usercardTemplateGateway.getSpecificCardInformation = function () {
        const message = document.getElementById('message').value;
        if (message.length < 1) return {valid: false, errorMsg: 'You must provide a message'};
        const card = {
            summary: {key: "message.summary"},
            title: {key: "message.title"},
            data: {message: message}
        };
        return {valid: true, card: card};
    }
</script>

You can package it and send it to the server with the provided scripts:

./packageBundle.sh && ./sendBundle.sh

Now if you reload the page, you should be able to send a card. Don’t forget to send it to an entity you’re a member of, for instance Control Center FR North, so that you receive it.

example 6 screen shot #2

3.6.3. Customize the card sending template

To go further, you can check out the folder named bundle_updated to see a template where it is possible to add an object field to the message, similar to the subject of an e-mail.

You can load the updated bundle with this command line, then refresh the page to see the updated UI:

./packageBundle_updated.sh && ./sendBundle.sh

example 6 screen shot #3

The object attribute is defined in the userCard template in the usercard_message.handlebars file.

        <div class="opfab-textarea">
            <label> Object </label>
            <textarea id="object" name="object" placeholder="Write something.."
                style="width:100%">{{card.data.object}}</textarea>
        </div>
        <br/>

        <div class="opfab-textarea">
            <label> MESSAGE </label>
            <textarea id="message" name="message" placeholder="Write something.."
                style="width:100%">{{card.data.message}}</textarea>
        </div>

<script>
    usercardTemplateGateway.getSpecificCardInformation = function () {
        const message = document.getElementById('message').value;
        const object = document.getElementById('object').value;
        if (message.length < 1) return {valid: false, errorMsg: 'You must provide a message'};
        const card = {
            summary: {key: "message.summary"},
            title: {key: "message.title"},
            data: {message: message, object: object}
        };
        return {valid: true, card: card};
    }
</script>

If you look at the message.handlebars file in the bundle, you will see how the template was changed to take the object attribute into account:

<h2><b>Object:</b> {{card.data.object}}</h2>

{{card.data.message}}

See the card documentation for further information on cards.

3.7. Example 7: Message card using built-in templates

Instead of coding your own templates for message cards or user cards, you can use opfab built-in templates if they suit your needs. In the following example, we are going to modify the templates of the previous bundle (the message card).

If you want to show only a simple message, you can use the message built-in template. To achieve this, first put the following in your handlebars file message.handlebars:

<opfab-message-card> </opfab-message-card>

The built-in template expects the message to be stored in the data.message field of the card.

Then put the following in your handlebars file message_usercard.handlebars:

<opfab-message-usercard> </opfab-message-usercard>

The message will be stored in the data.message field of the card.

You can package the bundle and send it to the server with the provided scripts:

./packageBundle.sh && ./sendBundle.sh

Now if you reload the page, you should be able to create a card using the built-in template.

example 7 screen shot #1

You can change the text header by providing the message-header attribute:

<opfab-message-card  message-header="a new header"> </opfab-message-card>

By using attributes, you can set some parameters regarding recipients; see the common attributes for user cards in the reference documentation.

You can load the updated bundle with this command line, then refresh the page to see the updated UI:

./packageBundle_updated.sh && ./sendBundle.sh

example 7 screen shot #2

3.8. Example 8: Question card using built-in templates

Another example of templating is a question/response use case. Opfab allows you to send a response which will be attached to the card. This mechanism can be illustrated with the built-in template for question cards.

To show a question and see user responses, first put the following in your handlebars file question.handlebars:

<opfab-question-card> </opfab-question-card>

The built-in template expects the question to be stored in the data.question field of the card.

Then put the following in your handlebars file question_usercard.handlebars:

<opfab-question-usercard> </opfab-question-usercard>

The question will be stored in the data.question field of the card.

Like the message built-in template, you can set default attributes; see the common attributes for user cards in the reference documentation.

Use the provided script to add the rights to send question cards:

./setupPerimeter.sh perimeter.json

You can package the bundle and send it to the server with the provided scripts:

./packageBundle.sh && ./sendBundle.sh

Example of using a question card with the previous built-in template:

First, operator1_fr logs in and creates the question card:

example 8 screen shot #1

Then, operator3_fr logs in, reads the question card and answers it:

example 8 screen shot #2

Finally, operator1_fr sees the answer from operator3_fr:

example 8 screen shot #3

4. Troubleshooting

4.1. My bundle is not loaded

The server sends {"status":"BAD_REQUEST","message":"unable to open submitted file","errors":["Error detected parsing the header"]} despite correct HTTP headers.

The uploaded bundle is corrupted. Test your bundle in a terminal (Linux solution).

Example for a bundle archive named MyBundleToTest.tar.gz giving the mentioned error when uploaded:

tar -tzf MyBundleToTest.tar.gz >/dev/null
tar: This does not look like a tar archive
tar: Skipping to next header
tar: Exiting with failure status due to previous errors

4.1.1. Solution

Extract the content if possible and re-compress it into a correct archive format (tar.gz).

4.2. I can’t upload my bundle

The server responds with a message like the following: {"status":"BAD_REQUEST","message":"unable to open submitted file","errors":["Input is not in the .gz format"]}

The bundle has been compressed using an unsupported format.

4.2.1. Format verification

4.2.1.1. Linux solution

Command line example to verify the format of a bundle archive named MyBundleToTest.tar.gz (which gives the mentioned error when uploaded):

 tar -tzf MyBundleToTest.tar.gz >/dev/null

which in this case should return the following messages:

gzip: stdin: not in gzip format
tar: Child returned status 1
tar: Error is not recoverable: exiting now

4.2.2. Solution

Use the tar.gz format for the archive compression. The shell command is tar -czvf MyBundleToTest.tar.gz config.json i18n.json template/ css/ for a bundle containing templates and css files.

4.3. My bundle is rejected due to internal structure

The server sends {"status":"BAD_REQUEST","message":"Incorrect inner file structure","errors":["$OPERATOR_FABRIC_INSTANCE_PATH/d91ba68c-de6b-4635-a8e8-b58fff77dfd2/config.json (Aucun fichier ou dossier de ce type)"]} (the French part of the message means "No such file or directory")

Where $OPERATOR_FABRIC_INSTANCE_PATH is the folder where businessconfig files are stored server side.

4.3.1. Reason

The internal file structure of your bundle is incorrect. The config.json file and the folders need to be at the first level of the archive.

4.3.2. Solution

Add or organize the files and folders of the bundle to fit the Businessconfig bundle requirements.

4.4. My template is not used

It needs to be declared in the config.json of the bundle.

4.4.1. Solution

Add or verify the name of the templates declared in the config.json file of the bundle.

4.5. My value is not displayed in the detail template

There are several possibilities:

  • the path of the data used in the template is incorrect;

  • the value is not wrapped in exactly two pairs of curly braces (example: {{card.data.myValue}})

OperatorFabric Architecture

5. Introduction

The aim of this document is to describe the architecture of the solution, first by defining the business concepts it deals with and then showing how this translates into the technical architecture.

6. Business Architecture

OperatorFabric is based on the concept of cards, which contain data regarding events that are relevant for the operator. A third party tool publishes cards and the cards are received on the screen of the operators. Depending on the type of the cards, the operator can send back information to the third party via a "response card".

6.1. Business components

functional diagram

To do the job, the following business components are defined:

  • Card Publication : this component receives cards from third-party tools or users

  • Card Consultation : this component delivers cards to the operators and provides access to all cards exchanged (archives)

  • Card rendering and process definition : this component stores the information for card rendering (templates, internationalization, …) and a light description of the associated process (states, response cards, …). This configuration data can be provided either by an administrator or by a third-party tool.

  • User Management : this component is used to manage users, groups, entities and perimeters.

  • External devices : this optional component permits sending alarms to an external device instead of playing a sound on the user's computer. For now, only one driver exists, using the Modbus protocol.

  • Cards Reminder : this component handles card reminders.

  • Cards External Diffusion : this optional component allows sending card notifications via email.

  • Supervisor : this optional component allows monitoring of user connections and card acknowledgements.

6.2. Business objects

The business objects can be represented as follows:

business objects diagram
  • Card : the core business object, which contains the data to show to the user (or operator)

  • Publisher : the emitter of the card (be it a third-party tool or an entity)

  • User : the operator receiving cards and responding via response cards

  • Entity : an entity (containing a list of users) , it can be used to model organizations (examples : control center, company , department…​ ) . An entity can be part of another entity or even of several entities.

  • Group : a group (containing a list of users) , it can be used to model roles in organizations setting the group type to 'ROLE' (examples : supervisor, dispatcher …​ ) and it can be used to define the rights on business processes by setting the group type to 'PERMISSION' (example: GROUP_ALLOWED_TO_SEND_CARD_PROCESS_X)

  • Process : the process the card is about

  • Process Group : a way to group processes on the user interface

  • State : the step in the process

  • Perimeter : for a defined group the visibility of a card for a specific process and state

  • Card Rendering : data for card rendering

A card can have a parent card; in this case, the card is called a child card.
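As an illustration of these objects, a group with a 'ROLE' type could be declared as follows (a sketch: the field names follow the Users API shown later in this document, but the id and description here are hypothetical):

```json
{
  "id": "Dispatcher",
  "name": "Dispatcher group",
  "description": "Role group for dispatcher operators",
  "type": "ROLE"
}
```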

7. Technical Architecture

The architecture is based on independent modules. All business services are accessible via REST API.

functional diagram

7.1. Business components

We find here the business components described above:

  • A "UI" component stores the static pages and the UI code that is downloaded by the browser. The UI is based on Angular, with Handlebars for card templating.

  • The business component named "Card rendering and process definition" is known at the technical level as the "Businessconfig service". This service receives card rendering and process definitions as a bundle. The bundle is a tar.gz file containing:

    • a json process configuration file (containing states & actions)

    • templates for rendering

    • stylesheets

    • internationalization information

Business components are based on SpringBoot or Node.js and are packaged via Docker.

Spring WebFlux is used to deliver cards in a reactive, non-blocking way.

7.2. Technical components

7.2.1. Gateway

It provides a filtered view of the APIs and serves static pages for external access through browsers or other http-compliant clients. It provides the routing for accessing the services from outside. It is an nginx server packaged with Docker; this component also contains the Angular UI component.

7.2.2. Broker

The broker is used to share information asynchronously across all services. It is implemented via RabbitMQ.

7.2.3. Authentication

The architecture provides a default authentication service via Keycloak, but it can delegate authentication to an external provider. Authentication is done through OAuth2; three flows are supported: implicit, authorization code and password.
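As a minimal sketch of the password flow (the port, realm and client id shown here are assumptions that depend on your Keycloak configuration), a token is obtained with a request of the form:

```
POST http://localhost:89/auth/realms/dev/protocol/openid-connect/token
Content-Type: application/x-www-form-urlencoded

grant_type=password&client_id=opfab-client&username=operator1_fr&password=test
```

The response is a json object whose access_token field is then passed to the services as a Bearer token.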

7.2.4. Database

The cards are stored in a MongoDb database. The bundles are stored in a file system.

8. Software architecture

The current software does not yet fully follow the architecture described here, it is a work in progress.

8.1. Back Architecture

The main idea is to make the business code as technology independent as possible. This means that the business code should be independent of the Spring Framework or Node.js for example. Inspired by the hexagonal architecture, this leads to the following simplified view:

back software architecture diagram

Unit testing consists of testing the business code, using mock objects to simulate the servers (MongoDB, RabbitMQ, …​)

8.2. Front Architecture

The main idea is to make the business code as technology independent as possible. This means that the business code should be independent of the Angular Framework for example. Inspired by the hexagonal architecture, this leads to the following simplified view:

front software architecture diagram

The UI component shall contain as little business code as possible.

Unit testing consists of testing the business code, using mock objects to simulate the servers (in app/tests/mocks)

OperatorFabric Reference Documentation

The aim of this document is to:

  • Explain what OperatorFabric is about and define the concepts it relies on

  • Give a basic tour of its features from a user perspective

9. Introduction

To perform their duties, operators have to interact with multiple applications (performing actions, watching for alerts, etc.), which can prove difficult if there are too many of them.

The idea is to aggregate all the notifications from all these applications into a single screen, and to allow the operator to act on them if needed.

Feed screen layout

These notifications are materialized by cards sorted in a feed according to their period of relevance and their severity. When a card is selected in the feed, the right-hand pane displays the details of the card.

In addition, the cards will also translate as events displayed on a timeline at the top of the screen.

Part of the value of OperatorFabric is that it makes the integration very simple on the part of the third-party applications. To start publishing cards to users in an OperatorFabric instance, all they have to do is:

  • Register as a publisher through the "Businessconfig" service and provide a "bundle" containing handlebars templates defining how cards should be rendered, i18n info etc.

  • Publish cards as json containing card data through the card publication API

OperatorFabric will then:

  • Dispatch the cards to the appropriate users (by computing the actual users who should receive the card from the recipients rules defined in the card)

  • Take care of the rendering of the cards

  • Display relevant information from the cards in the timeline

A card is not only information: it can also carry questions the operator has to answer. When the operator responds, a card is emitted to the sender of the initial card, and the response can be seen by other operators.

Feed screen layout

It is also possible for users to directly send cards to other users, using predefined card templates.

The OperatorFabric user interface runs in a browser (recent versions of Chrome, Firefox and Edge are supported).

The supported screen resolutions are :

  • Resolutions greater than or equal to 1680x1050.

  • Resolutions with a width between 450-900px and a minimum height of 700px.

10. Sending cards

The Cards Publication Service exposes a REST API through which third-party applications, or "publishers", can post cards to OperatorFabric. It then handles those cards:

  • Time-stamping them with a "publishDate"

  • Sending them to the message broker (RabbitMQ) to be delivered in real time to the appropriate operators

  • Persisting them to the database (MongoDB) for later consultation

10.1. Card Structure

Cards are represented as json objects. The technical design of cards is described in the cards api documentation. A card corresponds to the state of a process in OperatorFabric.

10.1.1. Technical Information of the card

Those attributes are used by OperatorFabric to manage how cards are stored, to whom and when they’re sent.

10.1.1.1. Mandatory information

Below, each json technical key is given in parentheses after the title.

Publisher (publisher)

The publisher field bears the identifier of the emitter of the card, be it an entity or an external service.

Process (process)

This field indicates which process the card is attached to. This information is used to resolve the presentation resources (bundle) used to render the card and card details.

Process Version (processVersion)

The rendering of cards of a given process can evolve over time. To allow for this while making sure previous cards remain correctly handled, OperatorFabric can manage several versions of the same process. The processVersion field indicates which version of the process should be used to retrieve the presentation resources (i18n, templates, etc.) to render this card.

Process Instance Identifier (processInstanceId)

A card is associated to a given process, which defines how it is rendered, but it is also more precisely associated to a specific instance of this process. The processInstanceId field contains the unique identifier of the process instance.

State in the process (state)

The card represents a specific state in the process. In addition to the process, this information is used to resolve the presentation resources used to render the card and card details.

Start Date (startDate)

Start date of the active period of the card (process business time).

Severity (severity)

The severity is a core principle of the OperatorFabric card system. There are 4 severities available, and a color is associated in the GUI with each severity. Here are the details about severities and their meaning for OperatorFabric:

  1. ALARM: represents a critical state of the associated process, needing an action from the operator. In the UI, the card is red;

  2. ACTION: the associated process needs an action from operators in order to evolve correctly. In the UI, the card is orange;

  3. COMPLIANT: the process related to the card is in a compliant status. In the UI, the card is green;

  4. INFORMATION: gives information to the operator. In the UI, the card is blue.

Title (title)

This attribute is displayed as the header of a card in the feed of the GUI. It is the main user-destined information of a card. The value refers to an i18n key used to localize this information.

Summary (summary)

This attribute is displayed as a description of a card in the feed of the GUI when the card is selected by the operator. It completes the information of the card title. The value refers to an i18n key used to localize this information.
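For example (a sketch: the key and parameter names are illustrative), a title can reference an i18n entry and pass it a parameter used by the translation:

```json
"title": {
  "key": "process.title",
  "parameters": { "value": "instance-42" }
}
```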

10.1.1.2. Optional information
End Date (endDate)

End date of the active period of the card (process business time).

Expiration Date (expirationDate)

Expiration date of the active period of the card (process business time). When the expiration date has passed, the card will be automatically removed from the card feed.

Tags (tags)

Tags are intended as an additional way to filter cards in the feed of the GUI.

Grouping cards is an experimental feature

Tags can also be used to group cards together in the card feed. When feed.enableGroupedCards is enabled in the web-ui.json configuration file, cards that have the same tags are grouped together. In the feed window, only the top card will be visible and can be clicked to show the cards with the same tags.

Closed grouped cards Open grouped cards
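For instance, two cards carrying the same tag (the tag name here is hypothetical) would be grouped together in the feed when the feature is enabled:

```json
"tags": ["network-incident"]
```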

EntityRecipients (entityRecipients)

Used to send cards to entities: all users who are members of the listed entities and who have the right for the process/state of the card will receive it.

GroupRecipients (groupRecipients)

Used to send cards to groups: all users who are members of the groups will receive it. If this field is used in conjunction with entityRecipients, to receive the cards:

  • users must be members of one of the entities AND one of the groups to receive the cards.

OR

  • users must be members of one of the entities AND have the right for the process/state of the card.

UserRecipients (userRecipients)

Used to send cards directly to users without using groups or entities for card routing.

Last Time to Decide (lttd)

Sets the moment until which a response to the card is possible. After this moment, the response button is no longer usable. When the lttd time approaches, a clock showing the remaining time is visible on the card in the feed. The lttd can also be set for cards that don't expect any response.

SecondsBeforeTimeSpanForReminder (secondsBeforeTimeSpanForReminder)

Sets how long before the event defined by the card a reminder should be triggered (see Card reminder).

ToNotify (toNotify)

Boolean attribute. If the card must not be displayed in the feed and in the monitoring screen, this field must be set to false. In that case, the card is stored only in the archivedCards collection and not in the cards collection.

Publisher type (publisherType)
  • EXTERNAL - The sender is an external service

  • ENTITY - The sender of the card is the user on behalf of the entity

Representative (representative)

Used when sending a card on behalf of an entity or a publisher; contains the unique ID of the represented entity or publisher.

Representative Type (representativeType)
  • EXTERNAL - The representative is an external service

  • ENTITY - The representative is an entity

Geographical information (wktGeometry and wktProjection)
Geographical information is an experimental feature

You can add geographical location in wktGeometry and the projection in wktProjection fields.

When feed.enableMap is enabled in the web-ui.json configuration file and the card is visible in the feed, a geographical map is drawn. When the card has its wktGeometry set, the location is highlighted on the map. Two geometrical shapes are supported: POINT, which shows a circle on the map, and POLYGON, which draws the specified area on the map. For example, to show a circle based on the card's location:

"wktGeometry": "POINT (5.8946407 51.9848624)",
"wktProjection": "EPSG:4326",

Example to highlight an area on the map:

"wktGeometry": "POLYGON ((5.5339097 52.0233042,  5.7162495 51.7603784, 5.0036701 51.573684, 4.8339214 52.3547498, 5.5339097 52.0233042))",
"wktProjection": "EPSG:4326",

The specifications of the Well-known Text Representation of coordinate reference systems can be found at WKT Specification.

Only the POINT and POLYGON are supported.
Actions (actions)

A list of predetermined actions that will be executed upon receiving the card. The available actions are:

  • KEEP_CHILD_CARDS : used to keep child cards when the parent card is modified.

  • PROPAGATE_READ_ACK_TO_PARENT_CARD : used only for response cards. When the child card is received, the parent card is considered 'unread' and 'not acknowledged' until the user reads or acknowledges it again.
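In a card, this translates to a simple list of action names, for example:

```json
"actions": ["KEEP_CHILD_CARDS", "PROPAGATE_READ_ACK_TO_PARENT_CARD"]
```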

10.1.1.3. Business period

We define the business period as running from startDate to endDate. The card will be visible on the UI if the business period overlaps the user's chosen period (i.e. the period selected on the timeline). If endDate is not set, the card will be visible as soon as the startDate falls between the start and end dates of the chosen period.
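Dates in cards are epoch milliseconds. As an illustration, a business period spanning the first day of January 2019 (consistent with the startDate used in the examples below) could be:

```json
"startDate": 1546297200000,
"endDate": 1546383600000
```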

10.1.1.4. Store information
uid (uid)

Unique identifier of the card in the OperatorFabric system. This attribute is always set by OperatorFabric.

id (id)

Identifier of the card in the OperatorFabric system, determined by OperatorFabric as follows : process.processInstanceId

Publish Date (publishDate)

Indicates when the card has been registered in OperatorFabric system. This is technical information exclusively managed by OperatorFabric.

10.1.2. User destined Information of the card

There are two kinds of user-destined information in a card. Some are restricted to the card format; others are defined by the publisher, as long as they are encoded in json format.

10.1.2.1. in Card Format
Title (title)

See Title .

Summary (summary)

See Summary .

10.1.2.2. Custom part
Data (data)

Determines where custom information is stored. The content of this attribute is purely the publisher's choice. This content, as long as it is in json format, can be used to display details. For the way the details are displayed, see below.

You must not use dots in json field names. If you do, the card will be refused with the following message : "Error, unable to handle pushed Cards: Map key xxx.xxx contains dots but no replacement was configured!"
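For example, a (hypothetical) key such as meter.reading must be renamed before publishing, e.g. with an underscore:

```json
"data": { "meter_reading": 42 }
```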

10.1.3. Presentation Information of the card

10.1.3.1. TimeSpans (timeSpans)

When the simple startDate and endDate are not enough to characterize your process business times, you can add a list of TimeSpan to your card. TimeSpans are rendered in the timeline component as cluster bubbles. This has no effect on the feed content.

example :

to display the card two times in the timeline you can add two TimeSpan to your card:

{
	"publisher":"Dispatcher",
	"publisherVersion":"0.1",
	"process":"process",
	"processInstanceId":"process-000",
	"startDate":1546297200000,
	"severity":"INFORMATION",
	...
	"timeSpans" : [
        {"start" : 1546297200000},
        {"start" : 1546297500000}
    ]

}

In this sample, the card will be displayed twice in the timeline. The card start date will be ignored.

For timeSpans, you can specify an end date, but it is not taken into account by OperatorFabric (it was intended for future use and will be deprecated).

10.2. Cards Examples

Before detailing the content of cards, let's show what cards look like through a few json examples.

10.2.1. Minimal Card

The OperatorFabric Card specification defines mandatory attributes, but some optional attributes are needed for cards to be useful in OperatorFabric. Let's clarify those points through a few examples of minimal cards and what happens when they are used as is.

10.2.1.1. Rules for receiving cards

Whatever the recipient(s) of the card (user directly, group and/or entity), the user must have the receive right on the process/state of the card to receive it (Receive or ReceiveAndWrite). So the rules for receiving cards are :

1) If the card is sent to user1, the card is received and visible for user1 if he has the receive right for the corresponding process/state

2) If the card is sent to GROUP1 (or ENTITY1_FR), the card is received and visible for user if all the following is true :

  • he’s a member of GROUP1 (or ENTITY1_FR)

  • he has the receive right for the corresponding process/state

3) If the card is sent to ENTITY1_FR and GROUP1, the card is received and visible for user if all the following is true :

  • he’s a member of ENTITY1_FR (either directly or through one of its children entities)

  • he’s a member of GROUP1

  • he has the receive right for the corresponding process/state

In this chapter, when we talk about receive right, it means Receive or ReceiveAndWrite.

10.2.1.2. Send to One User

The following card contains only the mandatory attributes.

{
	"publisher":"TEST_PUBLISHER",
	"processVersion":"0.1",
	"process":"process",
	"processInstanceId":"process-000",
	"state":"myState",
	"startDate":1546297200000,
	"severity":"INFORMATION",
	"title":{"key":"card.title.key"},
	"summary":{"key":"card.summary.key"},
	"userRecipients": ["operator1_fr"]

}

This is an information card about the process instance process-000 of process process, sent by TEST_PUBLISHER. The title and the summary refer to i18n keys defined in the associated i18n file of the process. This card is displayable since January 1st, 2019 and should only be received by the user with the operator1_fr login (provided that this user has the receive right on this process/state).

10.2.1.3. Send to several users
Simple case (sending to a group)

The following example is nearly the same as the previous one except for the recipient.

{
	"publisher":"TEST_PUBLISHER",
	"processVersion":"0.1",
	"process":"process",
	"processInstanceId":"process-000",
	"state":"myState",
	"startDate":1546297200000,
	"severity":"INFORMATION",
	"title":{"key":"card.title.key"},
	"summary":{"key":"card.summary.key"},
	"groupRecipients": ["Dispatcher"]
}

Here, the recipient is a group, the Dispatcher. So all users who are members of this group and who have receive right on the process/state of the card will receive it.

Simple case (sending to an entity)

The following example is nearly the same as the previous one except for the recipient.

{
	"publisher":"TEST_PUBLISHER",
	"processVersion":"0.1",
	"process":"process",
	"processInstanceId":"process-000",
	"state":"myState",
	"startDate":1546297200000,
	"severity":"INFORMATION",
	"title":{"key":"card.title.key"},
	"summary":{"key":"card.summary.key"},
	"entityRecipients" : ["ENTITY1_FR"]
}

Here, the recipient is an entity, ENTITY1_FR, and there is no group recipient anymore. So all users who are members of this entity and who have a receive right for the process/state of the card will receive it. More information on perimeters can be found in the user documentation.

Example : Given this perimeter :

{
    "id" : "perimeter1",
    "process" : "process",
    "stateRights" : [
        {
            "state" : "myState",
            "right" : "Receive"
        },
        {
            "state" : "myState2",
            "right" : "ReceiveAndWrite"
        }
    ]
}

Given this group :

{
    "id": "group1",
    "name": "group number 1",
    "description": "group number 1 for documentation example"
}

Perimeters can only be linked to groups, so let's link the perimeter perimeter1 to the group group1. You can do this with the following command line, for example ($token is your access token) :

curl -X PUT http://localhost:2103/perimeters/perimeter1/groups -H "Content-type:application/json" -H "Authorization:Bearer $token" --data "[\"group1\"]"

Then you can see group1 is now :

{
    "id": "group1",
    "name": "group number 1",
    "description": "group number 1 for documentation example",
    "perimeters": ["perimeter1"]
}

If the connected user is a member of group1, then he has a Receive right on process/state process/myState (and also on process/myState2). So if the user is also a member of ENTITY1_FR, he will receive the card.

Simple case (sending to a group and an entity)

The following example is nearly the same as the previous one except for the recipient.

{
	"publisher":"TEST_PUBLISHER",
	"processVersion":"0.1",
	"process":"process",
	"processInstanceId":"process-000",
	"state":"myState",
	"startDate":1546297200000,
	"severity":"INFORMATION",
	"title":{"key":"card.title.key"},
	"summary":{"key":"card.summary.key"},
	"groupRecipients": ["Dispatcher"],
	"entityRecipients" : ["ENTITY1_FR"]
}

Here, the recipients are a group and an entity: the Dispatcher group and the ENTITY1_FR entity. To receive the card, the user must be a member of both ENTITY1_FR and Dispatcher and must have the receive right for the corresponding process/state.

Complex case

If this card needs to be viewed by a user who is not in the Dispatcher group, the recipient definition can be tuned more precisely. If operator2_fr also needs to see this card, the recipient definition could be (the following code details only the recipient part):

"groupRecipients": ["Dispatcher"],
"userRecipients": ["operator2_fr"]

So here, all the users of the Dispatcher group will receive the INFORMATION card, as will the operator2_fr user.

Another example: if a card is destined to the operators of Dispatcher and Planner and also needs to be seen by the admin, the recipient configuration looks like:

"groupRecipients": ["Dispatcher", "Planner"],
"userRecipients": ["admin"]

10.2.2. Regular Card

The previous cards carried almost no information. In fact, cards are intended to contain more than a title and a summary; the optional data attribute exists for that. It may contain any json object: the creator of the card is free to put in any information needed, as long as it is in json format.

10.2.2.1. Full of Hidden data

For this example we will reuse our previous Dispatcher group example, with a data attribute containing a json object with two attributes: stringExample and numberExample.

{
	"publisher":"TEST_PUBLISHER",
	"processVersion":"0.1",
	"process":"process",
	"processInstanceId":"process-000",
	"state":"myState",
	"startDate":1546297200000,
	"severity":"INFORMATION",
	"title":{"key":"card.title.key"},
	"summary":{"key":"card.summary.key"},
	"userRecipients": ["operator1_fr"],
	"data":{
		"stringExample":"This is a not so random string of characters.",
		"numberExample":123
		}

}

This card contains some data, but when it is selected in the feed, nothing more happens than with the previous example card, because there is no rendering configuration.

10.2.2.2. Fully useful

When a card is selected in the feed (of the GUI), the data is displayed in the detail panel. The way details are formatted depends on the template contained in the bundle associated with the process as described here . To have an effective example without too many actions to perform, the following example will use an already existing configuration. You can find the corresponding bundle of the following example in the test directory of OperatorFabric (src/test/resources/bundles).

At the card level, the attributes telling OperatorFabric which template to use are process and state; the templateName can be retrieved from the definition of the bundle.

{
	"publisher":"TEST_PUBLISHER",
	"processVersion":"1",
	"process":"defaultProcess",
	"processInstanceId":"process-000",
	"state":"messageState",
	"startDate":1546297200000,
	"severity":"INFORMATION",
	"title":{"key":"message.title"},
	"summary":{"key":"message.summary"},
	"userRecipients": ["operator1_fr"],
	"data":{"message":"Data displayed in the detail panel"},

}

So here a single custom data attribute is defined: message. This attribute is used by the template referenced by the templateName attribute.

11. Card rendering

As stated above, third-party applications interact with OperatorFabric by sending cards.

The Businessconfig service allows them to tell OperatorFabric, for each process, how these cards should be rendered, including translation if several languages are supported. Configuration is done via files zipped in a "bundle"; these files are sent to OperatorFabric via a REST endpoint.

In addition, it lets third-party applications define additional menu entries for the navbar (for example linking back to the third-party application) that can be integrated either as iframes or as external links.

11.1. Process: Declaration and Configuration

To declare and configure a process, OperatorFabric uses bundles. This section describes their content and how to use them. An OperatorFabric process is a way to define a business configuration. Once the bundle is fully created, it must be uploaded to the server through the Businessconfig service.

Some examples show how to configure a process using a bundle before diving into the more technical details of the configuration. The following instructions describe tests to perform on OperatorFabric to understand how customizations are possible.

11.1.1. Bundle as Process declaration

A bundle contains all the configuration regarding a given business process, describing for example the various steps of the process but also how the associated cards and card details should be displayed.

Bundles are technically tar.gz archives containing at least a descriptor file named config.json. To display the card data, css files, an i18n file and handlebars templates must be added.

For didactic purposes, in this section, the businessconfig bundle name is BUNDLE_TEST (to match the parameters used by the script). The l10n (localization) configuration is English.

As detailed in the Businessconfig core service README the bundle contains at least a metadata file called config.json, a file i18n.json, a css folder and a template folder.

Except for the config.json file, all elements are optional.

The file organization within a bundle:

bundle
├── config.json
├── i18n.json
├── css
│   └── bundleTest.css
└── template
    ├── template1.handlebars
    └── template2.handlebars
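The layout above can be packaged into a bundle archive with tar. The sketch below uses placeholder file contents (only the archive structure matters here, and the file names follow the layout above):

```shell
# Create a minimal bundle layout; the file contents are placeholders.
mkdir -p bundle/css bundle/template
echo '{"id":"BUNDLE_TEST","name":"process.label","version":"1"}' > bundle/config.json
echo '{}' > bundle/i18n.json
touch bundle/css/bundleTest.css
touch bundle/template/template1.handlebars bundle/template/template2.handlebars

# Package from inside the directory so that config.json sits at the archive root.
tar -czf bundle-test.tar.gz -C bundle .

# Inspect the archive content.
tar -tzf bundle-test.tar.gz
```

The resulting bundle-test.tar.gz is the file to upload to the Businessconfig service, as shown in the Upload section.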

11.1.2. The config.json file

It’s a description file in json format. It lists the content of the bundle.

example

{
  "id": "TEST",
  "version": "1",
  "uiVisibility": {
    "monitoring": true,
    "logging": true,
    "calendar": true
  },
  "name": "process.label",
  "defaultLocale": "fr",
  "states": {
    "firstState": {
      "name":  "state.label",
      "color": "blue",
      "templateName": "operation",
      "acknowledgmentAllowed": "Never"
    }
  }
}
  • id: id of the process;

  • name: process name;

  • version: enables the correct display of the card data, even for old cards. The server stores the previous versions in its file system. This field value should match a businessconfig configuration for a correct rendering;

  • states: lists the available states; each one declares its associated actions, associated templates, and whether cards can be acknowledged by users;

  • uiVisibility: in the monitoring, logging and calendar screens, not all cards are visible; it depends on the business process they are part of. For a card to be visible in these screens, the corresponding parameter must be set to true.

The mandatory fields are id, name and version.

See the Businessconfig API documentation for details.

11.1.3. The i18n.json file

This file contains internationalization information, in particular the translations for the title and summary fields. If there is no i18n file, or if a key is missing, OperatorFabric displays the i18n key itself, such as BUNDLE_TEST.1.missing-i18n-key when the bundle declares no i18n key corresponding to missing-i18n-key.

The choice of i18n keys is up to the maintainer of the Businessconfig process.

11.1.3.1. Template folder

The template folder contains one template file for each process/state. They will be used for the card details rendering.

Example

For this example, the name of the process is Bundle Test and its technical name is BUNDLE_TEST. The bundle provides an English l10n.

Title and summary have to be localized.

Here is the content of i18n.json

{
  "TEST": {
    "title": "Test: Process {{value}}",
    "summary": "This sums up the content of the card: {{value}}",
    "detail": {
      "title": "card title" 
    }
  },
  "process": {
    "label": "Test Process"
  },
  "state": {
    "label": "Test State"
  },
  "template": {
    "title": "Asset details"
  }
}

To check the i18n after the upload of the bundle, use a GET request against the businessconfig service. The simplest way is to ask for the i18n file, as described here.

Set the version of the bundle and the technical name of the businessconfig process to get the json in the response.

For example, to check the l10n data of version 1 of the BUNDLE_TEST businessconfig process, use the following command line:

curl "http://localhost:2100/businessconfig/processes/BUNDLE_TEST/i18n?version=1" \
-H  "Authorization: Bearer ${token}"

where ${token} is a valid token for OperatorFabric use.

The businessconfig service should answer with a 200 status associated with the following json:

{
  "TEST": {
    "title": "Test: Process {{value}}",
    "summary": "This sums up the content of the card: {{value}}",
    "detail": {
      "title": "card title" 
    }
  },
  "process": {
    "label": "Test Process"
  },
  "state": {
    "label": "Test State"
  },
  "template": {
    "title": "Asset details"
  }
}
11.1.3.2. Processes and States

Each Process declares associated states. Each state declares specific templates for card details and specific actions.

The purpose of this section is to display elements of businessconfig card data in a custom format.

configuration

The process entry in the configuration file is a dictionary of processes, each key maps to a process definition. A process definition is itself a dictionary of states, each key maps to a state definition.

Templates

For demonstration purposes, there will be two simple templates. For more advanced features, go to the section detailing the handlebars templates and associated helpers available in OperatorFabric. As the cards used in this example were created above, the bundle template folder needs to contain 2 templates: template1.handlebars and template2.handlebars.

Examples of template (i18n versions)

The following template displays a title and a line containing the value of the scope property card.level1.level1Prop. The value of this key is 'This is a root property'.

/template/template1.handlebars

<h2>Template Number One</h2>
<div class="bundle-test">'{{card.data.level1.level1Prop}}'</div>

The following template example also displays a title and a list of numeric values from 1 to 3.

/template/template2.handlebars

<h2>Second Template</h2>
<ul class="bundle-test-list">
        {{#each card.data.level1.level1Array}}
                <li class="bundle-test-list-item">{{this.level1ArrayProp}}</li>
        {{/each}}
</ul>
CSS

This folder contains regular css files. The file name must be declared in the config.json file in order to be used in the templates and applied to them.

Examples

As above, all parts of the files irrelevant for this example are symbolised by an ellipsis character.

Declaration of css files in config.json file

{
    …
    "states" : {
            "state1" : {
                  …
                        "styles":["bundleTest"]
                  }
              }
    …
}

CSS Class used in ./template/template1.handlebars

<div class="bundle-test">'{{card.data.level1.level1Prop}}'</div>

As seen above, the value of {{card.data.level1.level1Prop}} in a test card is This is a root property

Style declaration in ./css/bundleTest.css

h2 {
        color:#fd9312;
        font-weight: bold;
}

Expected result

Formatted root property
11.1.3.3. Upload

To upload a bundle to the OperatorFabric server, use a POST http request as described in the Businessconfig Service API documentation.

Example

cd ${BUNDLE_FOLDER}
curl -X POST "http://localhost:2100/businessconfig/processes"\
        -H  "accept: application/json"\
        -H  "Content-Type: multipart/form-data"\
        -F "file=@bundle-test.tar.gz;type=application/gzip"

Where:

  • ${BUNDLE_FOLDER} is the folder containing the bundle archive to be uploaded.

  • bundle-test.tar.gz is the name of the uploaded bundle.

This command should return a 200 http status response with the details of the bundle in the response body, such as:

{
  "id":"BUNDLE_TEST",
  "name": "BUNDLE_TEST",
  "version": "1",
  "states" : {
          "start" : {
            "templateName" : "template1"
          },
          "end" : {
            "templateName" : "template2",
            "styles" : [ "bundleTest.css" ]
          }
      }
}

For further help, check the Troubleshooting section, which summarises how to resolve common problems.

11.1.4. Processes groups

OperatorFabric offers the possibility of defining process groups. These groups have an impact only on the UI, for example on the notification configuration screen, by offering a more organized view of all the processes.

A process can only belong to one process group.

To define processes groups, you have to upload a file via a POST http request as described in the Businessconfig Service API documentation.

Example

cd ${PROCESSES_GROUPS_FOLDER}
curl -X POST "http://localhost:2100/businessconfig/processgroups"\
        -H "accept: application/json"\
        -H "Content-Type: multipart/form-data"\
        -F "file=@processesGroups.json"\
        -H "Authorization: Bearer ${token}"

Where:

  • ${PROCESSES_GROUPS_FOLDER} is the folder containing the processes groups file to upload.

  • processesGroups.json is the name of the uploaded file.

  • ${token} is a valid token for OperatorFabric use.

Example of content for uploaded file :

{
  "groups": [
    {
      "id": "processgroup1",
      "name": "Process Group 1",
      "processes": [
        "process1",
        "process2"
      ]
    },
    {
      "id": "processgroup2",
      "name": "Process Group 2",
      "processes": [
        "process3",
        "process4"
      ]
    }
  ]
}

This command should return a 201 http status.

11.2. Templates

Templates are Handlebars template files. They are filled with data coming from two sources:

  • a card property (See card data model for more information)

  • a userContext :

    • login: user login

    • token: user jwt token

    • firstName: user first name

    • lastName: user last name

    • groups : user groups as an array of groups ID

    • entities : user entities as an array of entities ID

To use these data in the template, you need to reference them inside double braces. For example, if you want to display the user login:

Your login is : {{userContext.login}}

To display specific business data from the card, write for example:

My data : {{card.data.mydataField}}

Have a look at handlebarsjs.com/ for more information on the templating mechanism.

In addition to Handlebars basic syntax and helpers, OperatorFabric defines the following helpers :

11.2.1. OperatorFabric specific handlebars helpers

11.2.1.1. arrayContains

Verify if an array contains a specified element. If the array does contain the element, it returns true. Otherwise, it returns false.

<p {{#if (arrayContains colors 'red')}}class="text-danger"{{/if}}>test</p>

If the colors array contains 'red', the output is:

<p class="text-danger">test</p>
11.2.1.2. arrayContainsOneOf

If the first array contains at least one element of the second array, return true. Otherwise, return false.

{{#if (arrayContainsOneOf arr1 arr2)}}
  <p>Arr1 contains at least one element of arr2</p>
{{/if}}
11.2.1.3. bool

returns a boolean result value on an arithmetical operation (including object equality) or boolean operation.

Arguments:

  • v1: left value operand

  • op: operator (string value)

  • v2: right value operand

arithmetical operators:

  • ==

  • ===

  • !=

  • !==

  • <

  • <=

  • >

  • >=

boolean operators:

  • &&

  • ||

examples:

{{#if (bool v1 '<' v2)}}
  v1 is strictly lower than v2
{{else}}
 v2 is lower than or equal to v1
{{/if}}
11.2.1.4. conditionalAttribute

Adds the specified attribute to an HTML element if the given condition is truthy. This is useful for attributes such as checked, where it is the presence or absence of the attribute that matters (i.e. a checkbox with checked=false will still be checked).

<input type="checkbox" id="optionA" {{conditionalAttribute card.data.optionA 'checked'}}></input>
11.2.1.5. replace

Replaces all the occurrences of a substring in a given string. You should specify the substring to find, what to replace it with and the input string.

{{replace "&lt;p&gt;" "<p>"  this.value}}
11.2.1.6. dateFormat

formats the submitted parameter (milliseconds since epoch) using moment.format. The locale used is the one currently selected by the user; the format is the "format" hash parameter (see Handlebars doc, Literals section).

{{dateFormat card.data.birthday format="MMMM Do YYYY, h:mm:ss a"}}

Note
You can also pass a milliseconds value as a string.

{{dateFormat card.data.birthdayAsString format="MMMM Do YYYY, h:mm:ss a"}}
11.2.1.7. json

Converts the element to JSON; this can be useful to use the element as a javascript object in the template. For example:

var myAttribute = {{json data.myAttribute}};
11.2.1.8. keepSpacesAndEndOfLine

Converts a string to light HTML by replacing :

  • each new line character with <br/>

  • spaces with &nbsp; when there are at least two consecutive spaces.
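For example (card.data.message is a hypothetical field holding a multi-line string):

```handlebars
{{keepSpacesAndEndOfLine card.data.message}}
```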

11.2.1.9. keyValue

This allows traversing a map.

Notice that this should normally be feasible by using the built-in each helper, but a client was having some troubles using it so we added this custom helper.

{{#keyValue studentGrades}}
  <p> {{index}} - {{key}}: {{value}} </p>
{{/keyValue}}

If the value of the studentGrades map is:

{
  'student1': 15,
  'student2': 12,
  'student3': 9
}

The output will be:

<p> 0 - student1: 15</p>
<p> 1 - student2: 12</p>
<p> 2 - student3: 9</p>
11.2.1.10. math

returns the result of a mathematical operation.

arguments:

  • v1: left value operand

  • op: operator (string value)

  • v2: right value operand

arithmetical operators:

  • +

  • -

  • *

  • /

  • %

example:

{{math 1 '+' 2}}
11.2.1.11. mergeArrays

Return an array that is a merge of the two arrays.

{{#each (mergeArrays arr1 arr2)}}
  <p>{{@index}} element: {{this}}</p>
{{/each}}
11.2.1.12. now

outputs the current date in milliseconds since epoch. The date is computed from the application's internal time service and thus may differ from the date one can compute from the javascript api, which relies on the browser's system time.

NB: Due to a Handlebars limitation, you must provide at least one argument to helpers, otherwise Handlebars will confuse a helper with a variable. In the below example, we simply pass an empty string.

example:

<div>{{now ""}}</div>
<br>
<div>{{dateFormat (now "") format="MMMM Do YYYY, h:mm:ss a"}}</div>

outputs

<div>1551454795179</div>
<br>
<div>mars 1er 2019, 4:39:55 pm</div>

for a locale set to fr_FR

11.2.1.13. numberFormat

formats a number parameter using Intl.NumberFormat. The locale used is the one currently selected by the user, and options are passed as hash parameters (see Handlebars doc, Literals section).

{{numberFormat card.data.price style="currency" currency="EUR"}}
11.2.1.14. padStart

pads the start of a string with a specific string to a certain length using String.prototype.padStart().

{{padStart card.data.byhour 2 '0'}}
11.2.1.15. preserveSpace

preserves space in parameter string to avoid html standard space trimming.

{{preserveSpace card.data.businessId}}
11.2.1.16. slice

extracts a sub-array from an array

example:

<!--
{"array": ["foo","bar","baz"]}
-->
<ul>
{{#each (slice array 0 2)}}
  <li>{{this}}</li>
{{/each}}
</ul>

outputs:

<ul>
  <li>foo</li>
  <li>bar</li>
</ul>

and

<!--
{"array": ["foo","bar","baz"]}
-->
<ul>
{{#each (slice array 1)}}
  <li>{{this}}</li>
{{/each}}
</ul>

outputs:

<ul>
  <li>bar</li>
  <li>baz</li>
</ul>
11.2.1.17. sort

sorts an array or an object's properties (first argument), optionally using a field name (second argument) to sort the collection on this field's natural order.

If there is no field argument provided :

  • for an array, the original order of the array is kept ;

  • for an object, the structure is sorted by the object field name.

<!--
users :

{"john": { "firstName": "John", "lastName": "Cleese"},
"graham": { "firstName": "Graham", "lastName": "Chapman"},
"terryG": { "firstName": "Terry", "lastName": "Gilliam"},
"eric": { "firstName": "Eric", "lastName": "Idle"},
"terryJ": { "firstName": "Terry", "lastName": "Jones"},
"michael": { "firstName": "Michael", "lastName": "Palin"}}
-->

<ul>
{{#each (sort users)}}
    <li>{{this.firstName}} {{this.lastName}}</li>
{{/each}}
</ul>

outputs :

<ul>
  <li>Eric Idle</li>
  <li>Graham Chapman</li>
  <li>John Cleese</li>
  <li>Michael Palin</li>
  <li>Terry Gilliam</li>
  <li>Terry Jones</li>
</ul>

and

<ul>
{{#each (sort users "lastName")}}
    <li>{{this.firstName}} {{this.lastName}}</li>
{{/each}}
</ul>

outputs :

<ul>
  <li>Graham Chapman</li>
  <li>John Cleese</li>
  <li>Terry Gilliam</li>
  <li>Eric Idle</li>
  <li>Terry Jones</li>
  <li>Michael Palin</li>
</ul>
11.2.1.18. split

splits a string into an array based on a split string.

example:

<ul>
{{#each (split 'my.example.string' '.')}}
  <li>{{this}}</li>
{{/each}}
</ul>

outputs

<ul>
  <li>my</li>
  <li>example</li>
  <li>string</li>
</ul>
11.2.1.19. times

Allows performing the same action a certain number of times. Internally, this uses a for loop.

{{#times 3}}
  <p>test</p>
{{/times}}

outputs :

<p>test</p>
<p>test</p>
<p>test</p>
11.2.1.20. toBreakage

Changes the case (the "breakage") of a string. The arguments that you can specify are:

  • lowercase ⇒ The string will be lowercased

  • uppercase ⇒ The string will be uppercased

{{toBreakage key 'lowercase'}}s

If the value of the key variable is "TEST", the output will be:

tests
11.2.1.21. objectContainsKey

Verify if a JavaScript object contains the specified key. It returns true if it contains it, false otherwise.

{{objectContainsKey card.data.myObject 'myKey' }}
11.2.1.22. findObjectByProperty

Searches a list directly for an object, using the value of one of its properties. Returns the object if found, null otherwise.

{{#with (findObjectByProperty card.data.myObjectsList propertyName propertyValue)}}
  <p>Property value of found object: {{this.propertyName}}</p>
{{/with}}

11.2.2. Naming convention

Please do not prefix id attributes of DOM elements or css class names of your templates with "opfab". To avoid any confusion between the elements of OperatorFabric and those of your templates, we have prefixed all our id attributes and css classes with "opfab".

11.2.3. OperatorFabric css styles

OperatorFabric defines several css classes that you should use so your templates don’t clash with the rest of the OperatorFabric look and feel. These styles are especially useful for templates used in user card or card with responses.

You can find examples using these classes in the OperatorFabric core repository (src/test/resources/bundles).

The following css styles are available.

11.2.3.1. opfab-input

An input field, for example:

  <div class="opfab-input">
      <label> My input name </label>
      <input id="my_input" name="my_input">
  </div>
11.2.3.2. opfab-textarea

A text area input field, for example:

  <div class="opfab-textarea">
      <label> My input name </label>
      <textarea id="my_textarea_input" name="my_textarea_input"> </textarea>
  </div>
11.2.3.3. opfab-select

A select input field, for example:

  <div class="opfab-select" style="position:relative">
      <label> My select label </label>
      <select id="my_select" name="my_select">
          <option  value="option1"> Option 1 </option>
          <option  value="option2"> Option 2</option>
      </select>
  </div>

The use of position:relative is important here to avoid strange positioning of the label.

11.2.3.4. opfab-radio-button

A radio button input field, for example:

  <label class="opfab-radio-button">
      <span> My radio choice </span>
      <input type="radio" id="my_radio_button">
      <span class="opfab-radio-button-checkmark"></span>
  </label>
11.2.3.5. opfab-checkbox

A checkbox input field, for example:

  <label class="opfab-checkbox">
    My checkbox text
    <input type="checkbox" id="my_checkbox" name="my_checkbox" >
    <span class="opfab-checkbox-checkmark"> </span>
  </label>
11.2.3.6. opfab-table

An HTML table, for example:

  <div class="opfab-table">
    <table>
        .....
    </table>
  </div>
11.2.3.7. opfab-border-box

A box with a label, for example:

  <div class="opfab-border-box">
    <label>  My box name  </label>
    <div> My box text </div>
  </div>
11.2.3.8. opfab-color-danger, opfab-color-warning and opfab-color-success

Some styles for text standard colors, for example:

  <span class="opfab-color-danger"> my text in color </span>
11.2.3.9. opfab-btn, opfab-btn-cancel

Styles for buttons, for example:

  <button type="button" class="opfab-btn">OK</button>
  <button type="button" class="opfab-btn-cancel">CANCEL</button>

11.2.4. Tooltip

OperatorFabric provides a css tooltip component. To use it, you must use the opfab-tooltip class style. You can define where the tooltip will be displayed using left, top or bottom (by default, it is displayed on the right). For example :

  <span opfab-tooltip-text="Here is an example of tooltip" class="opfab-tooltip">Some tooltip text</span>

or for a tooltip displayed on the left :

  <span opfab-tooltip-text="Here is an example of tooltip" class="opfab-tooltip left">Some tooltip text</span>

11.2.5. Multi Select

OperatorFabric provides a multiselect component based on Virtual Select. To use it, one must use the OperatorFabric style and provide javascript to initialize the component:

  <div class="opfab-multiselect">
        <label>  MY MULTI SELECT   </label>
        <div id="my-multiselect"></div>
  </div>

 <script>
  myMultiSelect = opfab.multiSelect.init({
                id: "my-multiselect",
                options: [
                { label: 'Choice A', value: 'A' },
                { label: 'Choice B', value: 'B' },
                { label: 'Choice C', value: 'C' },
                { label: 'Choice D', value: 'D' },
                { label: 'Choice E', value: 'E' },
                { label: 'Choice F', value: 'F' }
                ],
                multiple: true,
                search: true
            });


  opfab.currentUserCard.registerFunctionToGetSpecificCardInformation(() => {

        const selectedValues = myMultiSelect.getSelectedValues();
  ...

You can set selected values via method setSelectedValues();

  myMultiSelect.setSelectedValues(['A','B']);

If you want to set the options list after init, use setOptions method :

  var options = [
    { label: 'Options 1', value: '1' },
    { label: 'Options 2', value: '2' },
    { label: 'Options 3', value: '3' },
  ];

  myMultiSelect.setOptions(options);

You can enable (or disable) the multiselect component using enable (or disable) method :

  myMultiSelect.enable();
  myMultiSelect.disable();

It is strongly advised NOT to use Virtual Select directly (always use the OperatorFabric js object and css), otherwise compatibility when upgrading is not guaranteed.

It is also possible to use the component as a single value select. The advantage over a standard select is the possibility to benefit from the search feature. To use the component as a single select, initialize it with the multiple property set to false:

  <div class="opfab-multiselect">
        <label>  MY SINGLE SELECT   </label>
        <div id="my-singleselect">
        </div>
  </div>

 <script>

  opfab.multiSelect.init({
                id: "my-singleselect",
                options: [
                { label: 'Choice A', value: 'A' },
                { label: 'Choice B', value: 'B' },
                { label: 'Choice C', value: 'C' },
                { label: 'Choice D', value: 'D' },
                { label: 'Choice E', value: 'E' },
                { label: 'Choice F', value: 'F' }
                ],
                multiple: false,
                search: true
            });


   opfab.currentUserCard.registerFunctionToGetSpecificCardInformation(()  => {

        const selectedValues = document.getElementById('my-singleselect').value;

      ....

11.2.6. Rich Text Editor

OperatorFabric provides a rich text editor component based on github.com/quilljs/quill. The component provides the following text formatting tools :

  • Headings

  • Color

  • Bold

  • Underline

  • Italics

  • Link

  • Align

  • Bullet list

  • Ordered list

  • Indent

To use the rich text editor component, just add an <opfab-richtext-editor> tag in the template using the OperatorFabric style. From javascript, it is then possible to get the formatted text content.

        <div class="opfab-textarea">
            <label> MESSAGE </label>
            <opfab-richtext-editor id="quill">{{card.data.richMessage}}</opfab-richtext-editor>
        </div>

 <script>

  const quillEditor = document.getElementById('quill');

  opfab.currentUserCard.registerFunctionToGetSpecificCardInformation(() => {

        const editorContent = quillEditor.getContents();
        const editorHtml = quillEditor.getHtml();
  ...

The editor uses a custom JSON format called Delta to store the formatted content. To get the JSON content as a string, call the getContents() method. To get the HTML formatted text, call the getHtml() method.

To set the initial content of the editor, you can either call the setContents(delta) method from javascript, providing the content as a Delta JSON string, or set it directly in the template by putting the Delta JSON string as the content of the <opfab-richtext-editor> tag.

To show the rich message in an HTML element, you can call the opfab.richTextEditor.showRichMessage(element) method. For example:

<span id="richMessage">{{card.data.richMessage}}</span>

<script>

  opfab.richTextEditor.showRichMessage(document.getElementById("richMessage"));

  ...
</script>

11.2.7. Charts

The Chart.js library is integrated in OperatorFabric, which means it is possible to show charts in cards. You can find a bundle example in the OperatorFabric git repository (src/test/resources/bundle/defaultProcess_V1).

The version of Chart.js integrated in OperatorFabric is v3.7.1.

12. Card notification

When a user receives a card, he is notified via a summary of the card in the left panel of the UI, called the "feed".

12.1. Notification configuration

For each process/state, the user can choose whether to be notified when receiving a card. If he chooses not to be notified, then the card will not be visible:

  • in the feed or the timeline

  • in the monitoring screen

  • in the calendar screen

However, it will be visible in the archives screen.

In order to have a better visual organization of the processes in the UI, you can define processes groups. You can find more information here

Some process/state may be non-filterable. In this case, the corresponding checkbox is disabled (and checked). To make a process/state filterable or not, the admin must modify the "filtering notification allowed" field, in the corresponding perimeter.
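As an illustration, a perimeter making a state non-filterable might look like the following sketch (the process and state names are placeholders; refer to the Users API documentation for the exact perimeter format):

```json
{
  "id": "examplePerimeter",
  "process": "exampleProcess",
  "stateRights": [
    {
      "state": "exampleState",
      "right": "Receive",
      "filteringNotificationAllowed": false
    }
  ]
}
```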

12.2. Sound notification

If the option is activated in the general configuration file web-ui.json, the user can choose to have a sound played when they receive a card (either by the browser or by an external device if configured). This can be managed by the user in the settings screen.

To customize the sounds played by the browser, see config/docker/custom-sounds/README.adoc.

12.3. Card read

When the user receives a new card, he can see it in the feed with a bold title and card summary. Once the card is opened, the text is no longer bold and becomes grey.

12.4. Card acknowledgment

12.4.1. Acknowledgment done by the user

The user can set a card as "acknowledged", so he will not see it any more by default in the feed. It is also possible to cancel this and set a card back to "unacknowledged" (a filter permits seeing acknowledged cards).

To offer the user the possibility to acknowledge cards, it has to be configured in the process definition. The configuration is done on a state by setting the acknowledgmentAllowed field. Allowed values are:

  • "Never": acknowledgement not allowed

  • "Always": acknowledgement allowed (default value)

  • "OnlyWhenResponseDisabledForUser": acknowledgement allowed only when the response is disabled for the user

It is possible to cancel a card acknowledgment unless process state configuration has the cancelAcknowledgmentAllowed field set to false.

When the user acknowledges a card, the card detail is closed unless process state configuration has the closeCardWhenUserAcknowledges field set to false.

You can see examples in src/test/resources/bundles/defaultProcess_V1/config.json
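For instance, a state combining these acknowledgment options could be declared as follows (the state and field values here are illustrative, not taken from the bundle mentioned above):

```json
"exampleState": {
  "name": "Example state",
  "templateName": "example",
  "acknowledgmentAllowed": "Always",
  "cancelAcknowledgmentAllowed": false,
  "closeCardWhenUserAcknowledges": false
}
```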

12.4.2. Acknowledgment done by entity(ies)

A card can also be set as acknowledged if a member (or several) of the entity of the user has acknowledged it. A user can be member of several entities, so you can configure if acknowledgment done by only one entity (of the user) suffices for the card to appear as acknowledged or if all entities must acknowledge the card.

To configure how a card is considered as acknowledged for the user, it has to be configured in process definition. The configuration is done on a state by setting the consideredAcknowledgedForUserWhen field. Allowed values are :

  • "UserHasAcknowledged" : the card is set as acknowledged if the user acknowledges it

  • "AllEntitiesOfUserHaveAcknowledged" : the card is set as acknowledged if all the entities of the user acknowledge it

12.4.3. Displaying the entities acknowledgments

The entities acknowledgments can be displayed in the footer of the card. You can choose to display it (or not) via the option showAcknowledgmentFooter in a state definition. Allowed values are :

  • "OnlyForEmittingEntity" (default value) : the entities acknowledgments are displayed only for the members of the entity that has created the card

  • "OnlyForUsersAllowedToEdit" : the entities acknowledgments are displayed for the users allowed to edit the card

  • "ForAllUsers" : the entities acknowledgments are displayed for the members of all the entities

  • "Never" : The entities acknowledgments are displayed to no one

12.5. Pin cards

It is possible to configure the state of a process so that cards are automatically "pinned" when acknowledged. Pinned cards are displayed in the Feed page as small boxes with the title of the card, just under the timeline. Pinned cards remain visible even if the card is no longer visible in the feed. A pinned card remains visible until the card's end date. If there is no end date, the card will always be visible unless it is updated or deleted.

To configure a process state to automatically pin cards, set the automaticPinWhenAcknowledged property to true in the state definition. For example:

"pinnedState": {
      "name": "⚠️ Network Contingencies ⚠️",
      "description": "Contingencies state",
      "templateName": "contingencies",
      "styles": [
        "contingencies"
      ],
      "acknowledgmentAllowed": "Always",
      "type" : "INPROGRESS",
      "automaticPinWhenAcknowledged" : true
    }

12.6. Card reminder

For certain processes and states, it is possible to configure a reminder. The reminder triggers resending the card at a certain time, with status "unread" and "unacknowledged" and an updated publishDate.

The time of "reactivation" is defined with the parameter "secondsBeforeTimeSpanForReminder" in the card.

The reminder is computed relative to the timespans values :

  • the startDate

  • or recurrently, if a recurrence object is defined.

12.6.1. Simple reminder

If a timespan is present without a recurrence object, a reminder will arise at startDate - secondsBeforeTimeSpanForReminder.
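For example, a card carrying the following fragment (the values are illustrative) would be resent 10 minutes (600 seconds) before the timespan start date:

```json
{
  "secondsBeforeTimeSpanForReminder": 600,
  "timeSpans": [
    { "start": 1700000000000 }
  ]
}
```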

12.6.2. Recurrent reminder

It is possible to set a recurrent reminder for a card. There are two ways to do it :

12.6.2.1. Using rRule field :

rRule field defines a regular event (as defined in the RFC 5545). It is defined with the following fields :

  • freq : frequency of the recurrence (possible values : 'SECONDLY', 'MINUTELY', 'HOURLY', 'DAILY', 'WEEKLY', 'MONTHLY', 'YEARLY')

  • byweekday : list of days of the week when the event arises (possible values : 'MO', 'TU', 'WE', 'TH', 'FR', 'SA', 'SU')

  • bymonth : list of months of the year when the event arises (possible values : number from 1 to 12, 1 being January and 12 being December)

  • byhour : list of hours of the day for the recurrence (from 0 to 23)

  • byminute : list of minutes within an hour for the recurrence (from 0 to 59)

The reminder will arise for each recurrent date of event - secondsBeforeTimeSpanForReminder starting from startDate.

Recurrent reminder example using rRule field :
rRule : {
    freq : 'DAILY',
    byweekday : ['TU', 'WE'],
    bymonth : [1, 3],
    byhour : [11],
    byminute : [30]
}
12.6.2.2. Using recurrence field in the timespan object (deprecated)

recurrence field defines a regular event in the timespan structure. It is defined with the following fields :

  • HoursAndMinutes : hours and minutes of the day when the event arises

  • DaysOfWeek : a list of days of the week when the event arises. The day of week is a number with 1 being Monday and 7 being Sunday as defined in the ISO Standard 8601 (weekday number)

  • Months : a list of months of the year when the event arises. The month of year is a number with 0 being January and 11 being December

  • TimeZone : the time zone of reference for the recurrence definition (default value is Europe/Paris)

  • DurationInMinutes : the duration in minutes of the event

The reminder will arise for each recurrent date of event - secondsBeforeTimeSpanForReminder starting from startDate.

Recurrent reminder example using recurrence field :

If timespan is defined as follows :

startDate : 1231135161
recurrence : {
    hoursAndMinutes : { hours:10 ,minutes:30},
    daysOfWeek : [6,7],
    durationInMinutes : 15,
    months : [10,11]
}

If secondsBeforeTimeSpanForReminder is set to 600 seconds, the reminder will arise every Saturday and Sunday, in November and December at 10:20 starting from startDate.

12.6.3. Debugging

When a card with a reminder set is sent, the log of the "cards-reminder" service will contain a line with the date when the reminder will arise. For example:

2020-11-22T21:00:36.011Z Reminder Will remind card conferenceAndITIncidentExample.0cf5537b-f0df-4314-f17f-2797ccd8e4e9 at Sun Nov 22 2020 22:55:00 GMT+0100 (heure normale d’Europe centrale)

13. Email Notifications

OpFab can send card notifications via email. These notifications are sent when a new card is published for a user, remains unread for a configured period, and the user is not connected to OpFab. The email subject includes the card title, summary, and start and end dates.

13.1. Configuring Email Notifications

Users can enable email notifications in their user settings. They must provide an email address for receiving notifications and select the processes/states they want to be notified about.

13.2. Email Content

The email body contains a link to the card details in OpFab. In the config.json file containing the state definition, the "emailBodyTemplate" field allows defining a template specific to the email body content. This template can use handlebars helpers for formatting but can't run javascript. If no template is specified, the mail is sent with just the link to the card in the body.
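As an illustration, a minimal email body template could reuse the handlebars helpers described earlier (card.data.message is a hypothetical field of the card):

```handlebars
<p>{{card.data.message}}</p>
<p>Starts at: {{dateFormat card.startDate format="MMMM Do YYYY, h:mm:ss a"}}</p>
```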

14. Response cards

Within your template, you can allow the user to perform some actions (respond to a form, answer a question, …​). The user fills in this information and then clicks a submit button. On submission, a new card is created and emitted to a third-party tool.

This card is called "a child card" as it is attached to the card where the question came from : "the parent card". This child card is also sent to the users that have received the parent card. From the ui point of view, the information of the child cards can be integrated in real time in the parent card if configured.

The process can be represented as follows :

ResponseCardSequence

Notice that the response will be associated with the entity and not with the user, i.e. the user responds on behalf of his entity. A user can respond more than once to a card (a future evolution could add the possibility to limit to one response per entity).

You can see a screenshot of an example of a card with responses :

ResponseCardScreenshot2

14.1. Steps needed to use a response card

14.1.1. Define a third party tool

The response card is to be received by a third party application for business processing. The third-party application will receive the card as an HTTP POST request. The card is in json format (the same format as when we send a card). The field data in the json contains the user response.

The url of the third party receiving the response card is to be set in the .yml of the publication service. It is also possible to configure whether the user token shall be propagated to the third party or not by setting the "propagateUserToken" boolean property. Here is an example with two third parties configured.

operatorfabric:
  cards-publication:
    external-recipients:
      recipients:
        - id: "third-party1"
          url: "http://thirdparty1/test1"
          propagateUserToken: true
        - id: "third-party2"
          url: "http://thirdparty2:8090/test2"
          propagateUserToken: false

The name to use for the third-party is the publisherId of the parent card.

For the url, do not use localhost if you run OperatorFabric in docker, as the cards-publication service will not be able to reach your third party.

14.1.2. Configure the response in config.json

A card can have a response only if its process/state is configured for it. To do that, you need to define the appropriate configuration in the config.json of the concerned process. Here is an example of configuration:

{
  "id": "defaultProcess",
  "name": "Test",
  "version": "1",
  "states": {
    "questionState": {
      "name": "question.title",
      "color": "#8bcdcd",
      "response": {
        "state": "responseState",
        "externalRecipients":["externalRecipient1", "externalRecipient2"]
      },
      "templateName": "question",
      "styles": [
        "style"
      ],
      "acknowledgmentAllowed": "Never",
      "showDetailCardHeader" : true
    },
    "responseState": {
      "name" : "response.title",
      "isOnlyAChildState" : true
    }
  }
}

We define here a state named "questionState" with a response field. Now, if we send a card with process "defaultProcess" and state "questionState", the user will have the possibility to respond if he has the required privileges.

  • The field "state" in the response field is used to define the state to use for the response (the child card).

  • The field "externalRecipients" defines the recipients of the response card. These recipients are the ids of the objects referenced in the config file of the cards-publication service, in the "external-recipients" element. This field is optional.

  • The field "emittingEntityAllowedToRespond" in the response field is used to allow the emitting entity to respond to a card. To be able to respond, however, the emitting entity has to be one of the recipients of the card. Default value is false.

  • The field "showDetailCardHeader" determines whether the card header is displayed. This header contains the list of entities that have already responded or not, and a countdown indicating the time remaining to respond, if necessary.

  • The field "isOnlyAChildState" indicates whether the state is only used for child cards or not. If yes, the state is displayed neither in the feed notification configuration screen nor in archives screen filters.

The state to be used for the response can also be set dynamically based on the contents of the card or the response by returning it in the method registered by opfab.currentCard.registerFunctionToGetUserResponse (see below for details).

14.1.3. Design the question form in the template

For the user to respond, you need to define the response form in the template using standard HTML syntax.

To enable OperatorFabric to send the response, you need to implement a javascript function in your template which returns an object containing the following fields :

  • valid (boolean) : true if the user input is valid

  • errorMsg (string) : message in case of invalid user input. If valid is true this field is not necessary.

  • responseCardData (any) : the user input to send in the data field of the child card. If valid is false this field is not necessary.

  • responseState : name of the response state to use. This field is not mandatory, if it is not set the state defined in config.json will be used for the response.

  • publisher (string) : the id of the Entity to be used as publisher of the response. This field is not mandatory; if it is not set, the publisher will be the user entity if the user belongs to a single entity, or it will be chosen by the user from among their entities.

  • actions (string array) : an optional field used to define a set of predetermined actions that will be executed upon receiving the response card. The available actions include:

    • PROPAGATE_READ_ACK_TO_PARENT_CARD : when receiving the child card, the status of the parent card should be considered as 'unread' and 'not acknowledged' until the user reads or acknowledges it again.

This method is registered via opfab.currentCard.registerFunctionToGetUserResponse; it will be called by OperatorFabric when the user clicks the button to send the response.

In the example below, the getUserResponse creates a responseCardData object by retrieving the user’s inputs from the HTML. In addition, if the user chose several options, it overrides the response state defined in the config.json with another state.

src/test/resources/bundles/defaultProcess_V1/template/question.handlebars
  opfab.currentCard.registerFunctionToGetUserResponse(() => {

    const responseCardData = {};
    const formElement = document.getElementById('question-form');
    for (const [key, value] of [...new FormData(formElement)]) {
        (key in responseCardData) ? responseCardData[key].push(value) : responseCardData[key] = [value];
    }

    const result = {
        valid: true,
        responseCardData: responseCardData
    };

    // If the user chose several options, we decide to move the process to a specific
    // state, for example to ask a follow-up question (what's their preferred option).
    const choiceRequiresFollowUp = Object.entries(responseCardData).length > 1;
    if (choiceRequiresFollowUp) result['responseState'] = 'multipleOptionsResponseState';

    return result;
  });

14.1.4. Define permissions

To respond to a card, a user must have the right privileges; this is done using "perimeters". The user must be in a group attached to a perimeter with the "ReceiveAndWrite" right for the concerned process/state, the state being the response state defined in config.json.

Here is an example of definition of a perimeter :

{
  "id" : "perimeterQuestion",
  "process" : "defaultProcess",
  "stateRights" : [
    {
      "state" : "responseState",
      "right" : "ReceiveAndWrite"
    }
  ]
}

To configure it in OperatorFabric, POST this json file to the endpoint /users/perimeters.

To add it to a group named, for example, "mygroup", make a PATCH request to the endpoint /users/groups/mygroup/perimeters with the payload ["perimeterQuestion"].
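The two calls above can be sketched as plain HTTP requests built in JavaScript. The OperatorFabric base URL and the bearer token are assumptions; any HTTP client can then execute these request descriptors.

```javascript
// Sketch: build the POST (declare the perimeter) and PATCH (attach it to a
// group) requests against the users API. "opfabUrl" and "token" (a valid
// OAuth2 bearer token) are assumptions.
function buildPerimeterRequests(opfabUrl, token, perimeter, groupId) {
    const headers = {
        'Authorization': `Bearer ${token}`,
        'Content-Type': 'application/json'
    };
    return [
        // POST the perimeter definition
        { method: 'POST', url: `${opfabUrl}/users/perimeters`, headers, body: JSON.stringify(perimeter) },
        // PATCH the group to attach the perimeter to it
        { method: 'PATCH', url: `${opfabUrl}/users/groups/${groupId}/perimeters`, headers, body: JSON.stringify([perimeter.id]) }
    ];
}
```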

14.2. Send a question card

The question card is like a usual card except that the field "entitiesAllowedToRespond" must be set to the entities allowed to respond to the card. Users who are not members of one of these entities will not be able to respond.

...
"process"  :"defaultProcess",
"processInstanceId" : "process4",
"state": "questionState",
"entitiesAllowedToRespond": ["ENTITY1_FR","ENTITY2_FR"],
"severity" : "ACTION",
...
By default, OperatorFabric considers that if the parent card (question card) is modified, then the child cards are deleted. If you want to keep the child cards when the parent card is changed, then you must add in the parent card the field "actions" as a string array containing the "KEEP_CHILD_CARDS" action.

The header in the card details will list the entities from which a response is expected, color-coding them depending on whether they’ve already responded (green) or not (orange).

You can also set the property entitiesRequiredToRespond to differentiate between entities that may respond (entitiesAllowedToRespond) and entities that must respond (entitiesRequiredToRespond).
...
"process"  :"defaultProcess",
"processInstanceId" : "process4",
"state": "questionState",
"entitiesAllowedToRespond": ["ENTITY1_FR","ENTITY2_FR","ENTITY3_FR"],
"entitiesRequiredToRespond": ["ENTITY1_FR","ENTITY2_FR"],
"severity" : "ACTION",
...

If entitiesRequiredToRespond is set and not empty, the card detail header will use this list instead of entitiesAllowedToRespond.

If set, entitiesRequiredToRespond does not have to be a subset of entitiesAllowedToRespond. To determine whether a user has the right to respond, OperatorFabric considers the union of the two lists.
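This union rule can be sketched as follows (perimeter rights are checked separately by OperatorFabric; this only covers the entity check):

```javascript
// Sketch: a user may respond if at least one of their entities is in the
// union of entitiesAllowedToRespond and entitiesRequiredToRespond.
function mayUserRespond(userEntities, card) {
    const union = new Set([
        ...(card.entitiesAllowedToRespond || []),
        ...(card.entitiesRequiredToRespond || [])
    ]);
    return userEntities.some((entityId) => union.has(entityId));
}
```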

14.3. Integrate child cards

For each user response, a child card containing the response is emitted and stored in OperatorFabric like a normal card. It is not directly visible in the UI, but it can be integrated in real time into the parent card view of all the users watching the card. To do that, you need some code in the template to process the child data:

Register a method for processing the child cards via opfab.currentCard.listenToChildCards.

The registered method takes an array of child cards as parameter and will be called by OperatorFabric when the card is loaded and every time the list of child cards changes.

opfab.currentCard.listenToChildCards( (childCards) => { // process child cards });

14.3.1. Entity name

If you want to show the name of the entity that sent a response, get the entity id from the publisher field of the child card, then obtain the entity name by calling opfab.users.entities.getEntityName(entityId).

14.3.2. Example
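A sketch combining listenToChildCards and getEntityName to render the list of responses. The data.message field of the child cards and the 'responses' element id are assumptions for illustration.

```javascript
// Sketch: build display lines "<entity name> : <message>" from child cards.
// In a real template, pass opfab.users.entities.getEntityName as resolveName
// and call this from the listener registered with listenToChildCards.
function formatResponses(childCards, resolveName) {
    return childCards.map(
        (child) => `${resolveName(child.publisher)} : ${child.data.message}`
    );
}

// Usage in a template (illustration):
// opfab.currentCard.listenToChildCards((childCards) => {
//     document.getElementById('responses').innerText =
//         formatResponses(childCards, opfab.users.entities.getEntityName).join('\n');
// });
```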

14.4. Lock mechanism

When a user has not yet answered a card expecting a response, the button is labeled "VALIDATE ANSWER" and the card is unlocked. When the user responds for the first time (and the response succeeds), the button label changes to "MODIFY ANSWER" and the information that the card has been locked can be sent to the card template (see opfab.currentCard.listenToResponseLock in the frontend API).

Once a user has responded to a card expecting a response, the status of their entity is considered "already answered" for this card; all users belonging to the same entity share this status for this card.

From there, as soon as they open this card, the button is labeled "MODIFY ANSWER" and this information (i.e. that this entity has already responded) is accessible to the card template (see opfab.currentCard.isResponseLocked in the frontend API).

The user can then click on "MODIFY ANSWER": the button returns to its initial state ("VALIDATE ANSWER") and the information that the user wants to modify their initial answer can be sent to the card template (see opfab.currentCard.listenToResponseUnlock in the frontend API).

15. User cards

Using the Create card menu, the user can send cards to entities. This feature needs to be configured.

15.1. Configure the bundle

A card is related to a process and a state; if you want users to be able to emit a card for a specific process and state, you need to define it in the bundle for this process.

For example :

"id": "conferenceAndITIncidentExample",
"name": "conferenceAndITIncidentExample.label",
"version": "1",
"states": {
  "messageState": {
    "name": "message.title",
    "userCard" : {
      "template" : "usercard_message",
      "severityVisible" : true,
      "startDateVisible" : true,
      "endDateVisible" : true,
      "expirationDateVisible" : false,
    "lttdVisible" : false
    },
    "templateName": "message",
    "styles": [],
    "acknowledgmentAllowed": "Always"
  }
}

In this example, the field userCard states that we have a template called usercard_message that defines how the specific business input fields for this user card will be displayed in the card sending form that will be presented to the user (through the Create Card menu).

This template works the same as templates for card presentation. Here is an example :

<div class="opfab-textarea">
    <label> MESSAGE </label>
    <textarea id="message" name="message" placeholder="Write something.."
        style="width:100%"> {{card.data.message}} </textarea>
</div>


<script>
    opfab.currentUserCard.registerFunctionToGetSpecificCardInformation( () => {
        const message = document.getElementById('message').value;
        const card = {
          summary : {key : "message.summary"},
          title : {key : "message.title"},
          data : {message: message}
        };
        if (message.length < 1) return { valid: false, errorMsg: 'You must provide a message' };
        return {
            valid: true,
            card: card
        };
    });
</script>

The first part defines the HTML for the business-specific input fields. It should only include the form fields specific to your process, because the generic fields (like startDate , endDate , severity …​ ) are presented by default. It is possible to hide certain generic fields, by setting their visibility to false in the config.json (for example field severityVisible).

Please note that you should use an OpFab css class so the "business-specific" part of the form has the same look and feel (See OperatorFabric Style for more information)

Be aware that, as we have a preview of the card, the application loads the html related to the usercard (built with the usercard template) and the html related to the card in preview (built with the card template) into the browser at the same time. This implies that you MUST NOT use the same names for elements in both templates. This rule applies to HTML element identifiers (id), global JavaScript variables and JavaScript function names.

Once the card has been sent, users with the appropriate rights can edit it. If they choose to do so, they’re presented with the same input form as for the card creation, but the fields are pre-filled with the current data of the card. This way, they can only change what they need without having to re-create the card from scratch. That’s what the reference to {{card.data.message}} is for. It means that this text-area input field should be filled with the value of the field message from the card’s data.

The second part is a javascript method you need to implement to allow OperatorFabric to get your specific data.

To have a better understanding of this feature, we encourage you to have a look at the examples in the OperatorFabric core repository under (src/test/resources/bundles/conferenceAndITIncidentExample).

15.2. Method registerFunctionToGetSpecificCardInformation

The following card fields can be set via the card object in the object returned by the method registered via registerFunctionToGetSpecificCardInformation:

  • title

  • summary

  • startDate (epoch date in ms)

  • endDate (epoch date in ms)

  • expirationDate (epoch date in ms)

  • lttd (epoch date in ms)

  • keepChildCards (deprecated, use 'actions' field including "KEEP_CHILD_CARDS" action instead)

  • secondsBeforeTimeSpanForReminder

  • severity (in case it is not visible to the user, i.e. when severityVisible is set to false in config.json)

  • data

  • entityRecipients

  • entitiesAllowedToEdit

  • entitiesAllowedToRespond

  • entitiesRequiredToRespond

  • externalRecipients (used to send cards to third parties, see Define a third party tool for more information).

  • rRule (used to define a card with recurrence, see Using rRule field for more information).

  • wktGeometry (string)

  • wktProjection (string)

  • actions (list of card actions)

If you send a card to an ExternalRecipient, when the user deletes it, the external recipient will receive the information via an HTTP DELETE request with the id of the deleted card at the end of the request url (example: myexternal_app/myendpoint/ID_CARD).

If you want the card to be visible in the calendar feature, you need to set 'timeSpans' field (as array of TimeSpans objects) in the object returned by the method. You can find an example in the file: src/test/resources/bundles/taskExample/template/usercard_task.handlebars.

If you are not using 'timeSpans', it is possible to set the 'viewCardInCalendar' field to true; the card will then be visible using the card's startDate and endDate as default timeSpan.
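As an illustration, the object returned by the registered function could carry a one-hour time span. This is a sketch: the exact TimeSpan structure (assumed here to be an object with "start" and "end" as epoch times in milliseconds) should be checked against the reference documentation.

```javascript
// Sketch of a result for registerFunctionToGetSpecificCardInformation that
// makes the card visible in the calendar. Dates are epoch ms; the one-hour
// duration is an arbitrary example.
function buildCardInformation(card, startDate) {
    return {
        valid: true,
        card: card,
        timeSpans: [{ start: startDate, end: startDate + 60 * 60 * 1000 }]
    };
}
```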

The 'recurrence' field is deprecated and could be removed in the future.

If the form is not filled correctly by the user, you can provide an error message (see example above). Again, have a look at the provided examples.

15.3. Define permissions

To send a user card, the user must be a member of a group that has a perimeter defining the right ReceiveAndWrite or Write for the chosen process and state. For example:

{
  "id" : "perimeterUserCard",
  "process" : "conferenceAndITIncidentExample",
  "stateRights" : [
    {
      "state" : "messageState",
      "right" : "ReceiveAndWrite"
    }
  ]
}
Using the ReceiveAndWrite right instead of the Write right allows the user to receive the card they sent and edit or delete it.

15.4. Restrict the list of possible emitter entities

When sending a user card, if the user is a member of multiple entities, they can choose the emitter entity from among their entities. To limit the list of available emitter entities, configure the publisherList property in the userCard state definition with the list of allowed publisher entities. For example:

"processState": {
      "name": "Process example ",
      "description": "Process state",
      "userCard" : {
        "template" : "usercard_process",
        "expirationDateVisible" : true,
        "publisherList": [{"id":"ENTITY_FR", "levels":[1]},{"id":"IT_SUPERVISOR_ENTITY"}]
      }
}

In this example the list of available publisher entities will contain all the first level children of "ENTITY_FR" (level 1) and "IT_SUPERVISOR_ENTITY".
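The selection logic can be sketched as follows, under the assumption that an entry without levels selects the entity itself and that level 1 selects entities whose parents list contains the given id (level 0 standing for the entity itself):

```javascript
// Sketch: resolve a publisherList configuration against the full entity list.
// The interpretation of "levels" is an assumption based on the example above.
function resolvePublisherList(publisherList, allEntities) {
    const result = new Set();
    for (const entry of publisherList) {
        const levels = entry.levels ?? [0]; // no levels: the entity itself
        if (levels.includes(0)) result.add(entry.id);
        if (levels.includes(1)) {
            // first-level children: entities whose parents contain entry.id
            allEntities
                .filter((e) => (e.parents || []).includes(entry.id))
                .forEach((e) => result.add(e.id));
        }
    }
    return [...result];
}
```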

15.5. Set the list of recipients via the template

To do that, you have to provide the list of recipients when returning the card object in the field entityRecipients.

Example:

    opfab.currentUserCard.registerFunctionToGetSpecificCardInformation( () => {
        const message = document.getElementById('message').value;
        const card = {
          summary : {key : "message.summary"},
          title : {key : "message.title"},
          entityRecipients: ["ENTITY_FR","IT_SUPERVISOR_ENTITY"],
          data : {message: message}
        };
        if (message.length<1) return { valid:false , errorMsg:'You must provide a message'}
        return {
            valid: true,
            card: card
        }
  });

When the recipient dropdown is not visible to the user (attribute recipientVisible set to false in the state definition in config.json), the final recipient list is the one defined in the template; otherwise it is the union of the user's selection and the template's entityRecipients definition.
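This merge rule can be sketched as:

```javascript
// Sketch of the recipient merge rule: when the recipient dropdown is hidden,
// the template's list wins; otherwise the final list is the union of the
// user's selection and the template's entityRecipients.
function computeFinalRecipients(recipientVisible, userSelection, templateRecipients) {
    if (!recipientVisible) return [...templateRecipients];
    return [...new Set([...userSelection, ...templateRecipients])];
}
```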

15.6. Card editing

Once a user card has been sent, it can be edited by a user who is a member of the publisher entity and has write access for the process/state of the card. It is possible to allow other entities to edit the card by specifying the 'entitiesAllowedToEdit' card field. The card edit button can be hidden in the UI by setting 'editCardEnabledOnUserInterface' to false in the card's process/state definition.

15.7. Card copy

A user can copy a card and send it if they have write access for the process/state of the card. Before sending the card, the user can modify it. The card copy button can be hidden in the UI by setting 'copyCardEnabledOnUserInterface' to false in the card's process/state definition.

15.8. Card delete

Once a user card has been sent, it can be deleted by a user who is a member of the publisher entity and has write access for the process/state of the card. The card delete button can be hidden in the UI by setting 'deleteCardEnabledOnUserInterface' to false in the card's process/state definition.

15.9. Send response automatically (experimental feature)

It is possible to configure a template to automatically send a response when sending a user card that expects an answer from one of the entities of the emitting user. The response card will be sent only if the user is allowed to respond to the card.

To enable the automated response the template should add a childCard field to the object returned by getSpecificCardInformation method. For example:

 <script>
     opfab.currentUserCard.registerFunctionToGetSpecificCardInformation( () => {
        const card = {...}

        const childCard = {
            summary : {key : "example.summary"},
            title : {key : "example.title"},
            state : "mystateForResponse",
            data : {
                // specific child card data
            }
        };
        ...
        return {
            valid: true,
            card: card,
            childCard: childCard
        };

    });
  </script>

The card preview will display the card detail with the automated response as it will be displayed in Feed page.

When editing a user card, the template can get the response sent by current user by calling the opfab.currentUserCard.getUserEntityChildCard function.

By default, the publisher of the childCard is the publisher of the parent card. In the template, it is possible to set another value for the publisher of the childCard, provided that the user is a member of the entity publisher you want to set.

15.10. Misc

When a user sends a card, it is also sent to the members of the entity that publishes the card (and therefore also to the sender), whatever the user chooses in the recipient list.

16. Built-in templates

Instead of coding your own templates for cards or user cards, you can use OperatorFabric built-in templates if they suit your needs.

16.1. Message

16.1.1. Card template

If you want to show only a simple message, you can use the message built-in template; to do that, just put in your handlebars file:

<opfab-message-card> </opfab-message-card>

The built-in template assumes the message is stored in the card field data.message.

You can change the text header by providing the message-header attribute:

<opfab-message-card  message-header="a new header"> </opfab-message-card>

16.1.2. User card template

If you want to create a user card for a simple message, just add in your handlebars file:

<opfab-message-usercard> </opfab-message-usercard>

The message will be stored in the field data.message of the card.

By using attributes you can set some parameters regarding recipients, see Common attributes for user cards built-in templates.

16.2. Question

16.2.1. Card template

If you want to show a question and see user responses, you can use the question built-in template; to do that, just put in your handlebars file:

<opfab-question-card> </opfab-question-card>

The built-in template assumes the question is stored in the card in the field data.question.

16.2.2. User card template

If you want to create a user card for a simple question, just add in your handlebars file:

<opfab-question-usercard> </opfab-question-usercard>

The question will be stored in the field data.question of the card.

By using attributes you can set some parameters regarding recipients, see Common attributes for user cards built-in templates.

16.3. Message Or Question List

16.3.1. Card template

If you want to show a message or a question and the answers, you can use the message or question list built-in template; to do that, just put in your handlebars file:

<opfab-message-or-question-list-card> </opfab-message-or-question-list-card>

The built-in template assumes the message or question is stored in the card in the field data.message.

16.3.2. User card template

If you want to create a user card where you can select a list of messages or questions linked to a json file, just add in your handlebars file:

<opfab-message-or-question-list-usercard businessData="businessDataFileName"> </opfab-message-or-question-list-usercard>

The message will be stored in the field data.message of the card. You need to set the json file name with the businessData attribute. If you want to use the template as is, the json file must follow this structure:

{
    "possibleRecipients": [
        {"id": "ENTITY1_FR"},
        {"id": "ENTITY2_FR"}
    ],
    "messagesList": [{
        "id": "Warning",
        "title": "Warning about the state of the grid",
        "summary": "Warning about the state of the grid : a problem has been detected",
        "message": "A problem has been detected, please put maintenance work on hold and be on stand by",
        "question": false,
        "severity": "ALARM",
        "publishers": [
            "ENTITY1_FR",
            "ENTITY2_FR",
            "ENTITY3_FR"
        ],
        "recipients": [
            "ENTITY1_FR",
            "ENTITY2_FR"
        ]
    },
    {
        "id": "Confirmation",
        "title": "Confirmation the issues have been fixed",
        "message": "Please confirm the issues in your area have been fixed",
        "question": true,
        "severity": "ACTION",
        "recipients": [
            "ENTITY1_FR"
        ]
    }]
}

The severity field is optional. If not specified, the card will take a default severity value depending on 'question' field value : "ACTION" if question is true, "INFORMATION" if question is false.

The publishers field is optional. It is used to restrict the possible publishers of the message. If specified, the message option is available only for the specified entities. If not specified or empty, there are no restrictions.

The summary field is optional. If specified, with 'xxx' as the field's value, the summary shown in the feed will be 'Message received : xxx'. If not specified, the summary will simply be 'Message received'.
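The defaulting rules for the severity and summary fields described above can be sketched as follows (the feedSummary name is just for illustration):

```javascript
// Sketch of the defaulting rules for a messagesList entry: severity falls
// back on the question flag, and the feed summary is derived from the
// optional summary field.
function applyMessageDefaults(entry) {
    const severity = entry.severity ?? (entry.question ? 'ACTION' : 'INFORMATION');
    const feedSummary = entry.summary
        ? `Message received : ${entry.summary}`
        : 'Message received';
    return { ...entry, severity, feedSummary };
}
```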

16.4. Common attributes for user cards built-in templates

For each user card template, you can set:

  • The entity recipient list

  • The initial selected recipients

  • The entity recipient for information list

  • The initial selected recipients for information

  • The external recipients

For example :

<opfab-message-usercard
    entityRecipientList='[{"id": "ENTITY_FR", "levels": [0, 1]}, {"id": "ENTITY_IT"},{"id": "IT_SUPERVISOR_ENTITY"}]'
    initialSelectedRecipients='["ENTITY1_FR", "ENTITY2_FR", "ENTITY3_FR"]'
    entityRecipientForInformationList='[{"id": "ENTITY_FR", "levels": [0, 1]},{"id": "IT_SUPERVISOR_ENTITY"}]'
    initialSelectedRecipientsForInformation='["ENTITY4_FR"]'
    externalRecipients='["externalRecipient1", "externalRecipient2"]'>
</opfab-message-usercard>

17. Business Data

Using the API, it is now possible to store business data and then call it from the templates. This storage system is intended for light documents and should not be treated as a database.

17.1. Pushing data

The data pushed is expected to be in json format. The different methods are detailed in the API documentation. The default accepted file size limit is 100 MB; it can be changed in the nginx configuration.

17.2. Calling a resource

Once a resource has been stored, it is possible to call it from the template of a card. See in this template (usercard_incidentInProgress.handlebars) how the resource service is called with the method opfab.businessconfig.businessData.get().

This avoids writing long lists directly in the template's code and allows easy data reuse across templates.

18. Frontend API

The frontend API is intended to provide a way for templates to communicate with the core OperatorFabric code. It is divided as follows:

  • Users API : API to get information about users and entities

  • Navigate API : API to ask opfab to navigate to a specific view

  • CurrentCard API : API to get information regarding the current card

  • CurrentUserCard API : API to get information regarding the current user card (card in creation or edition)

  • Utils API : utilities functions

18.1. Users API

18.1.1. Entities

Entity object contains the following fields :

  • 'id' : id of the entity

  • 'name' : name of the entity

  • 'description' : description of the entity

  • 'parents' : list of parent entities

  • 'labels' : list of labels associated to the entity

18.1.1.1. Function getEntityName

Obtain the name of an entity :

const myEntityName = opfab.users.entities.getEntityName('myEntityId');
18.1.1.2. Function getEntity

Obtain a specific entity :

const myEntity = opfab.users.entities.getEntity('myEntityId');
18.1.1.3. Function getAllEntities

Obtain all entities as an array :

const entities = opfab.users.entities.getAllEntities();

18.2. Navigate API

18.2.1. Function redirectToBusinessMenu

It’s possible to redirect the user from a card to a business application declared in ui-menu.json.

This can be done by calling the following function from the template :

opfab.navigate.redirectToBusinessMenu(idMenu);
  • idMenu is the id of the menu entry defined in ui-menu.json

It is also possible to append a url extension (sub paths and/or parameters) to the url that will be called:

opfab.navigate.redirectToBusinessMenu('myMenu','/extension?param1=aParam&param2=anotherParam');

This can be useful to pass context from the card to the business application.
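For example, a template could build the extension string from card data before redirecting. This is a sketch; the 'incidents' menu id, the path and the parameter names are assumptions.

```javascript
// Sketch: build the url extension (sub path plus query string) passed as the
// second argument of redirectToBusinessMenu.
function buildMenuExtension(path, params) {
    const query = Object.entries(params)
        .map(([k, v]) => `${encodeURIComponent(k)}=${encodeURIComponent(v)}`)
        .join('&');
    return query ? `${path}?${query}` : path;
}

// Usage in a template (illustration):
// opfab.navigate.redirectToBusinessMenu('incidents',
//     buildMenuExtension('/detail', { incidentId: 'incident-42' }));
```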

18.2.2. Function showCardDetail

It is possible from the template to redirect the user to another "card detail" in the feed page:

 opfab.navigation.showCardDetail('cardId');

18.3. CurrentCard API

18.3.1. Function displayLoadingSpinner

Once the card is loaded, there may be some processing that is time consuming. In this case, it is possible to show a spinner using the methods displayLoadingSpinner and hideLoadingSpinner in the card template.

        opfab.currentCard.displayLoadingSpinner();
        // Time consuming code
        ....
        opfab.currentCard.hideLoadingSpinner();

18.3.2. Function hideLoadingSpinner

Hide the loading spinner previously opened with displayLoadingSpinner().

        opfab.currentCard.hideLoadingSpinner()

18.3.3. Function getChildCards

Returns an array of the child cards. The structure of a child card is the same as the structure of a classic card.

        const childCards = opfab.currentCard.getChildCards();

18.3.4. Function getCard()

You can access the currently selected card as a js object using the getCard() method.

        const currentCard = opfab.currentCard.getCard();

18.3.5. Function getDisplayContext

To adapt the template content to the display context it is possible to get from OperatorFabric the page context where the template will be rendered by calling the javascript function getDisplayContext(). The function returns a string with one of the following values :

  • 'realtime' : realtime page context (feed, monitoring)

  • 'archive' : archive page context

  • 'preview': preview context (user card)

    const displayContext =  opfab.currentCard.getDisplayContext();

18.3.6. Function getEntitiesAllowedToRespond

If inside your template, you want to get the ids of the entities allowed to send a response, you can call the method getEntitiesAllowedToRespond. This method returns an array containing the ids.

    const entities = opfab.currentCard.getEntitiesAllowedToRespond();

18.3.7. Function getEntityUsedForUserResponse

If inside your template, you want to get the id of the entity used by the user to send a response, you can call the method getEntityUsedForUserResponse. This method is deprecated and you should favor the use of the method getEntitiesUsableForUserResponse.

    const entity = opfab.currentCard.getEntityUsedForUserResponse()

18.3.8. Function getEntitiesUsableForUserResponse

If inside your template, you want to get the ids of the entities the user can answer on behalf of, you can call the method getEntitiesUsableForUserResponse. This method returns an array containing the entities' ids.

    const entities = opfab.currentCard.getEntitiesUsableForUserResponse()

18.3.9. Function isResponseLocked

To know whether the card is locked (i.e. the user cannot respond unless they unlock the card):

    const isResponseLocked =  opfab.currentCard.isResponseLocked()

18.3.10. Function isUserAllowedToRespond

The template can know if the current user has the permission to send a response to the current card by calling the isUserAllowedToRespond() function. An example of usage can be found in the file src/test/resources/bundles/conferenceAndITIncidentExample/template/incidentInProgress.handlebars.

    const isUserAllowed = opfab.currentCard.isUserAllowedToRespond();

18.3.11. Function isUserMemberOfAnEntityRequiredToRespond

The template can know if the current user is member of an Entity required to respond by calling the isUserMemberOfAnEntityRequiredToRespond function. An example of usage can be found in the file src/test/resources/bundles/defaultProcess_V1/template/question.handlebars.

    const isUserRequired = opfab.currentCard.isUserMemberOfAnEntityRequiredToRespond()

18.3.12. Function listenToResponseLock

Register a function to be informed when the card is locked (i.e. the user has responded to the current card).

    opfab.currentCard.listenToResponseLock( () => {// do some stuff});

18.3.13. Function listenToResponseUnlock

Register a function to be informed when the card is unlocked (i.e. the user has clicked the modify button to prepare a new response).

    opfab.currentCard.listenToResponseUnlock( () => { // do some stuff });

18.3.14. Function listenToChildCards

Register a function to receive the child cards on card loading and when the childCards list changes

opfab.currentCard.listenToChildCards( (childCards) => { // process child cards });

18.3.15. Function listenToLttdExpired

If the card has a last time to decide (lttd) configured, when the time is expired this information can be received by the template by registering a listener.

    opfab.currentCard.listenToLttdExpired( () => { // do some stuff });

18.3.16. Function listenToStyleChange

Card template can be informed when switching day/night mode by registering a listener as follow :

    opfab.currentCard.listenToStyleChange( () => { // do some stuff });

It can be used by a template to refresh styles and reload embedded charts.

18.3.17. Function listenToScreenSize

To adapt the template content to the screen size, it is possible to receive from OperatorFabric information about the size of the window where the template will be rendered. To receive screen size information, you need to implement a listener function which will receive as input a string parameter with one of the following values:

  • 'md' : medium size window

  • 'lg' : large size window

        opfab.currentCard.listenToScreenSize( (screenSize) => {
            if (screenSize == 'lg') // do some stuff
            else // do some other stuff
        })

18.3.18. Function listenToTemplateRenderingComplete

It is possible to be informed when opfab has finished all tasks regarding template rendering by registering a listener function. The function will be called after the other listeners have been called (applyChildCard, lockAnswer, lttdExpired and screenSize).

It can be used by a template to launch some processing when loading is complete

        opfab.currentCard.listenToTemplateRenderingComplete(() => {// do some stuff})

18.3.19. Function registerFunctionToGetUserResponse

Register the template function to call to get the user response. This function will be called by opfab when the user clicks the "send response" button. More explanation can be found in the response card chapter.

For example :

        opfab.currentCard.registerFunctionToGetUserResponse ( () =>
          {
                const question = document.getElementById('question').value;

                if (question.length <1) return {
                    valid: false,
                    errorMsg : "You must provide a question"
                }

                const card = {
                    summary: { key: "question.summary" },
                    title: { key: "question.title" },
                    severity: "ACTION",
                    data: {
                        question: question,
                    }
                };
                return {
                    valid: true,
                    card: card,
                    viewCardInCalendar: false
                };

            })

18.4. CurrentUserCard API

18.4.1. Function getEditionMode

The template can know if the user is creating a new card or editing an existing card by calling the opfab.currentUserCard.getEditionMode() function. The function will return one of the following values:

  • 'CREATE'

  • 'EDITION'

  • 'COPY'

        const mode = opfab.currentUserCard.getEditionMode();

18.4.2. Function getEndDate

The template can know the current endDate of the card in creation or edition by calling the opfab.currentUserCard.getEndDate() function. The function will return a number corresponding to the endDate as epoch date value.

        const endDate = opfab.currentUserCard.getEndDate();

18.4.3. Function getExpirationDate

The template can know the current expirationDate of the card in creation or edition by calling the opfab.currentUserCard.getExpirationDate() function. The function will return a number corresponding to the expirationDate as epoch date value.

        const expirationDate = opfab.currentUserCard.getExpirationDate();

18.4.4. Function getLttd

The template can know the current lttd of the card in creation or edition by calling the opfab.currentUserCard.getLttd() function. The function will return a number corresponding to the lttd as epoch date value.

        const lttd = opfab.currentUserCard.getLttd();

18.4.5. Function getProcessId

The template can know the process id of the card by calling the opfab.currentUserCard.getProcessId() function. The function will return a string corresponding to the process id.

        const id = opfab.currentUserCard.getProcessId();

18.4.6. Function getSelectedEntityRecipients

The template can know the list of entities selected by the user as recipients of the card by calling the opfab.currentUserCard.getSelectedEntityRecipients() function. The function will return an array of entity ids.

        const recipients = opfab.currentUserCard.getSelectedEntityRecipients();

18.4.7. Function getSelectedEntityForInformationRecipients

The template can know the list of entities selected by the user as recipients for information by calling the opfab.currentUserCard.getSelectedEntityForInformationRecipients() function. The function will return an array of entity ids.

        const recipients = opfab.currentUserCard.getSelectedEntityForInformationRecipients();

18.4.8. Function getStartDate

The template can know the current startDate of the card in creation or edition by calling the opfab.currentUserCard.getStartDate() function. The function will return a number corresponding to the startDate as epoch date value.

        const startDate = opfab.currentUserCard.getStartDate();

18.4.9. Function getState

The template can know the state of the card by calling the opfab.currentUserCard.getState() function. The function will return a string corresponding to the state.

        const state = opfab.currentUserCard.getState();

18.4.10. Function getUserEntityChildCard

When editing a user card, the template can get the response sent by the entity of the current user by calling the opfab.currentUserCard.getUserEntityChildCard() function. The function will return the response child card sent by current user entity or null if there is no response.

        const card = opfab.currentUserCard.getUserEntityChildCard();
The method returns only one child card, so it does not support the case where the user is connected to more than one activity area entitled to send the card. In that case, if there is more than one child card, only one of them is returned.

18.4.11. Function listenToEntityUsedForSendingCard

The template can receive the emitter entity of the card by registering a listener function. The function will be called by OperatorFabric after loading the template and every time the card emitter changes (if the user can choose from multiple entities).

        opfab.currentUserCard.listenToEntityUsedForSendingCard((entityId) => {// do some stuff with the entity id})

18.4.12. Function registerFunctionToGetSpecificCardInformation

Register the template function to call to get user card specific information. This function will be called by opfab when the user clicks the "preview" button. More explanation can be found in the user card chapter.

For example:

        opfab.currentUserCard.registerFunctionToGetSpecificCardInformation( () => {
            const message = document.getElementById('message').value;
            const card = {
                summary: { key: "message.summary" },
                title: { key: "message.title" },
                data: { message: message }
            };
            if (message.length < 1) return { valid: false, errorMsg: 'You must provide a message' };
            return {
                valid: true,
                card: card
            };
        });

18.4.13. Function setDropdownEntityRecipientList

When sending a user card, by default it is possible to choose the recipients from all the available entities. To limit the list of available recipients it is possible to configure the list of possible recipients via javascript in the user template.

For example :

    opfab.currentUserCard.setDropdownEntityRecipientList([
            {"id": "ENTITY_FR", "levels": [0,1]},
            {"id": "IT_SUPERVISOR_ENTITY"}
        ]);

In this example the list of available recipients will contain: "ENTITY_FR" (level 0), all the first level children of "ENTITY_FR" (level 1) and "IT_SUPERVISOR_ENTITY".

18.4.14. Function setDropdownEntityRecipientForInformationList

When sending a user card, by default it is possible to choose the recipients for information from all the available entities. To limit the list of available recipients it is possible to configure the list of possible recipients via javascript in the user template.

For example :

    opfab.currentUserCard.setDropdownEntityRecipientForInformationList([
            {"id": "ENTITY_FR", "levels": [0,1]},
            {"id": "IT_SUPERVISOR_ENTITY"}
        ]);

In this example the list of available recipients for information will contain: "ENTITY_FR" (level 0), all the first level children of "ENTITY_FR" (level 1) and "IT_SUPERVISOR_ENTITY".

18.4.15. Function setInitialEndDate

From the template it is possible to set the initial value for endDate by calling opfab.currentUserCard.setInitialEndDate(endDate). The endDate is a number representing an epoch date value.

        const endDate = new Date().valueOf() + 10000;
        opfab.currentUserCard.setInitialEndDate(endDate);

18.4.16. Function setInitialExpirationDate

From the template it is possible to set the initial value for expirationDate by calling opfab.currentUserCard.setInitialExpirationDate(expirationDate). The expirationDate is a number representing an epoch date value.

        const expirationDate = new Date().valueOf() + 10000;
        opfab.currentUserCard.setInitialExpirationDate(expirationDate);

18.4.17. Function setInitialLttd

From the template it is possible to set the initial value for lttd by calling opfab.currentUserCard.setInitialLttd(lttd). The lttd is a number representing an epoch date value.

        const lttd = new Date().valueOf() + 10000;
        opfab.currentUserCard.setInitialLttd(lttd);

18.4.18. Function setInitialSelectedRecipients

It is possible to configure the list of initially selected recipients via javascript in the user template by calling the setInitialSelectedRecipients method. The method takes as input the list of Entity ids to be preselected. The method works only at template loading time; it cannot be used to modify the selected recipients after the template is loaded or in card edition mode.

For example :

    opfab.currentUserCard.setInitialSelectedRecipients([
            "ENTITY_FR",
            "IT_SUPERVISOR_ENTITY"
        ]);

In this example the dropdown list of available recipients will have "ENTITY_FR" and "IT_SUPERVISOR_ENTITY" preselected. The user can still change the selected recipients.

18.4.19. Function setInitialSelectedRecipientsForInformation

It is possible to configure the list of initially selected recipients for information via javascript in the user template by calling the setInitialSelectedRecipientsForInformation method. The method takes as input the list of Entity ids to be preselected. The method works only at template loading time; it cannot be used to modify the selected recipients after the template is loaded or in card edition mode.

For example :

    opfab.currentUserCard.setInitialSelectedRecipientsForInformation([
            "ENTITY_FR",
            "IT_SUPERVISOR_ENTITY"
        ]);

In this example the dropdown list of available recipients will have "ENTITY_FR" and "IT_SUPERVISOR_ENTITY" preselected. The user can still change the selected recipients for information.

18.4.20. Function setSelectedRecipients

It is possible to configure the list of selected recipients via javascript in the user template by calling the setSelectedRecipients method. The method takes as input the list of Entity ids to be selected. This method can be called at any time, including in edition mode.

For example :

    opfab.currentUserCard.setSelectedRecipients([
            "ENTITY_FR",
            "IT_SUPERVISOR_ENTITY"
        ]);

In this example the dropdown list of available recipients will have "ENTITY_FR" and "IT_SUPERVISOR_ENTITY" selected. The user can still change the selected recipients.

18.4.21. Function setSelectedRecipientsForInformation

It is possible to configure the list of selected recipients for information via javascript in the user template by calling the setSelectedRecipientsForInformation method. The method takes as input the list of Entity ids to be selected. This method can be called at any time and also in edition mode.

For example :

    opfab.currentUserCard.setSelectedRecipientsForInformation([
            "ENTITY_FR",
            "IT_SUPERVISOR_ENTITY"
        ]);

In this example the dropdown list of available recipients will have "ENTITY_FR" and "IT_SUPERVISOR_ENTITY" selected. The user can still change the selected recipients for information.

18.4.22. Function setInitialSeverity

From the template it is possible to set the initial value for the card severity choice by calling the function setInitialSeverity(severity).

Allowed severity values are:

  • 'ALARM'

  • 'ACTION'

  • 'INFORMATION'

  • 'COMPLIANT'

       opfab.currentUserCard.setInitialSeverity('ACTION');

18.4.23. Function setInitialStartDate

From the template it is possible to set the initial value for startDate by calling opfab.currentUserCard.setInitialStartDate(startDate). The startDate is a number representing an epoch date value.

        const startDate = new Date().valueOf();
        opfab.currentUserCard.setInitialStartDate(startDate);

18.5. Utils API

18.5.1. Function escapeHtml

To avoid script injection, OperatorFabric provides a utility function 'opfab.utils.escapeHtml()' to sanitize input content by escaping HTML specific characters. For example:

<input id="input-message" type="text" name="message">
<button onClick="showMessage()">Show message</button>

<span id="safe-message"></span>


<script>
  function showMessage() {
    let msg = document.getElementById("input-message");
    document.getElementById("safe-message").innerHTML = opfab.utils.escapeHtml(msg.value);
  }
</script>

19. Archived Cards

19.1. Key concepts

Every time a card is published, in addition to being delivered to the users and persisted as a "current" card in MongoDB, it is also immediately persisted in the archived cards.

Archived cards are similar in structure to current cards, but they are managed differently. Current cards are uniquely identified by their id (made up of the publisher and the process id), because if a new card is published with the same id as an existing card, it replaces it in the card collection. This way, the current card reflects the current state of a process instance. In the archived cards collection however, both cards are kept, so that the archived cards show all the states that a given process instance went through.

19.2. Archives screen in the UI

The Archives screen in the UI allows the users to query these archives with different filters. The layout of this screen is very similar to the Feed screen: the results are displayed in a (paginated) card list, and the user can display the associated card details by clicking a card in the list.

The results of these queries are limited to cards that the user is allowed to see, either:

  • because this user is a direct recipient of the card,

  • because he belongs to a group (or entity) that is a recipient,

  • or because he belongs to a group that has the right to receive the card (via the definition of perimeters)

If a card is sent to an entity and a group, then this user must be part of both the group and the entity.

Currently, child cards are not shown in the Archives, only the parent card is shown.
If the user is a member of a group that has the VIEW_ALL_ARCHIVED_CARDS permission, he can see all archived cards, whether or not he is among the recipients and whether or not he has the right to receive the card. If the user is a member of a group that has the VIEW_ALL_ARCHIVED_CARDS_FOR_USER_PERIMETERS permission, he can see all archived cards that he has the right to receive, whether or not he is among the recipients.

20. Monitoring

The monitoring screen is a realtime view of processes based on current cards received by the user (i.e. the last version of cards visible by the user). It can be seen as another view of the feed.

Not all cards are visible; it depends on the business process they are part of. For a card to be visible in this screen, the parameter uiVisibility.monitoring must be set to true in the config.json file of its process.
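
The relevant fragment of a process config.json could look like this (a minimal sketch; only the uiVisibility.monitoring parameter comes from the text above, the id and version fields are illustrative placeholders):

```json
{
  "id": "defaultProcess",
  "version": "1",
  "uiVisibility": {
    "monitoring": true
  }
}
```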

20.1. Export configuration

An Excel export function is available in the monitoring screen, and the content of the export can be configured. To do so, a json file describing the expected output can be sent to the businessconfig service through the /businessconfig/monitoring endpoint.

In opfab git repository, you can find in directory src/test/resources/monitoringConfig :

  • a script to load a monitoring configuration loadMonitoringConfig.sh

  • an example of configuration in monitoringConfig.json (for the response fields to be filled, you need to respond to a card question in the messageOrQuestionExample process)

A description of the structure of the configuration can be found in the businessconfig api documentation

21. Process monitoring

This feature is experimental

The process monitoring screen has the following similarities with the monitoring screen :

  • it is a realtime view of processes based on current cards received by the user (i.e. the last version of cards visible by the user)

  • not all the cards are visible, it depends on the business process they are part of. For a card to be visible on this screen, the parameter uiVisibility.processMonitoring must be set to true in the config.json file of its process.

This screen provides the following features :

  • the columns displayed on this screen can be configured. To do that, you have to define the processMonitoring field in the web-ui.json configuration file. This object is a list of the fields you want to display in the table.

  • depending on the permissions of the groups to which the user belongs, he can also see cards of which he is not a recipient.

Here is an example of processMonitoring configuration :

 "processMonitoring": [
    {
      "field": "startDate",
      "type": "date",
      "colName": "Start Date",
      "size": "200"
    },
    {
      "field": "titleTranslated",
      "colName": "Title",
      "size": "250"
    },
    {
      "field": "data.message",
      "colName": "My field",
      "size": "700"
    },
    {
      "field": "summaryTranslated",
      "colName": "Summary",
      "size": "250"
    },
    {
      "field": "data.error",
      "colName": "My second field",
      "size": "100"
    }
  ]

Here is the description of the fields:

  • "field" : the name of the field in the card (or in the database) you want to display

  • "type" : if you want to display a date/time field, you must set this field to "date" for field formatting

  • "colName" : name of the column on the UI

  • "size" : size of the column on the UI, proportionally to each other

22. User management

The User service manages users, groups, entities and perimeters (linked to groups).

Users

represent the account information of a person intended to receive cards in the OperatorFabric instance.

Entities
  • represent sets of users intended to receive some cards collectively.

  • can be used to model organizations, for example: control center, company, department…​

  • can be used to handle rights on card reception in OperatorFabric.

  • can be part of another entity (or even several entities). This relationship is modeled using the "parent entity" property.

Groups
  • represent sets of users intended to receive some cards collectively.

  • have a set of perimeters defining the rights for card reception in OperatorFabric (group type 'PERMISSION').

  • can be used to model roles in organizations (group type 'ROLE'), for example: supervisor, dispatcher…​

The user defined here is an internal representation of the individual card recipient in OperatorFabric; authentication is left to a specific external OAuth2 service.
In the following commands the $token is an authentication token currently valid for the OAuth2 service used by the current OperatorFabric system.

22.1. Users, groups, entities and perimeters

User service manages users, groups, entities and perimeters.

22.1.1. Users

Users are the individuals, mainly physical persons, who can log in to OperatorFabric.

Access to this service has to be authorized, in the OAuth2 service used by the current OperatorFabric instance, at least to access user information and to manage users. The membership of groups and entities is stored in the user information.

User logins must be lowercase; if not, they will be converted to lowercase before being saved to the database.
Resource identifiers such as login, group id, entity id and perimeter id must only contain the following characters: letters, _, - or digits.
By default, a user cannot have several sessions at the same time. If you want to enable it, set operatorfabric.checkIfUserIsAlreadyConnected to false in the configuration file of the card consultation service ( cards-consultation-XXX.yml ). However, this is not recommended as it may cause synchronization problems between sessions using the same login.
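
For example, a sketch of the relevant fragment of the card consultation service configuration file (assuming the property nests under the operatorfabric prefix, as its dotted name suggests):

```yaml
operatorfabric:
  checkIfUserIsAlreadyConnected: false
```
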
22.1.1.1. Automated user creation

If a user exists in the authentication service but does not yet exist in the OperatorFabric instance, the user is automatically created in the system, without any attached group or entity, the first time he authenticates and connects to the instance. The groups, entities and perimeters are then managed manually by the administrator. More details about automated user creation here

22.1.2. Entities

The notion of entity is loose and can be used to model organizational structures (examples: control center, company, department…​). Entities are used to send cards to several users without naming each recipient individually. The information about membership of an entity is stored in the user’s data (entities field of the user object). In Entity objects, the parents property (array) expresses the fact that this entity is part of one or several other entities. This feature allows cards to be sent to a group of entities without having to repeat the detailed list of entities for each card. The roles attribute defines the purpose of an entity: it determines whether an entity can send or receive cards and whether it is part of an activity area. This can be useful for a group of entities where only the child entities should be used as card publishers while the parent is just a logical group usable as a card recipient. The labels property (array) allows string labels to be associated with an Entity object.

Examples using entities can be found here .

22.1.3. Permissions

Permissions in OperatorFabric are used to configure user permissions in the UI and to control access to the APIs. The information about permissions is stored in the data of the user’s groups (Permissions field of the group object). The available permissions are:

  • ADMIN: Administrator permission

  • ADMIN_BUSINESS_PROCESS: Permission to administer process bundles (add, update, delete bundles)

  • VIEW_ALL_ARCHIVED_CARDS: Permission to access all archived cards

  • VIEW_ALL_ARCHIVED_CARDS_FOR_USER_PERIMETERS: Permission to access all archived cards which are in the perimeter of the user

  • READONLY: Readonly permission. The user cannot send user cards, cannot respond to a card, and when he acknowledges a card he does not acknowledge it for his entities.

22.1.4. Groups

The notion of group is loose and can be used to simulate business roles in OperatorFabric (examples: supervisor, dispatcher…​) by setting the group type to 'ROLE', or to define permissions on processes/states by setting the group type to 'PERMISSION'. Groups are used to send cards to several users without naming each recipient individually. The information about membership of a group is stored in the user information. A group contains a list of perimeters which define the rights of reception/writing for a process/state couple. The rules used to send cards are described in the recipients section.

22.1.5. Perimeters

Perimeters are used to define the rights for reading/writing cards. A perimeter is composed of a unique identifier, a process name and a list of state/right couples. Possible rights for receiving/writing cards are:

  • Receive: the right to receive a card

  • Write: the right to write a card, that is to say to respond to a card or create a new one

  • ReceiveAndWrite: the right to receive and write a card
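
A perimeter definition could look like this (a minimal sketch assuming the stateRights structure of the Users API; the identifier and process/state names are illustrative, echoing the getting-started example):

```json
{
  "id": "examplePerimeter",
  "process": "defaultProcess",
  "stateRights": [
    {
      "state": "messageState",
      "right": "ReceiveAndWrite"
    }
  ]
}
```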

22.1.6. Alternative way to manage groups and entities

The standard way to handle groups and entities in an OperatorFabric instance is through the user information. Alternatively, groups and entities can be managed through the authentication token, in which case they are defined by the administrator of the authentication service. See here for more details on using this feature.

22.2. Currently connected users

The endpoint /cards/connections gives the list of connected users in real time. It is accessible to all users.

22.3. Real time users

OperatorFabric allows you to see which entities/groups are logged in and not logged in. To have this information, you must upload a configuration file, using the endpoint /businessconfig/realtimescreens, via a POST request. In this file, you can configure several screens, each one containing the list of entities/groups you want to see.

This interface is accessible via the user menu (top right of the screen).

Here is an example of the configuration file :

{
  "realTimeScreens": [
    {
      "screenName": "All Control Centers",
      "screenColumns": [
        {
          "entitiesGroups": ["ENTITY_FR","ENTITY_IT","ENTITY_NL"]
        },
        {
          "entitiesGroups": ["EUROPEAN_SUPERVISION_CENTERS"]
        }
      ]
    },
    {
      "screenName": "French Control Centers",
      "screenColumns": [
        {
          "entitiesGroups": ["ENTITY_FR"]
        },
        {
          "entitiesGroups": ["EUROPEAN_SUPERVISION_CENTERS"]
        }
      ]
    },
    {
      "screenName": "Italian Control Centers",
      "screenColumns": [
        {
          "entitiesGroups": ["ENTITY_IT"]
        },
        {
          "entitiesGroups": ["EUROPEAN_SUPERVISION_CENTERS"]
        }
      ]
    },
    {
      "screenName": "Dutch Control Centers",
      "screenColumns": [
        {
          "entitiesGroups": ["ENTITY_NL"]
        },
        {
          "entitiesGroups": ["EUROPEAN_SUPERVISION_CENTERS"]
        }
      ]
    }
  ]
}

With this configuration file, 4 different screens will be available: "All Control Centers", "French Control Centers", "Italian Control Centers" and "Dutch Control Centers". To associate the entities under each entity group, the entity group must be declared as a parent of those entities.

For example, in the UI, "All Control Centers" will look like :

Real Time Screens screenshot

22.4. Activity area

OperatorFabric allows you to connect to or disconnect from one or several entities. By default, the user is connected to all the entities to which he belongs. When disconnecting from an entity, the user remains a member of this entity, but he no longer has access to the cards intended for this entity until he reconnects to it.

If set visible in ui-menu.json, this interface is accessible via the user menu (top right of the screen).

The choice of activity area may be done during the user login phase if you set selectActivityAreaOnLogin to true in web-ui.json.
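
A minimal web-ui.json sketch for this setting (only the relevant key is shown; it sits alongside the other UI parameters of your instance):

```json
{
  "selectActivityAreaOnLogin": true
}
```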

If the user is a member of one (or more) real-time group(s), then he will see on this screen the currently connected members of these groups.

22.5. User actions logs

OperatorFabric allows you to view the most relevant user actions:

  • OPEN_SUBSCRIPTION

  • CLOSE_SUBSCRIPTION

  • ACK_CARD

  • UNACK_CARD

  • READ_CARD

  • UNREAD_CARD

  • SEND_CARD

  • SEND_RESPONSE

For each action the following information is available:

  • date: date and time of the action

  • action: type of action

  • login: username of the user who performed the action

  • entities: list of user entities

  • cardUid: card Uid

  • comment: textual information

By default, logs older than 61 days are automatically deleted.

If set visible in ui-menu.json and user is admin, this interface is accessible via the user menu (top right of the screen).

23. UI Customization

23.1. UI configuration

To customize the UI, declare specific parameters in the web-ui.json file as listed here

The ui-menu.json file is used:

  • To manage the visibility of core OperatorFabric menus (feed, monitoring, etc.)

  • To declare specific business menus to be displayed in the navigation bar of OperatorFabric

This file contains the following fields:

  • navigationBar : contains the navigation bar mixing core menus and custom menus

  • topRightIconMenus : contains only the two menu icons agenda and usercard on the top right of the screen

  • topRightMenus : contains core menus you want to see when you click the user, on the top right of the screen

  • locales : contains the translations for the custom menus

  • showDropdownMenuEvenIfOnlyOneEntry : indicates if the name of the menu and a dropdown menu are displayed if it contains only one submenu entry. If false, the submenu entry will be displayed directly in the navigation bar. This field is optional, the default value is false.

23.2.1. Core menus

Core menus are objects containing the following fields :

  • opfabCoreMenuId: Id of the core menu (string)

  • visible: Whether this menu should be visible for this OperatorFabric instance (boolean). This field is not needed in the navigationBar field, but it is required in topRightIconMenus and topRightMenus.

  • showOnlyForGroups : List of groups for which this menu should be visible (array, optional)

Menu visibility summary
  • For a core menu with "visible": true:

    • If showOnlyForGroups is not present, null or an empty array: the menu is visible to all users.

    • If showOnlyForGroups is present with a non-empty array as value: the menu is visible only to users from the listed groups.
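
For example, a topRightMenus entry restricted to one group could be sketched as follows (the group name Dispatcher is illustrative):

```json
{
  "opfabCoreMenuId": "realtimeusers",
  "visible": true,
  "showOnlyForGroups": ["Dispatcher"]
}
```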

Table 1. List of core menus and corresponding ids

Navigation bar:

  • Feed: feed

  • Archives: archives

  • Monitoring: monitoring

  • Dashboard: dashboard

  • Logging: logging

Navigation bar (icon):

  • User card: usercard

  • Calendar: calendar

Top-right menu:

  • Administration: admin

  • External devices configuration: externaldevicesconfiguration

  • Real time users: realtimeusers

  • Settings: settings

  • Activity area: activityarea

  • Notification Configuration: feedconfiguration

  • User action logs: useractionlogs

  • About: about

  • Night/Day toggle: nightdaymode

  • Change password: changepassword

  • Logout: logout

See /config/docker/ui-menu.json for an example containing all core menus.

If you decide not to make the night/day toggle visible (globally or for certain users), you should consider setting the settings.styleWhenNightDayModeDesactivated property in web-ui.json to specify which mode should be used.
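
A web-ui.json sketch for forcing a mode when the toggle is hidden (the value NIGHT is illustrative; check the UI configuration reference for the allowed values):

```json
{
  "settings": {
    "styleWhenNightDayModeDesactivated": "NIGHT"
  }
}
```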

23.2.2. Custom business menus

A menu can target a link directly or give access to several sub-menus when clicked. Those sub-menus can target a link or an opfab core menu. A targeted link can be opened in an iframe or in a new tab.

Menus support i18n following the OperatorFabric i18n rules. The ui-menu.json file directly contains the i18n dictionary for the menus.

In the case of a single-entry menu, the navigation bar displays the localized label of the menu entry. In this case, the label declared at the root level of the menu is useless and can be omitted (see example below).

A single menu or a menu with sub-menus has at least the attributes id and entries. The entries attribute is an array of menu entries. It is possible to restrict the visibility of a menu entry to one or more user groups by setting the showOnlyForGroups parameter. Note that menus with sub-menus need a label declaring an i18n key.

A menu entry can target a core menu id or a link. If it targets a link, it must declare the attributes listed below:

  • customMenuId: identifier of the menu entry in the UI;

  • url: url of the page to open in the browser;

  • label: an i18n key used to localize the entry in the UI;

  • linkType: Defines how to display business menu links in the navigation bar and how to open them. Possible values:

    • TAB: displays only a text link. Clicking it opens the link in a new tab.

    • IFRAME: displays only a text link. Clicking it opens the link in an iframe in the main content zone below the navigation bar.

    • BOTH: default value. Displays a text link plus a little arrow icon. Clicking the text link opens the link in an iframe, while clicking the icon opens it in a new tab.

  • showOnlyForGroups: Defines the list of user groups entitled to see the menu entry; if not defined or empty, the entry is visible to every user.

In the following example, the configuration file declares two additional business menus. The first has only one entry, which targets a link. The second has three entries: one targets the core menu Dashboard, and the two others target links. The example also declares the two top right icons (usercard and calendar), the composition of the top right menu, and the i18n translations in English and French.

Example
{
  "navigationBar": [
    {
      "opfabCoreMenuId": "feed"
    },
    {
      "opfabCoreMenuId": "archives"
    },
    {
      "id": "menu1",
      "entries": [
        {
          "customMenuId": "uid_test_0",
          "url": "https://opfab.github.io/",
          "label": "entry.single",
          "linkType": "BOTH"
        }
      ]
    },
    {
      "id": "menu2",
      "label": "title.multi",
      "entries": [
        {
          "opfabCoreMenuId": "dashboard"
        },
        {
          "customMenuId": "uid_test_1",
          "url": "https://opfab.github.io/",
          "label": "entry.entry1",
          "linkType": "BOTH",
          "showOnlyForGroups": "ReadOnly,Dispatcher"
        },
        {
          "customMenuId": "uid_test_2",
          "url": "https://www.wikipedia.org/",
          "label": "entry.entry2",
          "linkType": "BOTH",
          "showOnlyForGroups": "Planner"
        }
      ]
    }
  ],
  "topRightIconMenus": [
    {
      "opfabCoreMenuId": "usercard",
      "visible": true
    },
    {
      "opfabCoreMenuId": "calendar",
      "visible": true
    }
  ],
  "topRightMenus": [
    {
      "opfabCoreMenuId": "admin",
      "visible": true,
      "showOnlyForGroups": ["ADMIN"]
    },
    {
      "opfabCoreMenuId": "settings",
      "visible": true
    },
    {
      "opfabCoreMenuId": "activityarea",
      "visible": true
    },
    {
      "opfabCoreMenuId": "feedconfiguration",
      "visible": true
    },
    {
      "opfabCoreMenuId": "realtimeusers",
      "visible": true
    },
    {
      "opfabCoreMenuId": "externaldevicesconfiguration",
      "visible": true,
      "showOnlyForGroups": ["ADMIN"]
    },
    {
      "opfabCoreMenuId": "useractionlogs",
      "visible": true,
      "showOnlyForGroups": ["ADMIN", "Supervisor"]
    },
    {
      "opfabCoreMenuId": "nightdaymode",
      "visible": true
    },
    {
      "opfabCoreMenuId": "about",
      "visible": true
    },
    {
      "opfabCoreMenuId": "changepassword",
      "visible": true
    },
    {
      "opfabCoreMenuId": "logout",
      "visible": true
    }
  ],
  "locales": [
    {
      "language": "en",
      "i18n": {
        "entry": {
          "single": "Single menu entry",
          "entry1": "First menu entry",
          "entry2": "Second menu entry"
        },
        "title": {
          "multi": "Second menu"
        }
      }
    },
    {
      "language": "fr",
      "i18n": {
        "entry": {
          "single": "Premier élément",
          "entry1": "Premier élément",
          "entry2": "Deuxième élément"
        },
        "title": {
          "multi": "Deuxième menu"
        }
      }
    }
  ]
}
For iframes opened from a menu, the associated request carries an extra parameter named opfab_theme, whose value matches the current theme: DAY or NIGHT. For example: mysite.com/index.htm?opfab_theme=NIGHT. Switching themes triggers a reload of open iframes.

24. Supervisor

The supervisor service can monitor user connections and card acknowledgements, and send alert cards when the configured conditions are met. To configure the module, see config/docker/node-services.yml

24.1. Entities connection

It is possible to configure the supervisor module to monitor user connections to Opfab, ensuring that at least one user is connected for each configured entity. When no user of an entity connects to Opfab for a specified period, the supervisor module sends an alert card to the configured recipient entity.

The database stores a comprehensive list of supervised entities and their corresponding supervisors. The 'Supervised Entities' admin page offers functionalities to view, modify, add, or delete these entities. The supervisor service configuration section provides an option to define a default configuration for the initial supervised entities. This default configuration applies only when the database’s supervised entities collection is empty; in that case, the defined configuration is saved into the database for subsequent use.

24.2. Card acknowledgement

It is possible to configure the supervisor module to monitor card acknowledgements for specific processes/states. When a card of a configured process and state remains unacknowledged for a configured period, the supervisor module sends an alert to the card publisher with the list of recipients that did not acknowledge the card.

25. External Devices Service

The external devices service is an optional service allowing OperatorFabric to relay notifications to external physical devices.

25.1. Specification

The aim of this section is to describe the initial business need that led to the creation of this service and what use cases are currently supported.

OperatorFabric users already have the option to be notified of a card’s arrival by a sound played by their browser. There is a different sound for each severity, and they are configurable for a given instance. The users can decide to opt out of sound notifications for certain severities. We also added an option for the sounds to be repeated until the operator interacted with the application (i.e. clicked anywhere on the page) so as to make them harder to miss.

This can be enough for some use cases, but it was not ideal for operators working in control rooms. Indeed, control rooms each have an external sound system that is shared for the whole room, and existing applications currently trigger sound alerts on these sound systems rather than on each operator’s computer.

This has several advantages:

  • The sound can be heard by all operators

  • It can be heard even if the operator is not at their desk

  • The sound system can warn that the connection with an application has been lost if it hasn’t received a "watchdog" signal for a given period of time.

As a result, external devices support in OperatorFabric aims to allow sound notifications to be passed on to external sound systems. The sound system on which the sound will be played can depend on the user receiving the notification. For example, all operators working in control room A will have their sounds played on the control room’s central sound system, while operators from control room B will use theirs. Each user can have several external devices configured.

So far, only sound systems using the Modbus protocol are supported, but we would like to be able to support other protocols (and ultimately allow people to supply their own drivers) in the future.

25.2. Implementation

25.2.1. Architecture

Given the use case described above, a new service was necessary to act as a link between the OperatorFabric UI and the external devices. This service is in charge of:

  • Managing the configuration relative to external devices (see below for details)

  • Processing requests from the UI (e.g. "play the sound for ALARM for user operator1_fr") and translating them into requests to the appropriate device in its supported protocol, based on the above configuration

  • Allowing the pool of devices to be managed (connection, disconnection)

This translates into three APIs.

Architecture diagram

Here is what happens when user operator1_fr receives a card with severity ALARM:

  1. In the Angular code, the reception of the card triggers a sound notification.

  2. If the external devices feature is enabled and the user has chosen to play sounds on external devices (instead of the browser), the UI code sends a POST request on the external-devices/notifications endpoint on the NGINX gateway, with the following payload:

    {
        "opfabSignalId": "ALARM"
    }
  3. The NGINX server, acting as a gateway, forwards it to the /notifications endpoint on the External Devices service.

  4. The External Devices service queries the configuration repositories to find out which external devices are configured for operator1_fr, how to connect to them and what signal "ALARM" translates to on these particular devices.

  5. It then creates the appropriate connections if they don’t exist yet, and sends the signal.

25.2.2. Configuration

The following elements need to be configurable:

  1. For each user, which devices to use:

    userConfiguration
    {
      "userLogin": "operator1_fr",
      "externalDeviceIds": ["CDS_1"]
    }
  2. How to connect to a given external device (host, port)

    deviceConfiguration
    {
      "id": "CDS_1",
      "host": "localhost",
      "port": 4300,
      "signalMappingId": "default_CDS_mapping",
      "isEnabled": true
    }

    The "isEnabled" field is optional and allows the user to activate or deactivate an external device. When a device is disabled (isEnabled = false), OperatorFabric will not send any signal or watchdog to the device.

    The default value of the "isEnabled" field is true.

  3. How to map an OperatorFabric signal key [1] to a signal (sound, light) on the external system

    signalMapping
    {
      "id": "default_CDS_mapping",
      "supportedSignals": {
        "ALARM": 1,
        "ACTION": 2,
        "COMPLIANT": 3,
        "INFORMATION": 4
      }
    }

This means that a single physical device allowing 2 different sets of sounds to be played (for example one set for desk A and another set for desk B) would be represented as two different device configurations, with different ids.

Device configurations
[{
  "id": "CDS_A",
  "host": "localhost",
  "port": 4300,
  "signalMappingId": "mapping_A",
  "isEnabled": true
},
{
  "id": "CDS_B",
  "host": "localhost",
  "port": 4300,
  "signalMappingId": "mapping_B",
  "isEnabled": true
}]
Signal mappings
[{
  "id": "mapping_A",
  "supportedSignals": {
    "ALARM": 1,
    "ACTION": 2,
    "COMPLIANT": 3,
    "INFORMATION": 4
  }
},
{
  "id": "mapping_B",
  "supportedSignals": {
    "ALARM": 5,
    "ACTION": 6,
    "COMPLIANT": 7,
    "INFORMATION": 8
  }
}]
The signalMapping object is built as a Map with String keys (rather than the Severity enum or any otherwise constrained type) because there is a strong possibility that in the future we might want to map something other than severities.

Please see the API documentation for details.

There is a Device object distinct from DeviceConfiguration because the latter represents static information about how to reach a device, while the former contains information about the actual connection. For example, this is why the device configuration contains a host (which can be a hostname) while the device has a resolvedAddress. As a result, they are managed through separate endpoints, which might also make things easier if we need to secure them differently.

25.3. Configuration

25.4. Connection Management

OperatorFabric automatically attempts to connect to enabled configured external devices at startup. If several external devices share the same host and port, only one connection is opened and only one watchdog signal is sent for those devices.

If an external driver is disconnected, OperatorFabric will try to reconnect it every 10 seconds (default value).

25.5. Configuration Management

In coherence with the way Entities, Perimeters, Users and Groups are managed, SignalMapping, UserConfiguration and DeviceConfiguration resources can be deleted even if other resources link to them. For example, if a device configuration lists someMapping as its signalMappingId and a DELETE request is sent on someMapping, the deletion will be performed and return a 200 Success, and the device will have a null signalMappingId.

25.6. Drivers

This section contains information that is specific to each type of driver. Currently, the only supported driver uses the Modbus protocol.

25.6.1. Modbus Driver

The Modbus driver is based on the jlibmodbus library to create a ModbusMaster for each device and then send requests through it using the WriteSingleRegisterRequest object.

We are currently using the "BROADCAST" mode, which (at least in the jlibmodbus implementation) means that the Modbus master doesn’t expect any response to its requests (which makes sense, because if several clients actually responded to the broadcast, there would be no way to handle the multiple responses). This is mitigated by the fact that if watchdog signals are enabled, the external devices will be able to detect that they are not receiving signals correctly. In the future, it could be interesting to switch to the TCP default so OperatorFabric can be informed of any exception in the processing of the request, allowing for example a more meaningful connection status (see #2294)

25.6.2. Adding new drivers

New drivers should implement the ExternalDeviceDriver interface, and a corresponding factory implementing the ExternalDeviceDriverFactory interface should be created with it.

The idea is that in the future, using dependency injection, Spring should be able to pick up any factory on the classpath implementing the correct interface.

ExternalDeviceDriver, ExternalDeviceDriverFactory and the accompanying custom exceptions should be made available as a jar on Maven Central if we want to allow project users to provide their own drivers.

If several drivers need to be used on a given OperatorFabric instance at the same time, we will need to introduce a device type in the deviceConfiguration object.

26. External Applications Integration

External web applications can take advantage of the OperatorFabric shared stylesheet to style pages and specific components. The available css classes are detailed in OperatorFabric css styles

To use the shared stylesheet the web application should link the following resources:

  • /shared/css/opfab-application.css : css classes

  • /shared/css/opfab-application.js : javascript objects to handle day/night themes

You can find an example of a web page using these resources in the OperatorFabric core repository (src/test/externalWebAppExample/index.html).

The example test page can be accessed from OperatorFabric directly at the following url localhost:2002/external/appExample/ and also using one of the example business menu entries.
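As a sketch, an external page could link the two shared resources like this (the href/src paths are the ones listed above; the rest of the page is purely illustrative):

```html
<!DOCTYPE html>
<html>
<head>
  <!-- OperatorFabric shared css classes -->
  <link rel="stylesheet" href="/shared/css/opfab-application.css">
  <!-- javascript objects to handle day/night themes -->
  <script src="/shared/css/opfab-application.js"></script>
</head>
<body>
  <div>Example external application content</div>
</body>
</html>
```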

Deployment and Administration of OperatorFabric

The aim of this document is to explain how to configure and deploy OperatorFabric.

27. Deployment

For now, OperatorFabric consists of Docker images available either by compiling the project or by using image releases from Dockerhub

Service images are all based on openjdk:8-jre-alpine.

For simple one instance per service deployment, you can find a sample deployment as a docker-compose file here

To run OperatorFabric in development mode, see the development environment documentation .

28. User computer

From the user perspective, OperatorFabric is compatible with recent versions of Chrome, Edge and Firefox and has no specific hardware performance constraints.

Be aware that if you use OperatorFabric in realtime, you should prevent users' computers from going into "sleep mode".

In particular, if you are using Edge, have a look at github.com/opfab/operatorfabric-core/issues/2754

29. Configuration

The configuration is divided into two parts, the configuration of the business services and the configuration of the UI.

Opfab comes with a default embedded configuration, but certain parameters need to be provided and configured.

The configuration is centralized in the config directory of the GitHub repository. The dev subdirectory contains configurations specific to development environments, while the docker subdirectory contains a specific configuration meant for use in a full docker environment.

29.1. Business service configuration

29.1.1. Shared service configuration

The configuration shared by all services is in a yaml file; you can find an example in the file /config/docker/common.yml.

29.1.1.1. Mongo configuration

We only use URI configuration for mongo, through the operatorfabric.mongodb.uri property. This lets us share the same configuration behavior for single-node or cluster setups, with both classic and reactive Spring mongo configuration. See mongo connection string for the complete URI syntax.

This configuration is mandatory (no default configuration is provided).
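A hedged example of the corresponding common.yml entry (the hostname and credentials below are placeholders, not defaults):

```yaml
operatorfabric:
  mongodb:
    # standard mongo connection string; see the mongo documentation for the cluster syntax
    uri: "mongodb://user:password@mongodb:27017/operator-fabric?authSource=admin"
```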

29.1.2. RabbitMQ

A rabbitMQ component (docker image) is provided with opfab; it has a default configuration.

SECURITY WARNING: The default user/password should be changed if a rabbit port is exposed outside the docker network. To do so, set the environment variables RABBITMQ_DEFAULT_USER and RABBITMQ_DEFAULT_PASS when starting the image, and adjust the following parameters in the common yml config file.

Name | Default | Mandatory | Description
operatorfabric.rabbitmq.host | rabbitmq | no | RabbitMQ host address
operatorfabric.rabbitmq.port | 5672 | no | RabbitMQ listen port
operatorfabric.rabbitmq.username | guest | no | RabbitMQ username
operatorfabric.rabbitmq.password | guest | no | RabbitMQ password
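For example, assuming you changed the credentials via the RABBITMQ_DEFAULT_USER / RABBITMQ_DEFAULT_PASS environment variables, the common yml file would be adjusted as follows (the values shown are placeholders):

```yaml
operatorfabric:
  rabbitmq:
    host: rabbitmq
    port: 5672
    username: myRabbitUser      # must match RABBITMQ_DEFAULT_USER
    password: myRabbitPassword  # must match RABBITMQ_DEFAULT_PASS
```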

29.1.2.1. Internal account

The back services cards-reminder, cards-diffusion and supervision need an internal account to communicate with the other opfab back services. Therefore, if you intend to use any of these services, you must create an Opfab technical account with ADMIN permissions and configure it.

Name | Default | Mandatory | Description
operatorfabric.internalAccount.login | opfab | no | User account used to call Opfab services
operatorfabric.internalAccount.password | (no default) | yes | User password used to call Opfab services

The services need to know the URL used to retrieve the account’s token; configure it with operatorfabric.servicesUrls.authToken. The default value, based on the default OperatorFabric installation, is "http://web-ui/auth/token".
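Putting it together, a sketch of the internal account configuration (the password is a placeholder you must choose yourself):

```yaml
operatorfabric:
  internalAccount:
    login: opfab
    password: aSecretPassword   # mandatory, no default
  servicesUrls:
    authToken: "http://web-ui/auth/token"   # default value
```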

29.1.2.2. Optional configuration values
Name | Default | Description
operatorfabric.servicesUrls.users | users:2103 | Indicates where the Users service can be reached from the other services
operatorfabric.servicesUrls.businessconfig | businessconfig:2100 | Indicates where the Business service can be reached from the other services
operatorfabric.servicesUrls.cardsPublication | cards-publication:2102 | Cards publication service URL
operatorfabric.servicesUrls.cardsConsultation | cards-consultation:2104 | Cards consultation service URL
operatorfabric.servicesUrls.authToken | web-ui/auth/token | Authentication service URL
operatorfabric.userActionLogActivated | |

29.1.3. Service specific configurations

Each service can have a specific yaml configuration file that overrides the default configuration.

Examples of the configuration of each service can be found under config/docker or config/dev.

29.1.3.1. Businessconfig service

The businessconfig service has this specific property:

Name | Default | Description
operatorfabric.businessconfig.storage.path | /businessconfig-storage | File path to data storage folder

29.1.3.2. Users service

The user service has these specific properties:

Name | Default | Description
operatorfabric.users.default.users | null | Array of user objects to create upon startup if they don't exist
operatorfabric.users.default.user-settings | null | Array of user settings objects to create upon startup if they don't exist
operatorfabric.users.default.groups | null | Array of group objects to create upon startup if they don't exist
operatorfabric.users.default.entities | null | Array of entity objects to create upon startup if they don't exist
operatorfabric.users.daysBeforeLogExpiration | 61 | Duration beyond which user action logs get automatically deleted
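As an illustrative sketch (the exact user and group object structure is defined by the Users API; the names and values below are placeholders), default objects could be declared like this:

```yaml
operatorfabric:
  users:
    default:
      groups:
        - id: Dispatcher
          name: Dispatcher
      users:
        - login: operator1_fr
          groups: ["Dispatcher"]
```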

29.1.3.3. Cards-publication service

The cards publication service has these specific properties:

Name | Default | Description
operatorfabric.cards-publication.checkAuthenticationForCardSending | true | If false, OperatorFabric will not require user authentication to send or delete a card via the /cards endpoint (this does not concern user cards, which always need authentication). Be careful when setting the value to false: the nginx conf must be adapted for security reasons (see the security warning in the reference nginx.conf)
operatorfabric.cards-publication.authorizeToSendCardWithInvalidProcessState | false | If true, OperatorFabric will allow publishing a card that refers to a nonexistent process or state
operatorfabric.cards-publication.checkPerimeterForCardSending | true | If true, OperatorFabric will check the user perimeter for card sending via the /cards endpoint (this does not concern user cards, which are always controlled)
spring.kafka.consumer.group-id | null | If set, support for receiving cards via Kafka is enabled
spring.deserializer.value.delegate.class | io.confluent.kafka.serializers.KafkaAvroDeserializer | Deserializer used to convert the received bytes into objects
spring.serializer.value.delegate.class | io.confluent.kafka.serializers.KafkaAvroSerializer | Serializer used to convert cards to bytes
spring.kafka.producer.bootstrap-servers | localhost:9092 | Comma-separated list of URL(s) of the broker(s) / bootstrap server(s)
operatorfabric.cards-publication.kafka.topics.card.topicname | opfab | Name of the topic to read the messages from
operatorfabric.cards-publication.kafka.topics.response-card.topicname | opfab | Name of the topic to place the response cards on
operatorfabric.cards-publication.kafka.schema.registry.url | localhost:8081 | URL of the schema registry. Can be set to the empty string "" if no registry is used
operatorfabric.cards-publication.delayForDeleteExpiredCardsScheduling | 60000 | Delay in milliseconds between the end of one execution and the start of the next
operatorfabric.cards-publication.cardSendingLimitCardCount | 1000 | For the rate limiter, defines how many cards can be sent during cardSendingLimitPeriod
operatorfabric.cards-publication.cardSendingLimitPeriod | 3600 | For the rate limiter, defines the time period (in seconds) during which cardSendingLimitCardCount is applied
operatorfabric.cards-publication.activateCardSendingLimiter | true | If false, the rate limiter will be ignored when sending cards

OperatorFabric Kafka configuration

Next to publishing cards to OperatorFabric using the REST API, OperatorFabric also supports publishing cards via a Kafka topic. In the default configuration, Kafka is disabled. To enable Kafka, you need to set the consumer group-id to the consumer group you assign to the OpFab Kafka consumer. This can be any group-id, as long as it isn’t used by other consumers (unless you explicitly want multiple consumers for the same group).

You can set the group-id by uncommenting the kafka.consumer.group-id entry in cards-publication.yml

  kafka:
    consumer:
      group-id: opfab-command

By default, the consumer will consume messages from the opfab topic. See Spring for Apache Kafka for more information on the Spring Kafka implementation.

With the default settings, the Kafka consumer expects a broker running on 127.0.0.1:9092 and a schema registry on 127.0.0.1:8081.

Operator Fabric is also able to publish response cards to a Kafka topic. The default topic name is opfab-response. You can specify which response cards are returned via Kafka by setting the external-recipients entry in the cards-publication yaml file. Instead of an http:// URL, set the recipient url to kafka:

operatorfabric:
  cards-publication:
    external-recipients:
      recipients:
        - id: "processAction"
          url: "http://localhost:8090/test"
          propagateUserToken: true
        - id: "mykafka"
          url: "kafka:topicname"
          propagateUserToken: false

Note that topicname is a placeholder for now. All response cards are returned via the same Kafka response topic, as specified in the opfab.kafka.topics.response-card field.

Also note that enabling Kafka does not disable the REST interface.

Example Kafka configuration plain:

spring:
  application:
    name: cards-publication
  deserializer:
    value:
      delegate:
        class: org.opfab.cards.publication.kafka.consumer.KafkaAvroWithoutRegistryDeserializer
  serializer:
    value:
      delegate:
        class: org.opfab.cards.publication.kafka.producer.KafkaAvroWithoutRegistrySerializer
  kafka:
    consumer:
      group-id: OPFAB
      properties:
        specific:
          avro:
            reader: true
    producer:
      client-id: operatorfabric-producer
    bootstrap-servers: kafka-server:9092
operatorfabric:
  cards-publication:
    kafka:
      topics:
        card:
          topicname: m_opfab-card-commands_dev
        response-card:
          topicname: m_opfab-card-response_dev

Example Kafka configuration SASL:

spring:
  application:
    name: cards-publication
  deserializer:
    value:
      delegate:
        class: org.opfab.cards.publication.kafka.consumer.KafkaAvroWithoutRegistryDeserializer
  serializer:
    value:
      delegate:
        class: org.opfab.cards.publication.kafka.producer.KafkaAvroWithoutRegistrySerializer
  kafka:
    consumer:
      group-id: OPFAB
      security:
        protocol: SASL_SSL
      properties:
        specific:
          avro:
            reader: true
        sasl:
          mechanism: SCRAM-SHA-256
          jaas:
            config: org.apache.kafka.common.security.scram.ScramLoginModule required username="kafkaUsername" password="kafkaPassword";
    producer:
      client-id: operatorfabric-producer
      security:
        protocol: SASL_SSL
      properties:
        sasl:
          mechanism: SCRAM-SHA-256
          jaas:
            config: org.apache.kafka.common.security.scram.ScramLoginModule required username="kafkaUsername" password="kafkaPassword";
    bootstrap-servers: kafka-server:9094
    ssl:
      trust-store-type: PKCS12
      trust-store-password: truststorePassword
      trust-store-location: file:///etc/truststore.pkcs
    properties:
      ssl:
        endpoint:
          identification:
            algorithm: ""
operatorfabric:
  cards-publication:
    kafka:
      topics:
        card:
          topicname: opfab-card-commands
        response-card:
          topicname: opfab-card-response

Example Kafka configuration Kerberos:

spring:
  application:
    name: cards-publication
  deserializer:
    key:
      delegate:
        class: org.apache.kafka.common.serialization.StringDeserializer
    value:
      delegate:
        class: org.opfab.cards.publication.kafka.consumer.KafkaAvroWithoutRegistryDeserializer
  serializer:
    value:
      delegate:
        class: org.opfab.cards.publication.kafka.producer.KafkaAvroWithoutRegistrySerializer
  kafka:
    security:
      protocol: SASL_SSL
    properties:
      sasl.mechanism: GSSAPI
      sasl:
        jaas:
          config: com.sun.security.auth.module.Krb5LoginModule required useKeyTab=true keyTab="/etc/kafkaUsername.keytab" storeKey=true useTicketCache=false serviceName="kafka" principal="kafkaUsername@DOMAIN";
    bootstrap-servers: kafka-server:9094
    ssl:
      trust-store-type: pkcs12
      trust-store-password: truststorePassword
      trust-store-location: file:///etc/truststore.pkcs12
    consumer:
      group-id: OPFAB
      key-deserializer: org.springframework.kafka.support.serializer.ErrorHandlingDeserializer
      value-deserializer: org.springframework.kafka.support.serializer.ErrorHandlingDeserializer
      properties:
        spring:
          deserializer:
            key:
              delegate:
                class: org.apache.kafka.common.serialization.StringDeserializer
            value:
              delegate:
                class: org.opfab.cards.publication.kafka.consumer.KafkaAvroWithoutRegistryDeserializer
    producer:
      client-id: OPFAB
      value-serializer: org.opfab.cards.publication.kafka.producer.KafkaAvroWithoutRegistrySerializer
operatorfabric:
  cards-publication:
    kafka:
      topics:
        card:
          topicname: opfab-card-commands
        response-card:
          topicname: opfab-card-response

Example Kafka configuration OAUTHBEARER / AADToken:

spring:
  kafka:
    consumer:
      properties:
        "[security.protocol]": SASL_SSL
        "[sasl.jaas.config]": org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required ;
        "[sasl.mechanism]": OAUTHBEARER
        "[sasl.login.callback.handler.class]": org.opfab.cards.publication.kafka.auth.AADWorkloadIdentityLoginCallbackHandler
    producer:
      ...the same...
29.1.3.4. Cards-consultation service

The cards-consultation service has these specific properties:

Name | Default | Description
operatorfabric.checkIfUserIsAlreadyConnected | true | If false, OperatorFabric allows a user to have several sessions open at the same time. This may cause synchronization problems between sessions using the same login, so it is recommended to leave it set to true, its default value
operatorfabric.heartbeat.checkIntervalInSeconds | 10 | Frequency at which the heartbeat from the UI to the server is checked
operatorfabric.heartbeat.delayInSecondsToConsiderUserDisconnected | 100 | Number of seconds without heartbeat after which the user is considered disconnected
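A sketch of these properties in the cards-consultation yaml file (the values shown are the defaults):

```yaml
operatorfabric:
  checkIfUserIsAlreadyConnected: true
  heartbeat:
    checkIntervalInSeconds: 10
    delayInSecondsToConsiderUserDisconnected: 100
```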

29.1.3.5. External devices service

The external devices service can be configured with the following properties:

Name | Default | Description
operatorfabric.externaldevices.watchdog.enabled | false | If true, watchdog signals will be sent to external devices to show that OperatorFabric is running and connected
operatorfabric.externaldevices.watchdog.cron | */5 * * * * * | CRON expression determining when watchdog signals should be sent to external devices
operatorfabric.externaldevices.watchdog.signalId | 0 | Id of the signal the external devices expect as watchdog
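For example, enabling the watchdog with the default cron expression and signal id would look like this in the external devices service configuration:

```yaml
operatorfabric:
  externaldevices:
    watchdog:
      enabled: true
      cron: "*/5 * * * * *"   # every 5 seconds
      signalId: 0
```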

29.1.3.6. Cards external diffusion service

The card external diffusion service can be configured with the following properties:

Name | Default | Mandatory | Description
operatorfabric.logConfig.logFolder | logs | no | Log file folder (inside the docker container)
operatorfabric.logConfig.logFile | "opfab.%DATE%.log" | no | Log file name
operatorfabric.logConfig.logLevel | info | no | Default log level
operatorfabric.mail.host | (no default) | yes | Mail server host
operatorfabric.mail.port | (no default) | yes | Mail server port
operatorfabric.mail.auth.user | (no default) | yes | Mail server authentication user
operatorfabric.mail.auth.pass | (no default) | yes | Mail server authentication password
operatorfabric.cardsExternalDiffusion.adminPort | 2106 | no | Listen port
operatorfabric.cardsExternalDiffusion.activeOnStartup | true | no | Flag to start the notification service on startup
operatorfabric.cardsExternalDiffusion.defaultConfig.mailFrom | (no default) | yes | Mail from address
operatorfabric.cardsExternalDiffusion.defaultConfig.subjectPrefix | 'Opfab card received' | no | Mail subject prefix
operatorfabric.cardsExternalDiffusion.defaultConfig.bodyPrefix | 'You received a card in opfab : ' | no | Mail body prefix
operatorfabric.cardsExternalDiffusion.defaultConfig.opfabUrlInMailContent | (no default) | yes | Url written in the mail to access the opfab server (example: opfab_url/)
operatorfabric.cardsExternalDiffusion.defaultConfig.windowInSecondsForCardSearch | 360 | no | Max time interval used to query new cards (in seconds)
operatorfabric.cardsExternalDiffusion.defaultConfig.secondsAfterPublicationToConsiderCardAsNotRead | 60 | no | Time period after card publication before sending the mail notification
operatorfabric.cardsExternalDiffusion.defaultConfig.checkPeriodInSeconds | 10 | no | Time interval between consecutive checks
operatorfabric.cardsExternalDiffusion.defaultConfig.activateCardsDiffusionRateLimiter | true | no | Flag to activate mail sending rate limiting
operatorfabric.cardsExternalDiffusion.defaultConfig.sendRateLimit | 100 | no | Max number of mails sent to a single destination in the configured period
operatorfabric.cardsExternalDiffusion.defaultConfig.sendRateLimitPeriodInSec | 3600 | no | Time period for rate limiting control

The parameters in the "operatorfabric.cardsExternalDiffusion.defaultConfig" section can be modified at runtime by sending an HTTP POST request to the /config API of the cards external diffusion service, with a JSON payload containing the config parameters to be changed.
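For example, a payload such as the following (a sketch, using parameter names from the table above with arbitrary example values) could be POSTed to /config to lower the check period and disable rate limiting:

```json
{
  "checkPeriodInSeconds": 5,
  "activateCardsDiffusionRateLimiter": false
}
```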

29.1.3.7. Cards reminder service

The card reminder service can be configured with the following properties:

Name

Default

Mandatory

Description

operatorfabric.logConfig.logFolder

logs

no

Log file folder (inside docker container)

operatorfabric.logConfig.logFile

"opfab.%DATE%.log"

no

Log file name

operatorfabric.logConfig.logLevel

info

no

Default log level

operatorfabric.cardsReminder.adminPort

2107

no

Cards reminder service listen port

operatorfabric.cardsReminder.activeOnStartup

yes

no

Flag to start the cards reminder service on startup

operatorfabric.cardsReminder.checkPeriodInSeconds

5

no

Cards reminder time interval between consecutive checks

29.1.3.8. Supervisor service

The supervisor service can be configured with the following properties:

Name

Default

Mandatory

Description

operatorfabric.logConfig.logFolder

logs

no

Log file folder (inside docker container)

operatorfabric.logConfig.logFile

"opfab.%DATE%.log"

no

Log file name

operatorfabric.logConfig.logLevel

info

no

Default log level

operatorfabric.supervisor.adminPort

2106

no

Supervisor service listen port

operatorfabric.supervisor.activeOnStartup

yes

no

Flag to start the supervisor service on startup

operatorfabric.supervisor.defaultConfig.considerConnectedIfUserInGroups

no

If set, the user must be in one of the mentioned groups for their entities to be considered connected, example : ["Dispatcher","Planner"]

operatorfabric.supervisor.defaultConfig.entitiesToSupervise

no

List of entity id and related supervisor list ({ "id": "ENTITY1", "supervisors": ["ENTITY2"] })

operatorfabric.supervisor.defaultConfig.disconnectedCardTemplate.publisher

opfab

no

Publisher for the card template used to send connection alerts

operatorfabric.supervisor.defaultConfig.disconnectedCardTemplate.process

supervisor

no

Process for the card template used to send connection alerts

operatorfabric.supervisor.defaultConfig.disconnectedCardTemplate.processVersion

1

no

Process version for the card template used to send connection alerts

operatorfabric.supervisor.defaultConfig.disconnectedCardTemplate.state

disconnectedEntity

no

State for the card template used to send connection alerts

operatorfabric.supervisor.defaultConfig.disconnectedCardTemplate.severity

ALARM

no

Severity for the card template used to send connection alerts

operatorfabric.supervisor.defaultConfig.secondsBetweenConnectionChecks

10

no

Connection check period

operatorfabric.supervisor.defaultConfig.nbOfConsecutiveNotConnectedToSendFirstCard

3

no

Number of consecutive failed checks before sending first alert card

operatorfabric.supervisor.defaultConfig.nbOfConsecutiveNotConnectedToSendSecondCard

12

no

Number of consecutive failed checks before sending second alert card

operatorfabric.supervisor.defaultConfig.processesToSupervise

no

List of processes and related states to monitor for missing acknowledgment ({ "process": "defaultProcess", "states": ["processState"] })

operatorfabric.supervisor.defaultConfig.unackCardTemplate.publisher

opfab

no

Publisher for card template used to send acknowledgment alerts

operatorfabric.supervisor.defaultConfig.unackCardTemplate.process

supervisor

no

Process for card template used to send acknowledgment alerts

operatorfabric.supervisor.defaultConfig.unackCardTemplate.processVersion

1

no

Process version for card template used to send acknowledgment alerts

operatorfabric.supervisor.defaultConfig.unackCardTemplate.state

unacknowledgedCards

no

State for card template used to send acknowledgment alerts

operatorfabric.supervisor.defaultConfig.unackCardTemplate.severity

ALARM

no

Severity for card template used to send acknowledgment alerts

operatorfabric.supervisor.defaultConfig.windowInSecondsForCardSearch

1200

no

Time period used to search for not acknowledged cards

operatorfabric.supervisor.defaultConfig.secondsBetweenAcknowledgmentChecks

10

no

Card acknowledgment check period

operatorfabric.supervisor.defaultConfig.secondsAfterPublicationToConsiderCardAsNotAcknowledged

600

no

Time period to wait after card publishing to consider a card as not acknowledged and send the alert

The parameters in the "operatorfabric.supervisor.defaultConfig" section can be modified at runtime by sending an HTTP POST request to the /config API of the supervisor service, with a JSON payload containing the config parameters to be changed.
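For example, the following payload (a sketch using the parameter names and the entitiesToSupervise structure shown in the table above; the values are illustrative) could be POSTed to /config:

```json
{
  "secondsBetweenConnectionChecks": 30,
  "entitiesToSupervise": [
    { "id": "ENTITY1", "supervisors": ["ENTITY2"] }
  ]
}
```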

29.2. Web UI Configuration

The OperatorFabric Web UI service is built on top of an NGINX server. It serves the Angular SPA to browsers and acts as a reverse proxy for the other services.

29.2.1. NGINX configuration

An external nginx.conf file configures the Nginx instance of the web-ui service. These files are mounted as Docker volumes. There are two of them in OperatorFabric: one in config/dev and one in config/docker.

The one in config/dev is set with permissive CORS rules to enable web development using ng serve within the ui/main project. It is also possible to use ng serve with the config/docker version. To do so, use the configuration file named nginx-cors-permissive.conf by configuring /docker-compose.yml with the following line:

- "./nginx-cors-permissive.conf:/etc/nginx/conf.d/default.conf"

instead of:

- "./nginx.conf:/etc/nginx/conf.d/default.conf"

Any customized line in the nginx configuration file must end with a semicolon (';'), otherwise the Nginx server will stop immediately.
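For instance, a customized set directive (here reusing the $OperatorFabricRealm variable described in the Security Configuration section) is only valid when terminated with a semicolon:

```nginx
# valid: the customized directive ends with a semicolon
set $OperatorFabricRealm "dev";
```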

29.2.2. UI properties

The properties are defined in web-ui.json. The following table describes their meaning and how to use them. An example file can be found in the config/docker directory.

name default mandatory? Description

about

none

no

Declares application names and their versions in the web-ui about section.
Each entry is a free key followed by the application name (a string), its version (a string) and an optional rank of declaration (a number).
It is not necessary to declare OperatorFabric as an application because it is added automatically, with the current release version and rank 0.
For example, adding the Keycloak application, with 'Keycloak' as name, 1 as rank and '6.0.1' as version, should look like:

"about": {
    "keycloak": {
      "name": "Keycloak",
      "rank": 1,
      "version": "6.0.1"
    }
  }

alerts.hideBusinessMessages

false

no

if set to true, the business alert messages are hidden

alerts.messageBusinessAutoClose

false

no

if set to true, the (red) business alert message will automatically close after a few seconds

alerts.messageOnBottomOfTheScreen

false

no

if set to true, the alert message is shown on the bottom of the page

archive.filters.page.size

10

no

The page size of archive filters

archive.filters.publishDate.days

10

no

The default search period (days) for publish date filter in archives page

archive.filters.tags.list

no

List of tags to choose from in the corresponding filter in archives page

archive.history.size

100

no

The maximum size of card history visible

checkIfUrlIsLocked

true

no

if set to false, an OperatorFabric url can be used by several tabs in the same browser. Note that there can only be one token per browser for a given OperatorFabric url, so the first session will be replaced by the second one

environmentColor

blue

no

Color of the background of the environment name. The format of color is css, for example : red , #4052FF

environmentName

no

Name of the environment to display in the top-right corner (examples: PROD, TEST). If the value is not set, the environment name is not shown.

externalDevicesEnabled

false

no

If true, users have the opportunity to play sounds on external devices rather than in the browser. See settings.playSoundOnExternalDevice

feed.card.hideAckAllCardsFeature

true

no

Control if you want to show or hide the option for acknowledging all the visible cards of the feed

feed.card.hideApplyFiltersToTimeLineChoice

false

no

Control if you want to show or hide the option of applying filters or not to timeline in the feed page

feed.card.hideResponseFilter

false

no

Control if you want to show or hide the response filter in the feed page

feed.card.hideProcessFilter

false

no

Control if you want to show or hide the process filter in the feed page

feed.card.hideStateFilter

false

no

Control if you want to show or hide the state filter in the feed page

feed.card.maxNbOfCardsToDisplay

100

no

Max number of card visible in feed (This limit is used for performance reasons, setting the value too high can have consequences on browser response times)

feed.card.secondsBeforeLttdForClockDisplay

180

no

Number of seconds before lttd when a clock is activated in cards on the feed

feed.card.time.display

BUSINESS

no

card time display mode in the feed. Values :

  • BUSINESS: displays card with entire business period. It is the fallback if the set value is none of the values listed here;

  • BUSINESS_START: displays card with business start date;

  • PUBLICATION: displays card with publication date;

  • LTTD: displays card with lttd date;

  • NONE: nothing displayed.

feed.card.titleUpperCase

true

no

If set to true, the card titles are in UPPERCASE. (Option applies to the 'Card Feed', 'Archives' and 'Monitoring')

feed.defaultAcknowledgmentFilter

"notack"

no

Default card acknowledgment filtering option in feed page, possible values are : "notack", "ack", "all".

feed.defaultSorting

"unread"

no

Default card sorting option in feed page, possible values are : "unread", "date", "severity", "startDate", "endDate".

feed.enableGroupedCards

false

no

(Experimental feature): If true cards with the same tags are grouped together. Clicking on the grouped card will show the other cards with the same tags in the feed.

feed.geomap.bglayer.xyz.crossOrigin

null

no

Custom xyz/tiled background layer crossOrigin setting.

feed.geomap.bglayer.xyz.tileSize

null

no

Custom xyz/tiled background layer tile size (Int value, example: 256. Required when using custom background layer).

feed.geomap.bglayer.xyz.url

null

no

Custom xyz/tiled background layer service URL, replaces the OSM background layer (add an endpoint with {x}{y}{z} variables, example: "https://service.pdok.nl/brt/achtergrondkaart/wmts/v2_0/grijs/EPSG:3857/{z}/{x}/{y}.png").

feed.geomap.defaultDataProjection

EPSG:4326

no

The default data projection of card.wktGeometry to use when no wktProjection is embedded in the card.

feed.geomap.enable

false

no

(Experimental feature): If true a geographical map will be shown and cards that have geographical coordinates will be drawn on the map.

feed.geomap.enableGraph

false

no

Show a graph on the map with number of cards per severity.

feed.geomap.highlightPolygonStrokeWidth

2

no

Define the stroke width when highlighting polygon in the geomap view of a card.

feed.geomap.initialLatitude

0

no

The initial latitude of the map when no cards with geographical coordinates are present.

feed.geomap.initialLongitude

0

no

The initial longitude of the map when no cards with geographical coordinates are present.

feed.geomap.initialZoom

1

no

Initial zoom level of the map.

feed.geomap.layer.geojson.url

null

no

Add a GeoJSON layer to the map. Example: "https://localhost:8000/service-area.geojson"

feed.geomap.maxZoom

11

no

Max zoom level, to prevent zooming in too much when only one card is shown (or multiple cards in close proximity).

feed.geomap.popupContent

publishDateAndTitle

no

Define the content of the geomap popup. Possible values are : publishDateAndTitle (default value) or summary.

feed.geomap.zoomDuration

500

no

Time in milliseconds it takes to zoom the map to the specific location. Set to 0 to disable the zoom animation.

feed.geomap.zoomToLocation

14

no

Zoom level when zooming to a location of a selected card.

feed.showSearchFilter

false

no

If set to false, the search filter is hidden.

feed.timeline.domains

["TR", "J", "7D", "W", "M", "Y"]

no

List of domains to show on the timeline, possible domains are : "TR", "J", "7D", "W", "M", "Y".

heartbeatSendingInterval

30

yes

Frequency in seconds at which the UI sends a heartbeat to the server

i18n.supported.locales

no

List of supported locales (only fr and en so far).

logging.filters.publishDate.days

10

no

The default search period (days) for publish date filter in logging page

logging.filters.tags.list

no

List of tags to choose from in the corresponding filter in logging page

logo.base64

medium OperatorFabric icon

no

The result of encoding the SVG logo to Base64 (an online Base64 encoder can be used). If it is not set, a medium (32px) OperatorFabric icon is displayed.

logo.height

40

no

The height of the logo (in px) (only taken into account if logo.base64 is set). The value cannot be more than 48px (if it is set to more than 48px, it will be ignored and set to 48px).

logo.width

40

no

The width of the logo (in px) (only taken into account if logo.base64 is set).

secondsToCloseSession

60

no

Number of seconds between logout and token expiration. If you use IMPLICIT authentication mode, exercise caution when modifying the value to prevent logouts before token silent refresh.

security.changePasswordUrl

no

URL to change the user password (if the top-right menu item "Change password" is visible)

security.logout-url

yes

The keycloak logout URL. It is a composition of: your keycloak instance and the auth keyword (ex: www.keycloakurl.com/auth), though domains without auth are also supported (ex: www.keycloakurl.com/customPath); the realm name (ex: dev); the redirect URL (redirect_uri), i.e. the URL to redirect to after successful authentication.

security.oauth2.flow.delegate-url

null

no

Url to redirect the browser to for authentication. Mandatory with:

  • CODE flow: must be the url with protocol choice and version as query parameters;

  • IMPLICIT flow: must be the url part before .well-known/openid-configuration (for example in dev configuration it’s localhost:89/auth/realms/dev).

security.oauth2.flow.mode

PASSWORD

no

authentication mode, available options:

  • CODE: Authorization Code Flow;

  • PASSWORD: Direct Password Flow (fallback);

  • IMPLICIT: Implicit Flow.

selectActivityAreaOnLogin

false

no

If set to true, users belonging to multiple Entities will be required to select their activity area on login

settings.dateFormat

Value from the browser configuration

no

Format for date rendering (example : DD/MM/YYYY )

settings.dateTimeFormat

Value from the browser configuration

no

Format for date and time rendering (example : HH:mm DD/MM/YYYY )

settings.locale

en

no

Default user locale (use en if not set)

settings.playSoundForAction

false

no

If set to true, a sound is played when Action cards are added or updated in the feed

settings.playSoundForAlarm

false

no

If set to true, a sound is played when Alarm cards are added or updated in the feed

settings.playSoundForCompliant

false

no

If set to true, a sound is played when Compliant cards are added or updated in the feed

settings.playSoundForInformation

false

no

If set to true, a sound is played when Information cards are added or updated in the feed

settings.playSoundOnExternalDevice

false

no

If set to true (and externalDevicesEnabled is set to true as well) and the user has an external device configured, sounds will be played on this device rather than in the browser

settings.remoteLoggingEnabled

false

no

If set to true, some logs from the UI are sent to the back end and written to the log file of the cards-consultation service

settings.replayEnabled

false

no

If set to true, sounds are replayed every settings.replayInterval seconds until the user interacts with the application

settings.replayInterval

5

no

Interval between sound replays (see settings.replayEnabled)

settings.systemNotificationAction

false

no

If set to true, a system notification is sent when Action cards are added or updated in the feed

settings.systemNotificationAlarm

false

no

If set to true, a system notification is sent when Alarm cards are added or updated in the feed

settings.systemNotificationCompliant

false

no

If set to true, a system notification is sent when Compliant cards are added or updated in the feed

settings.systemNotificationInformation

false

no

If set to true, a system notification is sent when Information cards are added or updated in the feed

settings.styleWhenNightDayModeDesactivated

no

Style to apply when not using day/night mode, possible values are DAY or NIGHT

settings.timeFormat

Value from the browser configuration

no

Format for time rendering (example : HH:mm )

settingsScreen.hiddenSettings

no

Array of string indicating which field(s) we want to hide in the settings screen. Possible values :
"language" : if present, language field will not be displayed
"remoteLoggingEnabled" : if present, the checkbox to activate remote logging will not be displayed
"sounds" : if present, the checkboxes for sound notifications won’t be displayed
"systemNotifications" : if present, the checkboxes for systemNotifications won’t be displayed
"sendCardsByEmail" : if present, the email options won’t be displayed
"emailToPlainText" : if present, the email option to have the emails sent as plain text won’t be displayed

showUserEntitiesOnTopRightOfTheScreen

false

no

If set to true, the user's entities will be displayed under the login in the top right of the screen

title

OperatorFabric

no

Title of the application, displayed on the browser

usercard.useDescriptionFieldForEntityList

false

no

If true, show entity description field instead of name in user card page
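As an illustration, a minimal web-ui.json combining a few of the properties above could look like the fragment below (the values are arbitrary examples; dotted property names are expressed as nested JSON objects):

```json
{
  "title": "OperatorFabric",
  "environmentName": "TEST",
  "environmentColor": "#4052FF",
  "feed": {
    "card": {
      "maxNbOfCardsToDisplay": 100
    }
  }
}
```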

IMPORTANT:

To declare settings parameters, you now need to group all fields under "settings": { }. For example:

Replace the following invalid settings config

  "settings.replayInterval": 10,
  "settings.replayEnabled": true,
  "settings": {
    "about": {
      "keycloak": {
        "name": "Keycloak",
        "rank": 2,
        "version": "6.0.1"
      },
    }
    "locale": "en",
    "dateTimeFormat": "HH:mm DD/MM/YYYY",
    "dateFormat": "DD/MM/YYYY",
    "styleWhenNightDayModeDesactivated": "NIGHT"
  },

with the following valid one:

  "settings": {
    "replayInterval": 10,
    "replayEnabled": true,
    "about": {
      "keycloak": {
        "name": "Keycloak",
        "rank": 2,
        "version": "6.0.1"
      }
    },
    "locale": "en",
    "dateTimeFormat": "HH:mm DD/MM/YYYY",
    "dateFormat": "DD/MM/YYYY",
    "styleWhenNightDayModeDesactivated": "NIGHT"
  },

29.3. Security Configuration

Security is configured across several files:

  • nginx.conf of the nginx server

  • config/docker/common.yml the common configuration file for back services

  • web-ui.json served by the web-ui service;

29.3.1. Authentication configuration

There are 3 OAuth2 authentication flows available in the OperatorFabric UI:

  • password grant: referred to as PASSWORD mode flow;

  • code flow: referred to as CODE mode flow;

  • implicit flow: referred to as IMPLICIT mode flow.

Alternatively, there is also another flow available:

  • none flow: referred to as NONE mode flow.

The NONE flow assumes that the application is behind a secured proxy which handles login and token generation. Calls to back-end services will get a valid token added to their headers, and the token will not be visible to the (web) client.

29.3.1.1. Nginx Configuration

The UI calls need some mapping to reach the Authentication Provider. In the default OperatorFabric configuration it’s a docker keycloak instance, called keycloak in the project docker-compose.yml files.

There are 3 properties to configure within nginx.conf file:

  • $KeycloakBaseUrl: the base url of keycloak;

  • $OperatorFabricRealm: the realm configured within the keycloak instance to provide authentication to OperatorFabric;

  • $ClientPairOFAuthentication: base64-encoded string of the client credential pair used by OperatorFabric to log in to the Authentication Provider (keycloak). The client-id and the client-secret are separated by a colon (':').

Example of the docker configuration

# Url of the Authentication provider
    set $KeycloakBaseUrl "http://keycloak:89";
# Realm associated to OperatorFabric within the Authentication provider
    set $OperatorFabricRealm "dev";

# base64 encoded pair of authentication in the form of 'client-id:secret-id'
    set $ClientPairOFAuthentication "b3BmYWItY2xpZW50Om9wZmFiLWtleWNsb2FrLXNlY3JldA==" ;

where b3BmYWItY2xpZW50Om9wZmFiLWtleWNsb2FrLXNlY3JldA== is the base64 encoded string of opfab-client:opfab-keycloak-secret with opfab-client as client-id and opfab-keycloak-secret its client secret.
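The value can be produced with a standard base64 tool, for example with the dev credentials above:

```shell
# Encode the 'client-id:client-secret' pair for $ClientPairOFAuthentication.
# opfab-client / opfab-keycloak-secret are the dev example credentials above.
printf '%s' 'opfab-client:opfab-keycloak-secret' | base64
# → b3BmYWItY2xpZW50Om9wZmFiLWtleWNsb2FrLXNlY3JldA==
```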

29.3.1.2. Configuration file common.yml

name

default

mandatory?

Description

operatorfabric.security.oauth2.resourceserver.jwt.jwk-set-uri

null

yes

The url providing the certificate used to verify the jwt signature

operatorfabric.security.jwt.loginClaim

sub

no

Jwt claim used for user login

example of common.yml

spring:
  security:
    oauth2:
      resourceserver:
        jwt:
          jwk-set-uri:  http://localhost:89/auth/realms/dev/protocol/openid-connect/certs
29.3.1.3. Configuration file web-ui.json

The Nginx web server serves this file. OperatorFabric builds and uses a custom Docker image containing an Nginx server, with a Docker volume holding this file. The docker-compose.yml contains an example of it. The path to the file in the image is /usr/share/nginx/html/opfab/web-ui.json.

For OAuth2 security concerns in this file, there are two ways to configure it, depending on the chosen OAuth2 flow. There are several common properties:

  • security.logout-url: url used when a user is logged out of the UI;

  • security.oauth2.flow.delegate-url: url used to connect to the Authentication provider;

  • security.oauth2.flow.mode: the technical way to be authenticated by the Authentication provider;

  • security.oauth2.client-id : the OAuth2 client ID

  • security.jwt.login-claim : the field in the token that will be used to identify your account (default value : sub)

29.3.1.4. OAuth2 PASSWORD or CODE Flows

These two modes share the same way of declaring the delegate URL. It has to be defined in web-ui.json by setting:

  • security.oauth2.flow.mode to PASSWORD or CODE;

  • security.oauth2.flow.delegate-url with the URL of the OAuth2 endpoint used for authentication.

Example of Configuration For CODE Flow
{
    "security": {
      "jwt": {
        "login-claim": "preferred_username"
      },
      "oauth2": {
        "client-id": "OAUTHCLIENTID",
        "flow": {
          "mode": "CODE",
          "delegate-url": "http://localhost:89/auth/realms/dev/protocol/openid-connect/auth?response_type=code&client_id=opfab-client"
        }
      },
      "logout-url":"http://localhost:89/auth/realms/dev/protocol/openid-connect/logout?redirect_uri=http://localhost:2002/"
  }
}

Within the delegate-url property, dev is the keycloak client realm of OperatorFabric. For the keycloak instance used for development purposes, this delegate-url corresponds to the realm under which the client opfab-client is registered. Here, the client-id value is opfab-client, which is defined as a client under the realm named dev on the dev keycloak instance.

29.3.1.5. OAuth2 IMPLICIT Flow

It has its own configuration. To enable IMPLICIT flow authentication, the following properties need to be set in web-ui.json:

  • security.oauth2.flow.mode to IMPLICIT;

  • security.oauth2.flow.delegate-url with the URL of the OAuth2 leading to the .well-known/openid-configuration end-point used for authentication configuration.

Example of configuration for IMPLICIT Flow
{
  "security": {
    "jwt": {
        "login-claim": "preferred_username"
      },
    "oauth2": {
      "client-id": "OAUTHCLIENTID",
      "flow": {
        "mode": "IMPLICIT",
        "delegate-url": "http://localhost:89/auth/realms/dev"
      }
    },
    "logout-url":"http://localhost:89/auth/realms/dev/protocol/openid-connect/logout?redirect_uri=http://localhost:2002/"
  }
}

Within the delegate-url property, dev is the keycloak client realm of OperatorFabric. For the keycloak instance used for development purposes, this delegate-url corresponds to the realm under which the client opfab-client is registered. The URL looked up by the implicit UI mechanism is localhost:89/auth/realms/dev/.well-known/openid-configuration.

29.3.1.6. NONE Flow

The configuration for the NONE flow is a bit different because the token isn’t handled/visible in the front-end.

Nginx

The following variables can be removed:

  • $KeycloakBaseUrl

  • $OperatorFabricRealm

  • $ClientPairOFAuthentication

The locations handling tokens can be edited to return a 401 by default. If one of these locations is called, it means the token generated by the secured proxy has expired.

location /auth/check_token {
  return 401;
}
location /auth/token {
  return 401;
}
location /auth/code/ {
  return 401;
}
Web-ui.json

Set security.oauth2.flow.mode to NONE;

Set security.oauth2.client-id to your OAuth client ID;

Use security.jwt.login-claim to select which value from the token will be used to identify your account. In this example, preferred_username is used.

Settings that are not required are:

  • delegate-url

Example of configuration
{
    "security": {
      "jwt": {
        "login-claim": "preferred_username"
      },
      "oauth2": {
        "client-id": "OAUTHCLIENTID",
        "flow": {
          "mode": "NONE"
        }
      },
      "logout-url":"http://my-secured-proxy/OAUTHTENANTID/oauth2/logout?client_id=OAUTHCLIENTID&post_logout_redirect_uri=https%3A%2F%2Flocalhost:2002%2Fui%2Fui/"
    }
 }

29.3.2. User creation

If a user exists in the authentication system but not in Opfab, their profile will be automatically created upon their initial login. The user’s name and family name will be extracted from the authentication token.

name default mandatory? Description

operatorfabric.security.jwt.givenNameClaim

given-name

no

Jwt claim to use to get the user’s name

operatorfabric.security.jwt.familyNameClaim

family-name

no

Jwt claim used to get the user’s family name
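A sketch of the corresponding common.yml fragment (the claim names here are illustrative and match the claims appearing in the example JWT further below):

```yaml
operatorfabric:
  security:
    jwt:
      givenNameClaim: given_name
      familyNameClaim: family_name
```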

29.3.3. Alternative way to manage groups (and/or entities)

By default, OperatorFabric manages groups (and/or entities) through the users collection in the database. Another mode can be used, the JWT mode, where the groups (and/or entities) come from the authentication token.

To activate this option, you need to set some parameters in the common.yml file:

name default mandatory? Description

operatorfabric.security.jwt.groups.mode

OPERATOR_FABRIC

no

Set the group mode, possible values JWT or OPERATOR_FABRIC

operatorfabric.security.jwt.groups.rolesClaim.rolesClaimStandard.path

no

path in the JWT to retrieve the claim that defines a group

operatorfabric.security.jwt.groups.rolesClaim.rolesClaimStandardArray.path

no

path in the JWT to retrieve the claim that defines an array of groups

operatorfabric.security.jwt.groups.rolesClaim.rolesClaimStandardList.path

no

path in the JWT to retrieve the claim that defines a list of groups

operatorfabric.security.jwt.groups.rolesClaim.rolesClaimStandardList.separator

no

set the separator value of the list of groups

operatorfabric.security.jwt.groups.rolesClaim.rolesClaimCheckExistPath.path

no

path in the JWT to check for existence; if the path exists, roleValue is used as a group

operatorfabric.security.jwt.groups.rolesClaim.rolesClaimCheckExistPath.roleValue

no

set the value of the group if the path exists

operatorfabric.security.jwt.entitiesIdClaim

no

name of the token field containing the user's entities

operatorfabric.security.jwt.gettingEntitiesFromToken

no

boolean indicating whether the entities of the user should come from the token instead of MongoDB (possible values: true/false)

application.yml

operatorfabric:
  security:
    jwt:
      entitiesIdClaim: entitiesId
      gettingEntitiesFromToken: true
      groups:
        mode: JWT # value possible JWT | OPERATOR_FABRIC
        rolesClaim:
          rolesClaimStandard:
            - path: "ATTR1"
            - path: "ATTR2"            
          rolesClaimStandardArray:  
            - path: "resource_access/opfab-client/roles"
          rolesClaimStandardList:  
            - path: "roleFieldList" 
              separator: ";"           
          rolesClaimCheckExistPath: 
            - path: "resource_access/AAA" 
              roleValue: "roleAAA"      
            - path: "resource_access/BBB"
              roleValue: "roleBBB"  

JWT example

{
  "jti": "5ff87583-10bd-4946-8753-9d58171c8b7f",
  "exp": 1572979628,
  "nbf": 0,
  "iat": 1572961628,
  "iss": "http://localhost:89/auth/realms/dev",
  "aud": [
    "AAA",
    "BBB",
    "account"
  ],
  "sub": "example_user",
  "typ": "Bearer",
  "azp": "opfab-client",
  "auth_time": 0,
  "session_state": "960cbec4-fcb2-47f2-a155-975832e61300",
  "acr": "1",
  "realm_access": {
    "roles": [
      "offline_access",
      "uma_authorization"
    ]
  },
  "resource_access": {
    "AAA": {
      "roles": [
        "role_AAA"
      ]
    },
    "BBB": {
      "roles": [
        "role_BBB"
      ]
    },
    "opfab-client": {
      "roles": [
        "USER"
      ]
    },
    "account": {
      "roles": [
        "manage-account",
        "manage-account-links",
        "view-profile"
      ]
    }
  },
  "scope": "openid ATTR2 email ATTR1 profile roleFieldList",
  "email_verified": false,
  "name": "example_firstname example_lastname",
  "ATTR2": "roleATTR2",
  "ATTR1": "roleATTR1",
  "preferred_username": "example_user",
  "given_name": "example_firstname",
  "entitiesId": "ENTITY1",
  "family_name": "example_lastname",
  "email": "example_user@mail.com",
  "roleFieldList": "roleA;roleB;roleC"
}

As a result, the groups will be [ATTR1, ATTR2, roleA, roleB, roleC, USER, roleBBB, roleAAA]

29.3.4. Adding certification authorities or certificates to the Java keystore

If you’re using certificates (for example for Keycloak) that are not from a certification authority trusted by the JVM, this will cause errors such as this one:

Missing certificate error message
Caused by: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
	at sun.security.validator.PKIXValidator.doBuild(PKIXValidator.java:397)
	at sun.security.validator.PKIXValidator.engineValidate(PKIXValidator.java:302)
	at sun.security.validator.Validator.validate(Validator.java:262)
	at sun.security.ssl.X509TrustManagerImpl.validate(X509TrustManagerImpl.java:330)
	at sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:237)
	at sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:132)
	at sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1621)
	... 94 common frames omitted
Caused by: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
	at sun.security.provider.certpath.SunCertPathBuilder.build(SunCertPathBuilder.java:141)
	at sun.security.provider.certpath.SunCertPathBuilder.engineBuild(SunCertPathBuilder.java:126)
	at java.security.cert.CertPathBuilder.build(CertPathBuilder.java:280)
	at sun.security.validator.PKIXValidator.doBuild(PKIXValidator.java:392)
	... 100 common frames omitted

If that is the case, you can pass the additional authorities or certificates that you use to the containers at runtime.

To do so, put the relevant files (*.der files for example) under src/main/docker/certificates.

  1. This directory should only contain the files to be added to the keystore.

  2. The files can be nested inside directories.

  3. Each certificate will be added with its filename as alias. For example, the certificate in file mycertificate.der will be added under alias mycertificate. As a consequence, filenames should be unique or it will cause an error.

  4. If you need to add or remove certificates while the container is already running, the container will have to be restarted for the changes to be taken into account.
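As a quick illustration of the filename-to-alias rule above, the following sketch (the source path is a placeholder) shows where a certificate file goes and which alias it will get:

```shell
# Sketch: drop a certificate file under the watched directory; its keystore
# alias is derived from the filename (mycertificate.der -> alias "mycertificate").
cert_dir="src/main/docker/certificates"
mkdir -p "$cert_dir"
# illustrative source path, replace with your actual certificate file
cp /path/to/mycertificate.der "$cert_dir/" || true
# the alias the certificate will be added under:
basename "mycertificate.der" .der
```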

If you would like certificates to be sourced from a different location, replace the volumes declarations in the deploy docker-compose.yml file with the selected location:

volumes:
 - "path/to/my/selected/location:/certificates_to_add"

instead of

volumes:
 - "../../../../src/main/docker/certificates:/certificates_to_add"

The steps described here assume you’re running OperatorFabric in docker mode using the deploy docker-compose, but they can be adapted for single container deployments and development mode.

If you want to check that the certificates were correctly added, you can do so with the following steps:

  1. Open a bash shell in the container you want to check

    docker exec -it deploy_businessconfig_1 bash
  2. Run the following command

    $JAVA_HOME/bin/keytool -list -v -keystore /tmp/cacerts -storepass changeit

You can also look at the default list of authorities and certificates trusted by the JVM with this command:

$JAVA_HOME/bin/keytool -list -v -keystore $JAVA_HOME/jre/lib/security/cacerts -storepass changeit

29.3.5. IP address control

For each user, it is possible to configure a list of authorized source IP addresses by setting the authorizedIPAddresses field in the User object.
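A minimal sketch of setting this field through the Users API described in the administration section below (the login and addresses are hypothetical, and $token must hold a valid admin JWT):

```shell
# Sketch: restrict the source IP addresses allowed for a given user.
# The "|| true" lets the snippet be dry-run without a running server.
payload='{"login":"john-doe-operator","firstName":"John","lastName":"Doe","authorizedIPAddresses":["10.0.0.5","10.0.0.6"]}'
curl -X PUT http://localhost:2103/users/john-doe-operator \
  -H "Authorization:Bearer $token" \
  -H "Content-Type:application/json" \
  --data "$payload" || true
```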

30. Monitoring

Operator Fabric provides endpoints for monitoring via Prometheus. Monitoring is available for the five following services: user, businessconfig, cards-consultation, cards-publication and external-devices. To enable monitoring, you need to open the endpoint by adding the following to the common yml configuration file:

management:
  endpoints:
    web:
      exposure:
        include: '*'

You can start a test Prometheus instance via config/monitoring/startPrometheus.sh; the monitoring will then be accessible on localhost:9090/
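Once the endpoints are exposed, each Java service serves the standard Spring Boot actuator routes; assuming the default actuator path is not remapped in your configuration, the Prometheus scrape endpoint can be checked manually, for example on cards-publication (port 2102):

```shell
# Sketch: peek at the Prometheus metrics exposed by cards-publication.
# "/actuator/prometheus" is the standard Spring Boot actuator path (assumption:
# it is not remapped in your configuration); "|| true" allows a dry run.
metrics_url="http://localhost:2102/actuator/prometheus"
curl -s "$metrics_url" | head -n 5 || true
```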

It is also possible to monitor the health status of the cards-external-diffusion, cards-reminder and supervisor services by making an HTTP GET request to the "/healthcheck" endpoint exposed by each service. The healthcheck endpoint returns an HTTP 200 status code if the service is up and running.
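For example, the three services can be polled in one loop (ports as listed in the service port table; adapt host and ports to your deployment):

```shell
# Sketch: poll the /healthcheck endpoint of cards-external-diffusion (2106),
# cards-reminder (2107) and supervisor (2108); prints the HTTP status per port.
# The "|| true" lets the snippet be dry-run without a running server.
for port in 2106 2107 2108; do
  curl -s -o /dev/null -w "port $port -> HTTP %{http_code}\n" \
    "http://localhost:$port/healthcheck" || true
done
```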

31. Logging Administration

31.1. Logging configuration

Log levels can be defined either globally in the YAML configuration file or individually for each service. You will find a commented configuration for each service within the directory config/docker.

Services are designed to write logs to two primary destinations: standard output and log files within the Docker containers.

If you want to save logs you can redirect the standard output to a file.

Or you can also map the Docker directory /var/log/opfab to a location on the host machine for each business services, keeping logs outside of the container.

The web-ui service’s logs are in the /var/log/nginx directory. You can map this directory to the host to preserve those logs.

31.2. Runtime configuration

Operator Fabric includes the ability to view and configure the log levels at runtime through APIs for most of the business services (the supervisor, cards-external-diffusion and cards-reminder services do not provide this feature). It is possible to configure and view an individual logger configuration, which is made up of both the explicitly configured logging level as well as the effective logging level given to it by the logging framework. These levels can be one of:

  • TRACE

  • DEBUG

  • INFO

  • WARN

  • ERROR

  • FATAL

Querying and setting logging levels is restricted to administrators.

To view the configured logging level for a given logger, send a GET request to the '/actuator/loggers' URI as follows:

curl http://<server>:<port>/actuator/loggers/${logger} -H "Authorization: Bearer ${token}" -H "Content-type:application/json"

where ${token} is a valid OAuth2 JWT for a user with administration privileges and ${logger} is the logger name (for example org.opfab)

The response will be a json object like the following:

{
  "configuredLevel" : "INFO",
  "effectiveLevel" : "INFO"
}

To get the log level of a service’s ROOT logger, you can call the provided script src/test/resources/logs/getLogs.sh as follows: getLogs.sh <server:port> <user> <opfab_url>. For example, to get the current log level of the "cards-publication" service, execute: getLogs.sh localhost:2102 admin localhost

To configure a given logger, POST a json entity to the '/actuator/loggers' URI, as follows:

curl -i -X POST http://<server>:<port>/actuator/loggers/${logger} -H "Authorization: Bearer ${token}" -H 'Content-Type: application/json' -d '{"configuredLevel": "DEBUG"}'

To “reset” the specific level of the logger (and use the default configuration instead) it is possible to pass a value of null as the configuredLevel.
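For example, to reset the org.opfab logger of the cards-publication service (same endpoint as above, with a null level):

```shell
# Sketch: reset a logger to its default level by posting a null configuredLevel
# ($token must hold a valid admin JWT; "|| true" allows a dry run).
logger="org.opfab"
curl -i -X POST "http://localhost:2102/actuator/loggers/$logger" \
  -H "Authorization:Bearer $token" \
  -H 'Content-Type: application/json' \
  -d '{"configuredLevel": null}' || true
```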

To set the log level of a service’s ROOT logger, you can call the provided script src/test/resources/logs/setLogs.sh as follows: setLogs.sh <server:port> <log_level> <user> <opfab_url>. For example, to set the log level to DEBUG for the "cards-publication" service, execute: setLogs.sh localhost:2102 DEBUG admin localhost

32. Users, Groups and Entities Administration

A new operator called John Doe, who has been granted the OAuth right to connect to the current OperatorFabric instance, needs to receive cards within this instance. As a user of OperatorFabric, he needs to be added to the system with a login (john-doe-operator), his first name (John) and his last name (Doe). As there is no administration GUI for the moment, this must be performed through the command line, as detailed in the Users API.

32.1. Users

32.1.1. List all users

First of all, list the users (who are the recipients in OperatorFabric) of the system with the following commands:

Httpie

http http://localhost:2103/users "Authorization:Bearer $token" "Content-Type:application/json"

cURL

curl -v http://localhost:2103/users -H "Authorization:Bearer $token" -H "Content-Type:application/json"

response

HTTP/1.1 200 OK

[
    {
        "firstName": null,
        "groups": [
            "ADMIN"
        ],
        "entities": [
            "ENTITY1_FR",
            "ENTITY2_FR"
        ],
        "lastName": null,
        "login": "admin"
    },
    {
        "firstName": null,
        "groups": [
            "RTE",
            "ADMIN",
            "CORESO",
            "ReadOnly",
            "TEST"
        ],
        "lastName": null,
        "login": "operator3_fr"
    },
    {
        "firstName": null,
        "groups": [
            "ELIA"
        ],
        "lastName": null,
        "login": "elia-operator"
    },
    {
        "firstName": null,
        "groups": [
            "CORESO"
        ],
        "lastName": null,
        "login": "coreso-operator"
    },
    {
        "firstName": null,
        "groups": [
            "Dispatcher",
            "ReadOnly",
            "TEST"
        ],
        "entities": [
            "ENTITY1_FR"
        ],
        "lastName": null,
        "login": "operator1_fr"
    },
    {
        "firstName": null,
        "groups": [
            "Planner",
            "ReadOnly"
        ],
        "entities": [
            "ENTITY2_FR"
        ],
        "lastName": null,
        "login": "operator2_fr"
    }
]

32.1.2. Create a new User

There is no john-doe-operator user yet in our OperatorFabric instance. We can add him using the following command with httpie:

echo '{"login":"john-doe-operator","firstName":"Jahne","lastName":"Doe"}' | http POST http://localhost:2103/users "Authorization:Bearer $token" "Content-Type:application/json"

Or with cURL:

curl -X POST http://localhost:2103/users -H "Authorization:Bearer $token" -H "Content-Type:application/json" --data '{"login":"john-doe-operator","firstName":"Jahne","lastName":"Doe"}'

response

HTTP/1.1 200 OK

{
    "firstName": "Jahne",
    "lastName": "Doe",
    "login": "john-doe-operator"
}

32.1.3. Fetch user details

It’s always a good thing to verify if all the information has been correctly recorded in the system:

with httpie:

http -b http://localhost:2103/users/john-doe-operator "Authorization:Bearer $token" "Content-Type:application/json"

or with cURL:

curl http://localhost:2103/users/john-doe-operator -H "Authorization:Bearer $token" -H "Content-Type:application/json"

response

HTTP/1.1 200 OK

{
    "firstName": "Jahne",
    "groups": [],
    "entities": [],
    "lastName": "Doe",
    "login": "john-doe-operator"
}

32.1.4. Update user details

As shown by this result, the firstName of the new operator has been misspelled. We need to update the existing user with john-doe-operator login. To correct this mistake, the following commands can be used:

with httpie:

echo '{"login":"john-doe-operator","lastName":"Doe","firstName":"John"}' | http PUT http://localhost:2103/users/john-doe-operator "Authorization:Bearer $token" "Content-Type:application/json"

or with cURL:

curl -X PUT http://localhost:2103/users/john-doe-operator -H "Authorization:Bearer $token" -H "Content-Type:application/json" --data '{"login":"john-doe-operator","firstName":"John","lastName":"Doe"}'

response

HTTP/1.1 200 OK

{
    "firstName": "John",
    "lastName": "Doe",
    "login": "john-doe-operator"
}

32.2. Groups/Entities

All the commands below :

  • List groups

  • Create a new group

  • Fetch details of a given group

  • Update details of a group

  • Add a user to a group

  • Remove a user from a group

are available for both groups and entities. In order not to overload the documentation, we will only detail group endpoints.
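For instance, listing entities only requires switching the endpoint from /groups to /entities:

```shell
# Sketch: the entity counterpart of the group listing endpoint
# ($token must hold a valid admin JWT; "|| true" allows a dry run).
entities_url="http://localhost:2103/entities"
curl "$entities_url" \
  -H "Authorization:Bearer $token" \
  -H "Content-Type:application/json" || true
```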

32.2.1. List groups (or entities)

This operator is the first member of a new group of operators called OPERATORS, which doesn’t exist in the system yet, as shown when we list the groups existing on the server.

Httpie

http http://localhost:2103/groups "Authorization:Bearer $token" "Content-Type:application/json"

cURL

curl http://localhost:2103/groups -H "Authorization:Bearer $token" -H "Content-Type:application/json"

response

HTTP/1.1 200 OK

[
    {
        "description": "The admin group",
        "name": "ADMIN group",
        "id": "ADMIN"
    },
    {
        "description": "RTE TSO Group",
        "name": "RTE group",
        "id": "RTE"
    },
    {
        "description": "ELIA TSO group",
        "name": "ELIA group",
        "id": "ELIA"
    },
    {
        "description": "CORESO Group",
        "name": "CORESO group",
        "id": "CORESO"
    },
    {
        "description": "Dispatcher Group",
        "name": "Dispatcher",
        "id": "Dispatcher"
    },
    {
        "description": "Planner Group",
        "name": "Planner",
        "id": "Planner"
    },
    {
        "description": "ReadOnly Group",
        "name": "ReadOnly",
        "id": "ReadOnly"
    }
]

32.2.2. Create a new group (or entity)

Firstly, the group called OPERATORS has to be added to the system using the following command:

using httpie:

echo '{"id":"OPERATORS","decription":"This is the brand new group of operator"}' | http POST http://localhost:2103/groups "Authorization:Bearer $token" "Content-Type:application/json"

using cURL:

curl -X POST http://localhost:2103/groups -H "Authorization:Bearer $token" -H "Content-Type:application/json" --data '{"id":"OPERATORS","decription":"This is the brand new group of operator"}'

response

HTTP/1.1 200 OK

{
    "perimeters": [],
    "description": null,
    "name": null,
    "id": "OPERATORS"
}

32.2.3. Fetch details of a given group (or entity)

The returned result seems strange; to verify whether it’s correct, display the details of the group called OPERATORS with the following command:

using httpie:

http http://localhost:2103/groups/OPERATORS "Authorization:Bearer $token" "Content-Type:application/json"

using cURL:

curl http://localhost:2103/groups/OPERATORS -H "Authorization:Bearer $token" -H "Content-Type:application/json"

response

HTTP/1.1 200 OK

{
    "perimeters": [],
    "description": null,
    "name": null,
    "id": "OPERATORS"
}

32.2.4. Update details of a group (or entity)

The description is indeed null. After verification, in our first command used to create the group, the attribute name for the description was misspelled. Use the following command to update the group with the correct spelling, so the new group of operators gets a proper description:

with httpie:

echo '{"id":"OPERATORS","description":"This is the brand-new group of operator"}' | http -b PUT http://localhost:2103/groups/OPERATORS "Authorization:Bearer $token" "Content-Type:application/json"

with cURL:

curl -X PUT http://localhost:2103/groups/OPERATORS -H "Authorization:Bearer $token" -H "Content-Type:application/json" --data '{"id":"OPERATORS","description":"This is the brand-new group of operator"}'

response

{
    "perimeters": []
    "description": "This is the brand-new group of operator",
    "name": null,
    "id": "OPERATORS"
}

32.2.5. Add a user to a group (or entity)

As both the new group and the new user are correct, it’s time to make the user a member of the group. To achieve this, use the following command:

with httpie:

echo '["john-doe-operator"]' | http PATCH http://localhost:2103/groups/OPERATORS/users "Authorization:Bearer $token" "Content-Type:application/json"

with cURL:

curl -X PATCH http://localhost:2103/groups/OPERATORS/users -H "Authorization:Bearer $token" -H "Content-Type:application/json" --data '["john-doe-operator"]'

response

HTTP/1.1 200 OK

Let’s verify that the changes are correctly recorded by fetching the user details:

with httpie:

http http://localhost:2103/users/john-doe-operator "Authorization:Bearer $token" "Content-Type:application/json"

with cURL

curl http://localhost:2103/users/john-doe-operator -H "Authorization:Bearer $token" -H "Content-Type:application/json"

response

HTTP/1.1 200 OK

{
    "firstName": "John",
    "groups": ["OPERATORS"],
    "entities": [],
    "lastName": "Doe",
    "login": "john-doe-operator"
}

It’s now possible to send cards either specifically to john-doe-operator or more generally to the OPERATORS group.
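As a sketch of that last point, a card addressed to the OPERATORS group could be pushed to the cards-publication service (port 2102). The severity, startDate, title, summary and groupRecipients fields are part of the card structure described in the reference documentation, but the process/state pair must match a bundle and perimeter configured on your instance, so treat this payload as illustrative only:

```shell
# Sketch: publish a card addressed to the OPERATORS group.
# startDate is in epoch milliseconds; "|| true" allows a dry run.
card='{
  "publisher": "message-publisher",
  "processVersion": "1",
  "process": "defaultProcess",
  "processInstanceId": "operators-card-1",
  "state": "messageState",
  "severity": "INFORMATION",
  "startDate": '"$(($(date +%s) * 1000))"',
  "title": {"key": "message.title"},
  "summary": {"key": "message.summary"},
  "groupRecipients": ["OPERATORS"]
}'
echo "$card" | curl -X POST http://localhost:2102/cards \
  -H "Authorization:Bearer $token" \
  -H "Content-Type:application/json" --data @- || true
```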

32.2.6. Remove a user from a group (or entity)

When John Doe is no longer in charge of supervising cards for the OPERATORS group, this group has to be removed from his profile by using the following command:

with httpie:

http DELETE http://localhost:2103/groups/OPERATORS/users/john-doe-operator "Authorization:Bearer $token"

with cURL:

curl -X DELETE -H "Authorization:Bearer $token" http://localhost:2103/groups/OPERATORS/users/john-doe-operator

response

HTTP/1.1 200 OK

{
    "login": "john-doe-operator",
    "firstName": "John",
    "lastName": "Doe",
    "groups": [],
    "entities": []
}

A last command to verify that OPERATORS is no longer linked to john-doe-operator:

with httpie:

http http://localhost:2103/users/john-doe-operator "Authorization:Bearer $token" "Content-Type:application/json"

with cURL:

curl http://localhost:2103/users/john-doe-operator -H "Authorization:Bearer $token" -H "Content-Type:application/json"

response

HTTP/1.1 200 OK

{
    "firstName": "John",
    "groups": [],
    "entities": [],
    "lastName": "Doe",
    "login": "john-doe-operator"

}

32.2.7. Entity parents

Entities have a 'parents' attribute instead of a 'perimeters' one. This attribute is a string array, where each element is the id of another Entity. When an Entity is added or patched, OperatorFabric performs a cycle detection; if a cycle is detected, the addition or the patch is cancelled.

32.2.7.1. Add a new Entity without a cycle in the parent declaration

using httpie:

echo '{"id":"NEW_ENTITY","name":"Last New Entity","description":"This is the last new entity","parents": ["ENTITY1_FR"]}' \
| http POST http://localhost:2103/entities "Authorization:Bearer $token" "Content-Type:application/json"

using cURL:

curl http://localhost:2103/entities -H "Authorization:Bearer $token" -H "Content-Type:application/json" \
--data '{"id":"NEW_ENTITY","name":"Last New Entity","description":"This is the last new entity","parents": ["ENTITY1_FR"]}'

response

HTTP/1.1 200 OK

{
    "id": "NEW_ENTITY",
    "parents": ["ENTITY1_FR"],
    "name": "Last New Entity",
    "description":"This is the last new entity",
    "labels":[]
}

32.2.7.2. Add a new Entity with a cycle

For simplicity, in this example the new entity will declare itself as a parent. This auto-referencing triggers a cycle detection.

using httpie:

echo '{"id":"NEW_ENTITY","name":"Last New Entity","description":"This is the last new entity","parents": ["NEW_ENTITY"]}' | http POST http://localhost:2103/entities "Authorization:Bearer $token" "Content-Type:application/json"

using cURL:

curl http://localhost:2103/entities -H "Authorization:Bearer $token" -H "Content-Type:application/json" --data '{"id":"NEW_ENTITY","name":"Last New Entity","description":"This is the last new entity","parents": ["NEW_ENTITY"]}'

response

{
    "status":"BAD_REQUEST",
    "message":"A cycle has been detected: NEW_ENTITY->NEW_ENTITY"
}

with a 400 as http status return.

33. Service port table

Port    Service                   Description

89      KeyCloak                  KeyCloak api port
2002    web-ui                    Web ui and gateway (Nginx server)
2100    businessconfig            Businessconfig management service http (REST)
2102    cards-publication         Cards publication service http (REST)
2103    users                     Users management service http (REST)
2104    cards-consultation        Cards consultation service http (REST)
2105    external-devices          External devices management service http (REST)
2106    cards-external-diffusion  Mail notification service http (REST)
2107    cards-reminder            Cards reminder service http (REST)
2108    supervisor                Supervisor service http (REST)
27017   mongo                     Mongo api port
5672    rabbitmq                  Amqp api port
15672   rabbitmq                  Rabbitmq api port
1025    MailHog                   MailHog SMTP port (for testing only)
8025    MailHog                   MailHog web UI and API http (REST) (for testing only)

34. Rate limiter for card sending

External publishers are monitored by this module, which limits the number of cards they can send (cardSendingLimitCardCount) over a period of time (cardSendingLimitPeriod). This avoids potential overloading due to external apps stuck in a card-sending loop. The limiter can be enabled or disabled with activateCardSendingLimiter. It is also possible to reset the limiter’s data by calling the API through the "/cards/rateLimiter" endpoint (e.g. POST "$url:2102/cards/rateLimiter" -H "Authorization:Bearer $token").

35. Restricted operations (administration)

Some operations are restricted to users with administrative roles, either because they are administration operations with the potential to impact the OperatorFabric instance as a whole, or because they give access to information that should be private to a user.

Below is a quick recap of these restricted operations.

Users Service

Any action (read, create/update or delete) regarding a single user’s data (their personal info such as their first and last name, as well as their settings) can be performed either by the user in question or by a user with the ADMIN role.

Any action on users, groups, entities or perimeters lists (if authorization is managed in OperatorFabric) is restricted to users with the ADMIN role.

Access to users action logs information is restricted to users with ADMIN role or VIEW_USER_ACTION_LOGS role.

Businessconfig Service

Any write (create, update or delete) action on bundles can only be performed by a user with the ADMIN role or ADMIN_BUSINESS_PROCESS role. As such, administrators are responsible for the quality and security of the provided bundles. In particular, as it is possible to use scripts in templates, they should perform a security check to make sure that there is no XSS risk.

Cards consultation Service

Users holding the ADMIN role are authorized to read all archived cards, regardless of the card’s sender, recipients, or the user’s assigned perimeters. IMPORTANT: the ADMIN role doesn’t grant any special privileges when it comes to current cards consultation, so a user with the ADMIN role will only receive cards that have been addressed to them (or to one of their groups or entities), just like any other user.

External devices service

Any action (read, create/update or delete) on external devices configuration is restricted to users with the ADMIN role.

Development environment

The aim of this document is to provide all the necessary information to developers who would like to start working on OperatorFabric. It will walk you through setting up the necessary tooling to be able to launch OperatorFabric in development mode, describe the structure of the project and point out useful tools (Gradle tasks, scripts, etc.) for development purposes.

36. Requirements

To install a dev environment for Operator Fabric, you need:

  • A linux physical or virtual machine

  • A git client

  • Docker

  • Docker Compose with 2.1+ file format support

  • Chrome (needed for UI tests in build)

  • jq

37. Setting up your development environment

37.1. Clone repository

git clone https://github.com/opfab/operatorfabric-core.git
cd operatorfabric-core

Do not forget to set the proxy if needed, for example:

git config --global http.proxy http://LOGIN:PWD@PROXY_URL:PORT

37.2. Install sdkman and nvm

sdkman is used to manage the java version, see sdkman.io/ for installation.

nvm is used to manage the node and npm versions, see github.com/nvm-sh/nvm for installation.

Do not forget to set the proxy if needed, for example:

export https_proxy=http://LOGIN:PWD@PROXY_URL:PORT

Once you have installed sdkman and nvm, you can source the following script to set up your development environment (appropriate versions of Node and Java, and project variables set):

source bin/load_environment_light.sh

From now on, you can use the environment variable ${OF_HOME} to go back to the home directory of OperatorFabric.

37.3. Setting up your proxy configuration

If you use a proxy to access the internet, you must configure it for all the tools needed to build opfab.

37.3.1. Docker

To download images, you need to set the proxy for the docker daemon.

See docker documentation to set the proxy

To build docker images, you need to set a proxy via the https_proxy variable:

export https_proxy=http://LOGIN:PWD@PROXY_URL:PORT

You may encounter DNS errors when building docker images; in this case, use the IP address of your proxy instead of the FQDN.

37.3.2. Gradle

In gradle.properties (in the ~/.gradle directory)

Example :

systemProp.https.proxyHost=PROXY_URL
systemProp.https.proxyPort=PROXY_PORT
systemProp.https.proxyUser=LOGIN
systemProp.https.proxyPassword=PASSWORD

37.3.3. Npm

Example :

npm config set https-proxy http://LOGIN:PWD@PROXY_URL:PORT

37.4. Deploy needed docker containers

OperatorFabric development needs docker images of MongoDB, RabbitMQ, web-ui and Keycloak running.

For this, use:

cd ${OF_HOME}/config/dev
./docker-compose.sh

37.5. Build OperatorFabric with Gradle

The Gradle wrapper is used in order to ensure the project is built the same way from one machine to another.

To fully build opfab :

cd ${OF_HOME}
./gradlew dockerTagSnapshot

Optionally, it is possible to:

37.5.1. Only compile and package the jars

cd ${OF_HOME}
./gradlew assemble

37.5.2. Only launch the Unit Test, compile and package the jars

cd ${OF_HOME}
./gradlew build

37.6. Run OperatorFabric Services using the run_all.sh script

cd ${OF_HOME}
bin/run_all.sh -w start

See bin/run_all.sh -h for details.

37.7. Log into the UI

URL: localhost:2002
login: operator1_fr
password: test

Other users available in development mode are operator2_fr, operator3_fr, operator4_fr and admin, with test as password.

37.8. Push cards to the feed

You can check that cards appear in the feed by running the following scripts:

./src/test/resources/loadTestConf.sh
./src/test/resources/send6TestCards.sh

37.9. Enabling local quality report generation

This step is optional and generally not needed (only if you want a SonarQube report locally).

Sonarqube reporting needs a SonarQube docker container. Use ${OF_HOME}/src/main/docker/test-quality-environment/docker-compose.yml to get it running.

To generate the quality report, run the following commands:

cd ${OF_HOME}
./gradlew jacocoTestReport

To export the reports into the SonarQube docker instance, install and use SonarScanner.

38. User Interface

In the following document the variable declared as OF_HOME is the root folder of the operatorfabric-core project.

CLI stands for Command Line Interface.

38.1. Build

Within the folder ${OF_HOME}/ui/main, run ng build to build the project.

The build artifacts will be stored in:

${OF_HOME}/ui/main/build/distribution

The previous command could lead to the following error:

Generating ES5 bundles for differential loading...
An unhandled exception occurred: Call retries were exceeded
See "/tmp/ng-<random-string>/angular-errors.log" for further details.

where ng-<random-string> is a temporary folder created by Angular to build the front-end.

Use node --max_old_space_size=4096 node_modules/@angular/cli/bin/ng build instead to solve this problem.

38.2. Test

38.2.1. Standalone tests

Run the command ng test --watch=false in the ${OF_HOME}/ui/main directory to execute the unit tests written with Jasmine, using Karma to drive the browser.

38.2.2. Test during UI development

  1. if the RabbitMQ, MongoDB and Keycloak docker containers are not running, launch them;

  2. set your environment variables with source ${OF_HOME}/bin/load_environment_light.sh;

  3. run the micro services using the same command as earlier: ${OF_HOME}/bin/run_all.sh start;

  4. launch an angular server with the command: ng serve;

  5. test your changes in your browser using this url: localhost:4200 which leads to localhost:4200/#/feed.

38.2.2.1. Troubleshooting :

If ng serve returns the error Command 'ng' not found, install the Angular CLI globally with the following command.

npm install -g @angular/cli

This will install the latest version of the Angular command line, which might not be in line with the one used by the project, but it’s not an issue as when you run ng serve the local version of the Angular CLI (as defined in the package.json file) will be used.

If it is still not working, run the following in the ui/main directory:

npm link @angular/cli

39. Environment variables

These variables are loaded by bin/load_environment_light.sh

  • OF_HOME: OperatorFabric root dir

  • OF_VERSION : OperatorFabric version, as defined in the $OF_HOME/VERSION file

  • OF_CLIENT_REL_COMPONENTS : List of modules for the client libraries

40. Project Structure

40.1. Conventions regarding project structure and configuration

Sub-projects must conform to a few rules in order for the configured Gradle tasks to work:

40.1.1. Java

[sub-project]/src/main/java

contains java source code

[sub-project]/src/test/java

contains java tests source code

[sub-project]/src/main/resources

contains resource files

[sub-project]/src/test/resources

contains test resource files

40.1.2. Modeling

Core services projects declaring REST APIS that use Swagger for their definition must declare two files:

[sub-project]/src/main/modeling/swagger.yaml

Swagger API definition

[sub-project]/src/main/modeling/config.json

Swagger generator configuration

40.1.3. Docker

Services project all have docker image generated in their build cycle. See Gradle Tasks for details.

Per project configuration :

  • docker file : [sub-project]/src/main/docker/Dockerfile

  • docker-compose file : [sub-project]/src/main/docker/docker-compose.yml

  • runtime data : [sub-project]/src/main/docker/volume is copied to [sub-project]/build/docker-volume/ by the copyWorkingDir task. The latter can then be mounted as a volume in docker containers.

41. Development tools

41.1. Scripts (bin and CICD)

bin/load_environment_light.sh

sets up environment when sourced (java version & node version)

bin/run_all.sh

runs all services (see below)

41.1.1. run_all.sh

Please see run_all.sh -h usage before running.

Prerequisites

  • mongo running on port 27017 with user "root" and password "password" (see src/main/docker/mongodb/docker-compose.yml for a preconfigured instance).

  • rabbitmq running on port 5672 with user "guest" and password "guest" (see src/main/docker/rabbitmq/docker-compose.yml for a preconfigured instance).

Ports configuration

Port    Service              Description

2002    web-ui               Web ui and gateway (Nginx server)
2100    businessconfig       Businessconfig service http (REST)
2102    cards-publication    Card publication service http (REST)
2103    users                Users management service http (REST)
2104    cards-consultation   Card consultation service http (REST)
2105    external-devices     External devices service http (REST)
4100    businessconfig       Java debug port
4102    cards-publication    Java debug port
4103    users                Java debug port
4104    cards-consultation   Java debug port
4105    external-devices     Java debug port

41.2. Gradle Tasks

In this section only custom tasks are described. For more information on tasks, refer to the output of the "tasks" gradle task and to gradle and plugins official documentation.

41.2.1. Services

41.2.1.1. Common tasks for all sub-projects
  • Test tasks

  • Other:

    • copyWorkingDir: copies [sub-project]/src/main/docker/volume to [sub-project]/build/

    • copyDependencies: copy dependencies to build/support_libs (for Sonar)

41.2.1.2. Businessconfig Service
  • Test tasks

    • prepareTestDataDir: prepare directory (build/test-data) for test data

    • compressBundle1Data, compressBundle2Data: generate tar.gz businessconfig party configuration data for tests in build/test-data

    • prepareDevDataDir: prepare directory (build/dev-data) for bootRun task

    • createDevData: prepare data in build/test-data for running bootRun task during development

  • Other tasks

    • copyCompileClasspathDependencies: copy compile classpath dependencies, catching lombok that must be sent for sonarqube

41.2.1.3. tools/generic
  • Test tasks

    • prepareTestData: copy test data from src/test/data/simple to build/test-data/

    • compressTestArchive: compress the contents of /src/test/data/archive to /build/test-data/archive.tar.gz

41.2.2. Client Library

The jars produced by the projects under "client" are published to Maven Central after each release to make integration in client applications more manageable. See the official Sonatype documentation for more information about the requirements and publishing process.

To that end, we are using:

  • the Maven Publish Gradle plugin to take care of the metadata (producing the required pom.xml for example) and publishing the artifacts to a staging repository

  • the Signing Gradle plugin to sign the produced artifacts using a GPG key.

41.2.2.1. Configuration

For the signing task to work, you need to:

Import the opfab secret key file:

gpg2 --import OPFAB_KEY_FILE

Set the signing configuration in your gradle.properties file by adding:

signing.gnupg.keyName=ID_OF_THE_GPG_KEY_TO_USE
signing.secretKeyRingFile=LOCATION_OF_THE_KEYRING_HOLDING_THE_GPG_KEY

To get the keyName (ID_OF_THE_GPG_KEY_TO_USE) use :

gpg2 --list-secret-keys

LOCATION_OF_THE_KEYRING_HOLDING_THE_GPG_KEY is usually /YOUR_HOME_DIRECTORY/.gnupg/pubring.kbx

Set the credentials for the publication

For the publication to the staging repository (OSSRH) to work, you need to set the credentials in your gradle.properties file:

ossrhUsername=SONATYPE_JIRA_USERNAME
ossrhPassword=SONATYPE_JIRA_PASSWORD
The credentials need to belong to an account that has been granted the required privileges on the project (this is done by Sonatype on request via the same JIRA).
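Putting the signing and publication settings together, a complete gradle.properties might look like the following sketch (all values are placeholders, not real credentials or key ids):

```properties
# Signing configuration (placeholder values)
signing.gnupg.keyName=ABCD1234EF567890
signing.secretKeyRingFile=/home/myuser/.gnupg/pubring.kbx

# OSSRH publication credentials (placeholder values)
ossrhUsername=my-sonatype-username
ossrhPassword=my-sonatype-password
```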
More information

See this link for more information about importing a GPG key to your machine and getting its id.

41.2.2.2. Relevant tasks

These plugins and the associated configuration in the client.gradle file make the following tasks available:

  • publishClientLibraryMavenPublicationToOssrhRepository: this will publish the client jars to the OSSRH repository (in the case of a X.X.X.RELEASE version) or to a repos directory in the build directory (in the case of a SNAPSHOT version).

  • publishClientLibraryMavenPublicationToMavenLocal: this will publish the client jars to the local Maven repository

The publication tasks will call the signing task automatically.

See the plugins documentations for more details on the other tasks available and the dependencies between them.

As the client library publication is currently the only configured publication in our build, it is also possible to use the corresponding aggregate tasks as shortcuts: publish instead of publishClientLibraryMavenPublicationToOssrhRepository and publishToMavenLocal instead of publishClientLibraryMavenPublicationToMavenLocal.

41.2.3. Gradle Plugins

In addition to these custom tasks and standard Gradle tasks, OperatorFabric uses several Gradle plugins, among which:

41.3. API testing with Karate DSL

If your OperatorFabric instance is not running on localhost, you need to replace localhost with the address of your running instance within the karate-config.js file.

All the scripts and test files are in src/test/api/karate.

41.3.1. Run a feature

To launch a specific test, run the following command in src/test/api/karate:

$OF_HOME/gradlew karate --args=myfeature.feature

The result will be available in the target directory.
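For orientation, a minimal feature file might look like the sketch below. The endpoint, the authToken variable name, and the expectations are illustrative assumptions, not taken from the project's test suite:

```gherkin
Feature: List businessconfig processes

  Scenario: Get the process list as an authenticated user
    # 'authToken' is assumed to be provided by karate-config.js (hypothetical name)
    Given url 'http://localhost:2002/businessconfig/processes'
    And header Authorization = 'Bearer ' + authToken
    When method get
    Then status 200
```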

41.3.2. Non regression tests

You can launch OperatorFabric non-regression tests via the script launchAll.sh in src/test/api/karate.

For the tests to pass, you need a clean MongoDB database. To do that, you can use the following scripts:

  • src/test/resources/deleteAllCards.sh

  • src/test/resources/deleteAllArchivedCards.sh

41.4. Cypress Tests

Automatic UI testing

All paths for cd are given assuming you’re starting from $OF_HOME.

41.4.1. Installation

Before running Cypress tests for the first time you need to install it using NPM.

cd src/test/cypress
npm install

41.4.2. Cypress file structure

By default, all test files are located in cypress/cypress/integration, but it is possible to put them in another directory.

The commands.js file under cypress/cypress/support is used to create custom commands and overwrite existing commands.

41.4.3. Launching the OpFab instance to test

41.4.3.1. Commands

You can launch the OpFab instance for Cypress tests either in dev or docker mode. The following commands launch the instance in docker mode, just substitute dev for docker to launch it in dev mode.

cd config/docker
docker-compose down (1)
./startOpfabForCypress.sh (2)
1 Remove existing config/docker containers to avoid conflicts
2 Start "Cypress-flavoured" containers

After you’re done with your tests, you can stop and destroy containers (as it is better to start with fresh containers to avoid side-effects from previous tests) with the following commands:

cd config/docker
docker-compose down
41.4.3.2. Explanation

The Cypress tests rely on a running OpFab instance that is an adaptation from the config/docker docker-compose file (environment name, shorter time before lttd clock display, etc.).

The generateUIConfigForCypress.sh script performs this adaptation to create this base Cypress configuration.

This will create the following files under config/cypress/ui-config:

  • ui-menu.json

  • web-ui.json

  • web-ui-base.json

Where XXX-base.json and XXX.json are created by copying the corresponding XXX.json file from the standard docker configuration (found under config/docker/ui-config) and making the adaptations needed for the Cypress instance to work well for the tests (changing the authentication mode, making all features visible, etc.).

Then, during the course of the Cypress tests, the web-ui.json file will be modified to test specific features (for example, hiding a feature, defining a new menu, etc.). It is reset with the content of web-ui-base.json before each test or series of tests.

The docker container relies on the XXX.json files under config/cypress/ui-config.

For convenience, the generateUIConfigForCypress.sh script is launched as part of the startOpfabForCypress.sh script.

41.4.4. Running existing tests

To launch the Cypress test runner:

cd src/test/cypress
./node_modules/.bin/cypress open

This will open the Cypress test runner. Either click on the test you want to run or run all X tests on the right to run all existing tests.

You can select the browser (and version) that you want to use from a dropdown list on the right. The list will display the browsers that are currently installed on your computer (and supported by Cypress).

41.4.5. Running tests on 4200 (ng serve)

Follow the steps described above in "Dev mode" to start a Cypress-flavoured OpFab instance in development mode, then run ng serve to start a dynamically generated UI on port 4200:

cd ui/main
ng serve

Then launch the Cypress test runner as follows:

cd src/test/cypress
./node_modules/.bin/cypress open --config baseUrl=http://localhost:4200

41.4.6. Running tests with Gradle

The tests can also be run in command line mode using a Gradle task:

./gradlew runCypressTests

You can run a subset of the tests, for example if you want to run all the tests starting with 'User':

./gradlew runSomeCypressTests -PspecFiles=User*

41.4.7. Clearing MongoDB

If you want to start with a clean database (from the cards and archived cards point of view), you can purge the associated collections through the MongoDB shell with the following commands:

docker exec -it docker_mongodb_1 bash
mongo "mongodb://root:password@localhost:27017/?authSource=admin"
use operator-fabric
db.cards.remove({})
db.archivedCards.remove({})

41.4.8. Current status of tests

All tests should be passing when run alone (i.e. not with "run all specs") against empty card/archived cards collections. However, tests in the "Flaky" folder can sometimes fail because they involve dates (rounding errors, for example).

41.4.9. Creating new tests

Create a new XXXX.spec.js file under cypress/cypress/integration

We will need to define a convention for naming and organizing tests.
41.4.9.2. Guidelines and tips
  • Use the find or within commands rather than complex CSS selectors to target descendant elements.

  • If you want to access aliases using the this keyword, make sure you are using anonymous functions rather than fat arrow functions, otherwise use cy.get('@myAlias') to access it asynchronously (the documentation has recently been updated on this topic).

  • When running tests, make sure that you are not connected to OpFab as it can cause unexpected behaviour with read cards for example.

  • When chaining a should assertion to a cy.get command that returns several elements, it will pass if it is true for ANY of these elements. Use each + callback to check that an assertion is true on every element.

  • cy.contains is a command, not an assertion. If you want to test the attribute, classes, content etc. of an element, it’s better to target the element by id or data attribute using a cy.get() command for example and then chain an assertion with should(). This way, you will get an expected/actual error message if the assertion fails, you will avoid false positives (text is found in another sibling element) and hard to debug behaviour with retries.

  • Be careful with find() (see #1751 for an example of issue that it can cause). See the Cypress documentation for an explanation and a less flaky alternative.

41.4.10. Configuration

In cypress.config.js:

  • e2e.baseUrl: The base url of the OperatorFabric instance you’re testing against. It will be prepended to any visit call.

  • e2e.env.host: The host corresponding to the OperatorFabric instance you’re testing against. It will be used for API calls.

  • e2e.env.defaultWaitTime: Using the custom-defined command cy.waitDefaultTime() instead of cy.wait(XXX) allows the wait time to be changed globally for all steps to the value defined by this property.

41.5. Load testing with Gatling

Load tests using Gatling are written in Java. All test sources are in src/test/gatling/src/java.

If your OperatorFabric instance is not running on localhost, you need to edit the tests classes and replace localhost with the address of your running instance.

41.5.1. Run a test

To launch a specific test, run:

$OF_HOME/gradlew clean gatlingRun-<TestClassName>

The result will be available in the src/test/gatling/build/reports/gatling/ folder.

41.6. Node services

When developing Node services, you have the option to run the service outside of the Docker environment. To do this, follow these steps:

Stop the Docker container running the Node service. For example:

    docker stop cards-reminder

After stopping the Docker container, you can start the service in development mode with hot reload:

    cd node-services/cards-reminder
    npm run start:dev

41.6.1. Other useful commands

Here are some other useful commands for working with Node services:

To launch unit tests:

    npm run test

To build the service:

    npm run build

To start the service without hot reload (npm run build compiles the TypeScript sources to JavaScript first):

    node build/cardReminder.js

42. Useful recipes

42.1. Running sub-project from jar file

  • java -jar [sub-projectPath]/build/libs/[sub-project].jar

42.2. Overriding properties when running from jar file

  • java -jar [sub-projectPath]/build/libs/[sub-project].jar --spring.config.additional-location=file:[filepath]

    NB: properties may be set using a ".properties" file or a ".yml" file. See Spring Boot configuration for more info.

  • Generic property list extract :

    • server.port : embedded server port

  • :services:core:businessconfig-party-service properties list extract :

    • operatorfabric.businessconfig.storage.path (defaults to "") : where to save/load OperatorFabric Businessconfig data

42.3. Generating docker images

To generate all docker images, run `./gradlew dockerTagSnapshot`

42.4. Generating documentation (from AsciiDoc sources)

The sources for the documentation are located under src/docs/asciidoc. To generate HTML pages from these sources, use the asciidoctor gradle task from the project root:

cd $OF_HOME
./gradlew asciidoctor

The task output can be found under $OF_HOME/build/docs/asciidoc

42.5. Generating API documentation

The documentation for the API is generated from the swagger.yaml definitions using SwaggerUI. To generate the API documentation, use the generateSwaggerUI gradle task, either from the project root or from one of the services:

cd $OF_HOME
./gradlew generateSwaggerUI

The task output can be found for each service under [service-path]/build/docs/api (for example services/businessconfig/build/docs/api). Open the index.html file in a browser to have a look at the generated SwaggerUI.

43. Troubleshooting

Proxy error when running third-party docker-compose

Error message
Pulling rabbitmq (rabbitmq:3-management)...
ERROR: Get https://registry-1.docker.io/v2/: Proxy Authentication Required
Possible causes & resolution

When running docker-compose files using third-party images (such as rabbitmq, mongodb, etc.) for the first time, docker will need to pull these images from their repositories. If the docker proxy isn’t set properly, you will see the above message.

To set the proxy, follow these steps from the docker documentation.

If your proxy needs authentication, add your user and password as follows:

HTTP_PROXY=http://user:password@proxy.example.com:80/
The password should be URL-encoded.

Gradle Metaspace error

Gradle task (for example gradle build) fails with the following error:

Error message
* What went wrong:
Metaspace
Possible causes & resolution

Issue with the Gradle daemon. Stopping the daemon using ./gradlew --stop and re-launching the build should solve this issue.

Java version not available when setting up environment
When sourcing the load_environment_light script to set up your environment, you might get the following error message:

Error message
Stop! java 8.0.192-zulu is not available. Possible causes:
 * 8.0.192-zulu is an invalid version
 * java binaries are incompatible with Linux64
 * java has not been released yet

Select the next available version and update load_environment_light accordingly before sourcing it again.

Possible causes & resolution

The java version currently listed in the script might have been deprecated (for security reasons) or might not be available for your operating system (for example, 8.0.192-zulu wasn’t available for Ubuntu).

Run sdk list java to find out which versions are available. You will get this kind of output:

================================================================================
Available Java Versions
================================================================================
     13.ea.16-open       9.0.4-open          1.0.0-rc-11-grl
     12.0.0-zulu         8.0.202-zulu        1.0.0-rc-10-grl
     12.0.0-open         8.0.202-amzn        1.0.0-rc-9-grl
     12.0.0-librca       8.0.202.j9-adpt     1.0.0-rc-8-grl
     11.0.2-zulu         8.0.202.hs-adpt
     11.0.2-open         8.0.202-zulufx
     11.0.2-amzn         8.0.202-librca
     11.0.2.j9-adpt      8.0.201-oracle
     11.0.2.hs-adpt  > + 8.0.192-zulu
     11.0.2-zulufx       7.0.211-zulu
     11.0.2-librca       6.0.119-zulu
     11.0.2-sapmchn      1.0.0-rc-15-grl
     10.0.2-zulu         1.0.0-rc-14-grl
     10.0.2-open         1.0.0-rc-13-grl
     9.0.7-zulu          1.0.0-rc-12-grl

================================================================================
+ - local version
* - installed
> - currently in use
================================================================================

BUILD FAILED with message Execution failed for task ':ui:main-user-interface:npmInstall'.

Error message
FAILURE: Build failed with an exception.

    What went wrong:
    Execution failed for task ':ui:main-user-interface:npmInstall'.
Possible causes & resolution

sudo was used to run ./gradlew assemble.

Don’t use sudo to build OperatorFabric, otherwise unexpected problems could arise.

curl gets Failed to connect to localhost:2002: Connection refused

When using the following command line:

curl http://localhost:2002/
Error message
curl: (7) Failed to connect to localhost port 2002: Connection refused
Possible causes & resolution

The web-ui docker container has stopped running. Check its configuration.

curl 404 status returned by nginx

When using the following command line:

curl http://localhost:2002/thirds/

The following error appears:

Error message
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx/1.17.10</center>
</body>
</html>
Possible causes & resolution

The requested page is not (or no longer) mapped in the nginx.conf of web-ui. Update it, or check for the new endpoint of the desired page.

For this example, businessconfig now replaces the former thirds endpoint.

curl 404 status returned by OperatorFabric

When using the following command line:

curl http://localhost:2002/businessconfig/ -H "Authorization: Bearer ${token}"

where ${token} is a valid OAuth2 JWT.

The following error appears:

Error message
{"timestamp":"XXXX-XX-XXTXX:XX:XX.XXX+00:00","status":404,"error":"Not Found","message":"","path":"/businessconfig"}

where XXXX-XX-XXTXX:XX:XX.XXX+00:00 is a timestamp corresponding to the moment when the request was sent.

Possible causes & resolution

The requested endpoint is not (or no longer) valid in OperatorFabric. Check the API documentation for the correct path.

For this example, businessconfig/processes is a correct endpoint whereas businessconfig alone is not.

ERROR: for web-ui when running docker-compose in ${OF_HOME}/config/dev

When using the following commands:

cd ${OF_HOME}/config/dev
docker-compose up -d

The following error appears:

Error message
ERROR: for web-ui  Cannot start service web-ui: OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:430: container init caused \"rootfs_linux.go:58: mounting \\\"/home/legallron/projects/operatorfabric-core/config/dev/nginx.conf\\\" to rootfs …

where the mounted file path is specific to the runtime environment.

Possible causes & resolution

There is no nginx.conf file in the ${OF_HOME}/config/dev directory.

A first run of OperatorFabric docker-compose in dev config needs an nginx.conf file. To create it and run a docker-compose environment, use:

cd ${OF_HOME}/config/dev
./docker-compose.sh

If docker-compose has created a nginx.conf directory, delete it before running the previous commands.

Once this nginx.conf file is created, a simple docker-compose up -d is enough to run a dev docker-compose environment. Sometimes a nginx.conf directory has been created as an attempt to launch the web-ui docker container. See the following section to resolve this.

/docker-compose.sh: ligne 7: ./nginx.conf: is a folder when running ${OF_HOME}/config/dev/docker-compose.sh

When using the following commands:

cd ${OF_HOME}/config/dev
./docker-compose.sh

The following error appears:

Error message
./docker-compose.sh: ligne 7: ./nginx.conf: is a folder
Possible causes

A docker-compose up has been run previously without nginx.conf. A folder named nginx.conf has been created by docker-compose.

Resolution

If you have the rights to delete the folder:

cd ${OF_HOME}/config/dev
rm -rf nginx.conf
./docker-compose.sh # if you want to run OperatorFabric directly after.
cd ${OF_HOME}
bin/run_all.sh start

If you don’t have the rights to delete the folder:

cd ${OF_HOME}/config/dev
docker run -ti --rm -v $(pwd):/current alpine # if there is no `alpine` docker image available locally, it will be pulled from dockerHub
# you are now in the alpine docker container
cd /current
rm -rf nginx.conf
<ctrl-d> # to exit the `alpine` container shell environment
./docker-compose.sh # if you want to run OperatorFabric directly after.
cd ${OF_HOME}
bin/run_all.sh start

An unhandled exception occurred: Call retries were exceeded occurs when using ng build

When using the following command line:

cd ${OF_HOME}/ui/main
ng build

The following error appears:

Error message
Generating ES5 bundles for differential loading...
An unhandled exception occurred: Call retries were exceeded
See "/tmp/ng-<random-string>/angular-errors.log" for further details.

where ng-<random-string> is a temporary folder created by Angular to build the front-end.

Possible causes & resolution

There is not enough allocated memory space to build the front-end.

Use the following command to solve the problem:

node --max_old_space_size=4096 node_modules/@angular/cli/bin/ng build

44. Keycloak Configuration

The configuration needed for development purposes is automatically loaded from the dev-realms.json file. However, the steps below describe how it can be reproduced from scratch on a blank Keycloak instance, in case you want to add to it.

The Keycloak Management interface is available at [host]:89/auth/admin. Default credentials are admin/admin.

44.1. Add Realm

  • Click top left down arrow next to Master

  • Add Realm

  • Name it dev (or whatever)

44.2. Set up at least one client (ideally one per service)

44.2.1. Create client

  • Click Clients in left menu

  • Click Create Button

  • Set client ID to "opfab-client" (or whatever)

  • Select Openid-Connect Protocol

  • Click Next

  • Enable client authentication

  • Enable authorization

  • Select Authentication flows: Standard flow, Direct access grants, Implicit flow

  • Click Next

  • Enter Valid redirect URIs : localhost:2002/*

  • Add Valid redirect URIs : localhost:4200/*

  • Click Save

  • Remove Web origins settings

  • Click Save

  • Select Client scopes tab

  • Click on opfab-client-dedicated

  • From Mappers tab click Add Mapper

  • Select by configuration

  • Select User Property

  • name it sub

  • set Property to username

  • set Token claim name to sub

  • enable add to access token

  • save

  • From Mappers tab click Add Mapper

  • Select by configuration

  • Select User Attribute

  • name it groups

  • set Property to groups

  • set User attribute to groups

  • set Token claim name to groups

  • enable add to access token

  • save

  • From Mappers tab click Add Mapper

  • Select by configuration

  • Select User Attribute

  • name it entitiesId

  • set Property to entitiesId

  • set User attribute to entitiesId

  • set Token claim name to entitiesId

  • enable add to access token

  • save

44.3. Create Users

  • Click Users in left menu

  • Click Add User button

  • Set username to admin

  • Create

  • Select Credentials tab

  • set password and confirmation to "test"

  • disable Temporary flag

  • Select Attributes tab

  • add groups or entitiesId attributes if needed

Repeat the process for the other users: operator3_fr, operator1_fr, operator2_fr, etc.

44.3.1. Development-specific configuration

To facilitate development, in the configuration file provided in the git repository (dev-realms.json), sessions are set to have a duration of 10 hours (36000 seconds) and SSL is not required. These parameters should not be used in production.

The following parameters are set:

  • accessTokenLifespan: 36000

  • ssoSessionMaxLifespan: 36000

  • accessCodeLifespan: 36000

  • accessCodeLifespanUserAction: 36000

  • sslRequired: none

45. Using OAuth2 token with the CLI

45.1. Get a token

Method: POST

Body arguments:

  • client_id: string, constant = clientIdPassword;

  • grant_type: string, constant = password;

  • username: string, must match an OperatorFabric registered user name;

  • password: string, any value;

The following examples use the admin user.

45.1.1. Curl

command:

curl -s -X POST -d
"username=admin&password=test&grant_type=password&client_id=clientIdPassword"
http://localhost:2002/auth/token

example of expected result:

{"access_token":"eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiJhZG1pbiIsImV4cC
I6MTU1MjY1OTczOCwiYXV0aG9yaXRpZXMiOlsiUk9MRV9BRE1JTiIsIlJPTEVfVVNFUiJdLCJqdGkiOi
IwMmQ4MmU4NS0xM2YwLTQ2NzgtOTc0ZC0xOGViMDYyMTVhNjUiLCJjbGllbnRfaWQiOiJjbGllbnRJZF
Bhc3N3b3JkIiwic2NvcGUiOlsicmVhZCIsInVzZXJfaW5mbyJdfQ.SDg-BEzzonIVXfVBnnfq0oMbs_0
rWVtFGAZzRHj7KPgaOXT3bUhQwPOgggZDO0lv2U1klwB94c8Cb6rErzd3yjJ8wcVcnFLO4KxrjYZZxdK
VAz0CkMKqng4kQeQm_1UShsQXGLl48ezbjXyJn6mAl0oS4ExeiVsx_kYGEdqjyb5CiNaAzyx0J-J5jVD
SJew1rj5EiSybuy83PZwhluhxq0D2zPK1OSzqiezsd5kX5V8XI4MipDhaAbPYroL94banZTn9RmmAKZC
AYVM-mmHbjk8mF89fL9rKf9EUNhxOG6GE0MDqB3LLLcyQ6sYUmpqdP5Z94IkAN-FpC7k93_-RDw","to
ken_type":"bearer","refresh_token":"eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWI
iOiJhZG1pbiIsInNjb3BlIjpbInJlYWQiLCJ1c2VyX2luZm8iXSwiYXRpIjoiMDJkODJlODUtMTNmMC0
0Njc4LTk3NGQtMThlYjA2MjE1YTY1IiwiZXhwIjoxNTUyNzAxMTM4LCJhdXRob3JpdGllcyI6WyJST0x
FX0FETUlOIiwiUk9MRV9VU0VSIl0sImp0aSI6IjMwOWY2ZDllLWNmOGEtNDg0YS05ZjMxLWViOTAxYzk
4YTFkYSIsImNsaWVudF9pZCI6ImNsaWVudElkUGFzc3dvcmQifQ.jnZDt6TX2BvlmdT5JV-A7eHTJz_s
lC5fHrJFVI58ly6N7AUUfxebG_52pmuVHYULSKqTJXaLR866r-EnD4BJlzhk476FtgtVx1nazTpLFRLb
8qDCxeLrzClQBkzcxOt6VPxB3CD9QImx3bcsDwjkPxofUDmdg8AxZfGTu0PNbvO8TKLXEkeCztLFvSJM
GlN9zDzWhKxr49I-zPZg0XecgE9j4WITkFoDVwI-AfDJ3sGXDi5AN55Sz1j633QoqVjhtc0lO50WPVk5
YT7gU8HLj27EfX-6vjnGfNb8oeq189-NX100QHZM9Wgm79mIm4sRgwhpv-zzdDAkeb3uwIpb8g","exp
ires_in":1799,"scope":"read
user_info","jti":"02d82e85-13f0-4678-974d-18eb06215a65"}

45.1.2. Httpie

http --form POST http://localhost:2002/auth/token username=admin password=test
grant_type=password client_id=clientIdPassword

example of expected result:

HTTP/1.1 200 OK
Cache-Control: no-store
Content-Type: application/json;charset=utf-8
Date: Fri, 15 Mar 2019 13:57:19 GMT
Pragma: no-cache
X-Content-Type-Options: nosniff
X-Frame-Options: DENY
X-XSS-Protection: 1; mode=block
transfer-encoding: chunked

{
    "access_token":
"eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiJhZG1pbiIsImV4cCI6MTU1MjY2MDAzOS
wiYXV0aG9yaXRpZXMiOlsiUk9MRV9BRE1JTiIsIlJPTEVfVVNFUiJdLCJqdGkiOiI2MjQzMDliMS03Yz
g3LTRjZGMtODQ0My0wMTI0NTE1Zjg3ZjgiLCJjbGllbnRfaWQiOiJjbGllbnRJZFBhc3N3b3JkIiwic2
NvcGUiOlsicmVhZCIsInVzZXJfaW5mbyJdfQ.VO4OZL7ycqNez0cHzM5WPuklr0r6SAOkUdUV2qFa5Bd
3PWx3DFHAHUxkfSX0-R4OO6iG2Zu7abzToAZNVLwk107LH_lWXOMQBriGx3d2aSgCf1yx_wI3lHDd8ST
8fxV7uNeolzywYavSpMGfgz9GXLzmnyeuPH4oy7eyPk9BwWVi0d7a_0d-EfhE1T8eaiDfymzzNXJ4Bge
8scPy-93HmWpqORtJaFq1qy4QgU28N2LgHFEEEWCSzfhYXH-LngTCP3-JSNcox1hI51XBWEqoeApKdfD
J6o4szR71SIFCBERxCH9TyUxsFywWL3e-YnXMiP2J08eB8O4YwhYQEFqB8Q",
    "expires_in": 1799,
    "jti": "624309b1-7c87-4cdc-8443-0124515f87f8",
    "refresh_token":
"eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiJhZG1pbiIsInNjb3BlIjpbInJlYWQiLC
J1c2VyX2luZm8iXSwiYXRpIjoiNjI0MzA5YjEtN2M4Ny00Y2RjLTg0NDMtMDEyNDUxNWY4N2Y4IiwiZX
hwIjoxNTUyNzAxNDM5LCJhdXRob3JpdGllcyI6WyJST0xFX0FETUlOIiwiUk9MRV9VU0VSIl0sImp0aS
I6ImRiYzMxNTJiLTM4YTUtNGFmZC1hY2VmLWVkZTI4MjJkOTE3YyIsImNsaWVudF9pZCI6ImNsaWVudE
lkUGFzc3dvcmQifQ.Ezd8kbfNQHOOvUCNNN4UmOOkncHiT9QVEM63FiW1rq0uXDa3xfBGil8geM5MsP0
7Q2He-mynkFb8sGNDrAXTdO-8r5o4a60zWrktrMg2QH4icC1lyeZpiwZxe6675QpLpSeMlXt9PdYj-pb
14lrRookxXP5xMQuIMteZpbtby7LuuNAbNrjveZ1bZ4WMi7zltUzcYUuqHlP1AYPteGRrJVKXiuPpoDv
gwMsEk2SkgyyACI7SdZZs8IT9IGgSsIjjgTMQKzj8P6yYxNLUynEW4o5y1s2aAOV0xKrzkln9PchH9zN
qO-fkjTVRjy_LBXGq9zkn0ZeQ3BUe1GuthvGjaA",
    "scope": "read user_info",
    "token_type": "bearer"
}

45.2. Extract token

From the previous results, the value needed to authenticate with OperatorFabric services is the content of the "access_token" attribute of the response body.

Once this value is extracted, it needs to be passed at the end of an HTTP header of type Authorization: Bearer. Note that a space is needed between Bearer and the actual token value. Examples from the previous results:

45.2.1. Curl

Authorization:Bearer
eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiJhZG1pbiIsImV4cCI6MTU1MjY1OTczOCw
iYXV0aG9yaXRpZXMiOlsiUk9MRV9BRE1JTiIsIlJPTEVfVVNFUiJdLCJqdGkiOiIwMmQ4MmU4NS0xM2Y
wLTQ2NzgtOTc0ZC0xOGViMDYyMTVhNjUiLCJjbGllbnRfaWQiOiJjbGllbnRJZFBhc3N3b3JkIiwic2N
vcGUiOlsicmVhZCIsInVzZXJfaW5mbyJdfQ.SDg-BEzzonIVXfVBnnfq0oMbs_0rWVtFGAZzRHj7KPga
OXT3bUhQwPOgggZDO0lv2U1klwB94c8Cb6rErzd3yjJ8wcVcnFLO4KxrjYZZxdKVAz0CkMKqng4kQeQm
_1UShsQXGLl48ezbjXyJn6mAl0oS4ExeiVsx_kYGEdqjyb5CiNaAzyx0J-J5jVDSJew1rj5EiSybuy83
PZwhluhxq0D2zPK1OSzqiezsd5kX5V8XI4MipDhaAbPYroL94banZTn9RmmAKZCAYVM-mmHbjk8mF89f
L9rKf9EUNhxOG6GE0MDqB3LLLcyQ6sYUmpqdP5Z94IkAN-FpC7k93_-RDw

45.2.2. Httpie

Authorization:Bearer
eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiJhZG1pbiIsImV4cCI6MTU1MjY2MDAzOSw
iYXV0aG9yaXRpZXMiOlsiUk9MRV9BRE1JTiIsIlJPTEVfVVNFUiJdLCJqdGkiOiI2MjQzMDliMS03Yzg
3LTRjZGMtODQ0My0wMTI0NTE1Zjg3ZjgiLCJjbGllbnRfaWQiOiJjbGllbnRJZFBhc3N3b3JkIiwic2N
vcGUiOlsicmVhZCIsInVzZXJfaW5mbyJdfQ.VO4OZL7ycqNez0cHzM5WPuklr0r6SAOkUdUV2qFa5Bd3
PWx3DFHAHUxkfSX0-R4OO6iG2Zu7abzToAZNVLwk107LH_lWXOMQBriGx3d2aSgCf1yx_wI3lHDd8ST8
fxV7uNeolzywYavSpMGfgz9GXLzmnyeuPH4oy7eyPk9BwWVi0d7a_0d-EfhE1T8eaiDfymzzNXJ4Bge8
scPy-93HmWpqORtJaFq1qy4QgU28N2LgHFEEEWCSzfhYXH-LngTCP3-JSNcox1hI51XBWEqoeApKdfDJ
6o4szR71SIFCBERxCH9TyUxsFywWL3e-YnXMiP2J08eB8O4YwhYQEFqB8Q
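In practice it is convenient to store the header in a shell variable and reuse it across calls; a small sketch (the token value is a shortened placeholder, and the curl call is shown commented out since it needs a running instance):

```shell
# Placeholder token value (use the access_token from your own /auth/token response)
token="eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.PAYLOAD.SIGNATURE"

# Note the single space between "Bearer" and the token value
auth_header="Authorization: Bearer ${token}"
echo "${auth_header}"

# Example use against a running instance:
# curl -H "${auth_header}" http://localhost:2002/businessconfig/processes
```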

45.3. Check a token

45.3.1. Curl

from previous example

curl -s -X POST -d
"token=eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiJhZG1pbiIsImV4cCI6MTU1MjY1
OTczOCwiYXV0aG9yaXRpZXMiOlsiUk9MRV9BRE1JTiIsIlJPTEVfVVNFUiJdLCJqdGkiOiIwMmQ4MmU4
NS0xM2YwLTQ2NzgtOTc0ZC0xOGViMDYyMTVhNjUiLCJjbGllbnRfaWQiOiJjbGllbnRJZFBhc3N3b3Jk
Iiwic2NvcGUiOlsicmVhZCIsInVzZXJfaW5mbyJdfQ.SDg-BEzzonIVXfVBnnfq0oMbs_0rWVtFGAZzR
Hj7KPgaOXT3bUhQwPOgggZDO0lv2U1klwB94c8Cb6rErzd3yjJ8wcVcnFLO4KxrjYZZxdKVAz0CkMKqn
g4kQeQm_1UShsQXGLl48ezbjXyJn6mAl0oS4ExeiVsx_kYGEdqjyb5CiNaAzyx0J-J5jVDSJew1rj5Ei
Sybuy83PZwhluhxq0D2zPK1OSzqiezsd5kX5V8XI4MipDhaAbPYroL94banZTn9RmmAKZCAYVM-mmHbj
k8mF89fL9rKf9EUNhxOG6GE0MDqB3LLLcyQ6sYUmpqdP5Z94IkAN-FpC7k93_-RDw"
http://localhost:2002/auth/check_token

which gives the following example of result:

{
    "sub":"admin",
    "scope":["read","user_info"],
    "active":true,"exp":1552659738,
    "authorities":["ROLE_ADMIN","ROLE_USER"],
    "jti":"02d82e85-13f0-4678-974d-18eb06215a65",
    "client_id":"clientIdPassword"
}

45.3.2. Httpie

from previous example:

http --form POST http://localhost:2002/auth/check_token
token=eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiJhZG1pbiIsImV4cCI6MTU1MjY2M
DAzOSwiYXV0aG9yaXRpZXMiOlsiUk9MRV9BRE1JTiIsIlJPTEVfVVNFUiJdLCJqdGkiOiI2MjQzMDliM
S03Yzg3LTRjZGMtODQ0My0wMTI0NTE1Zjg3ZjgiLCJjbGllbnRfaWQiOiJjbGllbnRJZFBhc3N3b3JkI
iwic2NvcGUiOlsicmVhZCIsInVzZXJfaW5mbyJdfQ.VO4OZL7ycqNez0cHzM5WPuklr0r6SAOkUdUV2q
Fa5Bd3PWx3DFHAHUxkfSX0-R4OO6iG2Zu7abzToAZNVLwk107LH_lWXOMQBriGx3d2aSgCf1yx_wI3lH
Dd8ST8fxV7uNeolzywYavSpMGfgz9GXLzmnyeuPH4oy7eyPk9BwWVi0d7a_0d-EfhE1T8eaiDfymzzNX
J4Bge8scPy-93HmWpqORtJaFq1qy4QgU28N2LgHFEEEWCSzfhYXH-LngTCP3-JSNcox1hI51XBWEqoeA
pKdfDJ6o4szR71SIFCBERxCH9TyUxsFywWL3e-YnXMiP2J08eB8O4YwhYQEFqB8Q

which gives the following example of result:

HTTP/1.1 200 OK
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
Content-Type: application/json;charset=utf-8
Date: Fri, 15 Mar 2019 14:19:31 GMT
Expires: 0
Pragma: no-cache
X-Content-Type-Options: nosniff
X-Frame-Options: DENY
X-XSS-Protection: 1; mode=block
transfer-encoding: chunked

{
    "active": true,
    "authorities": [
        "ROLE_ADMIN",
        "ROLE_USER"
    ],
    "client_id": "clientIdPassword",
    "exp": 1552660039,
    "jti": "624309b1-7c87-4cdc-8443-0124515f87f8",
    "scope": [
        "read",
        "user_info"
    ],
    "sub": "admin"
}

45.4. Extract token

The jq utility, sadly not always available on some Linux distros, parses JSON input and extracts the requested JSON path value (access_token here). Here is a way to do so:

 curl -d "username=${user}&password=${password}&grant_type=password&client_id=opfab-client" "http://localhost:2002/auth/token" | jq -r .access_token

where:

  • ${user}: a login existing in Keycloak for OperatorFabric;

  • ${password}: the password for that login in Keycloak;

  • opfab-client: the id of the client for OperatorFabric associated with the dev realm in Keycloak, in a dev (${OF_HOME}/config/dev) or docker (${OF_HOME}/config/docker) configuration of OperatorFabric.

The -r option, for raw, leaves the output without any quotes.
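If jq is not available, the same extraction can be approximated with POSIX tools. A fragile but dependency-free sketch (fine for quick manual tests, not for production scripts; it assumes the token contains no escaped quotes):

```shell
# Sample response body (in practice, this would come from the curl call above)
response='{"access_token":"abc123","token_type":"bearer","expires_in":1799}'

# Extract the access_token value with sed
token=$(printf '%s' "${response}" | sed -n 's/.*"access_token" *: *"\([^"]*\)".*/\1/p')
echo "${token}"
```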

46. Kafka Implementation

Next to publishing cards to OperatorFabric using the REST API, OperatorFabric also supports publishing cards via a Kafka topic. Kafka is disabled in the default configuration.

46.1. Setup Kafka environment

If you do not have a Kafka environment running, you need to set one up so the Kafka consumers and producers can find each other. You can, for example, download lenses.io for an easy-to-use broker with a graphical interface. Another option is to add a free broker like a bitnami Kafka image to docker compose. To do so, add the following lines to config/dev/docker-compose.yml or config/docker/docker-compose.yml for the zookeeper and bitnami images:

services:
  zookeeper:
    image: bitnami/zookeeper:3
    ports:
      - "2181:2181"
    environment:
      ALLOW_ANONYMOUS_LOGIN: "yes"
  kafka:
    image: bitnami/kafka:2
    ports:
      - "9092:9092"
    environment:
      KAFKA_BROKER_ID: "1"
      KAFKA_LISTENERS: "PLAINTEXT://:9092"
      KAFKA_ADVERTISED_LISTENERS: "PLAINTEXT://172.17.0.1:9092"
      KAFKA_ZOOKEEPER_CONNECT: "zookeeper:2181"
      ALLOW_PLAINTEXT_LISTENER: "yes"
  rabbitmq:

46.2. Enabling Kafka

To enable Kafka support you need to set the spring.kafka.consumer.group-id property in the cards-publication.yml file:

  spring:
    kafka:
      consumer:
        group-id: opfab-command

The default topic from which the messages are consumed is called opfab. This setting can be modified via the opfab.kafka.card.topics.topicname property. Messages are encoded in the CardCommand.card field. The default topic to which messages are produced is called opfab-response. This setting can be modified via the opfab.kafka.topics.response-card property, see below. Messages produced by OperatorFabric are encoded in the CardCommand.responseCard field.

The default Kafka Avro serializers and deserializers need a registry service. Make sure the registry service setting is provided in the cards-publication.yml file. When you use the provided KafkaAvroWithoutRegistrySerializer and KafkaAvroWithoutRegistryDeserializer, no schema registry setting is needed.

With the default settings, the Kafka consumer expects a broker running on 127.0.0.1:9092 and a schema registry on 127.0.0.1:8081.

Example settings for the cards-publication.yml file:

operatorfabric:
  cards-publication:
    kafka:
      topics:
        card:
          topicname: opfab
        response-card:
          topicname: opfab-response
      schema:
        registry:
          url: http://localhost:8081

See the Cards-publication service configuration for more settings.

See Schema management for detailed information on the use and benefits of a schema registry.

46.3. OperatorFabric Kafka source code

46.4. Listener / deserializer

Most of the OperatorFabric Kafka implementation can be found at

org.opfab.cards.publication.kafka

for the implementation of the deserializers and mapping of Kafka topics to OperatorFabric cards and

org.opfab.autoconfigure.kafka

for the various Kafka configuration options.

46.4.1. Kafka OperatorFabric AVRO schema

The AVRO schema, the byte format in which messages are transferred using Kafka topics, can be found at client/src/main/avro. Message are wrapped in a CardCommand object before being sent to a Kafka topic. The CardCommand consists of some additional information and the OperatorFabric card itself, see also Card Structure. The additional information, for example CommandType, consists mostly of the information present in a REST operation but not in Kafka. For example the http method (POST, DELETE, UPDATE) used.

46.5. Configure Kafka

46.5.1. Setting a new deserializer

By default, OperatorFabric uses the io.confluent.kafka.serializers.KafkaAvroDeserializer from Confluent. However, you can write your own deserializer. To use your own deserializer, make sure spring.deserializer.value.delegate.class points to your deserializer.

46.5.2. Configuring a broker

When you have a broker running on localhost port 9092, you do not need to set the bootstrap servers. If this is not the case, you need to tell Operator Fabric where the broker can be found. You can do so by setting the bootstrap-servers property in the cards-publication.yml file:

spring:
  kafka:
    bootstrap-servers: 172.17.0.1:9092

46.6. Kafka card producer

To send a CardCommand to OperatorFabric, start by implementing a simple Kafka producer, following for example the Spring for Apache Kafka guide. Note that some properties of CardCommand or its embedded Card are required. If not set, the card will be rejected by OperatorFabric.

When you dump the card (which is about to be put on a topic) to stdout, you should see something like the JSON below. Ignore the actual values; they are sample data.

{
  "command": "CREATE_CARD",
  "process": "integrationTest",
  "processInstanceId": "fa6ce61f-192f-11eb-a6e3-eea952defe56",
  "card": {
    "parentCardUid": null,
    "publisher": "myFirstPublisher",
    "processVersion": "2",
    "state": "FirstUserTask",
    "publishDate": null,
    "lttd": null,
    "startDate": 1603897942000,
    "endDate": 1604070742000,
    "severity": "ALARM",
    "tags": null,
    "timeSpans": null,
    "details": null,
    "title": {
      "key": "FirstUserTask.title",
      "parameters": null
    },
    "summary": {
      "key": "FirstUserTask.summary",
      "parameters": null
    },
    "userRecipients": [
      "tso1-operator",
      "tso2-operator"
    ],
    "groupRecipients": null,
    "entitiesAllowedToRespond": [
      "ENTITY1_FR"
    ],
    "entityRecipients": null,
    "hasBeenAcknowledged": null,
    "data": "{\"action\":\"Just do something\"}"
  }
}
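Before producing such a dump to the topic, a quick pre-flight check with jq can catch missing required properties. This is only a sketch, not the official validation: the field list below is illustrative and check_card is a hypothetical helper.

```shell
# Abbreviated sample card (values are illustrative, not a real card).
card_json='{"command":"CREATE_CARD","card":{"publisher":"myFirstPublisher","processVersion":"2","state":"FirstUserTask","startDate":1603897942000,"severity":"ALARM","title":{"key":"t"},"summary":{"key":"s"}}}'

# Exits non-zero if any of the checked fields is null or absent (jq -e).
check_card() {
  printf '%s' "$1" | jq -e '
    .card.publisher and .card.processVersion and .card.state
    and .card.startDate and .card.severity
    and .card.title and .card.summary' > /dev/null
}

check_card "$card_json" && echo "card looks complete"
```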

46.7. Response Cards

OperatorFabric response cards can be sent via REST or put on a Kafka topic. The Kafka response card configuration follows the convention used for configuring a REST endpoint. Instead of setting an 'http://host/api' URL, you set it to 'kafka:response-topic' in the external-recipients: section of the cards-publication.yml file:

operatorfabric:
  cards-publication:
    external-recipients:
      recipients:
        - id: "processAction"
          url: "http://localhost:8090/test"
          propagateUserToken: false
        - id: "mykafka"
          url: "kafka:topicname"
          propagateUserToken: false

Note that topicname is a placeholder for now. All response cards are returned via the same Kafka response topic, as specified in the opfab.kafka.topics.response-card field.

47. MailHog SMTP server

MailHog is an SMTP mail server suitable for testing. OpFab uses MailHog to test the mail notification service. MailHog lets you view sent messages in its web UI or retrieve them with its JSON API.

47.1. MailHog docker container

A MailHog docker container is configured in Opfab docker-compose.yaml:

  mailhog:
    image: mailhog/mailhog:v1.0.1
    ports:
      - 1025:1025
      - 8025:8025

The container exposes port 1025 for the SMTP protocol and port 8025 for the web UI and HTTP REST API. The MailHog web interface is accessible at localhost:8025. The MailHog REST API allows you to list, retrieve and delete messages.

For example, to retrieve the list of received messages, send an http GET request to: localhost:8025/api/v2/messages
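As a sketch, the subjects of received messages can be extracted from that endpoint with jq. The curl call is commented out so the snippet works offline; the abbreviated response below mimics the MailHog v2 shape and its values are made up.

```shell
# Live call (requires a running MailHog instance):
# messages=$(curl -s http://localhost:8025/api/v2/messages)

# Abbreviated, made-up sample of the v2 response:
messages='{"total":1,"items":[{"Content":{"Headers":{"Subject":["Card alert"]}}}]}'

# Print the subject of each message.
printf '%s' "$messages" | jq -r '.items[].Content.Headers.Subject[0]'
```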

48. Dependency analysis

Inside the bin/dependencies directory, you can find scripts to help analyze the dependencies used by the project. These scripts should be executed from within the same directory.

The first script, named generateDependencyReport.sh, aggregates all dependency trees (including Java and npm dependencies) into a single file. The generated file includes the name of the current branch, making it easier to compare dependencies between branches.

The second script, searchForDependency.sh, allows you to search for dependencies based on the generated file. For example, if you are searching for the "rabbit" dependency:

cd bin/dependencies
./searchForDependency.sh rabbit

The analysis is conducted on the current branch by default. If you wish to utilize a report from another branch, you can specify it as follows:

./searchForDependency.sh rabbit myBranch
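Conceptually, the search boils down to a case-insensitive grep over the aggregated report. The sketch below uses a made-up report file and format, just to illustrate the idea; the real file is produced by generateDependencyReport.sh.

```shell
# Build a tiny stand-in for the aggregated dependency report
# (file content and format are hypothetical).
report=$(mktemp)
cat > "$report" <<'EOF'
com.rabbitmq:amqp-client:5.14.2
org.springframework.boot:spring-boot-starter-web:2.7.0
EOF

# Case-insensitive search over the report (illustrative).
grep -i 'rabbit' "$report"
```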

OperatorFabric Community

The aim of this document is to present the OperatorFabric community, its code of conduct and to welcome contributors!

First, thank you for your interest !

We can’t stress enough that feedback, discussions, questions and contributions to OperatorFabric are very much appreciated. However, because the project is still in its early stages, we’re not fully equipped for any of it yet, so please bear with us while the contribution process and tooling are sorted out.

This project is governed by the OperatorFabric Technical Charter.

This project applies the LF Energy Code of Conduct. By participating, you are expected to uphold this code. Please report unacceptable behavior to opfab_AT_lists.lfenergy.org.

49. License and Developer Certificate of Origin

OperatorFabric is an open source project licensed under the Mozilla Public License 2.0. By contributing to OperatorFabric, you accept and agree to the terms and conditions for your present and future contributions submitted to OperatorFabric.

The project also uses a mechanism known as a Developer Certificate of Origin (DCO) to ensure that we are legally allowed to distribute all the code and assets for the project. A DCO is a legally binding statement that asserts that you are the author of your contribution, and that you wish to allow OperatorFabric to use your work.

Contributors sign-off that they adhere to these requirements by adding a Signed-off-by line to commit messages. All commits to any repository of the OperatorFabric organization have to be signed-off like this:

This is my commit message.

Signed-off-by: John Doe <john.doe@email-provider.com>

You can write it manually but Git has a -s command line option to append it automatically to your commit message:

$ git commit -s -m 'This is my commit message'

Most IDEs can also take care of this for you.

A check will be performed during the integration, making sure all commits in the Pull Request contain a valid Signed-off-by line.
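You can try the mechanism locally. The sketch below creates a throwaway repository, commits with -s and verifies the trailer is present (identity values are the example ones above):

```shell
# Throwaway repository so the demo does not touch any real project.
repo=$(mktemp -d)
cd "$repo"
git init -q

# Commit with -s; the sign-off line uses the configured identity.
git -c user.name="John Doe" -c user.email="john.doe@email-provider.com" \
    commit -q --allow-empty -s -m "This is my commit message"

# The commit message now ends with the Signed-off-by trailer.
git log -1 --format=%B | grep "Signed-off-by: John Doe <john.doe@email-provider.com>"
```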

These processes and templates have been adapted from the ones defined by the PowSyBl project.

50. Reporting Bugs or Vulnerabilities and Suggesting Enhancements

Anyone is welcome to report bugs and suggest enhancements or new features by opening a GitHub issue on this repository. Vulnerabilities can be reported publicly in the same way.

51. Contributing Code or Documentation

51.1. Contribution Workflow

The project started out using a Feature Branch workflow, but as the project team grew and we needed to manage support to previous releases we decided to switch to a workflow based on the Git Flow workflow, starting after version 1.3.0.RELEASE.

The principles for this workflow were first described in the blog post linked above, and this document attempts to summarize how we adapted it to our project. Statements in quotes are from the original blog post.

In this document, "repository version" refers to the version defined in the VERSION file at the root of the project, which is a parameter for certain build tasks and for our CICD pipeline.

51.1.1. Principles

51.1.1.1. develop branch

The role of the develop branch is quite similar to that of the master branch in our previous "Feature Branch" workflow.

The develop branch is where feature branches are branched off from, and where they’re merged back to. This way, the HEAD of the develop branch "always reflects a state with the latest delivered development changes for the next release".

The repository version on the develop branch should always be SNAPSHOT.

The daily CRON GitHub Action generating the documentation and docker images for the SNAPSHOT version is run from this branch (see our CICD documentation for details).

51.1.1.2. master branch

"When the source code in the develop branch reaches a stable point and is ready to be released, all the changes should be merged back into master somehow and then tagged with a release number."

This means that any commit on master is a production-ready release, be it a minor or a major version.

Any commit on master triggers a GitHub Action build generating and pushing documentation and docker images for the corresponding release version. As a version released from master is also by design the latest version, it will also update the latest docker images and the current documentation folder on the website.

51.1.1.3. Hotfix branches

While minor and major versions are tagged in increasing and linear order on the master branch, it might be necessary to create patches on previous releases.

To do so, we create hotfix branches branching off version commits on the master branch.

For example, the 1.8.hotfixes branch branches off the commit tagged 1.8.0.RELEASE on the master branch, and will contain tags for 1.8.1.RELEASE, 1.8.2.RELEASE and so on.

Naming convention: Hotfix branch names should always start with the two digits representing the minor version they’re patching, followed by ".hotfixes".

Examples of valid hotfix branch names:
  • 1.8.hotfixes

  • 2.0.hotfixes

51.1.1.4. Feature branches

Feature branches are used to develop new features or fix bugs described in GitHub issues. They have two distinct use cases with similar workflows.

Feature branches for the next release

These feature branches are used to develop new features or fix bugs for the next release.

Their lifecycle is as follows:

  1. A new feature branch is branched off develop before starting work on a feature or bug.

  2. Once the developments are deemed ready by the developer(s) working on the feature, a pull request should be created for this branch.

  3. New pull requests are discussed during daily meetings to assign someone from the Reviewers group to the issue.

  4. The pull request author and reviewer(s) then make use of the GitHub pull request features (comments, proposed changes, etc.) to make changes to the PR until it meets the project requirements.

  5. Once it is ready to be included in the next version, the pull request is then merged back into develop.

Feature branches for hotfixes

These feature branches fix bugs in existing releases and give rise to new patches.

Their lifecycle is similar to regular feature branches, except they should be branched off (and merged back to) the X.X.hotfixes branch corresponding to the release they are fixing.

Example: A feature branch working on bug 1234 affecting version 1.8.0.RELEASE should be branched off branch 1.8.hotfixes.

Common conventions

Naming convention: Feature branch names should always start with "FE-" followed by the reference of the GitHub issue they’re addressing, followed by additional information to give context.

Examples of valid feature branch names:
  • FE-123-AddMailNotification

  • FE-123-FixIssueWithMailNotification

  • FE-123-RefactorBundleCompression

Examples of invalid feature branch names:
  • 123

  • FE-123

  • SomeTextDescribingTheBranch

Commit messages should also contain the GitHub issue reference: `My commit message (#123)`

This allows the branch, PR and commits to be directly accessible from the GitHub issue.
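A hypothetical helper (not part of the project tooling) can express the naming convention above as a regular expression:

```shell
# Valid names: "FE-" + issue number + "-" + context,
# e.g. FE-123-AddMailNotification.
is_valid_feature_branch() {
  printf '%s' "$1" | grep -Eq '^FE-[0-9]+-.+'
}

is_valid_feature_branch "FE-123-AddMailNotification" && echo "valid"
is_valid_feature_branch "FE-123" || echo "invalid: missing context"
```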

51.1.1.5. Release branches
Release candidate

Once developments are in a release-ready state and have been tested on the develop branch, a release candidate branch should be created off develop.

"All features that are targeted for the release-to-be-built must be merged in to develop at this point in time. All features targeted at future releases may not—they must wait until after the release branch is branched off."
In contrast to what is described in the original blog post, for now we have chosen to only create the release branch once the developments are completely ready and tested on the develop branch, so that no fixes should be made on the release branch. This simplifies things because it means that release branches don’t have to be merged back into develop.

Once the X.X.X.release branch has been created, a new commit should be made on this branch to change the repository version from SNAPSHOT to X.X.X-RC.RELEASE.

Release

When release candidate is ready to be released, a new commit should be made on the X.X.X.release branch to change the repository version from X.X.X-RC.RELEASE to X.X.X.RELEASE.

Then, pushing the branch will trigger a build and a "dry-run" generation of docker images. The aim is to detect any issue with this generation before moving to master.

Finally, the X.X.X.release branch can be merged into master, triggering the build described above for the master branch. The resulting merge commit on master should then be tagged with X.X.X.RELEASE.

All commits on master should be merge commits from release branches; direct pushes on master are disabled.

Naming convention: The name of a release branch should match the repository version it is meant to merge into master but in lower case to avoid confusion with release tags on master.

Example: The valid branch name for the branch bringing 1.3.0.RELEASE into master is 1.3.0.release

51.1.2. Examples and commands

The aim of this section is to illustrate how our workflow works on a concrete example, complete with the required git commands.

51.1.2.1. Initial state

In the initial state of our example, only develop and master exist.

The repository version in master is 1.3.0.RELEASE, and the develop branch has just been branched off it. Commits have been added to develop to change the repository version to SNAPSHOT and implement the changes necessary for Git flow.

51.1.2.2. Starting work on a new feature for the next version

Let’s say we want to start working on a feature described in GitHub issue #123.

git checkout develop (1)
git pull (2)
git checkout -b FE-123 (3)
1 Check out the develop branch
2 Make sure it is up to date with the remote (=GitHub repository)
3 Create a FE-123 branch off the develop branch

Then, you can start working on the feature and commit your work to the branch. Referencing the issue you’re working on at the end of the commit message allows the commit to be listed on the issue’s page for future reference.

git commit -s -m "Short message describing content of commit (#123)"
The -s flag automatically adds a sign-off to your commit message, which is our way to implement the Developer Certificate of Origin.

At any point during your work you can push your feature branch to the GitHub repository, to back your work up, let others look at your work or contribute to the feature.

To do this, just run:

git push

If it’s your first push to this branch, Git will prompt you to define the remote branch to be associated with your local branch with the following command:

git push --set-upstream origin FE-123

You can re-work, squash your commits and push as many times as you want on a feature branch. Force pushes are allowed on feature branches.

To see your branch:

  1. Go to the operatorfabric-core repository on GitHub

  2. Click the branches tab

Feel free to add or update a copyright header (on top of the existing ones) to files you create or amend. See src/main/headers for examples.

51.1.2.3. Submitting a pull request to develop

Once you are satisfied with the state of your developments, you can submit it as a pull request.

Before submitting your branch as a pull request, please squash/fix your commits to reduce the number of commits and comment them accordingly. In the end, the division of changes into commits should make the PR easier to understand and review.

You should also take a look at the review checklist below to make sure your branch meets its criteria.

Once you feel your branch is ready, submit a pull request. Open pull requests are then reviewed by the core maintainers to assign a reviewer to each of them.

To do so, go to the branches tab of the repository as described above. Click the "New Pull Request" button for your branch.

Add a comment containing a short summary of the PR goal and any information that should go into the release notes. It’s especially important for PRs that have a direct impact on existing OperatorFabric deployments, to alert administrators of the impacts of deploying a new version and help them with the migration. Whenever possible/relevant, a link to the corresponding documentation is appreciated.

You need to link your PR to the issue it is fixing so merging the PR will automatically close the corresponding issue. You can do so either manually or by adding "Fix #XXX" to the PR’s description.

Make sure that the base branch for the PR is develop, because feature branches are meant to be merged back into develop. This should be the default value since develop is the default branch on this repository, but if not, select it in the base branch dropdown list.

At this point, GitHub will tell you whether your branch could be automatically merged into develop or whether there are conflicts to be fixed.

Case 1: GitHub is able to automatically merge your branch

This means that either your branch was up to date with develop or there were no conflicts. In this case, just go ahead and fill in the PR title and message, then click "Create pull request".

Case 2: GitHub can’t merge your branch automatically

This means that there are conflicts with the current state of the develop branch on GitHub. To fix these conflicts, you need to update your local copy of the develop branch and merge it into your feature branch.

git checkout develop (1)
git pull (2)
git checkout FE-123 (3)
git merge develop (4)
1 Check out the develop branch
2 Make sure it is up to date with the remote (=GitHub repository)
3 Check out the FE-123 branch
4 Merge the new commits from develop into the feature branch

Then, handle any conflicts from the merge. For example, let’s say there is a conflict on file dummy1.txt:

Auto-merging dummy1.txt
CONFLICT (add/add): Merge conflict in dummy1.txt
Automatic merge failed; fix conflicts and then commit the result.

Open file dummy1.txt:

dummy1.txt
 <<<<<<< HEAD
 Some content from FE-123.
 =======
 Some content that has been changed on develop since FE-123 branched off.
 >>>>>>> develop

Update the content to reflect the changes that you want to keep:

dummy1.txt
Some content from FE-123 and some content that has been changed on develop since FE-123 branched off.
git add dummy1.txt (1)
git commit (2)
git push (3)
1 Add the manually merged file to the changes to be committed
2 Commit the changes to finish the merge
3 Push the changes to GitHub

Now, if you go back to GitHub and try to create a pull request again, GitHub should indicate that it is able to merge automatically.
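The whole scenario can be replayed in a throwaway repository. The sketch below reproduces the conflict and its resolution (branch, file and identity names follow the example above; the identity is a dummy one):

```shell
# Throwaway repository with a develop branch.
repo=$(mktemp -d)
cd "$repo"
git init -q
g() { git -c user.name=demo -c user.email=demo@example.com "$@"; }
g checkout -q -b develop
echo "Initial content." > dummy1.txt
g add dummy1.txt && g commit -q -m "Initial commit"

# The feature branch changes the file...
g checkout -q -b FE-123
echo "Some content from FE-123." > dummy1.txt
g commit -q -am "Change dummy1.txt (#123)"

# ...and so does develop, creating a conflict on merge.
g checkout -q develop
echo "Some content that has been changed on develop since FE-123 branched off." > dummy1.txt
g commit -q -am "Change dummy1.txt on develop"

g checkout -q FE-123
g merge develop || true    # reports a merge conflict in dummy1.txt

# Resolve by keeping the content we want, then conclude the merge.
echo "Some content from FE-123 and the change made on develop." > dummy1.txt
g add dummy1.txt
g commit -q -m "Merge branch 'develop' into FE-123"
```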

51.1.2.4. Working on a fix for a previous version

To work on a fix for an existing version, the steps are similar to those described above, substituting X.X.hotfixes for develop.

51.1.2.5. Reviewing a Pull Request

Only developers from the reviewers group can merge pull requests into develop, but this shouldn’t stop anyone interested in the topic of a PR from commenting on and reviewing it.

Review checklist
  • The PR comment contains the text to insert in the release notes. Otherwise, it should say why this development doesn’t need to be in the release notes.

  • If necessary, the PR should create or add content to a migration guide for the next version, under src/docs/asciidoc/resources

  • Check that GitHub Action build is passing for the PR

  • The SonarCloud analysis should report no new bugs or code smells, and should pass the quality gate

  • Check that the base branch (i.e. the branch into which we want to merge changes) is correct: for feature branches pull requests, this branch should be develop.

  • Look through changed files to make sure everything is relevant to the PR (no mistakenly added changes, no secret information, no malicious changes) and to see if you have remarks on the way things are implemented

  • Check that the commit message(s) are relevant and follow conventions

  • If there is more than one commit, is the split meaningful or do we need to squash?

  • Meaningful and sufficient unit tests for the backend (we aim for 80% coverage)

  • Meaningful unit tests for the frontend (Angular tests can be complex to implement, we should focus on testing complex logic and not the framework itself)

  • API testing via Karate has been updated

  • Documentation has been updated (especially if configuration is needed)

  • Configuration examples have been updated

  • Build and run OpFab locally to see the new feature or bug fix at work. In the case of a new feature, it’s also a way of making sure that the configuration documentation is correct and easily understandable.

  • Check for error messages in the browser console.

  • Check the UI in resolution 1680x1050 (minimum supported resolution for all features)

  • Check the UI in smartphone resolution (only for feed, settings and notification filter features)

  • Depending on the scope of the PR, build docker images and test in docker mode

  • Check that the copyright header has been updated on the changed files if need be, and in the case of a first-time contributor, make sure they’re added to the AUTHORS.txt file.

  • Check new dependencies added to the project to see if they are compatible with the OperatorFabric license, see the following table

License usage restrictions

License Name

SPDX Identifier

Type

Usage, Restrictions

Academic Free License v1.1, v1.2, v2.0, v2.1, v3.0

AFL-1.1, AFL-1.2, AFL-2.0, AFL-2.1, AFL-3.0

Permissive

None. Be careful of incompatibility with GPL license.

Apache License 2.0

Apache-2.0

Permissive

None

BSD 2-Clause "Simplified" License

BSD-2-Clause

Permissive

None

BSD 3-Clause "New" or "Revised" License

BSD-3-Clause

Permissive

None

BSD 4-Clause "Original" or "Old" License

BSD-4-Clause

Permissive

Usage prohibited due to advertising clause.

Common Development and Distribution License 1.0

CDDL-1.0

Moderate copyleft

Only as a distinct library. Be careful of incompatibility with GPL license.

Common Development and Distribution License 1.1

CDDL-1.1

Moderate copyleft

Only as a distinct library. Be careful of incompatibility with GPL license.

Creative Commons Attribution 3.0

CC-BY-3.0

Permissive

None. Suitable for documentation material.

Creative Commons Attribution 4.0

CC-BY-4.0

Permissive

None. Suitable for documentation material.

Creative Commons Attribution Non Commercial (any version)

CC-BY-NC-…​

Non commercial

Usage prohibited due to non commercial restriction.

Creative Commons Zero v1.0 Universal

CC0-1.0

Free usage (public domain)

None. Suitable for documentation material.

Eclipse Public License 1.0

EPL-1.0

Moderate copyleft

Only as a distinct library. Be careful of incompatibility with GPL license.

Eclipse Public License 2.0

EPL-2.0

Moderate copyleft

Only as a distinct library

GNU General Public License v2.0 only

GPL-2.0-only

Strong copyleft

Usage prohibited. Exemptions on a case by case basis.

GNU General Public License v2.0 or later

GPL-2.0-or-later

Strong copyleft

Usage prohibited. Exemptions on a case by case basis.

GNU General Public License v3.0 only

GPL-3.0-only

Strong copyleft

Usage prohibited. Exemptions on a case by case basis.

GNU General Public License v3.0 or later

GPL-3.0-or-later

Strong copyleft

Usage prohibited. Exemptions on a case by case basis.

GNU Lesser General Public License v2.1 only

LGPL-2.1-only

Moderate copyleft

Only as a distinct library

GNU Lesser General Public License v2.1 or later

LGPL-2.1-or-later

Moderate copyleft

Only as a distinct library

GNU Lesser General Public License v3.0 only

LGPL-3.0-only

Moderate copyleft

Only as a distinct library

GNU Lesser General Public License v3.0 or later

LGPL-3.0-or-later

Moderate copyleft

Only as a distinct library

ISC License

ISC

Permissive

None

MIT License

MIT

Permissive

None

Mozilla Public License 2.0

MPL-2.0

Moderate (weak) copyleft

Only as a distinct library

Public domain

-

Free usage

None

SIL Open Font License 1.1

OFL-1.1

Permissive in relation to combination with non-font code (strong copyleft for font code)

Only for font components

zlib License

Zlib

Permissive

None

Testing environment for reviewer

Compiling and running OperatorFabric docker images is the most effective way to check for any regression.

  1. Pull the submitted branch on a testing machine;

  2. Run a docker-compose with the ${OF_HOME}/src/main/docker/test-environment/docker-compose.yml file;

  3. Create SNAPSHOT docker images, from the ${OF_HOME} directory with the following command: ./gradlew clean dockerTagSNAPSHOT;

  4. Stop the test-environment docker-compose;

  5. Go to ${OF_HOME}/config/docker;

  6. Run the ./docker-compose.sh script (or use the docker-compose.yml with a docker-compose command);

  7. Go to ${OF_HOME}/src/test/resources/;

  8. Run the following scripts: ./loadTestConf.sh && ./send6TestCards.sh;

  9. Open the front-end in a browser and look for any regression.

To automate build and API testing, you can use ${OF_HOME}/src/test/api/karate/buildAndLaunchAll.sh.

51.1.2.6. Merging a Pull Request

Once the pull request meets all the criteria from the above check list, you can merge it into the develop branch.

  1. Go to the pull request page on GitHub

  2. Check that the base branch for the pull request is develop (or X.X.hotfixes). This information is visible at the top of the page.

    existing PR check base
  3. If that is not the case, you can edit the base branch by clicking the Edit button in the top right corner.

  4. Click the merge pull request button at the bottom of the PR page

  5. Make sure that the corresponding GitHub issue was associated to the project for the current release. It should now be visible under the "Done" column. If not, add it to the project and put it there manually.

  6. Go to the release-notes repository and add the issue to the list with the information provided in the PR comments.

51.1.2.7. Creating a release or hotfix

See the release process described in our CICD documentation for details.

51.2. Code Guidelines

  • We don’t mention specific authors by name in each file so as not to have to maintain these mentions (since this information is tracked by git anyway).

  • For UI code, you must use prettier (prettier.io) to format the code (config provided in .prettierrc.json in the git root repository). The file types to format via prettier are the following: js, ts, css and scss.

  • When adding a dependency, define a precise version of the dependency

  • When fixing a bug, the commit message shall be "Fix: Issue text (#XXXX)"

51.2.1. Angular/TypeScript development caution and guidelines

  • Use forEach to iterate over an array (instead of for (let i = ...)).

  • Do not use ngFor / ngIf with methods that perform computation, as Angular will call these methods very regularly (around ten times per second); use variables instead and compute them only when necessary.

  • ngOnInit(): ngOnInit is called when the component is created, and the creation is made by the parent component. Be careful to check that initialization is done before calling methods inside ngOnInit. When the context of the parent component changes, it may or may not lead to a new initialization.

  • Prefer *ngIf over the hidden property to hide elements: stackoverflow.com/questions/51317666/when-should-i-use-ngif-over-hidden-property-and-vice-versa/51317774

  • Do not use console.log or console.error; use the dedicated OpFab service: OpfabLoggerService

  • To give an id to an HTML element, use dashes between words and only lowercase letters. Example: id="opfab-feed-filter-publish-date-to"

  • Always specify the return type of a function, except when it is void (example: getUserName(): string)

  • Always specify the type of function parameters (example: setUserName(name: string))

  • Function names shall start with a verb (example: showAlertPopup())

  • When the return type is a boolean, the method name shall be a yes/no question (example: isUserDeleted(): boolean)

  • Whenever possible use changeDetection: ChangeDetectionStrategy.OnPush in the component decorator to improve performance

51.2.2. Back development

  • When developing a Spring controller, you do not need to write unit tests; use Karate API tests (the file pattern **/controllers/** is excluded from the test coverage indicator in SonarCloud).

  • Do not use Spring annotations (@Autowired, @PostConstruct, etc.) in business code, to avoid coupling business code with the Spring framework. This rule is recent, so a lot of the business code still depends on Spring.

51.2.3. Unit test

  • Unit tests shall not use random values, which is considered a bad practice

  • Unit tests shall not use external resources (database, file system, network, …​)

51.3. Documentation Guidelines

The aim of this section is to explain how the documentation is currently organized and to give a few guidelines on how it should be written.

51.3.1. Structure

All the sources for the AsciiDoc documentation published on our website are found under the src/docs/asciidoc folder in the operatorfabric-core repository.

It is organized into several folders (architecture documentation, deployment documentation, etc.). Each of these folders represents a document and contains an index.adoc file, optionally referencing other adoc files (to break it down into sections).

In addition, an images folder contains images for all documents and a resources folder contains various appendices that might be of use to some people but that we felt weren’t part of the main documentation.

The table below gives a short summary of the content of each document as well as advice on which ones you should focus on depending on your profile.

  • Contributor: A developer who contributes (or wishes to contribute) to the OperatorFabric project

  • Developer: A developer working on an application using OperatorFabric, or a third-party application posting content to an OperatorFabric instance

  • Admin: Someone who is in charge of deploying and maintaining OperatorFabric in production as part of an integrated solution

  • Product Owner: Project managers and anyone interested in using OperatorFabric for their business requirements

Documentation Structure and Intended Readers
Folder Content Contributor Developer Admin Product Owner

architecture

Architecture documentation

Describes the business objects and concepts handled by OperatorFabric as well as the microservices architecture behind it.

Yes

Yes

Yes

CICD

CICD Pipeline documentation

Describes our CICD pipeline and release process

Yes

community

OF Community documentation

Everything about the OperatorFabric Community: Code of conduct, governance, contribution guidelines, communication channels.

Yes

deployment

Deployment documentation

Explains how to deploy and configure an OperatorFabric instance

Yes

Yes

Yes

dev_env

Development Environment documentation

Explains how to set up a working development environment for OperatorFabric with all the appropriate tooling and how to run OperatorFabric in development mode.

Yes

docs

This folder contains the documentation that should be archived for previous releases (as of today, the release notes and single page documentation - see below).

Yes

Yes

Yes

Yes

getting_started

The Getting Started Guide walks you through setting up OperatorFabric and experimenting with its main features

Yes

Yes

Maybe

reference_doc

Reference Documentation contains the reference documentation for each microservice. It starts off with a high-level functional documentation and then goes into more technical details when needed.

Yes

Yes

Yes

In addition to this AsciiDoc documentation, API documentation is available in the form of SwaggerUI-generated HTML pages. It is generated by the generateSwaggerUI Gradle task, using the swagger.yaml files from each service (for example for the BusinessConfig API). It can be found under the build/docs/api folder for each client or service project.

51.3.2. Conventions

  • In addition to the "visible" structure described above, documents are broken down into coherent parts using the "include" feature of AsciiDoc. This is done mostly to avoid long files that are harder to edit, but it also allows us to reuse some content in different places.

  • Given the number of files this creates, we try to keep header attributes in files to a minimum. Instead, they’re set in the configuration of the asciidoctor gradle task:

    build.gradle
    asciidoctor {
    
        baseDirFollowsSourceFile()
    
        sources {
            include '*/index.adoc','docs/*'
        }
        resources {
            from('src/docs/asciidoc') {
                include 'images/*','pdf/*'
            }
        }
        attributes  nofooter            : '',
                revnumber           : operatorfabric.version,
                revdate             : operatorfabric.revisionDate,
                sectnums            : '',
                sectnumlevels       : '4',
                sectanchors         : '',
                toc                 : 'left',
                toclevels           : '4',
                icons               : 'font',
                imagesdir           : '../images',
                "hide-uri-scheme"   : '',
                "source-highlighter": 'coderay'
    }

    In particular, the version and revision date are set automatically from the version defined in the VERSION file at the root of the project and the current date.

  • All files are created starting with level 0 titles so:

    • They can be generated on their own if need be.

    • They can be included at different levels in different documents using leveloffset.

  • In addition to being available as separate documents (architecture, reference, etc.) for the current release, the documentation is also generated as a single page document available for all releases from the releases page. This is also a way to make searching the documentation for specific terms easier, and could be used to generate a single page pdf documentation.

  • Unfortunately, this entails a little complexity for cross-references and relative links, because the path to the content is a little different depending on whether the content is generated as different pages or as a single page document.

    For example, to link to the "Card Structure" section of the reference document from the architecture document, one needs to use the following external cross-reference:

    <</documentation/current/reference_doc/index.adoc#card_structure, Card Structure>>

    In the case of the single-page documentation however, both the architecture content and the reference content are part of the same document, so the cross-reference becomes a simple internal cross-reference:

    <<card_structure, Card Structure>>

    This is managed by using the ifdef and ifndef directives to define which link syntax should be used:

    ifdef::single-page-doc[<<card_structure, Card Structure>>]
    ifndef::single-page-doc[<</documentation/current/reference_doc/index.adoc#card_structure, Card Structure>>]
    The label ("Card Structure" in this example) is defined with each link because it seems that defining it in the target file along with the ID ([[my_section_id, text to display]]) doesn’t work with relative links.

    In the same way, for relative links to external files (mostly the API documentation):

    ifdef::single-page-doc[link:../api/cards/index.html#/archives[here]]
    ifndef::single-page-doc[link:/documentation/current/api/cards/index.html#/archives[here]]
    For this to work, the single_page_doc.adoc file needs to have :single-page-doc: as a header attribute.
  • As you can see in the examples above, we are using custom-defined section ids as anchors rather than taking advantage of generated ones (see documentation). This is cumbersome but:

    • Generation will give a warning if duplicate ids are defined, whereas with generated ids it will silently link to the wrong section.

    • Restructuring the document might change the generated section ID, creating broken links.

    • It’s easier to find the referenced text (Ctrl-F on the id)

    • The presence of a custom-defined ID is a sign that the content is referenced somewhere else, which you should take into account if you’re thinking of deleting or moving this content.

  • The :imagesdir: attribute is set globally as ../images, because all images are stored under src/docs/asciidoc/images.

  • In addition to links, it is sometimes necessary to display the actual content of files (or part of it) in the documentation (in the case of configuration examples, for instance). Whenever possible, this should be done by using the include directive rather than copying the content into the adoc file. This way the documentation will always be up to date with the file content at the time of generation.

    See the build.gradle include above for an example using tags to include only part of a file.

  • Source-highlighting is done using Coderay. See their documentation for supported languages, and the AsciiDoctor documentation on how to apply source-highlighting.

  • Avoid long lines whenever possible (for example, try not to go beyond 120 characters). This makes editing the documentation easier and diffs more readable.

  • Most links to other OperatorFabric documents should be relative (see above) so they automatically point to the document in the same version rather than the latest version. Links starting with opfab.github.io/documentation/current/ should only be used when we want to always refer to the latest (release) version of the documentation.

  • If necessary, add the relevant copyright at the top of the file.

51.4. Copyright Headers

All source files and documentation files for the project should bear copyright headers.

51.4.1. Header templates

In the case of source files (*.java, *.css or *.scss, *.html, *.ts, etc.), we are working with the Mozilla Public License, v. 2.0, so the header should be something like this:

Copyright (c) YYYY-YYYY, Entity Name (website or contact info)
See AUTHORS.txt
This Source Code Form is subject to the terms of the Mozilla Public
License, v. 2.0. If a copy of the MPL was not distributed with this
file, You can obtain one at http://mozilla.org/MPL/2.0/.
SPDX-License-Identifier: MPL-2.0
This file is part of the OperatorFabric project.

In the case of documentation files (*.adoc), we use the Creative Commons Attribution 4.0 International license, so the header should be:

Copyright (c) YYYY-YYYY, Entity Name (website or contact info)
See AUTHORS.txt
This document is subject to the terms of the Creative Commons Attribution 4.0 International license.
If a copy of the license was not distributed with this
file, You can obtain one at https://creativecommons.org/licenses/by/4.0/.
SPDX-License-Identifier: CC-BY-4.0

These templates should of course be converted to comments depending on the file type. See src/main/headers for examples.

Please make sure to include the appropriate header when creating new files and to update the existing one when making changes to a file.

In the case of a first time contribution, the GitHub username of the person making the contribution should also be added to the AUTHORS file.

51.4.2. Examples

51.4.2.1. Creating a new file

Let’s say a developer from entity Entity X creates a new java file in 2020. The header should read:

Copyright (c) 2020, Entity X (http://www.entityX.org)
See AUTHORS.txt
This Source Code Form is subject to the terms of the Mozilla Public
License, v. 2.0. If a copy of the MPL was not distributed with this
file, You can obtain one at http://mozilla.org/MPL/2.0/.
SPDX-License-Identifier: MPL-2.0
This file is part of the OperatorFabric project.
51.4.2.2. Updating a file

Given an existing java file with the following header:

Copyright (c) 2020, Entity X (http://www.entityX.org)
See AUTHORS.txt
This Source Code Form is subject to the terms of the Mozilla Public
License, v. 2.0. If a copy of the MPL was not distributed with this
file, You can obtain one at http://mozilla.org/MPL/2.0/.
SPDX-License-Identifier: MPL-2.0
This file is part of the OperatorFabric project.

If a developer from entity Entity X edits it in 2021, the header should now read:

Copyright (c) 2020-2021, Entity X (http://www.entityX.org)
See AUTHORS.txt
This Source Code Form is subject to the terms of the Mozilla Public
License, v. 2.0. If a copy of the MPL was not distributed with this
file, You can obtain one at http://mozilla.org/MPL/2.0/.
SPDX-License-Identifier: MPL-2.0
This file is part of the OperatorFabric project.

However, if a developer from entity Entity X edits it in 2022, but no one from Entity X had touched it in 2021, the header should now read:

Copyright (c) 2020, 2022 Entity X (http://www.entityX.org)
See AUTHORS.txt
This Source Code Form is subject to the terms of the Mozilla Public
License, v. 2.0. If a copy of the MPL was not distributed with this
file, You can obtain one at http://mozilla.org/MPL/2.0/.
SPDX-License-Identifier: MPL-2.0
This file is part of the OperatorFabric project.
51.4.2.3. Multiple contributors

Given an existing java file with the following header:

Copyright (c) 2020-2021, Entity X (http://www.entityX.org)
See AUTHORS.txt
This Source Code Form is subject to the terms of the Mozilla Public
License, v. 2.0. If a copy of the MPL was not distributed with this
file, You can obtain one at http://mozilla.org/MPL/2.0/.
SPDX-License-Identifier: MPL-2.0
This file is part of the OperatorFabric project.

If a developer from entity Entity Y edits it in 2021, the header should now read:

Copyright (c) 2020-2021, Entity X (http://www.entityX.org)
Copyright (c) 2021, Entity Y (http://www.entityY.org)
See AUTHORS.txt
This Source Code Form is subject to the terms of the Mozilla Public
License, v. 2.0. If a copy of the MPL was not distributed with this
file, You can obtain one at http://mozilla.org/MPL/2.0/.
SPDX-License-Identifier: MPL-2.0
This file is part of the OperatorFabric project.
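
The year-range logic shown in these examples (consecutive years collapse into a range, a gap produces a separate entry) can be sketched as follows. This is a hypothetical helper written for illustration, not a tool used by the project:

```typescript
// Hypothetical helper illustrating the copyright-year formatting rules:
// consecutive years become a range ("2020-2021") while gaps produce
// separate entries ("2020, 2022"), matching the examples above.
function formatCopyrightYears(years: number[]): string {
    const sorted = Array.from(new Set(years)).sort((a, b) => a - b);
    if (sorted.length === 0) {
        return '';
    }
    const parts: string[] = [];
    let start = sorted[0];
    let previous = sorted[0];
    for (const year of sorted.slice(1)) {
        if (year === previous + 1) {
            previous = year; // extend the current range
        } else {
            parts.push(start === previous ? `${start}` : `${start}-${previous}`);
            start = previous = year; // start a new range after a gap
        }
    }
    parts.push(start === previous ? `${start}` : `${start}-${previous}`);
    return parts.join(', ');
}
```

For instance, formatCopyrightYears([2020, 2021]) gives "2020-2021", while formatCopyrightYears([2020, 2022]) gives "2020, 2022".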

52. Project Governance

52.1. Project Owner

OperatorFabric is part of the LF Energy Foundation, a project of the Linux Foundation that supports open source innovation projects within the energy and electricity sectors.

52.2. Committers

Committers are contributors who have made several valuable contributions to the project and are now relied upon to both write code directly to the repository and screen the contributions of others. In many cases they are programmers but it is also possible that they contribute in a different role. Typically, a committer will focus on a specific aspect of the project, and will bring a level of expertise and understanding that earns them the respect of the community and the project owner.

52.3. Technical Steering Committee

See the dedicated page for more details on the Technical Steering Committee (scheduled meetings, minutes of past meetings, etc.).

52.4. Contributors

Contributors include anyone in the technical community that contributes code, documentation, or other technical artifacts to the project.

Anyone can become a contributor. There is no expectation of commitment to the project, no specific skill requirements and no selection process. To become a contributor, a community member simply has to perform one or more actions that are beneficial to the project.

53. Communication channels

In addition to issues and discussions on GitHub, we use the following communication channels:

53.1. Slack channel

We use the operator-fabric channel on the LF Energy Slack for daily discussions, for example to warn of breaking changes being merged into develop.

Everyone is welcome to join.

53.2. LF Energy Mailing Lists

Several mailing lists have been created by LF Energy for the project; please feel free to subscribe to the ones you are interested in.

And if you’re interested in LF Energy in general: LF Energy General Discussion

54. Code of Conduct

The Code of Conduct for the OperatorFabric community is version 2.0 of the Contributor Covenant.

54.1. Our Pledge

We as members, contributors, and leaders pledge to make participation in our community a harassment-free experience for everyone, regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation. We pledge to act and interact in ways that contribute to an open, welcoming, diverse, inclusive, and healthy community.

54.2. Our Standards

Examples of behavior that contributes to a positive environment for our community include:

  • Demonstrating empathy and kindness toward other people

  • Being respectful of differing opinions, viewpoints, and experiences

  • Giving and gracefully accepting constructive feedback

  • Accepting responsibility and apologizing to those affected by our mistakes, and learning from the experience

  • Focusing on what is best not just for us as individuals, but for the overall community

Examples of unacceptable behavior include:

  • The use of sexualized language or imagery, and sexual attention or advances of any kind

  • Trolling, insulting or derogatory comments, and personal or political attacks

  • Public or private harassment

  • Publishing others’ private information, such as a physical or email address, without their explicit permission

  • Other conduct which could reasonably be considered inappropriate in a professional setting

54.3. Enforcement Responsibilities

Community leaders are responsible for clarifying and enforcing our standards of acceptable behavior and will take appropriate and fair corrective action in response to any behavior that they deem inappropriate, threatening, offensive, or harmful. Community leaders have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, and will communicate reasons for moderation decisions when appropriate.

54.4. Scope

This Code of Conduct applies within all community spaces, and also applies when an individual is officially representing the community in public spaces. Examples of representing our community include using an official e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event.

54.5. Enforcement

Instances of abusive, harassing, or otherwise unacceptable behavior may be reported to the community leaders responsible for enforcement at opfab-tsc_AT_lists.lfenergy.org. All complaints will be reviewed and investigated promptly and fairly. All community leaders are obligated to respect the privacy and security of the reporter of any incident.

Enforcement Guidelines

Community leaders will follow these Community Impact Guidelines in determining the consequences for any action they deem in violation of this Code of Conduct:

  1. Correction

    Community Impact

    Use of inappropriate language or other behavior deemed unprofessional or unwelcome in the community.

    Consequence

    A private, written warning from community leaders, providing clarity around the nature of the violation and an explanation of why the behavior was inappropriate. A public apology may be requested.

  2. Warning

    Community Impact

    A violation through a single incident or series of actions.

    Consequence

    A warning with consequences for continued behavior. No interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, for a specified period of time. This includes avoiding interactions in community spaces as well as external channels like social media. Violating these terms may lead to a temporary or permanent ban.

  3. Temporary Ban

    Community Impact

    A serious violation of community standards, including sustained inappropriate behavior.

    Consequence

    A temporary ban from any sort of interaction or public communication with the community for a specified period of time. No public or private interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, is allowed during this period. Violating these terms may lead to a permanent ban.

  4. Permanent Ban

    Community Impact

    Demonstrating a pattern of violation of community standards, including sustained inappropriate behavior, harassment of an individual, or aggression toward or disparagement of classes of individuals.

    Consequence

    A permanent ban from any sort of public interaction within the community.

54.6. Attribution

This Code of Conduct is adapted from the Contributor Covenant, version 2.0, available at www.contributor-covenant.org/version/2/0/code_of_conduct.html. Community Impact Guidelines were inspired by Mozilla’s code of conduct enforcement ladder.

For answers to common questions about this code of conduct, see the FAQ at www.contributor-covenant.org/faq. Translations are available at www.contributor-covenant.org/translations. The text version of the Code of Conduct is available at www.contributor-covenant.org/version/2/0/code_of_conduct/code_of_conduct.txt.

OperatorFabric CICD

The aim of this document is to describe our CICD pipeline and our release process.

55. Pipeline Configuration

This section briefly describes the organization of our CICD pipeline.

Most of the access and permissions required by our CICD platform are managed by tokens that are created on each of the required services (SonarCloud, DockerHub, GitHub). A technical user account (opfabtech) has been created for each of these services so that these tokens are not linked to the account of any member of the team.

55.1. CICD Pipeline

55.1.1. Github Actions

We use GitHub Actions to manage our pipeline (github.com/opfab/operatorfabric-core/actions).

55.1.2. SonarCloud

To be allowed to push results to SonarCloud, GitHub needs to be authenticated. This is done by generating a token on SonarCloud with an account (opfabtech) that has admin rights to the organization, and then providing this token to GitHub using actions secrets.

55.1.3. GitHub (documentation)

To be allowed to push the generated documentation to the opfab.github.io repository, GitHub needs write access to it. This is done by setting up a Personal Access Token in GitHub using the technical account. This token is then passed to GitHub using actions secrets.

After new content is pushed to the opfab.github.io repository, it can take a few minutes before this content is visible on the website, because it needs to be built by GitHub Pages; the delay depends on how busy the service is.

55.1.4. DockerHub

To be allowed to push images to DockerHub, GitHub needs to be authenticated. Again, this is done by generating a token in DockerHub using the technical account and providing it to GitHub via actions secrets.

55.1.5. Sonatype

We use Sonatype accounts to be able to push libraries to Maven Central. The list of people authorized to publish is defined via a Sonatype issue: issues.sonatype.org/browse/OSSRH-67392

56. Release process

56.1. Version numbers

We work with three types of versions:

  • X.Y.Z.RELEASE versions are stable versions

  • X.Y.Z-RC.RELEASE versions are release candidates for the next stable version

  • SNAPSHOT version represents the current state of merged developments

Version numbers for X.Y.Z.RELEASE should be understood like this:

  • X: Major version, a major version adds new features and breaks compatibility with previous major and minor versions.

  • Y: Minor version, a minor version adds new features and does not break compatibility with previous minor versions for the same major version.

  • Z: Patch, a patch version only contains bug fixes for the current minor version
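
Under these rules, whether an upgrade is expected to be non-breaking can be read directly from the version numbers, as in this sketch (a hypothetical helper, not part of the OperatorFabric tooling; the .RELEASE suffix is ignored here):

```typescript
// Hypothetical check based on the X.Y.Z rules above: moving to a newer
// minor or patch within the same major version does not break
// compatibility, while a major version change may.
function isNonBreakingUpgrade(from: string, to: string): boolean {
    const [fromMajor, fromMinor] = from.split('.').map(Number);
    const [toMajor, toMinor] = to.split('.').map(Number);
    return toMajor === fromMajor && toMinor >= fromMinor;
}
```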

56.2. Releasing a Release Candidate

To release a version we use dedicated GitHub Actions jobs. These jobs are triggered when pushing a commit to a branch whose name ends with the keyword release (**release), and they rely on the VERSION file at the root of this repository to know which version is being produced. It is thus crucial to double-check the content of this file before any push (which triggers the GitHub Actions jobs) is made.

Before releasing a version, you need to prepare the release.

56.2.1. Creating a release branch and preparing the release candidate

  1. On the operatorfabric-core repository, create a branch off the develop branch named X.X.X.release (note the lowercase release to distinguish it from X.X.X.RELEASE tags).

    git checkout -b X.X.X.release
  2. Use the ./CICD/prepare_release_version.sh script to automatically perform all the necessary changes:

    ./CICD/prepare_release_version.sh -v X.X.X-RC.RELEASE

    You should get the following output:

    Current version is SNAPSHOT (based on VERSION file)
    Preparing X.X.X-RC.RELEASE
    Updating version for pipeline in VERSION file
    Replacing SNAPSHOT with X.X.X-RC.RELEASE in swagger.yaml files
    Using X.X.X-RC.RELEASE for lfeoperatorfabric images in dev and docker environment docker-compose files
    The following files have been updated:
     M VERSION
     M config/dev/docker-compose.yml
     M config/docker/docker-compose.yml
     M services/cards-publication/src/main/modeling/swagger.yaml
     M services/businessconfig/src/main/modeling/swagger.yaml
     M services/users/src/main/modeling/swagger.yaml

    This script performs the following changes:

    • Replace SNAPSHOT with X.X.X-RC.RELEASE in the swagger.yaml files and in the VERSION file at the root of the operator-fabric folder

    • Change the version from SNAPSHOT to X.X.X-RC.RELEASE in the docker-compose files for dev and docker deployments

  3. Commit the changes with the template message:

    git add .
    git commit -s -m "[RELEASE] X.X.X-RC.RELEASE"
  4. Push the commit

    git push --set-upstream origin X.X.X.release
  5. Create the tag for the release candidate:

    git tag X.X.X-RC.RELEASE
    git push origin X.X.X-RC.RELEASE
  6. Check that the build is correctly triggered

    You can check the status of the GitHub Actions triggered by the commit on Github Actions.

  7. Wait for the build to complete (around 50 minutes) and check that all jobs have been successful. This ensures that the code builds, tests are OK and there is no error preventing documentation or Docker images generation.

56.2.2. Publish the release candidate on docker hub and documentation

If the build and tests are successful, manually launch the GitHub Actions workflow with the following jobs: Build, Docker Push, and Build and publish documentation.

56.2.3. Publishing the jars for the client library to Maven Central

Once everything else looks OK, you can publish the jars for the client library to Maven Central. This is done as a last step, once we are pretty sure we won’t need to go back and change things on the release, because jars are not meant to be removed from Maven Central once they are published (even briefly), and removing them is not something that can be managed by the project.

To do so:

  1. Set the appropriate properties (credentials and GPG key information) as described in the documentation for the publishing task

  2. Run the following command from the project root:

    ./gradlew publish
  3. After a while you should be prompted to enter the passphrase for the GPG key.

  4. Once the task has completed, log in to the OSSRH Repository using the same credentials as for the Sonatype JIRA.

    Welcome page for the OSSRH repository manager
  5. Click on Staging repositories link on the left. After a while (and maybe after clicking the refresh button), you should see a repository with the name orgopfab-XXXX (where XXXX is a Sonatype-generated id, not related to the release number).

    Staging repositories
  6. Click on the repository then on the "content" tab below to check its content and metadata.

    Check staging repository
  7. If there is an issue with the repository, click on the "Drop" button and start the process again after making the necessary changes. If everything looks in order, click on the "Close" button and add a small comment when prompted to confirm.

    Close staging repository
  8. This will trigger validation of the Sonatype requirements (for example, making sure that the pom file contains the required information), as you can see from the Activity tab below (Refresh might be needed).

    Closing and validation of the staging repository
  9. If all the validations pass, the "Release" button will become available. Click it to send the jars to Maven Central. When prompted, write a comment then confirm (keeping the "Automatically Drop" option checked).

    Release to Maven Central
  10. The jars for the release should then be available on the project space in the Maven repository within 10 minutes.

  11. It can take up to two hours for them to appear on the Maven Central Repository Search, and up to one day for MvnRepository.

56.2.4. Publishing the release on GitHub

  1. On the releases screen for the core repository, draft a new release.

    1. Select the existing X.X.X-RC.RELEASE tag

    2. The title should be X.X.X-RC.RELEASE

    3. In the description field, paste the content from the release_notes_X.X.X.md file from the release-notes repository.

    4. Reformat and correct the content as needed.

    5. Check if there is a migration guide for this version, if so, check if the corresponding file has been included in src/docs/asciidoc/resources/index.adoc and include a link to it at the top of the release notes.

    6. Click "Publish release"

  2. Create a new release_notes.Y.Y.Y.md file with next version number.

56.2.5. Updating the version list on the website

On the website repository, edit the /_data/versions.yml file to add the version being released

For example:

Before
- id: SNAPSHOT
  type: SNAPSHOT
  external_devices_api: true
- id: D.E.F.RELEASE
  badge: current
  external_devices_api: true
- id: A.B.C.RELEASE
  #... end of file omitted
After
- id: SNAPSHOT
  type: SNAPSHOT
  external_devices_api: true
- id: X.X.X-RC.RELEASE
  external_devices_api: true
  badge: current
- id: D.E.F.RELEASE
  external_devices_api: true
- id: A.B.C.RELEASE
  #... end of file omitted

This file determines which versions (and in which order) are displayed on the release page of the website.

Check that you see the X.X.X-RC.RELEASE under the releases page and that the links work (it may take a few minutes for the website to be updated).

The external_devices_api property should be set to true for all new versions, so the API documentation for the External Devices API is displayed on the website.

56.2.6. Advertising the new release

Here is the link to the administration website for the LFE mailing lists, in case there is an issue.

  • Send a message on the operator-fabric Slack channel.

56.3. Releasing a Major or Minor Version

To release a version we use dedicated GitHub Actions jobs. These jobs are triggered when pushing a commit to a branch whose name ends with the keyword release (**release), and they rely on the VERSION file at the root of this repository to know which version is being produced. It is thus crucial to double-check the content of this file before any push (which triggers the GitHub Actions jobs) is made.

Before releasing a version, you need to prepare the release.

56.3.1. Move Release Candidate to Release

  1. Checkout release branch

    git checkout X.X.X.release
  2. Use the ./CICD/prepare_release_version.sh script to automatically perform all the necessary changes:

    ./CICD/prepare_release_version.sh -v X.X.X.RELEASE

    You should get the following output:

    Current version is SNAPSHOT (based on VERSION file)
    Preparing X.X.X.RELEASE
    Updating version for pipeline in VERSION file
    Replacing SNAPSHOT with X.X.X.RELEASE in swagger.yaml files
    Using X.X.X.RELEASE for lfeoperatorfabric images in dev and docker environment docker-compose files
    The following files have been updated:
     M VERSION
     M config/dev/docker-compose.yml
     M config/docker/docker-compose.yml
     M services/cards-publication/src/main/modeling/swagger.yaml
     M services/businessconfig/src/main/modeling/swagger.yaml
     M services/users/src/main/modeling/swagger.yaml

    This script performs the following changes:

    • Replace SNAPSHOT with X.X.X.RELEASE in the swagger.yaml files and in the VERSION file at the root of the operator-fabric folder

    • Change the version from SNAPSHOT to X.X.X.RELEASE in the docker-compose files for dev and docker deployments

  3. Commit the changes with the template message:

    git add .
    git commit -s -m "[RELEASE] X.X.X.RELEASE"
  4. Push the commit

    git push
  5. Check that the build is correctly triggered

    You can check the status of the GitHub Actions triggered by the commit on Github Actions.

    Wait for the build to complete (around 50 minutes) and check that all jobs have been successful. This ensures that the code builds, tests are OK and there is no error preventing documentation or Docker images generation.

56.3.2. Merging the release branch into master

Once the release branch build is passing, you should merge the release branch into master to bring the new developments into master and trigger the CICD tasks associated with a release (Docker images for DockerHub and documentation).

git checkout master (1)
git pull (2)
git merge X.X.X.release (3)
1 Check out the master branch
2 Make sure your local copy is up to date
3 Merge the X.X.X.release branch into master, accepting changes from X.X.X.release in case of conflicts.
git tag X.X.X.RELEASE (1)
git push (2)
git push origin X.X.X.RELEASE (3)
1 Tag the commit with the X.X.X.RELEASE tag
2 Push the commits to update the remote master branch
3 Push the tag
  1. Check that the build is correctly triggered

    You can check the status of the GitHub Actions workflow triggered by the commit on Github Actions. The workflow should execute the following six jobs:

    • Build

    • Karate tests

    • Cypress tests

    • Publish Documentation (latest)

    • Push images to dockerhub

    • Push images latest to dockerhub

Wait for the build to complete (around 50 minutes) and check that all jobs have been successful.

  1. Check that the X.X.X.RELEASE images have been generated and pushed to DockerHub.

  2. Check that the latest images have been updated on DockerHub (if this has been triggered).

  3. Check that the documentation has been generated and pushed to the GitHub pages website : check the version and revision date at the top of the documents in the current documentation (for example the architecture documentation)

  4. Check that the tag was correctly pushed to GitHub and is visible under the tags page for the repository.

56.3.3. Updating the version list on the website

On the website repository, edit the /_data/versions.yml file to:

  1. Add the version being released to the list with the current badge

  2. Remove the current badge from the previous version

For example:

Before
- id: SNAPSHOT
  type: SNAPSHOT
  external_devices_api: true
- id: D.E.F.RELEASE
  badge: current
  external_devices_api: true
- id: A.B.C.RELEASE
  #... end of file omitted
After
- id: SNAPSHOT
  type: SNAPSHOT
  external_devices_api: true
- id: X.X.X.RELEASE
  badge: current
  external_devices_api: true
- id: D.E.F.RELEASE
  external_devices_api: true
- id: A.B.C.RELEASE
  #... end of file omitted

This file determines which versions (and in which order) are displayed on the release page of the website.

Check that you see the X.X.X.RELEASE under the releases page and that the links work (it may take a few minutes for the website to be updated).

The external_devices_api property should be set to true for all new versions, so the API documentation for the External Devices API is displayed on the website.

56.3.4. Checking the docker-compose files

While the docker-compose files should always point to the SNAPSHOT images while on the develop branch, on the master branch they should rely on the latest RELEASE version available on DockerHub. Once the CI pipeline triggered by the previous steps has completed successfully, and you can see X.X.X.RELEASE images for all services on DockerHub, you should:

  1. Remove your locally built X.X.X.RELEASE images if any

  2. Run the config/docker docker-compose file to make sure it pulls the images from DockerHub and behaves as intended.

People who want to experiment with OperatorFabric are pointed to this docker-compose so it’s important to make sure that it’s working correctly.
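As an illustrative sketch (not an official OperatorFabric script), the image cleanup in step 1 could look like the helper below. The filter assumes your locally built images are tagged X.X.X.RELEASE, so review the output of `docker images` before deleting anything:

```shell
# Remove every local image whose tag matches the given release version,
# forcing the next docker-compose run to pull fresh images from DockerHub.
remove_local_release_images() {
  version="$1"
  docker images --format '{{.Repository}}:{{.Tag}}' \
    | grep ":${version}" \
    | while read -r image; do
        docker rmi "$image"
      done
}
# usage:
#   remove_local_release_images X.X.X.RELEASE
#   cd config/docker && docker-compose pull && docker-compose up -d
```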

56.3.5. Publishing the jars for the client library to Maven Central

As for a release candidate, you need to release the client library jars.

56.3.6. Publishing the release on GitHub

  1. On the releases screen for the core repository, draft a new release.

    1. Select the existing X.X.X.RELEASE tag

    2. The title should be X.X.X.RELEASE

    3. In the description field, paste the content from the release_notes_X.X.X.md file from the release-notes repository.

    4. Reformat and correct the content as needed.

    5. Check if there is a migration guide for this version; if so, check that the corresponding file has been included in src/docs/asciidoc/resources/index.adoc and include a link to it at the top of the release notes.

    6. Click "Publish release"

56.3.7. Update supported versions

Update the supported versions in the security policy (SECURITY.md file) via a pull request on the develop branch.

56.3.8. Advertising the new release

Here is the link to the administration website for the LFE mailing lists in case there is an issue.
  • Send a message on the operator-fabric Slack channel.

56.3.9. Preparing the next version

56.3.9.1. On the release-notes repository

Remove the release_notes.X.X.X.md file corresponding to the release’s version.

56.3.9.2. On the operatorfabric-core repository

Now that the release branch has served its purpose, it should be deleted so as not to clutter the repository and to avoid confusion with the actual release commit tagged on master.

git branch -d X.X.X.release (1)
1 Delete the branch locally
You should also delete the branch on GitHub.

You should also close the project for this version, and create one for the next version if it doesn’t already exist (use the "Automated Kanban" template).

56.4. Releasing a Patch (Hotfixes) Version

Let’s say fixes are needed on version X.X.0.RELEASE, and will be released as X.X.X.RELEASE. If it’s the first patch version to be released for this minor version (i.e. version X.X.1.RELEASE), you will need to create the X.X.hotfixes branch. To do so:

git checkout X.X.0.RELEASE (1)
git checkout -b X.X.hotfixes (2)
1 Checkout X.X.0.RELEASE tag
2 Create (and checkout) branch X.X.hotfixes from this commit

If branch X.X.hotfixes already exists, you can just check it out.

git checkout X.X.hotfixes

Then, follow the process described here to create feature branches, work on fixes and merge them back into X.X.hotfixes.

Once all the bug fixes that need to go into version X.X.X.RELEASE have been merged into branch X.X.hotfixes, you can release the patch version. To do so:

  1. Use the ./CICD/prepare_release_version.sh script to automatically perform all the necessary changes:

    ./CICD/prepare_release_version.sh -v X.X.X.RELEASE
  2. Commit the changes, tag them and push both to GitHub:

    git add .
    git commit -s -m "[RELEASE] X.X.X.RELEASE" (1)
    git tag X.X.X.RELEASE (2)
    git push --set-upstream origin X.X.hotfixes (3)
    git push origin X.X.X.RELEASE (4)
    1 Commit the changes
    2 Tag the release
    3 Push the commit
    4 Push the tag

This will trigger the build and tests in GitHub Actions.

If the build and tests are successful, manually launch the GitHub Actions workflow with the jobs Build, Docker Push and Build and publish documentation.

In the case of a patch on the last major/minor version tagged on master, this version will become the latest version. In this case, add the jobs Docker Push - Latest and Build and publish documentation - Latest instead of Build and publish documentation to also update the latest docker images on DockerHub and the current documentation on the website.

You then have to follow the same steps as for a classic release:

Resources

57. Appendix B: Publication of the client library jars to Maven Central

This is a summary of the steps that were necessary to initially set up the publication of jars to Maven Central. The process to actually publish the jars for each release is detailed in the release process documentation.

Publication process overview
  1. Building and signing the jars

  2. Publishing them to a staging repository where validations are performed (e.g. checking that the POM contains the required information, validating the signature against the public key)

  3. If the validations pass, release the jar to Maven Central

57.1. Claiming the org.opfab namespace on Maven Central

This is done by logging an issue on the Sonatype JIRA (create an account first). The namespace needs to match a domain that you own (and this will be verified), which is why we had to rename our packages to org.opfab.XXX.

You can then request other users to be granted rights on the namespace as well.

57.2. Creating a GPG key pair

The key pair is generated with GPG2, keeping the default options and using the opfabtech technical account email as contact. The key is further secured with a passphrase.

gpg2 --full-generate-key

gpg (GnuPG) 2.2.19; Copyright (C) 2019 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Please select what kind of key you want:
   (1) RSA and RSA (default)
   (2) DSA and Elgamal
   (3) DSA (sign only)
   (4) RSA (sign only)
  (14) Existing key from card
Your selection? 1
RSA keys may be between 1024 and 4096 bits long.
What keysize do you want? (3072) 3072
Requested keysize is 3072 bits
Please specify how long the key should be valid.
         0 = key does not expire
      <n>  = key expires in n days
      <n>w = key expires in n weeks
      <n>m = key expires in n months
      <n>y = key expires in n years
Key is valid for? (0) 18m
Key expires at Tue 11 Oct 2022 12:38:13 CEST
Is this correct? (y/N) y

You need a user ID to identify your key; the software constructs the user ID
from the Real Name, Comment and Email Address in this form:
    "Heinrich Heine (Der Dichter) <heinrichh@duesseldorf.de>"

Real name: opfabtech
E-mail address: opfabtech@gmail.com
Comment: technical account for the OperatorFabric project
You selected this USER-ID:
    "opfabtech (technical account for the OperatorFabric project) <opfabtech@gmail.com>"

Change (N)ame, (C)omment, (E)-mail or (O)kay/(Q)uit? o
We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilise the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.
gpg: key 469E7252B8D25328 marked as ultimately trusted
gpg: directory '/home/guironnetale/.gnupg/openpgp-revocs.d' created
gpg: revocation certificate stored as '/home/guironnetale/.gnupg/openpgp-revocs.d/FE0D7AFF9C129CFBBDC18A0B469E7252B8D25328.rev'
public and secret key created and signed.

pub   rsa3072 2021-04-19 [SC] [expires: 2022-10-11]
      FE0D7AFF9C129CFBBDC18A0B469E7252B8D25328
uid                      opfabtech (technical account for the OperatorFabric project) <opfabtech@gmail.com>
sub   rsa3072 2021-04-19 [E] [expires: 2022-10-11]
A standard practice is to have the key expire in 18 months, so I set up a calendar reminder for us to renew it.

57.3. Sharing the signing key

For other developers to be able to sign jars, you need to share both the key pair and the passphrase.

  • Export the key pair to a file

gpg2 --export-secret-keys OPFAB_KEY_ID > key_pair.key

57.4. Publishing the public key

The public key needs to be published to (preferably several) key directories so people wanting to use the signed jars can check the signature against the public key. It is also checked as part of the validations performed on the staging repository.

Our public key was initially published to pool.sks-keyservers.net, which became deprecated (causing the publication to fail), so it was then published to the two servers that the sonatype validations seem to rely on.

For the OpenPGP keyserver, you need to export the public key (not the key pair) to a file and upload it to their web interface.

gpg2 --export OPFAB_KEY_ID > my_key.pub

For the Ubuntu keyserver, you need to export the public key as ASCII-armored text and paste the result into their web interface.

gpg2 --export --armor OPFAB_KEY_ID
The key can be retrieved from both these servers by searching either for opfabtech@gmail.com or for the key ID.
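For instance, importing the published key from the Ubuntu keyserver into a local keyring can be sketched as follows (the key ID is a placeholder; use the real key ID or fingerprint):

```shell
# Import the OperatorFabric public key from a keyserver into the local
# GPG keyring, so downloaded jars can be verified against it.
fetch_opfab_key() {
  gpg2 --keyserver keyserver.ubuntu.com --recv-keys "$1"
}
# usage: fetch_opfab_key OPFAB_KEY_ID
```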

57.5. Setting up the signing and publication in Gradle

You can’t publish a jar with version "SNAPSHOT" to the Maven staging repositories (you would get a 403 BAD REQUEST). That’s why the Gradle publication task is configured so that if the version ends with "SNAPSHOT", the jars are published to a local directory (repos/snapshots) rather than to the Maven Central staging repository.
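The version gate described above can be sketched as follows (plain shell for illustration only; the actual logic lives in the Gradle publishing configuration, and the target names here are descriptive labels, not real repository ids):

```shell
# Version gate: SNAPSHOT builds go to a local directory, anything else
# to the Maven Central staging repository.
publish_target() {
  case "$1" in
    *SNAPSHOT) echo "repos/snapshots" ;;
    *)         echo "maven-central-staging" ;;
  esac
}
# usage: publish_target 2.1.0.SNAPSHOT  (local directory)
#        publish_target 2.1.0.RELEASE   (staging repository)
```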

58. Migration Guide from release 1.4.0 to release 1.5.0

58.1. Refactoring of configuration management

58.1.1. Motivation for the change

The initial situation was to have a Third concept that was meant to represent third-party applications publishing content (cards) to OperatorFabric. As such, this object was both the sender of the message and the unit of configuration for card-rendering resources.

Because of that mix of concerns, naming was not consistent across the different services in the backend and frontend, as this object could be referred to using the following terms:

  • Third

  • ThirdParty

  • Bundle

  • Publisher

But now that we’re aiming for cards to be sent by entities, users (see Free Message feature) or external services, it doesn’t make sense to tie the rendering of the card ("Which configuration bundle should I take the templates and details from?") to its publisher ("Who/what emitted this card, and to whom/where should I reply?").

58.1.2. Changes to the model

To do this, we decided that the publisher of a card would now have the sole meaning of emitter, and that the link to the configuration bundle to use to render a card would now be based on its process field.

58.1.2.1. On the Businessconfig model

We used to have a Businessconfig object which had an array of Process objects as one of its properties. Now, the Process object replaces the Businessconfig object and this new object combines the properties of the old Businessconfig and Process objects (menuEntries, states, etc.).

In particular, this means that while in the past one bundle could "contain" several processes, now there can be only one process per bundle.

The Businessconfig object used to have a name property that was actually its unique identifier (used to retrieve it through the API for example). It also had a i18nLabelKey property that was meant to be the i18n key to determine the display name of the corresponding businessconfig, but so far it was only used to determine the display name of the associated menu in the navbar in case there were several menu entries associated with this businessconfig.

Below is a summary of the changes to the config.json file that all this entails:

Field before → Field after: Usage

  • name → id: Unique identifier of the bundle. Used to match the publisher field in associated cards; should now match process.

  • (no previous field) → name: I18n key for the process display name.

  • (no previous field) → states.mystate.name: I18n key for the state display name.

  • i18nLabelKey → menuLabel: I18n key for the menu display name in case there are several menu entries attached to the process.

  • processes array as a root property (with states as a property of a given process) → states array as a root property.

Here is an example of a simple config.json file:

Before
{
  "name": "TEST",
  "version": "1",
  "defaultLocale": "fr",
  "menuEntries": [
    {"id": "uid test 0","url": "https://opfab.github.io/","label": "menu.first"},
    {"id": "uid test 1","url": "https://www.la-rache.com","label": "menu.second"}
  ],
  "i18nLabelKey": "businessconfig.label",
  "processes": {
    "process": {
      "states": {
        "firstState": {
          "details": [
            {
              "title": {
                "key": "template.title"
              },
              "templateName": "operation"
            }
          ]
        }
      }
    }
  }
}
After
{
  "id": "TEST",
  "version": "1",
  "name": "process.label",
  "defaultLocale": "fr",
  "menuLabel": "menu.label",
  "menuEntries": [
    {"id": "uid test 0","url": "https://opfab.github.io/","label": "menu.first"},
    {"id": "uid test 1","url": "https://www.la-rache.com","label": "menu.second"}
  ],
  "states": {
    "firstState": {
      "name" :"mystate.label",
      "details": [
        {
          "title": {
            "key": "template.title"
          },
          "templateName": "operation"
        }
      ]
    }
  }
}
You should also make sure that the new i18n label keys that you introduce match what is defined in the i18n folder of the bundle.
58.1.2.2. On the Cards model
Field before → Field after: Usage

  • publisherVersion → processVersion: Identifies the version of the bundle. It was renamed for consistency now that bundles are linked to processes, not publishers.

  • process → process: This field is now required and should match the id field of the process (bundle) to use to render the card.

  • processId → processInstanceId: This field is just renamed; it represents the id of an instance of the process.

These changes impact both current cards from the feed and archived cards.

The id of the card is now built as process.processInstanceId instead of publisherID_process.
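For example, with the values used in the getting-started example card:

```shell
# New card id scheme: <process>.<processInstanceId>
# (values taken from the getting-started example card)
process="defaultProcess"
processInstanceId="hello-world-1"
card_id="${process}.${processInstanceId}"
echo "$card_id"   # -> defaultProcess.hello-world-1
```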

58.2. Change on the web-ui.json

The parameter navbar.thirdmenus.type has been removed from this file. Starting from this release, the related functionality is configured per bundle and is no longer global. See "Changes on bundle config.json" for more information.

58.3. Changes on bundle config.json

Under menuEntries, a new subproperty has been added: linkType. This property replaces the old navbar.thirdmenus.type property in web-ui.json, making finer control of the related behaviour possible.

58.4. Component name

The component previously named third has also been renamed to businessconfig.

58.5. Changes to the endpoints

The /third endpoint becomes /businessconfig/processes.
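As a sketch, a request that previously targeted /third now looks like this (the host and port 2100 are assumptions based on a default local deployment; adapt them to yours):

```shell
# List the configured processes via the renamed endpoint
# (previously GET /third).
list_processes() {
  curl -s "http://localhost:2100/businessconfig/processes"
}
# usage: list_processes
```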

58.6. Migration steps

This section outlines the necessary steps to migrate existing data.

You need to perform these steps before starting up the OperatorFabric instance because starting up services with the new version while there are still "old" bundles in the businessconfig storage will cause the businessconfig service to crash.
  1. Backup your existing bundles and existing Mongo data.

  2. Edit your bundles as detailed above. In particular, if you had bundles containing several processes, you will need to split them into several bundles. The id of the bundles should match the process field in the corresponding cards.

  3. If you use navbar.thirdmenus.type in web-ui.json, rename it to navbar.businessmenus.type

  4. Run the following scripts in the mongo shell to copy the value of publisherVersion to a new processVersion field and to copy the value of processId to a new processInstanceId field for all cards (current and archived):

    Current cards
    db.cards.updateMany(
    {},
    { $rename: { "publisherVersion": "processVersion", "processId": "processInstanceId" } }
    )
    Archived cards
    db.archivedCards.updateMany(
    {},
    { $rename: { "publisherVersion": "processVersion", "processId": "processInstanceId" } }
    )
  5. Make sure you have no cards without process using the following mongo shell commands:

    db.cards.find({ process: null})
    db.archivedCards.find({ process: null})
  6. If it turns out to be the case, you will need to set a process value for all these cards to finish the migration. You can do it either manually through Compass or using a mongo shell command. For example, to set the process to "SOME_PROCESS" for all cards with an empty process, use:

    db.cards.updateMany(
    { process: null },
    {
    $set: { "process": "SOME_PROCESS"}
    }
    )
    db.archivedCards.updateMany(
    { process: null },
    {
    $set: { "process": "SOME_PROCESS"}
    }
    )
  7. If you have any code or scripts that push bundles, you should update it to point to the new endpoint.

59. Migration Guide from release 1.7.0 to release 1.8.0

59.1. Card detail definition in business configuration

Since multi-tab rendering has been removed, there is no longer a need for multiple detail definitions per card. The rendering of a card’s detail is now configured by specifying the detail title, the template name and the list of styles.

In the Businessconfig model definition the field details has been removed.

The new fields detailTitle, templateName and styles have been added.

Here is an example of a simple config.json file:

Before
{
  "id": "TEST",
  "version": "1",
  "name": "process.label",
  "states": {
    "firstState": {
      "name" :"mystate.label",
      "details": [
        {
          "title": {
            "key": "template.title"
          },
          "templateName": "operation",
          "styles": ["style1","style2"]
        }
      ]
    }
  }
}
After
{
  "id": "TEST",
  "version": "1",
  "name": "process.label",
  "states": {
    "firstState": {
      "name" :"mystate.label",
      "detailTitle": {
        "key": "template.title"
        },
      "templateName": "operation",
      "styles": ["style1","style2"]
    }
  }
}

59.2. Business menu definition

Business menus are no longer configured in the business definition but in a single dedicated configuration file called ui-menu.json. You must move your configuration from config.json to this new file (see the documentation).

60. Migration Guide from release 1.8.0 to release 2.0.0

60.1. AcknowledgmentAllowed field in business configuration

In the process state definition, the acknowledgementAllowed field has been renamed to acknowledgmentAllowed. It is no longer a boolean and can now take one of the following values:

  • "Never": acknowledgment not allowed (default value)

  • "Always": acknowledgment allowed

  • "OnlyWhenResponseDisabledForUser": acknowledgment allowed only when the response is disabled for the user

Here is an example of a simple config.json file:

{
  "id": "TEST",
  "version": "1",
  "name": "process.label",
  "states": {
    "firstState": {
      "name" :"mystate.label",
      "details": [
        {
          "title": {
            "key": "template.title"
          },
          "templateName": "operation",
          "styles": ["style1","style2"]
        }
      ],
      "acknowledgmentAllowed": "Never"
    }
  }
}

60.2. Response card : templateGateway.applyChild()

For cards with responses, it is no longer necessary to call templateGateway.applyChildCards() in your template on loading; OperatorFabric does it for you.

61. Migration Guide from release 2.2.0 to release 2.3.0

61.1. Communication between template and opfab

61.1.1. Getting response data from template

Some renaming has been done in version 2.3.0:

  • the method providing the response information from the template is renamed getUserResponse (previously validyForm).

  • the object returned by this method must now contain the response data in the field responseCardData instead of formData.

So if you have the following code in your template:

    templateGateway.validyForm = function () {
        const response = document.getElementById('response').value;
        const formData = { response: response };
        return {
            valid: true,
            formData: formData
        };
    }

It must be modified this way:

    templateGateway.getUserResponse = function () {
        const response = document.getElementById('response').value;
        const responseCardData = { response: response };
        return {
            valid: true,
            responseCardData: responseCardData
        };

    }

61.1.2. Getting the information if the user can respond

To know from a template whether the user can respond to a card, you must now call templateGateway.isUserAllowedToRespond() instead of implementing the method templateGateway.setUserCanRespond().

So if you have the following code in your template:

    templateGateway.setUserCanRespond = function(responseEnabled) {
        if (responseEnabled) {
            // do something
        } else {
            // do something
        }
    }

It must be modified this way:

    if (templateGateway.isUserAllowedToRespond()) {
         // do something
    } else {
        // do something
    }

62. Migration Guide from release 2.3.0 to release 2.4.0

62.1. Send card

The API does not provide an endpoint to send an array of cards anymore.

  • The endpoint cards now accepts only one card

  • The endpoint async/cards no longer exists

So if you used to send several cards as an array, you need to modify your code to send them one by one via the cards endpoint.
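A minimal sketch of posting cards one by one (the URL and port 2102 are assumptions taken from a default docker deployment; depending on your configuration, an authentication token may also be required):

```shell
# Post each card file individually to the cards endpoint, since the
# array endpoint no longer exists.
send_cards() {
  for card in "$@"; do
    curl -s -X POST "http://localhost:2102/cards" \
         -H "Content-Type: application/json" \
         --data @"$card"
  done
}
# usage: send_cards card1.json card2.json card3.json
```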

62.2. Package name change

The name of the packages in the OperatorFabric code has been changed from org.lfenergy.operatorfabric. to org.opfab. in preparation for an upload of the client library to Maven Central. You need to update any code using the client library to reflect this name change.

63. Migration Guide from release 2.4.0 to release 2.5.0

63.1. Send card

The API endpoint to send a card doesn’t return a CardCreationReport object anymore. The cards endpoint now returns:

  • status code 201 (Created) in case of success.

  • status code 400 (Bad request) in case of a request with wrong data.

So if you use the CardCreationReport object when sending a card, you need to modify your code to stop using it and to test the returned status code instead.

63.2. Card recipients

The deprecated card field recipient has been removed, so you have to use the fields userRecipients, groupRecipients and entityRecipients to send a card.

64. Migration Guide from release 2.5.0 to release 2.6.0

64.1. Inter-service communication - Ribbon

Ribbon was used by our Feign client that lets business services get information on the current user from the Users service.

Ribbon is no longer maintained, so we chose to remove it from our dependencies (mostly because it was blocking any update of the other Spring dependencies).

Instead, the Feign client now relies on an external property to know where the Users service can be reached, and there is no load-balancing for now.

This requires a change to service configuration: the users.ribbon.listOfServers property should be removed and replaced with operatorfabric.servicesUrls.users.

It should be set in the configuration of all business services (except users), or in the common configuration.

Example
operatorfabric:
  servicesUrls:
    users: "http://localhost:2103"

According to the Feign documentation, the property should contain "an absolute URL or resolvable hostname (the protocol is optional)".

This property is mandatory; if it is absent, the application won’t start.
While the ribbon property could handle an array of several URLs, this new property expects a single URL, as there is no load-balancing mechanism for now.

65. Migration Guide from release 2.6.0 to release 2.7.0

65.1. Send card

The cards API endpoint now requires authentication by default. It is possible to configure the endpoint to not require authentication by setting the configuration parameter checkAuthenticationForCardSending to false in the cards-publication service configuration file.

66. Migration Guide from release 2.7.0 to release 2.8.0

66.1. UI Configuration Management

The web-ui container has two configuration files: web-ui.json and ui-menu.json.

To avoid maintaining separate copies of these files for each run environment (docker, dev, Cypress), the reference configuration will be the one for the docker mode, with the others being created by script, changing only the properties that should be different between environments (e.g. environmentName). Only the docker configuration will be version-controlled. The scripts creating the configuration are launched by the docker-compose.sh and docker-compose-cypress.sh.

As a consequence, the web-ui.json and ui-menu.json files have been moved from config/xxx/ to config/xxx/ui-config. The volumes in the docker-compose.yml files have been updated accordingly.

This new organization will also allow us to run Cypress tests against different versions of the configuration, for example to test the behaviour of a property meant to hide a component. See the Cypress tests README (src/test/cypress/README.adoc) for more information.

All modes (dev, docker, config) now use the PASSWORD authentication flow by default. If you want to test with another authentication flow, you should use the setSecurityAuthFlow.sh script AFTER the containers have been started.
Example to use the CODE flow in dev mode
cd src/test/resources
./setSecurityAuthFlow.sh dev CODE

66.2. Management of visible menus

The visibility of some core OperatorFabric menus (monitoring, logging, feed configuration screen,…​) was so far configurable for a given OperatorFabric instance through various properties in web-ui.json.

As of this version, it has been unified with that of custom menus:

  • It will now be managed in ui-menu.json along with custom menus

  • It is now possible to make these menus visible only for certain groups

The following properties in web-ui.json are no longer supported and should be removed
  • navbar.hidden (array of menus to hide)

  • admin.hidden (boolean)

  • feedConfiguration.hidden (boolean)

  • realTimeUsers.hidden (boolean)

  • settings.nightDayMode (boolean)

New property in ui-menu.json
{
  "coreMenusConfiguration":
  [
    {
      "id": "coreMenuId1",
      "visible": true
    },
    {
      "id": "coreMenuId2",
      "visible": false
    },
    {
      "id": "coreMenuId3",
      "visible": true,
      "showOnlyForGroups": ["ADMIN","SOME_OTHER_GROUP"]
    }
  ]
}

All core menus should be listed under this new coreMenusConfiguration property in ui-menu.json, each with their own visible and (optionally) showOnlyForGroups property.

Necessary actions for the migration:

  • Remove the deprecated properties listed above from your web-ui.json

  • Add a coreMenusConfiguration block to your ui-menu.json (see the documentation for details and a full example)

66.3. Simplification MongoDB Configuration

We are getting rid of our specific MongoDB configuration to let SpringBoot autoconfigure it. As a result, we are removing support for the spring.data.mongodb.uris property in favour of the standard spring.data.mongodb.uri property. Please change the application configuration files for the services accordingly.

This property can only hold a single URI.
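For example (a sketch; host, credentials and database name are placeholders to adapt to your deployment):

```yaml
spring:
  data:
    mongodb:
      uri: "mongodb://root:password@mongodb:27017/operator-fabric?authSource=admin"
```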

67. Migration Guide from release 2.10.0 to release 2.11.0

67.1. Logging screen : Filter with tags list

As for the archives screen, you now have to specify the list of tags you want to see in the filter of the logging screen. You have to set this list in the web-ui.json file, with the parameter logging.filters.tags.list. For example:

"logging": {
  "filters": {
    "tags": {
      "list": [
        {
          "label": "Label for tag 1",
          "value": "tag1"
        }
      ]
    }
  }
}

68. Migration Guide from release 2.11.0 to release 3.0.0

68.1. Changes in configuration due to business translation mechanism removal

In order to remove unnecessary complexity for third parties, we have decided to remove translation for some business data. This implies the following migration tasks:

68.1.1. Template directory

The template directory does not contain language subdirectories anymore. There is one template per process/state, and the templates are located directly in this directory. So you have to move the template for your language directly into the template directory.

68.1.2. Changes in config.json

In config.json, the fields process.name, state.name and state.description must not contain i18n data anymore; they must only contain plain strings. These strings are simply displayed on the screen.

68.1.3. New file i18n.json

The bundle shall contain a new i18n.json file in its root directory. This file contains internationalization information for the title and summary fields. We keep an i18n mechanism for these two fields to make it possible to adapt the title or summary of a card without modifying the code of the third-party tool sending the card (using only one "language" for translation).

Here is an example file :

{
	"message":{
		"title":"Message",
		"summary":"Message received"
	},
	"chartDetail" : { "title":"A Chart"},
	"chartLine" : { "title":"Electricity consumption forecast"},
	"question" : {"title": "⚡ Planned Outage"},
	"contingencies": {"title" : "⚠️ Network Contingencies ⚠️","summary":"Contingencies report for French network"},
	"processState": {
		"title":"Process state ({{status}})"
	}
}

68.1.4. i18n folder

The i18n folder and the files it contains have to be removed from the bundle structure.

68.1.5. Processes groups

It is no longer possible to have a translation for the process group name. You can just define the name of the group in the uploaded file. Here is an example of this file:

{
  "groups": [
    {
      "id": "processgroup1",
      "name": "Process Group 1",
      "processes": [
        "process1",
        "process2"
      ]
    },
    {
      "id": "processgroup2",
      "name": "Process Group 2",
      "processes": [
        "process3",
        "process4"
      ]
    }
  ]
}

68.1.6. Script to migrate an existing database

Opfab 3.0 needs two new fields in the database: titleTranslated and summaryTranslated. In order to add these two new fields to an existing database, you have to pull the image migration-opfab3, go to the directory OF_HOME/src/tooling/migration-opfab3 and execute the script docker-compose.sh, this way:

./docker-compose.sh <containerNameMongoDB> <portMongoDB> <loginMongoDB> <passwordMongoDB> <pathToBundlesDirectory>

The bundles directory is the directory where you store all the untarred bundles in the new format with the i18n.json file. It is not the directory where the opfab instance stores the bundles.

68.2. Configuration file web-ui.json

Some attributes have been renamed to be more consistent with their meaning. Here are these attributes :

  • settings.infos.description renamed settings.infos.hide.description

  • settings.infos.language renamed settings.infos.hide.language

  • settings.infos.timezone renamed settings.infos.hide.timezone

  • settings.infos.tags renamed settings.infos.hide.tags

  • settings.infos.sounds renamed settings.infos.hide.sounds

So if you use these fields in web-ui.json file, you have to rename them.
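The renames above can be applied mechanically to a parsed web-ui.json. The helper below is hypothetical (not part of OpFab), shown only to illustrate the mapping:

```javascript
// Hypothetical migration helper: moves the renamed settings.infos.* flags
// under settings.infos.hide.* in a parsed web-ui.json object.
function migrateWebUiSettings(config) {
    const renamed = ['description', 'language', 'timezone', 'tags', 'sounds'];
    const infos = config?.settings?.infos;
    if (!infos) return config;
    infos.hide = infos.hide || {};
    for (const key of renamed) {
        if (key in infos) {
            infos.hide[key] = infos[key]; // e.g. settings.infos.tags -> settings.infos.hide.tags
            delete infos[key];
        }
    }
    return config;
}
```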

68.3. Configuration file common-docker.yml

A new parameter named operatorfabric.servicesUrls.businessconfig has to be added to your configuration file common-docker.yml (or equivalent)

For example the following configuration:

operatorfabric:
  businessLogActivated: true
  servicesUrls:
    users: "users:8080"

becomes:

operatorfabric:
  businessLogActivated: true
  servicesUrls:
    users: "users:8080"
    businessconfig: "businessconfig:8080"

68.4. Nginx.conf.template file

In order to have a better organized file, we have modified the nginx.conf.template file, more precisely the information for location /cardspub/cards. Please apply this modification to your local configuration.

69. Migration Guide from release 3.0.0 to release 3.1.0

69.1. Kafka date format

The signature for sending card dates via Kafka has been changed to better reflect the REST API. Instead of sending the date in seconds since epoch, you can now use a Java Instant object. The following methods are affected:

  • startDate

  • endDate

  • lttd

  • publishDate

  • TimeSpan.setStart

  • TimeSpan.setEnd

For example the following code:

Card card = Card.newBuilder().setStartDate(12345L).build();

becomes:

Card card = Card.newBuilder().setStartDate(Instant.now()).build();

70. Migration Guide from release 3.1.0 to release 3.2.0

70.1. Cache configuration for i18n files

Add the following lines in your nginx.conf to avoid keeping in cache old translation files when migrating opfab

  location /ui/assets/i18n/ {
    add_header Cache-Control "no-cache";
    alias /usr/share/nginx/html/assets/i18n/;
  }

70.2. Tag feed filter removal

The feature to filter tags in the feed has been removed. In consequence, you can remove the following parameters in your web-ui.json configuration file:

  • settings.tags.hide

  • settings.infos.hide.tags

71. Migration Guide from release 3.4.0 to release 3.5.0

71.1. Timezone management removal

The feature to manage timeline inside opfab has been removed. In consequence, you can remove the following parameters in your web-ui.json configuration file:

  • i10n.supported.time-zones

  • settings.infos.hide.timezone

71.2. Kafka card format changes

The Kafka message schema has been changed. The single card used to send messages to and receive from Operator Fabric has been split into two cards; one for sending to OpFab and one for receiving messages from OpFab. The CardCommand structure now contains a card (to OpFab) and a responseCard (from OpFab). You will need to change the Kafka listener to get the response card.

For example change

 @KafkaListener
    public void receivedResponseCard(@Payload ConsumerRecord<String, CardCommand> consumerRecord) {
        CardCommand cardCommand = consumerRecord.value();
        Card card = cardCommand.getCard();

to

 @KafkaListener
    public void receivedResponseCard(@Payload ConsumerRecord<String, CardCommand> consumerRecord) {
        CardCommand cardCommand = consumerRecord.value();
        ResponseCard responseCard = cardCommand.getResponseCard();

71.2.1. Redirect to business application from a card

The templateGateway.redirectToBusinessMenu function to redirect to business application from a card has changed. The third argument of the function is now a url extension that can contain sub paths and/or parameters instead of only parameters.

You will need to change the call adding a '?' character at the beginning of the third argument.

For example change:

templateGateway.redirectToBusinessMenu('myMenu','myEntry','param1=aParam&param2=anotherParam')

to

templateGateway.redirectToBusinessMenu('myMenu','myEntry','?param1=aParam&param2=anotherParam')
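If templates build the third argument dynamically, a small adapter can prefix the '?' where needed. This helper is hypothetical (not part of OpFab) and only illustrates the change:

```javascript
// Hypothetical helper: adapts a legacy query-only third argument to the new
// urlExtension form, which may also carry sub paths starting with '/'.
function toUrlExtension(legacyParams) {
    if (!legacyParams) return '';
    return legacyParams.startsWith('?') || legacyParams.startsWith('/')
        ? legacyParams
        : '?' + legacyParams;
}

// Usage in a template:
// templateGateway.redirectToBusinessMenu('myMenu', 'myEntry', toUrlExtension('param1=aParam&param2=anotherParam'));
```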

72. Migration Guide from release 3.5.0 to release 3.6.0

72.1. Template rendering

The Bootstrap library has been migrated to v5.

If you use Bootstrap CSS classes in your templates, it is strongly recommended to check your template rendering after migration.

The Charts library has been upgraded from version 3.2.1 to 3.7.1.

If you use charts in your templates, it is strongly recommended to check your template rendering after migration.

72.2. Mongo data migration steps

This section outlines the necessary steps to migrate existing data.

If you use the response card feature, you need to perform these steps before starting up the OperatorFabric instance to avoid displaying wrong data in the archives page.

  1. Backup your existing Mongo data.

  2. Run the following script in the mongo shell to update the value of the new field deletionDate for archived cards:

    db.archivedCards.updateMany(
    {"deletionDate": {"$exists": false}},
    {"$set": {"deletionDate": new ISODate("1970-01-01T00:00:00Z")}}
    )

72.3. About screen

You no longer need to specify the opfab version in web-ui.json to see it on the about screen, so you need to remove the corresponding line. For example:

      "operatorfabric": {
        "name": "OperatorFabric",
        "rank": 0,
        "version": "2.5.0.RELEASE"
      }

72.4. Deprecated feature

72.4.1. Method templateGateway.getSpecificCardInformation()

Implementation of the method templateGateway.getSpecificCardInformation() in usercard templates is now deprecated; replace it with usercardTemplateGateway.getSpecificCardInformation(). It is just a naming change, the behavior is the same.

72.4.2. Restrict recipient dropdown list for user

Using "recipientList" in state definition (in config.json) is now deprecated

If you use "recipientList" to restrict the list of recipients shown to user, replace it with code in template as in the following example :

    usercardTemplateGateway.setDropdownEntityRecipientList([
            {"id": "ENTITY_FR", "levels": [0,1]},
            {"id": "IT_SUPERVISOR_ENTITY"}
        ]);

72.4.3. Set the list of recipients

Using "recipientList" in state definition (in config.json) is now deprecated

If you use "recipientList" to set the list of recipients, provide now the list of recipients when returning the card object in usercardTemplateGateway.getSpecificCardInformation() in the field entityRecipients.

Example:

    usercardTemplateGateway.getSpecificCardInformation = function () {
        const message = document.getElementById('message').value;
        const card = {
          summary : {key : "message.summary"},
          title : {key : "message.title"},
          entityRecipients: ["ENTITY_FR","IT_SUPERVISOR_ENTITY"],
          data : {message: message}
        };
        if (message.length<1) return { valid:false , errorMsg:'You must provide a message'}
        return {
            valid: true,
            card: card
        };
    }

73. Migration Guide from release 3.6.0 to release 3.7.0

73.1. Docker images names changed

From version 3.7.0, the Docker image names of the former "business services" have changed. Here are the old names and the new ones:

  • lfeoperatorfabric/of-cards-consultation-business-service:3.6.0.RELEASE → lfeoperatorfabric/of-cards-consultation-service:3.7.0.RELEASE

  • lfeoperatorfabric/of-cards-publication-business-service:3.6.0.RELEASE → lfeoperatorfabric/of-cards-publication-service:3.7.0.RELEASE

  • lfeoperatorfabric/of-users-business-service:3.6.0.RELEASE → lfeoperatorfabric/of-users-service:3.7.0.RELEASE

  • lfeoperatorfabric/of-businessconfig-business-service:3.6.0.RELEASE → lfeoperatorfabric/of-businessconfig-service:3.7.0.RELEASE

You must change your docker-compose files accordingly.

73.2. Third parties configuration

The configuration of third-party applications as external recipients has changed. It is now possible to configure whether the user token shall be propagated to the third party or not by setting the "propagateUserToken" boolean property.

If you have third parties configured in your 'cards-publication.yml' you should change the configuration as follows:

From:

externalRecipients-url: "{\
           third-party1: \"http://thirdparty1/test1\", \
           third-party2: \"http://thirdparty2:8090/test2\", \
           }"

To

external-recipients:
  recipients:
    - id: "third-party1"
      url: "http://thirdparty1/test1"
      propagateUserToken: true
    - id: "third-party2"
      url: "http://thirdparty2:8090/test2"
      propagateUserToken: true

73.3. Rename attribute externalDeviceId

It is now possible to have several external device systems, therefore the attribute externalDeviceId has been renamed to externalDeviceIds and is now a list.

73.3.1. In the configuration file

In the configuration file, replace

userConfigurations:
        - userLogin: operator1_fr
          externalDeviceId: CDS_1
        - userLogin: operator2_fr
          externalDeviceId: CDS_2
        - userLogin: operator3_fr
          externalDeviceId: CDS_3
        - userLogin: operator4_fr
          externalDeviceId: CDS_1

by

userConfigurations:
        - userLogin: operator1_fr
          externalDeviceIds: ["CDS_1"]
        - userLogin: operator2_fr
          externalDeviceIds: ["CDS_2"]
        - userLogin: operator3_fr
          externalDeviceIds: ["CDS_3"]
        - userLogin: operator4_fr
          externalDeviceIds: ["CDS_1"]
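For a large number of users, the conversion above can be scripted on the parsed configuration. The helper below is hypothetical (not part of OpFab) and assumes the userConfigurations list has been parsed from YAML into plain objects:

```javascript
// Hypothetical migration helper: converts the legacy per-user externalDeviceId
// attribute to the new externalDeviceIds list.
function migrateUserConfigurations(userConfigurations) {
    return userConfigurations.map(({ externalDeviceId, ...rest }) => ({
        ...rest,
        externalDeviceIds: rest.externalDeviceIds
            ?? (externalDeviceId !== undefined ? [externalDeviceId] : [])
    }));
}
```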

73.3.2. Changes in the Mongo data base

73.3.2.1. Before the migration

Before migrating to 3.7.0, you should back up your current database, then add the attribute "externalDeviceIds" based on the value of the existing attribute "externalDeviceId". To add this attribute, simply launch the commands:

var collection = db.getCollection("userConfigurations")
collection.find().forEach(function(user) {collection.updateOne({_id: user._id}, {$set: {externalDeviceIds: [user.externalDeviceId]}})})

73.3.2.2. Once the migration is done

Once the migration to 3.7.0 is done, the attribute "externalDeviceId" is not used anymore. You can remove it by launching :

var collection = db.getCollection("userConfigurations")
collection.find().forEach(function(user) {collection.updateOne({_id: user._id}, {$unset: {"externalDeviceId": 1}})})

73.4. Deprecated feature

73.4.1. Field recurrence

Using the "recurrence" field returned by the getSpecificCardInformation() method is now deprecated.

If you are using the "viewCardInAgenda" and "recurrence" field, use the "timeSpans" field instead when returning the card object in usercardTemplateGateway.getSpecificCardInformation() to configure the visibility of the card in timeline and agenda.

Example:

    usercardTemplateGateway.getSpecificCardInformation = function () {
        const message = document.getElementById('message').value;
        const card = {
          summary : {key : "message.summary"},
          title : {key : "message.title"},
          entityRecipients: ["ENTITY_FR","IT_SUPERVISOR_ENTITY"],
          data : {message: message}
        };

        const recurrence = {
            daysOfWeek : [1,2,3],
            hoursAndMinutes : {hours:10,minutes:44},
            durationInMinutes: 15
        }

        const mystartDate = new Date();
        const timeSpans = [{
            startDate: mystartDate.getTime(),
            endDate: mystartDate.getTime() + 7 * 24 * 3600000,
            recurrence: recurrence
        }]

        return {
            valid: true,
            card: card,
            timeSpans: timeSpans
        };
    }

74. Migration Guide from release 3.7.0 to release 3.8.0

74.1. Kafka card format changes

The Kafka message schema has been changed. Two new, optional fields are added:

  • wktGeometry

  • wktProjection

Make sure your Kafka consumers and producers are updated and use the latest card and responseCard definitions.

75. Migration Guide from release 3.8.x to release 3.9.0

75.1. web-ui.json config file

To declare settings parameters, you now need to group all fields under settings: { } and you must not use dotted statements like settings.replayInterval: 10.

For example:

Replace the following invalid settings config

  "settings.replayInterval": 10,
  "settings.replayEnabled": true,
  "settings": {
    "about": {
      "keycloak": {
        "name": "Keycloak",
        "rank": 2,
        "version": "6.0.1"
      }
    },
    "locale": "en",
    "dateTimeFormat": "HH:mm DD/MM/YYYY",
    "dateFormat": "DD/MM/YYYY",
    "styleWhenNightDayModeDesactivated": "NIGHT"
  },

By this valid one :

  "settings": {
    "replayInterval": 10,
    "replayEnabled": true,
    "about": {
      "keycloak": {
        "name": "Keycloak",
        "rank": 2,
        "version": "6.0.1"
      }
    },
    "locale": "en",
    "dateTimeFormat": "HH:mm DD/MM/YYYY",
    "dateFormat": "DD/MM/YYYY",
    "styleWhenNightDayModeDesactivated": "NIGHT"
  },
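A migration of this shape can be sketched as a generic fold of dotted keys into nested objects. The helper below is hypothetical (not part of OpFab), shown only to illustrate the transformation:

```javascript
// Hypothetical helper: folds legacy dotted keys such as "settings.replayInterval"
// into the nested structure the new web-ui.json expects.
function nestDottedKeys(config) {
    const result = {};
    for (const [key, value] of Object.entries(config)) {
        const parts = key.split('.');
        let node = result;
        for (const part of parts.slice(0, -1)) {
            node = node[part] = node[part] || {};
        }
        const leaf = parts[parts.length - 1];
        const existing = node[leaf];
        // Merge objects so "settings.x" combines with an existing "settings" block.
        node[leaf] = (value && typeof value === 'object' && existing && typeof existing === 'object')
            ? { ...existing, ...value }
            : value;
    }
    return result;
}
```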

76. Migration Guide from release 3.9.x to release 3.10.0

76.1. Card publication with not existent process or state

OperatorFabric will refuse the publication of cards referring to a nonexistent process or state. To allow the publication of such cards, you should set the authorizeToSendCardWithInvalidProcessState property to true in the cards-publication.yml configuration file.

76.2. web-ui.json config file

76.2.1. settings.about section

The section to configure the about screen has been moved from the settings section to the root level. You need to change the configuration file accordingly.

For example:

Replace the following invalid config

  "settings": {
    "about": {
      "keycloak": {
        "name": "Keycloak",
        "rank": 2,
        "version": "6.0.1"
      }
    },
    "replayInterval": 10,
    "replayEnabled": true,
    "locale": "en",
    "dateTimeFormat": "HH:mm DD/MM/YYYY",
    "dateFormat": "DD/MM/YYYY",
    "styleWhenNightDayModeDesactivated": "NIGHT"
  },

By this valid one :

  "settings": {
    "replayInterval": 10,
    "replayEnabled": true,
    "locale": "en",
    "dateTimeFormat": "HH:mm DD/MM/YYYY",
    "dateFormat": "DD/MM/YYYY",
    "styleWhenNightDayModeDesactivated": "NIGHT"
  },
  "about": {
    "keycloak": {
      "name": "Keycloak",
      "rank": 2,
      "version": "6.0.1"
    }
  }

76.2.2. settings.infos.hide section

The section to configure hidden fields for the settings screen has been moved from settings.infos.hide to settingsScreen.hiddenSettings. This new field is an array of strings and possible values are: "description", "language", "remoteLoggingEnabled" and "sounds". So you need to change the configuration file accordingly.

For example:

Replace the following invalid config

  "settings": {
    "infos": {
      "hide": {
        "description": true
      }
    }
  }

By this valid one :

  "settingsScreen": {
    "hiddenSettings":  ["description"]
  }
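The old hide object can be converted to the new array mechanically; a hypothetical one-liner (not part of OpFab):

```javascript
// Hypothetical helper: converts the legacy settings.infos.hide object
// into the new settingsScreen.hiddenSettings array (keeping only true flags).
function toHiddenSettings(hide) {
    return Object.keys(hide || {}).filter((key) => hide[key] === true);
}
```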

76.2.3. Feed filtering and sorting

In feed configuration the following parameters to hide filtering and sorting options have been removed: feed.card.hideAckFilter, feed.card.hideReadSort, feed.card.hideSeveritySort

Instead, it is possible to configure the default behaviour for sorting and acknowledgment filtering by setting the following parameters in web-ui.json:

  • feed.defaultSorting : possible values are : "unread", "date", "severity", "startDate", "endDate"

  • feed.defaultAcknowledgmentFilter : possible values are : "notack", "ack", "all"

77. Migration Guide from release 3.10.x to release 3.11.0

77.1. Card publication

OperatorFabric will now check the user perimeter when publishing cards via the endpoint /cards (this does not concern user cards, which are always controlled). To disable perimeter validation, set the configuration parameter checkPerimeterForCardSending to false in the cards-publication service configuration file.

77.2. templateGateway onStyleChange() method

A new onStyleChange() method has been added to templateGateway. OpFab will call this method when switching day/night mode. It may have to be implemented by a template to refresh styles and reload embedded charts when the user changes day/night mode, so please check your existing templates.
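A minimal sketch of what a template can register follows; templateGateway is stubbed here for illustration (in a real template the object is injected by OpFab):

```javascript
// Stub for illustration only; OpFab injects the real templateGateway object.
const templateGateway = {};

let redrawCount = 0;
templateGateway.onStyleChange = function () {
    // In a real template: re-read CSS variables and redraw embedded charts
    // so their colors match the new day/night theme.
    redrawCount++;
};

// OpFab invokes the handler when the user switches day/night mode:
templateGateway.onStyleChange();
```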

77.3. Nginx configuration

The Nginx configuration file has to be modified to allow the forwarding of query parameters to the new 'userActionLogs' API. You have to modify the nginx.conf file as follows:

Replace

  location ~ "^/users/(.*)" {
    proxy_set_header Host $http_host;
    proxy_pass http://users:8080/$1;
    proxy_set_header X-Forwarded-For $remote_addr;
  }

With

  location ~ "^/users/(.*)" {
    proxy_set_header Host $http_host;
    proxy_pass http://users:8080/$1$is_args$args;
    proxy_set_header X-Forwarded-For $remote_addr;
  }

78. Migration Guide from release 3.11.x to release 3.12.0

78.1. Java 17

OperatorFabric now runs on Java 17, which means that the client library is now compiled with Java 17. If you are using the client library, you may need to upgrade your application to Java 17 or use client 3.11.2, which is compatible with version 3.12.0 and compiled with Java 11.

78.2. Expiration date

A new 'expirationDate' field has been added to the card. Cards with an expiration date will be automatically deleted from the card feed after the expiration date is passed.

The expiration date is by default visible in the usercard screen. You can hide the expiration date in the usercard screen by adding the new 'expirationDateVisible' value to your usercard configuration in the config.json of your bundle.

  • Hide expiration date → "expirationDateVisible" : false

    "processState": {
      "name": "Process example ",
      "description": "Process state",
      "color": "#0070C0",
      "userCard" : {
        "template" : "usercard_process",
        "lttdVisible" : false,
        "expirationDateVisible" : false,
        "recipientList" : [{"id": "ENTITY_FR", "levels": [0,1]}, {"id": "IT_SUPERVISOR_ENTITY"}]
      },
  • Show expiration date → "expirationDateVisible" : true

    "processState": {
      "name": "Process example ",
      "description": "Process state",
      "color": "#0070C0",
      "userCard" : {
        "template" : "usercard_process",
        "lttdVisible" : false,
        "expirationDateVisible" : true,
        "recipientList" : [{"id": "ENTITY_FR", "levels": [0,1]}, {"id": "IT_SUPERVISOR_ENTITY"}]
      },

78.3. Remove deprecated method templateGateway.getSpecificCardInformation()

The deprecated method 'templateGateway.getSpecificCardInformation()' in usercard templates has been removed, use 'usercardTemplateGateway.getSpecificCardInformation()' method instead.

78.4. Nginx configuration

You need to modify your existing opfab nginx configuration (nginx.conf file), replacing the line:

proxy_pass http://users:8080/$1;

by

proxy_pass http://users:8080/$1$is_args$args;

79. Migration Guide from release 3.12.0 to release 3.13.0

79.1. Trailing slash

When calling an opfab API, you can no longer use a trailing slash in the URL.

If, for example, one makes a request like opfab/businessconfig/processes/, it must be replaced by opfab/businessconfig/processes
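Client code that builds URLs dynamically can normalize them before calling the API. A hypothetical helper (not part of OpFab):

```javascript
// Hypothetical helper: strips a trailing slash from an API path
// before calling an opfab endpoint.
function stripTrailingSlash(url) {
    return url.endsWith('/') ? url.slice(0, -1) : url;
}

// Example: stripTrailingSlash('opfab/businessconfig/processes/')
// yields 'opfab/businessconfig/processes'.
```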

79.2. viewCardInAgenda Parameter renamed

The viewCardInAgenda parameter (specified in the usercard template through getSpecificCardInformation) has been renamed viewCardInCalendar to be consistent with other parts of the software. So you may have to update your usercard templates.

79.3. Default value set to false for lttdVisible and expirationDateVisible

In usercard state definition, the default value for fields lttdVisible and expirationDateVisible is now set to false.

80. Migration Guide from release 3.13.0 to release 3.14.0

80.1. Bundles folder

Bundles are no longer stored directly in the business config storage path defined in the configuration (operatorfabric.businessconfig.storage.path) but in a subdirectory named bundles. So, in order to migrate, you need to move your bundle directories to a subdirectory named bundles.

To help you migrate your production environment you can use the following script : github.com/opfab/operatorfabric-core/tree/develop/src/tooling/migration-opfab3.13/moveBundles.sh

81. Migration Guide from release 3.14.0 to release 3.15.0

81.1. CrossOrigin for GeoJSON

The option feed.geomap.layer.geojson.crossOrigin, used to set a cross-origin option when downloading GeoJSON, has been removed.

81.2. Remove deprecated field from state definition

The deprecated field recipientList has been removed from card state definition. To restrict the recipient list options use the method usercardTemplateGateway.setDropdownEntityRecipientList in template.

81.3. Remove use of groups column in realtime screen

The field groups has been removed from the object entitiesGroups of the configuration file realtimescreens.json. To conform to this new structure, you have to remove this field from your configuration file.

82. Migration Guide from release 3.15.0 to release 4.0.0

82.1. Management of core menus and custom menus

The structure of the ui-menu.json file has changed. Fields coreMenusConfiguration and menus do not exist anymore.

Now ui-menu.json file contains 4 fields :

  • navigationBar : contains the navigation bar mixing core menus and custom menus

  • topRightIconMenus : contains only the two menu icons agenda and usercard on the top right of the screen

  • topRightMenus : contains the core menus you want to see when you click the user, on the top right of the screen

  • locales : contains the translations for the custom menus

With the new structure of the file, it is now possible to mix core menus and custom menus in the navigation bar.

You can find more information and a full example in the documentation : opfab.github.io/documentation/current/reference_doc/#menu_entries

82.2. UI Configuration

Some fields of the web-ui.json file have been moved and renamed:

  • alertMessageBusinessAutoClose has been moved to the alerts section and is now called messageBusinessAutoClose

  • alertMessageOnBottomOfTheScreen has been moved to the alerts section and is now called messageOnBottomOfTheScreen

82.3. OpfabAPI

The use of methods or attributes starting with templateGateway is now deprecated; the following table gives you the new methods to use.

Deprecated method or attribute → New method

  • templateGateway.getEntityName(entityId) → opfab.users.entities.getEntityName(entityId)

  • templateGateway.getEntity(entityId) → opfab.users.entities.getEntity(entityId)

  • templateGateway.getAllEntities() → opfab.users.entities.getAllEntities()

  • templateGateway.redirectToBusinessMenu(menuId, menuItemId, urlExtension) → opfab.navigate.redirectToBusinessMenu(menuId, menuItemId, urlExtension)

  • templateGateway.isUserAllowedToRespond() → opfab.currentCard.isUserAllowedToRespond()

  • templateGateway.isUserMemberOfAnEntityRequiredToRespond() → opfab.currentCard.isUserMemberOfAnEntityRequiredToRespond()

  • templateGateway.getEntitiesAllowedToRespond() → opfab.currentCard.getEntitiesAllowedToRespond()

  • templateGateway.getEntityUsedForUserResponse() → opfab.currentCard.getEntityUsedForUserResponse()

  • templateGateway.getDisplayContext() → opfab.currentCard.getDisplayContext()

  • templateGateway.displayLoadingSpinner() → opfab.currentCard.displayLoadingSpinner()

  • templateGateway.hideLoadingSpinner() → opfab.currentCard.hideLoadingSpinner()

  • templateGateway.childCards → opfab.currentCard.getChildCards()

  • templateGateway.isLocked → opfab.currentCard.isResponseLocked()

  • templateGateway.lockAnswer = function () {//do some stuff} → opfab.currentCard.listenToResponseLock( () => {//do some stuff} )

  • templateGateway.unlockAnswer = function () {//do some stuff} → opfab.currentCard.listenToResponseUnlock( () => {//do some stuff} )

  • templateGateway.setLttdExpired = function () {//do some stuff} → opfab.currentCard.listenToLttdExpired( () => {//do some stuff} )

  • templateGateway.onStyleChange = function () {//do some stuff} → opfab.currentCard.listenToStyleChange( () => {//do some stuff} )

  • templateGateway.setScreenSize = function () {//do some stuff} → opfab.currentCard.listenToScreenSize( () => {//do some stuff} )

  • templateGateway.onTemplateRenderingComplete = function () {//do some stuff} → opfab.currentCard.listenToTemplateRenderingComplete( () => {//do some stuff} )

  • templateGateway.getUserResponse = function () {//do some stuff} → opfab.currentCard.registerFunctionToGetUserResponse( () => {//do some stuff} )

  • templateGateway.applyChildCards = function () {//do some stuff} → opfab.currentCard.listenToChildCards( () => {//do some stuff} )

  • usercardTemplateGateway.getCurrentProcess() → opfab.currentUserCard.getProcessId()

  • usercardTemplateGateway.getEditionMode() → opfab.currentUserCard.getEditionMode()

  • usercardTemplateGateway.getEndDate() → opfab.currentUserCard.getEndDate()

  • usercardTemplateGateway.getExpirationDate() → opfab.currentUserCard.getExpirationDate()

  • usercardTemplateGateway.getLttd() → opfab.currentUserCard.getLttd()

  • usercardTemplateGateway.getSelectedEntityRecipients() → opfab.currentUserCard.getSelectedEntityRecipients()

  • usercardTemplateGateway.getSelectedEntityForInformationRecipients() → opfab.currentUserCard.getSelectedEntityForInformationRecipients()

  • usercardTemplateGateway.getStartDate() → opfab.currentUserCard.getStartDate()

  • usercardTemplateGateway.getCurrentState() → opfab.currentUserCard.getState()

  • usercardTemplateGateway.getUserEntityChildCardFromCurrentCard() → opfab.currentUserCard.getUserEntityChildCard()

  • usercardTemplateGateway.getSpecificCardInformation = function () {//do some stuff} → opfab.currentUserCard.registerFunctionToGetSpecificCardInformation( () => {//do some stuff} )

  • usercardTemplateGateway.setDropdownEntityRecipientList(recipients) → opfab.currentUserCard.setDropdownEntityRecipientList(recipients)

  • usercardTemplateGateway.setDropdownEntityRecipientForInformationList(recipients) → opfab.currentUserCard.setDropdownEntityRecipientForInformationList(recipients)

  • userCardTemplateGateway.setEntityUsedForSendingCard = function (entityID) {//do some stuff} → opfab.currentUserCard.listenToEntityUsedForSendingCard( (entityID) => {//do some stuff} )

  • usercardTemplateGateway.setInitialEndDate(endDate) → opfab.currentUserCard.setInitialEndDate(endDate)

  • usercardTemplateGateway.setInitialExpirationDate(expirationDate) → opfab.currentUserCard.setInitialExpirationDate(expirationDate)

  • usercardTemplateGateway.setInitialLttd(lttd) → opfab.currentUserCard.setInitialLttd(lttd)

  • usercardTemplateGateway.setInitialSelectedRecipients(recipients) → opfab.currentUserCard.setInitialSelectedRecipients(recipients)

  • usercardTemplateGateway.setInitialSelectedRecipientsForInformation(recipients) → opfab.currentUserCard.setInitialSelectedRecipientsForInformation(recipients)

  • usercardTemplateGateway.setInitialSeverity(initialSeverity) → opfab.currentUserCard.setInitialSeverity(initialSeverity)

  • usercardTemplateGateway.setInitialStartDate(startDate) → opfab.currentUserCard.setInitialStartDate(startDate)
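For instance, a template that registered its response with the deprecated templateGateway.getUserResponse can be migrated as sketched below. The opfab object is stubbed here for illustration (OpFab injects the real one at runtime), and the responseCardData content is a hypothetical payload:

```javascript
// Stub of the new API surface for illustration only.
const opfab = {
    currentCard: {
        registerFunctionToGetUserResponse(fn) { this._getUserResponse = fn; }
    }
};

// Before (deprecated):
// templateGateway.getUserResponse = function () { return { valid: true, responseCardData: {...} }; };

// After:
opfab.currentCard.registerFunctionToGetUserResponse(() => ({
    valid: true,
    responseCardData: { choice: 'ok' } // hypothetical response payload
}));
```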

82.4. Write right for user (RightsEnum.Write)

The Write right has been removed. Given the existing Receive and ReceiveAndWrite rights, Write was redundant and made the code confusing.

Before upgrading to 4.0, you must replace all "Write" rights by "ReceiveAndWrite" rights. If you want to automate it, you can do it directly in the database via the following request :

db.perimeter.updateMany({"stateRights.right": "Write"}, {"$set": {"stateRights.$.right": "ReceiveAndWrite"}});

82.5. Cards reminder

Cards reminder logic has been moved from front-end to back-end. The reminder logic is handled by the new "cards-reminder" service.

After upgrading to 4.0, you must call the /reset endpoint to populate the reminders database by processing all current cards with reminder set. For example using cURL:

curl http://localhost:2107/reset -H "Authorization:Bearer $token"

82.6. Internal technical account

The new back service for reminders and the new services for mail diffusion and supervision introduce the need for an internal account to communicate between opfab back services. Therefore, if you intend to use any of these services, you need to create an Opfab technical account with ADMIN permissions and configure it within your shared YAML configuration file, for example:

operatorfabric:
  internalAccount:
    login: opfab
    password: the_password

The services require knowledge of the URL to retrieve the account’s token, and this URL should be configured within operatorfabric.servicesUrls.authToken. A default value, based on OperatorFabric default installation, is set to: "http://web-ui/auth/token".

82.7. Port mapping

In release 4.0, the listening port for services in Docker is no longer 8080; it is now identical to the default port mapping outside Docker.

So you need to modify your port mapping, replacing the legacy 8080 port by the new port:

  • businessconfig: 2100

  • cards-publication: 2102

  • users: 2103

  • cards-consultation: 2104

  • external-devices: 2105

Depending on your production configuration, you may also need to change the ports in your nginx conf file.

If you want to keep the old port 8080, you can change it via the server.port parameter in the yml config files of the services.

82.8. RabbitMQ

In previous versions, it was necessary to start a RabbitMQ container referencing "rabbitmq:3-management." We now highly recommend that you update your configuration to utilize "lfeoperatorfabric/of-rabbitmq:4.0.0.RELEASE" instead. This adjustment ensures that you have a qualified version that is fully compatible with OpFab.

When migrating your production environment, you may be unable to start RabbitMQ, with the following error in the log:

2023-09-14 13:57:00.803114+00:00 [error] <0.230.0> Feature flags: `maintenance_mode_status`: required feature flag not enabled! It must be enabled before upgrading RabbitMQ.
2023-09-14 13:57:00.955976+00:00 [error] <0.230.0> Failed to initialize feature flags registry:{disabled_required_feature_flag,
2023-09-14 13:57:00.955976+00:00 [error] <0.230.0>                                               maintenance_mode_status}

BOOT
FAILED
===========
Error during startup: {error,failed_to_initialize_feature_flags_registry}

2023-09-14 13:57:01.022987+00:00 [error] <0.230.0>
2023-09-14 13:57:01.022987+00:00 [error] <0.230.0> BOOT FAILED
2023-09-14 13:57:01.022987+00:00 [error] <0.230.0> ===========
2023-09-14 13:57:01.022987+00:00 [error] <0.230.0> Error during startup: {error,failed_to_initialize_feature_flags_registry}

This issue arises because the persisted data (RabbitMQ queues) generated by the previous version of RabbitMQ is incompatible with the current RabbitMQ version. To address this problem, it is necessary to remove the persisted data before launching OpFab, which can be found at the path mapping /var/lib/rabbitmq/mnesia/ within the Docker container.

If you have configured RabbitMQ persistence, we recommend removing this persisted data as a preventive measure to avoid service unavailability in production.

82.9. Configuration

The configuration has been simplified; there are now default parameters you no longer need to set in the back-end configuration:

  • in all yml file you do not need to set anymore spring.application.name

  • a default kafka configuration is provided, you only have to add "kafka.consumer.group-id : opfab-command" to enable kafka

  • a default rabbit configuration is provided

  • default value are provided for "operatorfabric.servicesUrls.users" and "operatorfabric.servicesUrls.businessconfig"

  • "spring.data.mongodb.database" is not to be set anymore

  • you still need to set "management.endpoints.web.exposure.include: '*'" if you want to monitor opfab via prometheus

  • operatorfabric.businessconfig.storage.path is set by default to "/businessconfig-storage"
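As a minimal sketch based on the defaults listed above, the only entries of this kind you may still need in a back-end yml file are the kafka group id (only if you use kafka) and the prometheus exposure:

    # only needed to enable kafka
    kafka:
      consumer:
        group-id: opfab-command

    # only needed to monitor opfab via prometheus
    management:
      endpoints:
        web:
          exposure:
            include: '*'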

The nginx configuration has been simplified as well; the simplest approach is to redefine your current nginx configuration based on the example /config/docker/nginx.conf. The main modification is the removal of the following endpoint declarations:

  • /archives

  • /ui

  • /ui/assets/i18n

  • /config/web-ui.json

  • /config/menu-ui.json

We have also enabled data compression for the information supplied by the "businessconfig" service in the "nginx.conf" reference file. This is done by adding, in the location /businessconfig block:

    gzip on;
    gzip_types application/json;

The nginx configuration is no longer loaded in /usr/share/nginx/html/opfab in the docker container but in /usr/share/nginx/html/config. You need to modify your volume configuration accordingly. For example, in docker compose:

    volumes:
      - "./ui-config:/usr/share/nginx/html/opfab"

becomes:

    volumes:
      - "./ui-config:/usr/share/nginx/html/config"

In the web-ui.json file, you no longer need to set:

  • security.jwt.expire-claim

  • security.oauth2.flow.provider

  • security.oauth2.provider-realm

  • security.oauth2.provider-url

82.10. Normalization of some configuration parameters

Some configuration parameters have been renamed, so you have to check your config files and adapt them. Here are the concerned parameters (old name → new name):

  • daysBeforeLogExpiration → operatorfabric.users.daysBeforeLogExpiration

  • checkAuthenticationForCardSending → operatorfabric.cards-publication.checkAuthenticationForCardSending

  • authorizeToSendCardWithInvalidProcessState → operatorfabric.cards-publication.authorizeToSendCardWithInvalidProcessState

  • checkPerimeterForCardSending → operatorfabric.cards-publication.checkPerimeterForCardSending

  • external-recipients.* → operatorfabric.cards-publication.external-recipients.*

  • opfab.kafka.topics.card.topicname → operatorfabric.cards-publication.kafka.topics.card.topicname

  • opfab.kafka.topics.response-card.topicname → operatorfabric.cards-publication.kafka.topics.response-card.topicname

  • opfab.kafka.schema.registry.url → operatorfabric.cards-publication.kafka.schema.registry.url

  • delayForDeleteExpiredCardsScheduling → operatorfabric.cards-publication.delayForDeleteExpiredCardsScheduling

  • checkIfUserIsAlreadyConnected → operatorfabric.checkIfUserIsAlreadyConnected

  • spring.data.mongodb.uri → operatorfabric.mongodb.uri

  • spring.rabbitmq.* → operatorfabric.rabbitmq.*

  • spring.security.oauth2.resourceserver.jwt.jwk-set-uri → operatorfabric.security.oauth2.resourceserver.jwt.jwk-set-uri
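For example, the spring.data.mongodb.uri rename means changing a yml fragment such as (credentials and host are illustrative):

    spring:
      data:
        mongodb:
          uri: mongodb://root:password@mongodb:27017/operator-fabric?authSource=admin

to:

    operatorfabric:
      mongodb:
        uri: mongodb://root:password@mongodb:27017/operator-fabric?authSource=admin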

82.11. Mongodb uri

The option "authMode=scram-sha1" has to be removed from the mongodb uri, as SCRAM authentication is enabled by default and the "authMode" option is not supported by the node.js mongodb driver. For example, you should change:

 mongodb:
    uri: mongodb://root:password@mongodb:27017/operator-fabric?authSource=admin&authMode=scram-sha1

to

 mongodb:
    uri: mongodb://root:password@mongodb:27017/operator-fabric?authSource=admin

82.12. Rate limiter for card sendings

External publishers are now monitored by a new module that limits how many cards they can send (cardSendingLimitCardCount) within a given period of time (cardSendingLimitPeriod). This avoids potential overloading due to external apps stuck in a card-sending loop.

The default value is 1000 cards per hour. The limiter can be enabled or disabled with activateCardSendingLimiter.
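As an illustration, overriding the defaults could look like the following yml fragment. The placement of these parameters and the period unit (assumed to be seconds here) are assumptions; check the reference documentation of the cards-publication service before using this:

    # hypothetical yml fragment: placement and period unit are assumptions
    activateCardSendingLimiter: true
    cardSendingLimitCardCount: 1000
    cardSendingLimitPeriod: 3600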

83. Migration Guide from release 4.0.0 to release 4.1.0

83.1. Optimizing Nginx Configuration for Latency Reduction

An optimization has been introduced in the Nginx configuration to reduce latency. To implement it in your nginx.conf, add the directive proxy_buffering off in the location /cards/ block.

For example:

  location /cards/ {
    proxy_buffering off;
    proxy_set_header Host $http_host;
    proxy_pass http://cards-consultation:2104/;
    proxy_set_header X-Forwarded-For $remote_addr;
  }

83.2. Updating the file size limit

The maximum file size limit has been updated to 100 MB. To implement this limit in your nginx.conf, add the directive client_max_body_size 100M; in the location /businessconfig block.

For example:

  location /businessconfig {
    proxy_set_header Host $http_host;
    proxy_pass http://${MY_DOCKER_HOST}:2100;
    proxy_set_header X-Forwarded-For $remote_addr;
    client_max_body_size 100M;
  }

83.3. Nginx Configuration for entity supervision administration

The new 'Supervised Entities' admin page requires interaction with the supervisor API via Nginx. To enable access to the supervisor API, please incorporate the following configuration into your nginx.conf file:

  location ~ "^/supervisor/(.*)" {
    set $supervisor http://supervisor:2108;
    proxy_set_header Host $http_host;
    proxy_pass $supervisor/$1;
    proxy_set_header X-Forwarded-For $remote_addr;
  }

83.4. Supervisor configuration change

The configuration of the 'supervisor.defaultConfig.entitiesToSupervise' parameter has changed: the supervised entity field id has been renamed to entityId. You should modify your configuration accordingly.

For example:

  supervisor:
    defaultConfig:
      entitiesToSupervise:
      - id: ENTITY1_FR
        supervisors:
        - ENTITY2_FR
      - id: ENTITY2_FR
        supervisors:
        - ENTITY1_FR

should be changed to:

  supervisor:
    defaultConfig:
      entitiesToSupervise:
      - entityId: ENTITY1_FR
        supervisors:
        - ENTITY2_FR
      - entityId: ENTITY2_FR
        supervisors:
        - ENTITY1_FR

83.5. Log directories in docker

For the services cards-reminder, supervisor and cards-external-diffusion, the log directory inside the docker container is no longer /usr/app/logs but /var/log/opfab. So if you map the log directory in your configuration, you need to change it accordingly.
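For example, in docker compose a volume mapping like (the host path is illustrative):

    volumes:
      - "./logs:/usr/app/logs"

becomes:

    volumes:
      - "./logs:/var/log/opfab"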

83.6. OpfabAPI

The following method is deprecated, and it is recommended to use the new method instead (deprecated → new):

  • opfab.currentCard.getEntityUsedForUserResponse() → opfab.currentCard.getEntitiesUsableForUserResponse()

84. Migration Guide from release 4.1.0 to release 4.2.0

84.1. Deletion of parameter displayConnectionCirclesInPreview

The parameter displayConnectionCirclesInPreview has been deleted. Now, in the usercard preview, users see which recipient entities are connected via badges: a blue badge if connected and a gray one if not. This feature is provided for all users, so you have to remove this parameter from the web-ui.json configuration file.

84.2. Configuration files of the realtime users screen

Previously, to configure this screen, the json file had to fit the following structure:

{
  "realTimeScreens": [
    {
      "screenName": <Name of the screen>,
      "screenColumns": [
        {
          "entitiesGroups": [
            {
              "name": <Name of a group of entities>,
              "entities": [
                <entity1 id>,
                <entity2 id>
              ]
            },
            {
              "name": <Name of another group of entities>,
              "entities": [
                <entity3 id>,
                <entity4 id>
              ]
            }
          ]
        }
      ]
    }
  ]
}

Entities should no longer be grouped in the configuration file. Instead, entities are grouped by a shared parent entity. The configuration file needs to be changed to the following structure:

{
  "realTimeScreens": [
    {
      "screenName": <Name of the screen>,
      "screenColumns": [
        {
          "entitiesGroups": ["<parent entity 1 id>","<parent entity 2 id>"]
        }
      ]
    }
  ]
}

With this structure, parent entity 1 and parent entity 2 need to be declared as entities, and the names of the parent entities are used as the group names.

84.3. Updating the entity roles in the database

In this release the "roles" attribute has been added to the entity object. It defines the purpose of an entity, and it absorbs and broadens the logic of the "entityAllowedToSendCard" attribute, which becomes deprecated. To update the entities in your mongo database, you can execute the following script located in OF_HOME/src/tooling/migration-entity-roles:

./launchMigration.sh <IP or DNSNameMongoDB> <portMongoDB> <loginMongoDB> <passwordMongoDB>

This script will assign the roles "ACTIVITY_AREA", "CARD_RECEIVER" and "CARD_SENDER" to all entities that have "entityAllowedToSendCard = true", and the role "CARD_RECEIVER" to the others. Please refer to the documentation for the other possible roles and their effects.

After successful execution of the previous script, you should execute the following script to clean up the database by removing the deprecated attribute "entityAllowedToSendCard":

./cleanMongoAfterMigration.sh <IP or DNSNameMongoDB> <portMongoDB> <loginMongoDB> <passwordMongoDB>

84.4. Deletion of deprecated functions in templateGateway/usercardTemplateGateway

The deprecated functions in templateGateway/usercardTemplateGateway no longer exist; refer to the migration documentation for opfab 4.0 to know which functions to use instead.

84.5. Card keepChildCards

The card field keepChildCards is now deprecated. Use the new actions field (a string array) including the "KEEP_CHILD_CARDS" action instead.
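For example, a card previously sent with the fragment (other card fields omitted):

    "keepChildCards" : true

should now be sent with:

    "actions" : ["KEEP_CHILD_CARDS"]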

84.6. Feed process and state filter

New filters have been added to the feed to allow filtering by card process and state. The process filter can be hidden by setting feed.card.hideProcessFilter to true in the web-ui.json config file. When the process filter is hidden, the state filter is hidden as well. It is possible to hide just the state filter by setting feed.card.hideStateFilter to true in the web-ui.json config file.
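For example, to hide only the state filter, web-ui.json would contain (assuming the usual nested json layout for dotted parameter paths):

    {
      "feed": {
        "card": {
          "hideStateFilter": true
        }
      }
    }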

84.7. User settings

A new setting to send emails in plain text instead of HTML has been added to the user settings screen. This setting can be hidden via the "settings.settingsScreen.hiddenSettings" field of the web-ui configuration file by adding the value "emailToPlainText".
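For example, in web-ui.json (nested layout assumed; the array may already contain other hidden settings):

    {
      "settings": {
        "settingsScreen": {
          "hiddenSettings": ["emailToPlainText"]
        }
      }
    }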


1. currently, that means a severity, but in the future it could also be a process id, or anything identifying the signal to be played