OperatorFabric Architecture
1. Introduction
OperatorFabric is a modular, extensible, industrial-strength and field-tested platform for use in electricity, water, and other utility operations.
- System visualization and console integration
- Precise alerting
- Workflow scheduling
- Historian
- Scripting (ex: Python, JavaScript)
Note: workflow scheduling could be addressed either as an internal module or through simplified, standardized (BPMN) integration with external workflow engines; we are still weighing the pros and cons of the two options.
OperatorFabric is part of the LF Energy coalition, a project of The Linux Foundation that supports open source innovation projects within the energy and electricity sectors.
OpFab is an open source platform licensed under the Mozilla Public License v2. The source code is hosted on GitHub in the operatorfabric-core repository.
The aim of this document is to describe the architecture of the solution, first by defining the business concepts it deals with and then showing how this translates into the technical architecture.
2. Business Architecture
OperatorFabric is based on the concept of cards, which contain data about events that are relevant to the operator. A third-party tool publishes cards, and the cards are displayed on the operators' screens. Depending on the type of card, the operator can send information back to the third party via a "response card".
2.1. Business components

To do this job, the following business components are defined:
- Card Publication: receives the cards from third-party tools or users
- Card Consultation: delivers the cards to the operators and provides access to all cards exchanged (archives)
- Card rendering and process definition: stores the information needed for card rendering (templates, internationalization, …) and a light description of the associated process (states, response cards, …). This configuration data can be provided either by an administrator or by a third-party tool.
- User Management: manages users, groups and entities.
2.2. Business objects
The business objects can be represented as follows:

- Card: the core business object, containing the data to show to the user (or operator)
- Publisher: the third party which publishes or receives cards
- User: the operator receiving cards and responding via response cards
- Group: a group (containing a list of users)
- Entity: an entity (containing a list of users)
- Process: the process the card deals with
- State: the step in the process
- Card Rendering: data for card rendering
3. Technical Architecture
The architecture is based on independent modules. All business services are accessible via REST APIs.

3.1. Business components
We find here the business components seen before:
- A "UI" component stores the static pages and the UI code that is downloaded by the browser. The UI is based on Angular and uses Handlebars for card templating.
- The business component named "Card rendering and process definition" is known at the technical level as the "Third service". This service receives card rendering and process definitions as a bundle: a tar.gz file containing
  - a json process configuration file (containing states & actions)
  - templates for rendering
  - stylesheets
  - internationalization information
Except for the UI, which is based on Angular, all business components are based on Spring Boot and packaged via Docker.
Spring WebFlux is used to deliver the cards in a fluid, non-blocking way.
3.2. Technical components
3.2.1. Gateway
It provides a filtered view of the APIs and serves static pages for external access through browsers or other HTTP-compliant clients, and provides the routing for accessing the services from outside. It is an nginx server packaged with Docker; this component contains the Angular UI component.
3.2.2. Broker
The broker is used to share information asynchronously across all services. It is implemented with RabbitMQ.
3.2.3. Authentication
The architecture provides a default authentication service via Keycloak, but authentication can be delegated to an external provider. Authentication is done through OAuth2; three flows are supported: implicit, authorization code and password.
3.2.4. Database
The cards are stored in a MongoDb database. The bundles are stored in a file system.
OperatorFabric Getting Started
4. Prerequisites
To use OperatorFabric, you need a Linux OS with the following:
- Docker installed with 4 GB of disk space
- 16 GB of RAM minimum, 32 GB recommended
5. Install and run server
To start OperatorFabric, first clone the getting-started git repository:
git clone https://github.com/opfab/operatorfabric-getting-started.git
Launch startserver.sh in the server directory. You need to wait for all the services to start (it usually takes about a minute); startup is done when no more logs are written on the output (logging may continue, but slowly).
Test the connection to the UI: open the following page in a browser: localhost:2002/ui/ and use tso1-operator as login and test as password.
If you are not accessing the server from localhost, there is a bug with the authentication redirection. You must use the following URL, replacing SERVER_IP with the IP address of your server:
http://SERVER_IP:89/auth/realms/dev/protocol/openid-connect/auth?response_type=code&client_id=opfab-client&redirect_uri=http://SERVER_IP:2002/ui/
After connection, you should see the following screen

To stop the server, use:
docker-compose down &
6. Examples
For each example, useful files and scripts are in the directory client/exampleX.
All examples assume you connect to the server from localhost (otherwise, adapt the provided scripts).
6.1. Example 1: Send and update a basic card
Go to directory client/example1 and send a card:
curl -X POST http://localhost:2102/cards -H "Content-type:application/json" --data @card.json
or use the provided script
./sendCard.sh card.json
The result should be a 200 HTTP status and a json object such as:
{"count":1,"message":"All pushedCards were successfully handled"}
See the result in the UI: you should see a card; if you click on it, you will see its detail.

6.1.1. Anatomy of the card
A card contains information about the publisher, the recipients, the process, the data to show, etc.
More information can be found in the Card Structure section of the reference documentation.
{
"publisher" : "message-publisher",
"publisherVersion" : "1",
"process" :"defaultProcess",
"processId" : "hello-world-1",
"state" : "messageState",
"recipient" : {
"type" : "GROUP",
"identity" : "TSO1"
},
"severity" : "INFORMATION",
"startDate" : 1553186770681,
"summary" : {"key" : "defaultProcess.summary"},
"title" : {"key" : "defaultProcess.title"},
"data" : {"message" :"Hello World !!! That's my first message"}
}
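Before posting a card, it can be worth checking locally that the file is well-formed JSON; a minimal sketch using the standard python3 json.tool module (the card content mirrors the example above):

```shell
# Recreate the example card and validate it locally before POSTing it
cat > card.json <<'EOF'
{
  "publisher": "message-publisher",
  "publisherVersion": "1",
  "process": "defaultProcess",
  "processId": "hello-world-1",
  "state": "messageState",
  "recipient": { "type": "GROUP", "identity": "TSO1" },
  "severity": "INFORMATION",
  "startDate": 1553186770681,
  "summary": { "key": "defaultProcess.summary" },
  "title": { "key": "defaultProcess.title" },
  "data": { "message": "Hello World !!! That's my first message" }
}
EOF
# json.tool exits non-zero on malformed JSON, catching typos before the upload
python3 -m json.tool card.json > /dev/null && echo "card.json is valid JSON"
```

This only checks syntax; field semantics (unknown publisher, bad state, …) are still validated server side.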
6.1.2. Update the card
We can send a new version of the card (cardUpdate.json), changing:
- the message (field data.message in the JSON file)
- the severity (field severity in the JSON file)
{
"publisher" : "message-publisher",
"publisherVersion" : "1",
"process" :"defaultProcess",
"processId" : "hello-world-1",
"state" : "messageState",
"recipient" : {
"type" : "GROUP",
"identity" : "TSO1"
},
"severity" : "ALARM",
"startDate" : 1553186770681,
"summary" : {"key" : "defaultProcess.summary"},
"title" : {"key" : "defaultProcess.title"},
"data" : {"message" :"That's my second message"}
}
You can send the updated card with:
./sendCard.sh cardUpdate.json
The card should be updated on the UI.
6.2. Example 2: Publish a new bundle
The way the card is displayed in the UI is defined via a bundle containing templates and a process description.
The bundle structure is the following:
├── css : stylesheet files
├── i18n : internationalization files
├── template :
│   ├── en : handlebars templates for card detail rendering
│   ├── ....
└── config.json : process description and global configuration
The bundle is provided in the bundle directory of example2. It contains a new version of the bundle used in example1.
We just change the template and the stylesheet: instead of displaying
Message : The message
we display:
You received the following message The message
If you look at the template file (template/en/template.handlebars):
<h2> You received the following message </h2>
{{card.data.message}}
In the stylesheet css/style.css we just change the color value to red (#ff0000):
h2{
color:#ff0000;
font-weight: bold;
}
The global configuration is defined in config.json :
{
"name":"message-publisher",
"version":"2",
"templates":["template"],
"csses":["style"],
"processes" : {
"defaultProcess" : {
"states":{
"messageState" : {
"details" : [{
"title" : { "key" : "defaultProcess.title"},
"templateName" : "template",
"styles" : [ "style.css" ]
}]
}
}
}
}
}
To keep the old bundle, we create a new version by setting the version field to "2".
6.2.1. Package your bundle
Your bundle needs to be packaged in a tar.gz file; a script is available:
./packageBundle.sh
A file named bundle.tar.gz will be created.
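For reference, the packaging presumably amounts to something close to the following (an assumption about packageBundle.sh, not its actual content); the important point is that config.json and the folders sit at the root of the archive, not inside a wrapper directory:

```shell
# Stand-in bundle layout, then package it the way the Thirds service expects.
# NOTE: tar from *inside* the directory so config.json is at the archive root.
mkdir -p mybundle/css mybundle/template/en
echo '{"name":"message-publisher","version":"2"}' > mybundle/config.json
( cd mybundle && tar -czf ../bundle.tar.gz . )
tar -tzf bundle.tar.gz > /dev/null && echo "bundle.tar.gz looks valid"
```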
6.2.2. Get a Token
To send the bundle you need to be authenticated. To get a token you can source the provided script:
source ./getToken.sh
This will run the following command:
curl -s -X POST -d "username=admin&password=test&grant_type=password&client_id=opfab-client" http://localhost:2002/auth/token
This should return a JSON response like this:
{"access_token":"eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICJSbXFOVTNLN0x4ck5SRmtIVTJxcTZZcTEya1RDaXNtRkw5U2NwbkNPeDBjIn0.eyJqdGkiOiIzZDhlODY3MS1jMDhjLTQ3NDktOTQyOC1hZTdhOTE5OWRmNjIiLCJleHAiOjE1NzU1ODQ0NTYsIm5iZiI6MCwiaWF0IjoxNTc1NTQ4NDU2LCJpc3MiOiJodHRwOi8va2V5Y2xvYWs6ODA4MC9hdXRoL3JlYWxtcy9kZXYiLCJhdWQiOiJhY2NvdW50Iiwic3ViIjoiYTNhM2IxYTYtMWVlYi00NDI5LWE2OGItNWQ1YWI1YjNhMTI5IiwidHlwIjoiQmVhcmVyIiwiYXpwIjoib3BmYWItY2xpZW50IiwiYXV0aF90aW1lIjowLCJzZXNzaW9uX3N0YXRlIjoiODc3NzZjOTktYjA1MC00NmQxLTg5YjYtNDljYzIxNTQyMDBhIiwiYWNyIjoiMSIsInJlYWxtX2FjY2VzcyI6eyJyb2xlcyI6WyJvZmZsaW5lX2FjY2VzcyIsInVtYV9hdXRob3JpemF0aW9uIl19LCJyZXNvdXJjZV9hY2Nlc3MiOnsiYWNjb3VudCI6eyJyb2xlcyI6WyJtYW5hZ2UtYWNjb3VudCIsIm1hbmFnZS1hY2NvdW50LWxpbmtzIiwidmlldy1wcm9maWxlIl19fSwic2NvcGUiOiJlbWFpbCBwcm9maWxlIiwic3ViIjoiYWRtaW4iLCJlbWFpbF92ZXJpZmllZCI6ZmFsc2UsInByZWZlcnJlZF91c2VybmFtZSI6ImFkbWluIn0.XMLjdOJV-A-iZrtq7sobcvU9XtJVmKKv9Tnv921PjtvJ85CnHP-qXp2hYf5D8TXnn32lILVD3g8F9iXs0otMAbpA9j9Re2QPadwRnGNLIzmD5pLzjJ7c18PWZUVscbaqdP5PfVFA67-j-YmQBwxiys8psF8keJFvmg-ExOGh66lCayClceQaUUdxpeuKFDxOSkFVEJcVxdelFtrEbpoq0KNPtYk7vtoG74zO3KjNGrzLkSE_e4wR6MHVFrZVJwG9cEPd_dLGS-GmkYjB6lorXPyJJ9WYvig56CKDaFry3Vn8AjX_SFSgTB28WkWHYZknTwm9EKeRCsBQlU6MLe4Sng","expires_in":36000,"refresh_expires_in":1800,"refresh_token":"eyJhbGciOiJIUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICIzZjdkZTM0OC05N2Q5LTRiOTUtYjViNi04MjExYTI3YjdlNzYifQ.eyJqdGkiOiJhZDY4ODQ4NS1hZGE0LTQwNWEtYjQ4MS1hNmNkMTM2YWY0YWYiLCJleHAiOjE1NzU1NTAyNTYsIm5iZiI6MCwiaWF0IjoxNTc1NTQ4NDU2LCJpc3MiOiJodHRwOi8va2V5Y2xvYWs6ODA4MC9hdXRoL3JlYWxtcy9kZXYiLCJhdWQiOiJodHRwOi8va2V5Y2xvYWs6ODA4MC9hdXRoL3JlYWxtcy9kZXYiLCJzdWIiOiJhM2EzYjFhNi0xZWViLTQ0MjktYTY4Yi01ZDVhYjViM2ExMjkiLCJ0eXAiOiJSZWZyZXNoIiwiYXpwIjoib3BmYWItY2xpZW50IiwiYXV0aF90aW1lIjowLCJzZXNzaW9uX3N0YXRlIjoiODc3NzZjOTktYjA1MC00NmQxLTg5YjYtNDljYzIxNTQyMDBhIiwicmVhbG1fYWNjZXNzIjp7InJvbGVzIjpbIm9mZmxpbmVfYWNjZXNzIiwidW1hX2F1dGhvcml6YXRpb24iXX0sInJlc291cmNlX2FjY2VzcyI6eyJhY2NvdW50Ijp7InJvbGVzIjpbIm1hbmFnZS1hY2NvdW50IiwibWFuYWdlLWFjY2
91bnQtbGlua3MiLCJ2aWV3LXByb2ZpbGUiXX19LCJzY29wZSI6ImVtYWlsIHByb2ZpbGUifQ.sHskPtatqlU9Z8Sfq6yvzUP_L6y-Rv26oPpykyPgzmk","token_type":"bearer","not-before-policy":0,"session_state":"87776c99-b050-46d1-89b6-49cc2154200a","scope":"email profile"}
Your token is the access_token value in the JSON, which the script exports to a $token environment variable. The sendBundle.sh script below will use this variable.
The token is valid for 10 hours; after that you will need to request a new one.
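If you want to extract the token in your own scripts, one possibility (a sketch, not the actual content of getToken.sh; jq would be cleaner if it is installed) is a small sed expression over the response:

```shell
# Shortened, fake /auth/token response used for illustration only
response='{"access_token":"eyJhbGciOi.fake.payload","expires_in":36000,"token_type":"bearer"}'
# Extract the access_token value and export it, as getToken.sh does
token=$(echo "$response" | sed -E 's/.*"access_token":"([^"]+)".*/\1/')
export token
echo "$token"   # → eyJhbGciOi.fake.payload
```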
6.2.3. Send the bundle
Executing the sendBundle.sh script will send the bundle:
./sendBundle.sh
You should receive the following JSON response, describing your bundle:
{"name":"message-publisher","version":"2","templates":["template"],"csses":["style"],"i18nLabelKey":null,"processes":{"defaultProcess":{"statesData":{"messageState":{"detailsData":[{"title":{"key":"defaultProcess.title","parameters":null},"titleStyle":null,"templateName":"template","styles":null}],"actionsData":null,"details":[{"title":{"key":"defaultProcess.title","parameters":null},"titleStyle":null,"templateName":"template","styles":null}],"actions":null}},"states":{"messageState":{"detailsData":[{"title":{"key":"defaultProcess.title","parameters":null},"titleStyle":null,"templateName":"template","styles":null}],"actionsData":null,"details":[{"title":{"key":"defaultProcess.title","parameters":null},"titleStyle":null,"templateName":"template","styles":null}],"actions":null}}}},"menuEntries":null}
6.2.4. Send a card
You can send the following card to test your new bundle:
{
"publisher" : "message-publisher",
"publisherVersion" : "2",
"process" :"defaultProcess",
"processId" : "hello-world-1",
"state": "messageState",
"recipient" : {
"type" : "GROUP",
"identity" : "TSO1"
},
"severity" : "INFORMATION",
"startDate" : 1553186770681,
"summary" : {"key" : "defaultProcess.summary"},
"title" : {"key" : "defaultProcess.title"},
"data" : {"message":"Hello world in new version"}
}
To use the new bundle, we set publisherVersion to "2".
To send the card:
./sendCard.sh
You should see in the UI the detail card with the new template.
6.3. Example 3: Process with state
For this example, we will model the following process:
- Step 1: A critical situation arises on the High Voltage grid
- Step 2: The critical situation evolves
- Step 3: The critical situation ends
To model this process in OperatorFabric, we use a "Process" with "States", described in the config.json of the bundle:
{
"name":"alert-publisher",
"version":"1",
"templates":["criticalSituationTemplate","endCriticalSituationTemplate"],
"csses":["style"],
"processes" : {
"criticalSituation" : {
"states":{
"criticalSituation-begin" : {
"details" : [{
"title" : { "key" : "criticalSituation-begin.title"},
"templateName" : "criticalSituationTemplate",
"styles" : [ "style.css" ]
}]
},
"criticalSituation-update" : {
"details" : [{
"title" : { "key" : "criticalSituation-update.title"},
"templateName" : "criticalSituationTemplate",
"styles" : [ "style.css" ]
}]
},
"criticalSituation-end" : {
"details" : [{
"title" : { "key" : "criticalSituation-end.title"},
"templateName" : "endCriticalSituationTemplate",
"styles" : [ "style.css" ]
}]
}
}
}
}
}
You can see in the JSON that we define a process named "criticalSituation" with 3 states: criticalSituation-begin, criticalSituation-update and criticalSituation-end. For each state we define a title for the card, and the template and stylesheets to use.
The title is a key referring to an i18n entry in the corresponding i18n file:
{
"criticalSituation-begin":{
"title":"CRITICAL SITUATION",
"summary":" CRITICAL SITUATION ON THE GRID, SEE DETAIL FOR INSTRUCTION"
},
"criticalSituation-update":{
"title":"CRITICAL SITUATION - UPDATE",
"summary":" CRITICAL SITUATION ON THE GRID, SEE DETAIL FOR INSTRUCTION"
},
"criticalSituation-end":{
"title":"CRITICAL SITUATION - END",
"summary":" CRITICAL SITUATION ENDED"
}
}
The templates can be found in the template directory.
We can now send cards and simulate the process. First, we send a card at the beginning of the critical situation:
{
"publisher" : "alert-publisher",
"publisherVersion" : "1",
"process" :"criticalSituation",
"processId" : "alert1",
"state": "criticalSituation-begin",
"recipient" : {
"type" : "GROUP",
"identity" : "TSO1"
},
"severity" : "ALARM",
"startDate" : 1553186770681,
"summary" : {"key" : "criticalSituation-begin.summary"},
"title" : {"key" : "criticalSituation-begin.title"},
"data" : {"instruction":"Critical situation on the grid : stop immediately all maintenance on the grid"}
}
The card refers to the process "criticalSituation" as defined in config.json; the state attribute is set to "criticalSituation-begin", the first step of the process, again as defined in config.json. The card can be sent via the provided script:
./sendCard.sh card.json
Two other cards are provided to continue the process:
- cardUpdate.json: the state is criticalSituation-update
- cardEnd.json: the state is criticalSituation-end and the severity is set to "information"
You can send these cards:
./sendCard.sh cardUpdate.json
./sendCard.sh cardEnd.json
6.4. Example 4: Time Line
To view the card in the timeline, you need to set times in the card using the timeSpans attribute, as in the following card:
{
"publisher" : "scheduledMaintenance-publisher",
"publisherVersion" : "1",
"process" :"maintenanceProcess",
"processId" : "maintenance-1",
"state": "planned",
"recipient" : {
"type" : "GROUP",
"identity" : "TSO1"
},
"severity" : "INFORMATION",
"startDate" : 1553186770681,
"summary" : {"key" : "maintenanceProcess.summary"},
"title" : {"key" : "maintenanceProcess.title"},
"data" : {
"operationDescription":"Maintenance operation on the International France England (IFA) High Voltage line",
"operationResponsible":"RTE",
"contactPoint":"By Phone : +33 1 23 45 67 89 ",
"comment":"Operation has no impact on service"
},
"timeSpans" : [
{"start" : 1576080876779},
{"start" : 1576104912066}
]
}
For this example, we use a new publisher called "scheduledMaintenance-publisher". You won’t need to post the corresponding bundle to the Thirds service: it has been loaded in advance to be available out of the box (only for the getting started). If you want to look at its content, you can find it under server/thirds-storage/scheduledMaintenance-publisher/1.
Before sending the provided card, you need to set appropriate time values as epoch milliseconds in the json. Each value you set produces a point on the timeline. In our example, the first point represents the beginning of the maintenance operation, and the second its end.
To get the dates in Epoch, you can use the following commands:
For the first date:
date -d "+ 600 minutes" +%s%N | cut -b1-13
And for the second
date -d "+ 900 minutes" +%s%N | cut -b1-13
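The date -d syntax above is GNU-specific; the same two values can be computed with portable shell arithmetic (600 and 900 minutes from now, in epoch milliseconds):

```shell
# Epoch milliseconds for "now + 600 minutes" and "now + 900 minutes"
now=$(date +%s)                        # seconds since epoch (portable)
start1=$(( (now + 600 * 60) * 1000 ))  # first timeline point
start2=$(( (now + 900 * 60) * 1000 ))  # second timeline point
echo "$start1 $start2"
```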
To send the card, use the provided script in the example4 directory:
./sendCard.sh card.json
A second card (card2.json) is provided as an example; again, you need to set the time values in the json file and then send it:
./sendCard.sh card2.json
This time the severity of the card is ALERT; you should see its point in red on the timeline.

7. Troubleshooting
7.1. My bundle is not loaded
The server sends {"status":"BAD_REQUEST","message":"unable to open submitted file","errors":["Error detected parsing the header"]}, despite correct http headers.
The uploaded bundle is corrupted. Test your bundle in a terminal (Linux solution).
Example for a bundle archive named MyBundleToTest.tar.gz giving the mentioned error when uploaded:
tar -tzf MyBundleToTest.tar.gz >/dev/null
tar: This does not look like a tar archive
tar: Skipping to next header
tar: Exiting with failure status due to previous errors
7.2. I can’t upload my bundle
The server responds with a message like the following:
{"status":"BAD_REQUEST","message":"unable to open submitted file","errors":["Input is not in the .gz format"]}
The bundle has been compressed using an unsupported format.
7.2.1. Format verification
7.2.1.1. Linux solution
Command line example to verify the format of a bundle archive named MyBundleToTest.tar.gz (which gives the mentioned error when uploaded):
tar -tzf MyBundleToTest.tar.gz >/dev/null
which should in that case return the following messages:
gzip: stdin: not in gzip format
tar: Child returned status 1
tar: Error is not recoverable: exiting now
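You can also detect the problem before any upload with gzip -t, which checks the gzip magic bytes directly. A sketch reproducing the mistake (tar -c without -z produces a plain tar despite the .gz name):

```shell
# Build a deliberately wrong "bundle": tar'ed but not gzip-compressed
echo '{}' > config.json
tar -cf wrong.tar.gz config.json   # -c without -z: the .gz suffix is misleading
# gzip -t tests integrity; it fails because the gzip magic bytes are missing
gzip -t wrong.tar.gz 2>/dev/null || echo "not in gzip format"
```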
7.3. My bundle is rejected due to internal structure
The server sends {"status":"BAD_REQUEST","message":"Incorrect inner file structure","errors":["$OPERATOR_FABRIC_INSTANCE_PATH/d91ba68c-de6b-4635-a8e8-b58fff77dfd2/config.json (Aucun fichier ou dossier de ce type)"]}, where $OPERATOR_FABRIC_INSTANCE_PATH is the folder where thirds files are stored server side ("Aucun fichier ou dossier de ce type" is the French locale message for "No such file or directory").
7.4. No template display
The server sends a 404 for the requested template, with a response like:
{"status":"NOT_FOUND","message":"The specified resource does not exist","errors":["$OPERATOR_FABRIC_INSTANCE_PATH/thirds-storage/BUNDLE_TEST/1/template/fr/template1.handlebars (Aucun fichier ou dossier de ce type)"]}
7.4.1. Verification
The previous server response is returned for a request like:
http://localhost:2002/thirds/BUNDLE_TEST/templates/template1?locale=fr&version=1
The bundle lacks the localized folder for the requested locale.
If you have access to the card-publication micro-service source code, you should list the content of $CARDS_PUBLICATION_PROJECT/build/docker-volume/third-storage
OperatorFabric Reference Documentation
The aim of this document is to:
- Explain what OperatorFabric is about and define the concepts it relies on
- Give a basic tour of its features from a user perspective
- Describe the technical implementation that makes it possible
8. Introduction
To perform their duties, an operator has to interact with multiple applications (perform actions, watch for alerts, etc.), which can prove difficult if there are too many of them.
The idea is to aggregate all the notifications from all these applications into a single screen, and to allow the operator to act on them if needed.

These notifications are materialized by cards sorted in a feed according to their period of relevance and their severity. When a card is selected in the feed, the right-hand pane displays the details of the card: information about the state of the parent process instance in the third-party application that published it, available actions, etc.
In addition, the cards will also translate as events displayed on a timeline (its design is still under discussion) at the top of the screen. This view will be complementary to the card feed in that it will allow the operator to see at a glance the status of processes for a given period, whereas the feed is more like a "To Do" list.
Part of the value of OperatorFabric is that it makes the integration very simple on the part of the third-party applications. To start publishing cards to users in an OperatorFabric instance, all they have to do is:
- Register as a publisher through the "Thirds" service and provide a "bundle" containing handlebars templates defining how cards should be rendered, i18n info, etc.
- Publish cards as json containing card data through the card publication API
OperatorFabric will then:
- Dispatch the cards to the appropriate users (by computing the actual users who should receive the card from the recipient rules defined in the card)
- Take care of the rendering of the cards, displaying details, actions, inputs, etc.
- Display relevant information from the cards in the timeline
Another aim of OperatorFabric is to make cooperation easier by letting operators forward or send cards to other operators, for example:
- If they need an input from another operator
- If they can’t handle a given card for lack of time or because the necessary action is out of their scope
This will replace phone calls or emails, making cooperation more efficient and traceable.
For instance, operators might be interested in knowing why a given decision was made in the past: the cards detailing the decision process steps will be accessible through the Archives screen, showing how the operators reached this agreement.
11. Thirds service
As stated above, third-party applications (or "thirds" for short) interact with OperatorFabric by sending cards. The Thirds service allows them to tell OperatorFabric:
- how these cards should be rendered
- what actions should be made available to the operators regarding a given card
- if several languages are supported, how cards should be translated
In addition, it lets third-party applications define additional menu entries for the navbar (for example linking back to the third-party application) that can be integrated either as iframes or as external links.
11.1. Declaring a Third Party Service
This section explains Third Party Service configuration.
The third party service configuration is declared using a bundle, described below. Once this bundle is created, it must be uploaded to the server, which applies the configuration for subsequent web UI calls.
Configuration is explained through examples before a more technical review of the details. The following instructions describe tests to perform on OperatorFabric to understand how its customization works. The card data used here are sent automatically using a script as described here .
11.1.1. Requirements
These examples are played in an environment where an OperatorFabric instance (all micro-services) is running along with a MongoDB database and a RabbitMQ instance.
11.1.2. Bundle
Third bundles customize the way third card details are displayed. These tar.gz archives contain a descriptor file named config.json and, optionally, some css files, i18n files and handlebars templates.
For didactic purposes, in this section the third name is BUNDLE_TEST (to match the parameters used by the script). This bundle is localized for en and fr.
As detailed in the Third core service README, the bundle contains at least a metadata file called config.json, a css folder, an i18n folder and a template folder. All elements except the config.json file are optional.
The files of this example are organized as below:
bundle
├── config.json
├── css
│   └── bundleTest.css
├── i18n
│   ├── en.json
│   └── fr.json
└── template
    ├── en
    │   ├── template1.handlebars
    │   └── template2.handlebars
    └── fr
        ├── template1.handlebars
        └── template2.handlebars
To summarize, there are 5 directories and 8 files.
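To experiment, you can recreate this skeleton with empty placeholder files and verify the counts:

```shell
# Recreate the example bundle skeleton with empty placeholder files
mkdir -p bundle/css bundle/i18n bundle/template/en bundle/template/fr
touch bundle/config.json bundle/css/bundleTest.css
touch bundle/i18n/en.json bundle/i18n/fr.json
touch bundle/template/en/template1.handlebars bundle/template/en/template2.handlebars
touch bundle/template/fr/template1.handlebars bundle/template/fr/template2.handlebars
# Count sub-directories and files (the bundle root itself excluded)
dirs=$(find bundle -mindepth 1 -type d | wc -l | tr -d ' ')
files=$(find bundle -type f | wc -l | tr -d ' ')
echo "$dirs directories, $files files"   # → 5 directories, 8 files
```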
11.1.2.1. The config.json file
It’s a description file in json format. It lists the content of the bundle.
example
{
  "name": "BUNDLE_TEST",
  "version": "1",
  "csses": [ "bundleTest" ],
  "i18nLabelKey": "third-name-in-menu-bar",
  "menuEntries": [
    { "id": "uid test 0", "url": "https://opfab.github.io/whatisopfab/", "label": "first-menu-entry" },
    { "id": "uid test 0", "url": "https://www.lfenergy.org/", "label": "b-menu-entry" },
    { "id": "uid test 1", "url": "https://github.com/opfab/opfab.github.io", "label": "the-other-menu-entry" }
  ],
  "processes": {
    "simpleProcess": {
      "start": {
        "details": [
          {
            "title": { "key": "start.first.title" },
            "titleStyle": "startTitle text-danger",
            "templateName": "template1"
          }
        ],
        "actions": {
          "finish": {
            "type": "URL",
            "url": "http://somewher.org/simpleProcess/finish",
            "lockAction": true,
            "called": false,
            "updateStateBeforeAction": false,
            "hidden": true,
            "buttonStyle": "buttonClass",
            "label": { "key": "my.card.my.action.label" }
          }
        }
      },
      "end": {
        "details": [
          {
            "title": { "key": "end.first.title" },
            "titleStyle": "startTitle text-info",
            "templateName": "template2",
            "styles": [ "bundleTest.css" ]
          }
        ]
      }
    }
  }
}
- name: third name;
- version: enables correct rendering of cards, even old ones, as all versions are stored by the server. Your card has a version field that is matched against the third configuration for correct rendering;
- processes: lists the available processes and their possible states; actions and templates are associated with states;
- csses: the list of css files;
- i18nLabelKey: the third name in the main menu bar; optional, used if the third service adds one or several entries in the OperatorFabric main menu bar (see the menu entries section for details);
- menuEntries: extra menu entries; optional, see below for the declaration format of the objects of this array, and the menu entries section for details.
The mandatory declarations are the name and version attributes.
See the Thirds API documentation for details.
11.1.2.2. i18n
There are two ways of i18n for third service. The first one is done using l10n files which are located in the i18n
folder, the second one throughout l10n name folder nested in the template
folder.
The i18n
folder contains one json file per l10n.
These localisation is used for integration of the third service into OperatorFabric, i.e. the label displayed for the third service, the label displayed for each tab of the details of the third card, the label of the actions in cards if any or the additional third entries in OperatorFabric(more on that at the chapter ????).
Template folder
The template folder must contain a localized folder for the i18n of the card details. This is why in our example, as the bundle is localized for the en and fr languages, the template folder contains an en and a fr folder.
If there is no i18n file or a key is missing, the i18n key itself is displayed in OperatorFabric.
The choice of i18n keys is left to the Third service maintainer. The keys are referenced in the following places:
- config.json file:
  - i18nLabelKey: key used for the label of the third service displayed in the main menu bar of OperatorFabric;
  - label of a menu entry declaration: key used to l10n the menu entries declared by the Third party in the bundle;
- card data: the values of card title and card summary refer to i18n keys, as does the key attribute in the card detail section of the card data.
example
So in this example the third service is named Bundle Test, with the technical name BUNDLE_TEST. The bundle provides an English and a French l10n.
The example bundle defines a menu giving access to 3 entries. The title and the summary have to be localized, as do the 2 tab titles.
The name of the third service as displayed in the main menu bar of OperatorFabric has the key "third-name-in-menu-bar". The English l10n is Bundle Test and the French one is Bundle de test.
A name for the three entries in the third entry menu: their keys are, in order, "first-menu-entry", "b-menu-entry" and "the-other-menu-entry", localized in English as Entry One, Entry Two and Entry Three, and in French as Entrée une, Entrée deux and Entrée trois.
The title for the card and its summary: as the cards used here are generated by the script of the cards-publication project, we have to use the keys declared there. They are respectively process.title and process.summary, with the following English l10ns: Card Title and Card short description, and French l10ns: Titre de la carte and Courte description de la carte.
A title for each of the two tabs of the card details: as for the card title and summary, those keys are already defined by the test script. They are "process.detail.tab.first" and "process.detail.tab.second". For the English l10n, the values are First Detail List and Second Detail List; for the French l10n, Première liste de détails and Seconde liste de détails.
Here is the content of en.json:
{
  "third-name-in-menu-bar": "Bundle Test",
  "first-menu-entry": "Entry One",
  "b-menu-entry": "Entry Two",
  "the-other-menu-entry": "Entry Three",
  "process": {
    "title": "Card Title",
    "summary": "Card short description",
    "detail": {
      "tab": {
        "first": "First Detail List",
        "second": "Second Detail List"
      }
    }
  }
}
Here is the content of fr.json:
{
  "third-name-in-menu-bar": "Bundle de test",
  "first-menu-entry": "Entrée une",
  "b-menu-entry": "Entrée deux",
  "the-other-menu-entry": "Entrée trois",
  "process": {
    "title": "Titre de la carte",
    "summary": "Courte description de la carte",
    "detail": {
      "tab": {
        "first": "Première liste de détails",
        "second": "Deuxième liste de détails"
      }
    }
  }
}
Once the bundle is correctly uploaded, you can verify that the i18n data has been correctly uploaded by using the GET method of the third API for the i18n file.
The endpoint is described here .
The locale language, the version of the bundle and the technical name of the third party are needed to get the json in the response.
To verify the French l10n data of version 1 of the BUNDLE_TEST third party, we can use the following command line:
curl -X GET "http://localhost:2100/thirds/BUNDLE_TEST/i18n?locale=fr&version=1" -H "accept: application/json"
The service responds with a 200 status and with the json corresponding to the fr.json file defined previously, shown below.
{
  "third-name-in-menu-bar": "Bundle de test",
  "first-menu-entry": "Entrée une",
  "b-menu-entry": "Entrée deux",
  "the-other-menu-entry": "Entrée trois",
  "tests": {
    "title": "Titre de la carte",
    "summary": "Courte description de la carte",
    "detail": {
      "tab": {
        "first": "Première liste de détails",
        "second": "Deuxième liste de détails"
      }
    }
  }
}
Menu Entries
Those elements are declared in the config.json file of the bundle. If there are several items to declare for a third service, a title for the third menu section needs to be declared within the i18nLabelKey attribute; otherwise the first and only menu entry item is used to create an entry in the menu nav bar of OperatorFabric.
This kind of object contains the following attributes:
- id: identifier of the menu entry in the UI;
- url: url opening a new page in a browser tab;
- label: an i18n key used to localize the entry in the UI.
In the following examples, only the part relative to menu entries in the config.json file is detailed; the other parts are omitted and represented with '…'.
Single menu entry
{
  …
  "menuEntries": [{
    "id": "identifer-single-menu-entry",
    "url": "https://opfab.github.io",
    "label": "single-menu-entry-i18n-key"
  }]
}
Several menu entries
Here is a sample with 3 menu entries.
{
  …
  "i18nLabelKey": "third-name-in-menu-navbar",
  "menuEntries": [{
    "id": "firstEntryIdentifier",
    "url": "https://opfab.github.io/whatisopfab/",
    "label": "first-menu-entry"
  }, {
    "id": "secondEntryIdentifier",
    "url": "https://www.lfenergy.org/",
    "label": "second-menu-entry"
  }, {
    "id": "thirdEntryIdentifier",
    "url": "https://opfab.github.io",
    "label": "third-menu-entry"
  }]
}
Processes and States
Processes and their states allow matching a third-party service's process-specific states to a list of templates for card details and actions, allowing specific card rendering for each state of the business process.
The purpose of this section is to display elements of third card data in a custom format.
Regarding card detail customization, all the examples in this section are based on the cards generated by the script existing in the Cards-Publication project. For the examples given here, this script is run with the arguments detailed in the following command line:
$OPERATOR_FABRIC_HOME/services/core/cards-publication/src/main/bin/push_card_loop.sh --publisher BUNDLE_TEST --process tests
where:
- $OPERATOR_FABRIC_HOME is the root folder of OperatorFabric where tests are performed;
- BUNDLE_TEST is the name of the third party;
- tests is the name of the process referred to by published cards.
The process entry in the configuration file is a dictionary of processes, each key maps to a process definition. A process definition is itself a dictionary of states, each key maps to a state definition. A state is defined by:
- a list of details: a detail is a combination of an internationalized title (title), a css class styling element (titleStyle) and a template reference;
- a dictionary of actions: actions are described below.
{
  "type": "URL",
  "url": "http://somewher.org/simpleProcess/finish",
  "lockAction": true,
  "called": false,
  "updateStateBeforeAction": false,
  "hidden": true,
  "buttonStyle": "buttonClass",
  "label": { "key": "my.card.my.action.label" }
}
An action aggregates both the means to trigger an action on the third party and data for rendering the action button:
- type - mandatory: for now, only the URL type is supported:
  - URL: this action triggers a call to an external REST endpoint
- url - mandatory: a template url for URL type actions. This url may be injected with data before the action call; data are specified using curly brackets. Available parameters:
  - processInstance: the name/id of the process instance
  - process: the name of the process
  - state: the state name of the process
  - jwt: the jwt token of the user
  - data.[path]: a path to an object in the card data structure
- hidden: if true, the action won't be visible on the card but will be available to templates
- buttonStyle: css style classes to apply to the action button
- label: an i18n key and parameters used to display a tooltip over the button
- lockAction: not yet implemented
- updateStateBeforeAction: not yet implemented
- called: not yet implemented
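For instance (a purely hypothetical endpoint, shown only to illustrate the curly-bracket injection described above), an action url could be templated with several of these parameters:

```json
{
  "type": "URL",
  "url": "http://third.example/{process}/{processInstance}/{state}/finish?value={data.level1.level1Prop}"
}
```

Before the call, each bracketed parameter would be substituted with the corresponding value of the card.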
For in-depth information on the behavior required of the third party REST endpoints, refer to the Actions service reference.
For demonstration purposes, there will be two simple templates. For more advanced features, go to the section detailing the handlebars templates in general and the helpers available in OperatorFabric.
As the cards used in this example are created above, the bundle template folder needs to contain 2 templates: template1.handlebars and template2.handlebars.
examples of template (i18n versions)
/template/en/template1.handlebars
<h2>Template Number One</h2> <div class="bundle-test">'{{card.data.level1.level1Prop}}'</div>
/template/fr/template1.handlebars
<h2>Patron numéro Un</h2> <div class="bundle-test">'{{card.data.level1.level1Prop}}'</div>
Those templates display a l10n title and a line containing the value of the scope property card.data.level1.level1Prop, which is This is a root property.
/template/en/template2.handlebars
<h2>Second Template</h2> <ul class="bundle-test-list"> {{#each card.data.level1.level1Array}} <li class="bunle-test-list-item">{{this.level1ArrayProp}}</li> {{/each}} </ul>
/template/fr/template2.handlebars
<h2>Second patron</h2> <ul class="bundle-test-list"> {{#each card.data.level1.level1Array}} <li class="bunle-test-list-item">{{this.level1ArrayProp}}</li> {{/each}} </ul>
Those templates also display a l10n title and a list of numeric values from 1 to 3.
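To make the scope explicit, here is a sketch of the card data shape those two templates walk over (the values are assumptions inferred from the properties referenced above, not the exact output of the test script):

```json
{
  "data": {
    "level1": {
      "level1Prop": "This is a root property",
      "level1Array": [
        { "level1ArrayProp": 1 },
        { "level1ArrayProp": 2 },
        { "level1ArrayProp": 3 }
      ]
    }
  }
}
```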
This folder contains regular css files. The file names must be declared in the config.json file in order to be used in the templates and applied to them. As above, all parts of the files irrelevant to our example are symbolised by a '…' character.
Declaration of css files in config.json
file
{ … "csses":["bundleTest"] … }
CSS Class used in ./template/en/template1.handlebars
… <div class="bundle-test">'{{card.data.level1.level1Prop}}'</div> …
As seen above, the value of {{card.data.level1.level1Prop}} of a test card is This is a level1 property.
Style declaration in ./css/bundleTest.css
h2 { color: #fd9312; font-weight: bold; }
Expected result
Upload
For this, the bundle is submitted to the OperatorFabric server using a POST http method, as described in the Thirds Service API documentation.
Example :
cd $BUNDLE_FOLDER
curl -X POST "http://localhost:2100/thirds" -H "accept: application/json" -H "Content-Type: multipart/form-data" -F "file=@bundle-test.tar.gz;type=application/gzip"
Where:
- $BUNDLE_FOLDER is the folder containing the bundle archive to be uploaded;
- bundle-test.tar.gz is the name of the uploaded bundle.
This command line should return a 200 http status response with the details of the content of the bundle in the response body, such as:
{
  "menuEntriesData": [
    { "id": "uid test 0", "url": "https://opfab.github.io/whatisopfab/", "label": "first-menu-entry" },
    { "id": "uid test 0", "url": "https://www.lfenergy.org/", "label": "b-menu-entry" },
    { "id": "uid test 1", "url": "https://github.com/opfab/opfab.github.io", "label": "the-other-menu-entry" }
  ],
  "name": "BUNDLE_TEST",
  "version": "1",
  "csses": [ "bundleTest" ],
  "i18nLabelKey": "third-name-in-menu-bar",
  "menuEntries": [
    { "id": "uid test 0", "url": "https://opfab.github.io/whatisopfab/", "label": "first-menu-entry" },
    { "id": "uid test 0", "url": "https://www.lfenergy.org/", "label": "b-menu-entry" },
    { "id": "uid test 1", "url": "https://github.com/opfab/opfab.github.io", "label": "the-other-menu-entry" }
  ],
  "processes": {
    "simpleProcess": {
      "start": {
        "details": [
          {
            "title": { "key": "start.first.title" },
            "titleStyle": "startTitle text-danger",
            "templateName": "template1"
          }
        ],
        "actions": {
          "finish": {
            "type": "URL",
            "url": "http://somewher.org/simpleProcess/finish",
            "lockAction": true,
            "called": false,
            "updateStateBeforeAction": false,
            "hidden": true,
            "buttonStyle": "buttonClass",
            "label": { "key": "my.card.my.action.label" }
          }
        }
      },
      "end": {
        "details": [
          {
            "title": { "key": "end.first.title" },
            "titleStyle": "startTitle text-info",
            "templateName": "template2",
            "styles": [ "bundleTest.css" ]
          }
        ]
      }
    }
  }
}
Otherwise please refer to the Troubleshooting section to resolve the problem.
11.2. Bundle Technical overview
See the model section (at the bottom) of the swagger generated documentation for data structure.
11.2.1. Resource serving
11.2.1.1. CSS
CSS 3 style sheets are supported; they allow custom styling of the card template detail. All css selectors must be prefixed by the .detail.template parent selector.
11.2.1.2. Internationalization
Internationalization (i18n) files are json (JavaScript Object Notation) files. One file must be defined per module supported language. See the model section (at the bottom) of the swagger-generated documentation for the data structure.
Sample json i18n file
{
  "emergency": {
    "message": "Emergency situation happened on {{date}}. Cause : {{cause}}.",
    "module": {
      "name": "Emergency Module",
      "description": "The emergency module manages emergencies"
    }
  }
}
i18n messages may include parameters; these parameters are framed with double curly braces.
The bundled json file names must conform to the following pattern: [lang].json
ex:
fr.json en.json de.json
11.2.1.3. Templates
Templates are Handlebars template files. Templates are fuelled with a scope structure composed of:
- a card property (see the card data model for more information)
- a userContext:
  - login: user login
  - token: user jwt token
  - firstName: user first name
  - lastName: user last name
In addition to Handlebars basic syntax and helpers, OperatorFabric defines the following helpers :
numberFormat
formats a number parameter using Intl.NumberFormat (developer.mozilla.org/fr/docs/Web/JavaScript/Reference/Objets_globaux/NumberFormat). The locale used is the current user's selected one, and options are passed as hash parameters (see the Handlebars doc, Literals section).
{{numberFormat card.data.price style="currency" currency="EUR"}}
dateFormat
formats the submitted parameter (milliseconds since epoch) using moment.format. The locale used is the current user's selected one; the format is the "format" hash parameter (see the Handlebars doc, Literals section).
{{dateFormat card.data.birthday format="MMMM Do YYYY, h:mm:ss a"}}
slice
extracts a sub-array from an array
example:
<!-- {"array": ["foo","bar","baz"]} --> <ul> {{#each (slice array 0 2)}} <li>{{this}}</li> {{/each}} </ul>
outputs:
<ul> <li>foo</li> <li>bar</li> </ul>
and
<!-- {"array": ["foo","bar","baz"]} --> <ul> {{#each (slice array 1)}} <li>{{this}}</li> {{/each}} </ul>
outputs:
<ul> <li>bar</li> <li>baz</li> </ul>
now
outputs the current date in milliseconds from epoch. The date is computed from the application's internal time service and thus may differ from the date one can compute from the javascript api, which relies on the browser's system time.
NB: due to a Handlebars limitation, you must provide at least one argument to helpers, otherwise Handlebars will confuse a helper with a variable. In the example below, we simply pass an empty string.
example:
<div>{{now ""}}</div> <br> <div>{{dateFormat (now "") format="MMMM Do YYYY, h:mm:ss a"}}</div>
outputs
<div>1551454795179</div> <br> <div>mars 1er 2019, 4:39:55 pm</div>
for a locale set to FR_fr
preserveSpace
preserves space in parameter string to avoid html standard space trimming.
{{preserveSpace card.data.businessId}}
bool
returns a boolean result value of an arithmetical comparison (including object equality) or boolean operation.
Arguments:
- v1: left value operand
- op: operator (string value)
- v2: right value operand
arithmetical operators:
- ==
- ===
- !=
- !==
- <
- <=
- >
- >=
boolean operators:
- &&
- ||
examples:
{{#if (bool v1 '<' v2)}} v1 is strictly lower than v2 {{else}} v1 is greater than or equal to v2 {{/if}}
math
returns the result of a mathematical operation.
arguments:
- v1: left value operand
- op: operator (string value)
- v2: right value operand
arithmetical operators:
- +
- -
- *
- /
- %
example:
{{math 1 '+' 2}}
split
splits a string into an array based on a split string.
example:
<ul> {{#each (split 'my.example.string' '.')}} <li>{{this}}</li> {{/each}} </ul>
outputs
<ul> <li>my</li> <li>example</li> <li>string</li> </ul>
action
outputs a card action button whose card action id is the concatenation of an arbitrary number of helper arguments.
{{{action "PREREQUISITE_" id}}}
svg
outputs an svg tag with lazy loading and a missing-image replacement message. The image url is the concatenation of an arbitrary number of helper arguments.
{{{svg baseUri scheduledOpId "/" substation "/before/" computationPhaseOrdinal}}}
i18n
outputs an i18n result from a key and some parameters. There are two ways of configuring it:
- Pass an object as sole argument. The object must contain a key field (string) and an optional parameter field (map of parameterKey ⇒ value).
{{i18n card.data.i18nTitle}}
- Pass a string key as sole argument and use hash parameters (see the Handlebars doc, Literals section) for the i18n string parameters.
<!-- emergency.title=Emergency situation happened on {{date}}. Cause : {{cause}}. --> {{i18n "emergency.title" date="2018-06-14" cause="Broken Coffee Machine"}}
outputs
Emergency situation happened on 2018-06-14. Cause : Broken Coffee Machine
sort
sorts an array or an object's properties (first argument), using an optional field name (second argument) to sort the collection on this field's natural order.
If no field argument is provided:
- for an array, the original order of the array is kept;
- for an object, the structure is sorted by the object field names.
<!-- users: {
  "john": { "firstName": "John", "lastName": "Cleese" },
  "graham": { "firstName": "Graham", "lastName": "Chapman" },
  "terryG": { "firstName": "Terry", "lastName": "Gilliam" },
  "eric": { "firstName": "Eric", "lastName": "Idle" },
  "terryJ": { "firstName": "Terry", "lastName": "Jones" },
  "michael": { "firstName": "Michael", "lastName": "Palin" }
} -->
<ul> {{#each (sort users)}} <li>{{this.firstName}} {{this.lastName}}</li> {{/each}} </ul>
outputs :
<ul> <li>Eric Idle</li> <li>Graham Chapman</li> <li>John Cleese</li> <li>Michael Palin</li> <li>Terry Gilliam</li> <li>Terry Jones</li> </ul>
and
<ul> {{#each (sort users "lastName")}} <li>{{this.firstName}} {{this.lastName}}</li> {{/each}} </ul>
outputs :
<ul> <li>Graham Chapman</li> <li>John Cleese</li> <li>Terry Gilliam</li> <li>Eric Idle</li> <li>Terry Jones</li> <li>Michael Palin</li> </ul>
11.2.2. Charts
The Chart.js library is integrated into OperatorFabric, which means it is possible to show charts in cards. You can find a bundle example in the OperatorFabric git repository (src/test/utils/karate/thirds/resources/bundle_test_api).
12. OperatorFabric Users Service
The User service manages users, groups and entities.
- Users: represent account information for a person destined to receive cards in the OperatorFabric instance.
- Groups:
  - represent sets of users destined to receive cards collectively;
  - can be used to handle rights on card reception in OperatorFabric.
- Entities:
  - represent sets of users destined to receive cards collectively;
  - can be used to handle rights on card reception in OperatorFabric.
The user defined here is an internal representation of the individual card recipient in OperatorFabric; authentication is left to a specific external OAuth2 service.
In the following commands, $token is an authentication token currently valid for the OAuth2 service used by the current OperatorFabric system.
12.1. Users, groups and entities
User service manages users, groups and entities.
12.1.1. Users
Users are the individuals, mainly physical persons, who can log in to OperatorFabric.
Access to this service has to be authorized, in the OAuth2 service used by the current OperatorFabric instance, at least to access user information and to manage users. Membership of groups and entities is stored in the user information.
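As a sketch, assuming a minimal user model (the field names below follow the concepts above and the userContext fields used by templates; see the Users API documentation for the authoritative structure), user information could look like:

```json
{
  "login": "tso1-operator",
  "firstName": "John",
  "lastName": "Doe",
  "groups": ["TSO1"],
  "entities": ["ENTITY1"]
}
```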
12.1.1.1. Automated user creation
If a user exists in the provided authentication service but does not exist in the OperatorFabric instance, the user is automatically created in the system, without attached groups or entities, when they authenticate and connect for the first time. Groups and entities are then administered manually by the administrator.
More details about automated user creation here.
12.1.2. Groups
The notion of group is loose and can be used to simulate roles in OperatorFabric.
Groups are used to send cards to several users without naming them specifically. The information about membership to a group is stored in the user information. The rules used to send cards are described in the recipients section.
12.1.3. Entities
Entities are used to send cards to several users without a name specifically. The information about membership to an entity is stored in the user information. Examples using entities can be found here .
12.1.3.1. Alternative way to manage groups
The standard way to handle groups in an OperatorFabric instance is through the user information.
There is an alternative way to manage groups through the authentication token, in which case the groups are defined by the administrator of the authentication service. See here for more details on using this feature.
13. Cards Publication Service
The Cards Publication Service exposes a REST API through which third-party applications, or "publishers", can post cards to OperatorFabric. It then handles those cards:
-
Time-stamping them with a "publishDate"
-
Sending them to the message broker (RabbitMQ) to be delivered in real time to the appropriate operators
-
Persisting them to the database (MongoDB) for later consultation
13.1. Card Structure
Cards are represented as Json objects. The technical design of cards is described in the cards api documentation. A card corresponds to the state of a process in OperatorFabric.
13.1.1. Technical Information of the card
These attributes are used by OperatorFabric to manage how cards are stored, and to whom and when they're sent.
13.1.1.1. Mandatory information
Below, the json technical key is given in parentheses after each title.
Publisher (publisher)
Quite obviously, it's the third party which publishes the card. This information is used to look up the presentation resources of the card.
Publisher Version (publisherVersion)
Refers to the version of the publisher third to use to render this card (i18n, title, summary and details).
As the presentation of a publisher's card data changes through time, these changes are managed through publisherVersion in OperatorFabric. Each version is kept in the system in order to be able to display old cards correctly.
Process Identifier (processId)
It's the way to identify the process to which the card is associated. A card represents a state of a process.
Start Date (startDate)
This attribute is part of the card life management system. It indicates to OperatorFabric the moment from which the card can be displayed to the operators or main recipients, see Display rules.
Severity (severity)
The severity is a core principle of the OperatorFabric card system. There are 4 severities available; in the GUI, a color is associated with each severity. Here are the details about severities and their meaning for OperatorFabric:
- ALARM: represents a critical state of the associated process, needing an action from the operator. In the UI, the card is red;
- ACTION: the associated process needs an action from operators in order to evolve correctly. In the UI, the card is orange;
- COMPLIANT: the process related to the card is in a compliant status. In the UI, the card is green;
- INFORMATION: gives information to the operator. In the UI, the card is blue.
13.1.1.2. Recipient (recipient)
Declares to whom the card is sent. For more details about the way recipients work, see Display rules. Without a recipient declaration, a card is useless in the OperatorFabric system.
13.1.1.3. Card Life Management Configuration
With these attributes, OperatorFabric knows when to display or hide cards.
Start Date (startDate
)
See Start Date and Display rules for more examples.
End Date (endDate)
Fixes the moment until which OperatorFabric displays the card. After that, the card is removed from the GUI feed; see Display rules for some examples.
13.1.1.5. Last Time to Decide (lttd)
Fixes the moment until which the actions associated with the card are available. After that, the associated actions won't be displayed or actionable.
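The life-management attributes sit side by side in the card. In this sketch the startDate is the one used in the examples of this document, while the lttd and endDate values are purely illustrative epoch milliseconds:

```json
{
  "startDate": 1546297200000,
  "lttd": 1546299000000,
  "endDate": 1546300800000
}
```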
13.1.1.6. Store information
13.1.2. User destined Information of the card
There are two kinds of user-destined information in a card. Some are restricted to the card format; others are defined by the publisher, as long as they are encoded in json format.
13.1.2.2. Custom part
Data (data)
Determines where custom information is stored. The content of this attribute is purely the publisher's choice. This content, as long as it's in json format, can be used to display details. The way details are displayed is described below.
You must not use dots in json field names. In that case, the card will be refused with the following message: "Error, unable to handle pushed Cards: Map key xxx.xxx contains dots but no replacement was configured!"
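For example (hypothetical field names), the first data object below would be refused because of the dotted key, while the nested form carries the same information and is accepted:

```json
{ "data": { "measure.value": 42 } }
```

can be rewritten as:

```json
{ "data": { "measure": { "value": 42 } } }
```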
13.1.3. Presentation Information of the card
13.1.3.1. details (details)
This attribute is an array of objects, each containing a title attribute, which is an i18n key, and a template attribute which refers to a template name contained in the publisher bundle. The bundle in which those resources will be looked for is the one corresponding to the version declared in the card for the current publisher.
If no resource is found, either because there is no bundle for the given version or because there is no resource for the given key, then the corresponding key is displayed in the details section of the GUI.
See more documentation about third bundles here .
example:
The TEST publisher has only a 0.1 version uploaded in the current OperatorFabric system. The details value is [{"title":{"key":"first.tab.title"},"template":"template0"}].
If the publisherVersion of the card is 2, then only the title key declared in the details array will be displayed, without any translation, i.e. the tab will contain TEST.2.first.tab.title and will be empty. If the l10n for the title is not available, then the tab title will still be TEST.2.first.tab.title, but the template will be computed and the details section will display the template content.
13.1.3.2. TimeSpans (timeSpans
)
When the simple startDate and endDate are not enough to characterize your process business times, you can add a list of TimeSpans to your card. TimeSpans are rendered in the timeline component as cluster bubbles. This has no effect on the feed content.
example :
To display the card twice in the timeline, you can add two TimeSpans to your card:
{
  "publisher": "TSO1",
  "publisherVersion": "0.1",
  "processId": "process-000",
  "startDate": 1546297200000,
  "severity": "INFORMATION",
  ...
  "timeSpans": [
    { "start": 1546297200000 },
    { "start": 1546297500000 }
  ]
}
In this sample, the card will be displayed twice in the timeline, and the card start date will be ignored.
For timeSpans, you can specify an end date, but it is not implemented in OperatorFabric (it was intended for future use and will be deprecated).
13.2. Cards Examples
Before detailing the content of cards, let's show what cards look like through a few examples of json.
13.2.1. Minimal Card
The OperatorFabric card specification defines 8 mandatory attributes, but some optional attributes are needed for cards to be useful in OperatorFabric. Let's clarify those points through a few examples of minimal cards and what happens when they're used as is.
13.2.1.1. Send to One User
The following card contains only the mandatory attributes.
{
  "publisher": "TSO1",
  "publisherVersion": "0.1",
  "processId": "process-000",
  "startDate": 1546297200000,
  "severity": "INFORMATION",
  "title": { "key": "card.title.key" },
  "summary": { "key": "card.summary.key" },
  "recipient": {
    "type": "USER",
    "identity": "tso1-operator"
  }
}
This is an information card about the process process-000, sent by TSO1. The title and the summary refer to i18n keys defined in the associated i18n files of the publisher. This card is displayable since the 1st of January 2019 and should only be received by the user with the tso1-operator login.
13.2.1.2. Send to several users
Simple case (sending to a group)
The following example is nearly the same as the previous one except for the recipient.
{
  "publisher": "TSO1",
  "publisherVersion": "0.1",
  "processId": "process-000",
  "startDate": 1546297200000,
  "severity": "INFORMATION",
  "title": { "key": "card.title.key" },
  "summary": { "key": "card.summary.key" },
  "recipient": {
    "type": "GROUP",
    "identity": "TSO1"
  }
}
Here, the recipient is a group, TSO1, so all users who are members of this group will receive the card.
Simple case (sending to an entity)
The following example is nearly the same as the previous one except for the recipient.
{
  "publisher": "TSO1",
  "publisherVersion": "0.1",
  "processId": "process-000",
  "startDate": 1546297200000,
  "severity": "INFORMATION",
  "title": { "key": "card.title.key" },
  "summary": { "key": "card.summary.key" },
  "recipient": { "type": "USER" },
  "entityRecipients": [ "ENTITY1" ]
}
Here, the recipient is an entity, ENTITY1, so all users who are members of this entity will receive the card.
Simple case (sending to a group and an entity)
The following example is nearly the same as the previous one except for the recipient.
{
  "publisher": "TSO1",
  "publisherVersion": "0.1",
  "processId": "process-000",
  "startDate": 1546297200000,
  "severity": "INFORMATION",
  "title": { "key": "card.title.key" },
  "summary": { "key": "card.summary.key" },
  "recipient": {
    "type": "GROUP",
    "identity": "TSO1"
  },
  "entityRecipients": [ "ENTITY1" ]
}
Here, the recipients are a group and an entity: the TSO1 group and the ENTITY1 entity. So all users who are members of both this group and this entity will receive the card.
Complex case
If this card needs to be seen by a user who is not in the TSO1 group, it's possible to tune the definition of the recipient more precisely. If the tso2-operator also needs to see this card, the recipient definition could be (the following code details only the recipient part):
"recipient": {
  "type": "UNION",
  "recipients": [
    { "type": "GROUP", "identity": "TSO1" },
    { "type": "USER", "identity": "tso2-operator" }
  ]
}
So here, all the users of the TSO1 group will receive the INFORMATION card, as will the tso2-operator user.
13.2.2. Regular Card
The previous cards carried almost no information. In fact, cards are intended to contain more information than a title and a summary; the optional attribute data is here for that. This attribute is destined to contain any json object. The creator of the card is free to put any information needed, as long as it's in json format.
13.2.2.1. Full of Hidden data
For this example, we will reuse our previous example for the TSO1 group, with a data attribute containing a json object with two attributes: stringExample and numberExample.
{
  "publisher": "TSO1",
  "publisherVersion": "0.1",
  "processId": "process-000",
  "startDate": 1546297200000,
  "severity": "INFORMATION",
  "title": { "key": "card.title.key" },
  "summary": { "key": "card.summary.key" },
  "recipient": {
    "type": "USER",
    "identity": "tso1-operator"
  },
  "data": {
    "stringExample": "This is a not so random string of characters.",
    "numberExample": 123
  }
}
This card contains some data, but when it is selected in the feed nothing more happens than with the previous example card, because there is no rendering configuration.
13.2.2.2. Fully useful
When a card is selected in the feed (of the GUI), the data is displayed in the detail panel. The way details are formatted depends on the templates uploaded by third parties, as described here. To have an effective example without too many actions to perform, the following example uses an already existing configuration: the one present in the development version of OperatorFabric for test purposes (the TEST bundle).
At the card level, the attribute telling OperatorFabric which template to use is the details attribute.
{
  "publisher": "TEST",
  "publisherVersion": "1",
  "processId": "process-000",
  "startDate": 1546297200000,
  "severity": "INFORMATION",
  "title": { "key": "process.title" },
  "summary": { "key": "process.summary" },
  "recipient": {
    "type": "USER",
    "identity": "tso1-operator"
  },
  "data": { "rootProp": "Data displayed in the detail panel" },
  "details": [{
    "title": { "key": "process.detail.tab.first" },
    "templateName": "template1"
  }]
}
So here a single custom data field is defined: rootProp. This attribute is used by the template referenced in the details attribute. The details attribute contains an array of json objects, each composed of an i18n key and a template reference; each of those objects is a tab in the detail panel of the GUI. The templates to use are defined and configured in the Third bundle uploaded to the server by the publisher.
13.2.3. Display Rules
13.2.3.1. Dates
Dates impact both the feed rendering and the timeline rendering.
In the feed, cards are visible based on a collection of filters, among which a time filter.
In the timeline, cards are visible based on a similar filter; in addition, the timeline renders the "position" in time of those cards. By default, it groups cards close in time into bubbles whose color indicates severity and whose inner number indicates the number of cards.
Start Date (startDate)
The card is only displayed after this date is reached by the current time. It's a mandatory attribute of OperatorFabric cards.
example:
The current day is the 29th of January 2019.
A card with the configuration "startDate":1548758040000 has a start date equal to the ISO date "2019-01-29T10:34:00Z". So the operator will see it appear in the feed at 10:34 AM universal time. If there is no endDate defined for it, it will stay in the feed indefinitely, so this card should still be visible on the 30th of January 2019. Before 10:34 AM universal time, this card was not visible in the feed.
End Date (endDate)
This optional attribute corresponds to the moment after which the card will be removed from the feed of the GUI.
example:
Imagine that the current day is still the 29th of January 2019.
The card we are looking at has the same startDate value as in the previous example, but has the following configuration for the endDate: "endDate":1548765240000, corresponding to "2019-01-29T12:34:00Z" universal time.
So our card is present in the feed between "11h34" and "13h34" (local time). Before and after those hours, the card is not available.
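Putting both examples together, the display window of this card is fully determined by the following fragment (epoch millisecond values taken from the two dates above):

```json
{
  "startDate": 1548758040000,
  "endDate": 1548765240000
}
```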
13.2.3.2. Recipients
The recipient attribute of a card tells to whom it is sent.
The available types are:
- GROUP: the card is sent to every user belonging to a group (identity)
- USER: the card is sent to a single user (identity)
- UNION: the card is sent to users according to the union of a recipient list (recipients)
- DEADEND: the card is sent to no one (mostly for testing purposes)
The simplest way to determine the recipient is to assign the card to a user or a group, as seen previously in Minimal Card. But it's possible to combine groups, and potentially users, using the UNION type to have better control over who should receive the card.
UNION
For example, if a card is destined to the operators of TSO1 and TSO2 and also needs to be seen by the admin, the recipient configuration looks like:
"recipient": {
  "type": "UNION",
  "recipients": [
    { "type": "GROUP", "identity": "TSO1" },
    { "type": "GROUP", "identity": "TSO2" },
    { "type": "USER", "identity": "admin" }
  ]
}
14. Cards Consultation Service
The User Interface depends on the Cards Consultation service to be notified of new cards and to consult existing cards, both current and archived.
14.1. Archived Cards
14.1.1. Key concepts
Every time a card is published, in addition to being delivered to the users and persisted as a "current" card in MongoDB, it is also immediately persisted in the archived cards.
Archived cards are similar in structure to current cards, but they are managed differently. Current cards are uniquely identified by their id (made up of the publisher and the process id), because if a new card is published with the same id as an existing card, it replaces it in the card collection. This way, the current card reflects the current state of a process instance. In the archived cards collection however, both cards are kept, so that the archived cards show all the states that a given process instance went through.
14.1.2. Archives screen in the UI
The Archives screen in the UI allows the users to query these archives with different filters. The layout of this screen is very similar to the Feed screen: the results are displayed in a (paginated) card list, and the user can display the associated card details by clicking a card in the list.
The results of these queries are limited to cards that the user is allowed to see, either because the user is a direct recipient of the card or because they belong to a group (or entity) that is a recipient of the card. If a card is sent to an entity and a group, then the user must be part of both the entity and the group.
14.1.3. Archive endpoints in the card-consultation API
This Archives screen relies on dedicated endpoints in the card-consultation API, as described here
Deployment and Administration of OperatorFabric
15. Deployment
For now, OperatorFabric consists of Docker images available either by compiling the project or by using images released on Docker Hub.
Service images are all based on openjdk:8-jre-alpine.
For simple one instance per service deployment, you can find a sample deployment as a docker-compose file here
To run OperatorFabric in development mode, see the development environment documentation .
16. Configuration
OperatorFabric has multiple services to configure.
See the architecture documentation for more information on the different services.
All services are SpringBoot applications and use jetty as an embedded servlet container. As such, they share some common configuration which is described in the following documentation:
Configuration is centralized in the config directory, the dev sub-directory is specific to development environments while the docker sub-directory is a specific configuration meant for use in a full docker environment.
16.1. Business service configuration
16.1.1. Shared business service configuration
The configuration shared by all business services is in a yaml file, you can find an example with the file /config/docker/common-docker.yml.
16.1.2. Business service specific configurations
Each business service has a specific yaml configuration file. It should at least contain the name of the service:
spring:
application:
name: thirds
It can contain references to other services as well, for example:
users:
ribbon:
listOfServers: users:8080
Examples of configuration of each business service can be found either under config/docker or config/dev depending on the type of deployment you’re looking for.
16.1.2.1. Thirds service
The Thirds service has this specific property:
name | default | mandatory? | Description
---|---|---|---
operatorfabric.thirds.storage.path | null | no | File path to data storage folder
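Combined with the shared service name shown above, a thirds-specific configuration file could look like the following sketch; the storage path is a placeholder, not a value from this document:

```yaml
spring:
  application:
    name: thirds
operatorfabric:
  thirds:
    storage:
      path: /data/thirds-storage   # placeholder folder for third-party bundles
```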
16.1.2.2. Users service
The Users service has these specific properties:
name | default | mandatory? | Description
---|---|---|---
operatorfabric.users.default.users | null | no | Array of user objects to create upon startup if they don’t exist
operatorfabric.users.default.user-settings | null | no | Array of user settings objects to create upon startup if they don’t exist
operatorfabric.users.default.groups | null | no | Array of group objects to create upon startup if they don’t exist
operatorfabric.users.default.entities | null | no | Array of entity objects to create upon startup if they don’t exist
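As a sketch, the default users/groups arrays might be declared like this in the users service yaml; the group id and login below reuse names seen elsewhere in this document, but the exact object shape should be checked against the Users API:

```yaml
operatorfabric:
  users:
    default:
      groups:
        - id: ADMIN
          name: ADMIN group
          description: The admin group
      users:
        - login: admin
          groups: ["ADMIN"]
```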
16.2. Web UI Configuration
The OperatorFabric Web UI service is built on top of an NGINX server. It serves the Angular SPA to browsers and acts as a reverse proxy for other services.
16.2.1. NGINX configuration
An external nginx.conf
file configures the OperatorFabric Nginx instance named web-ui
service.
Those files are mounted as docker volumes. There are two of them in OperatorFabric, one in config/dev
and one in config/docker
.
The one in config/dev
is set with
permissive CORS
rules to enable web development using ng serve
within the ui/main
project. It’s possible to use ng serve
with the config/docker
version as well. To do so, use the conf file named
nginx-cors-permissive.conf
by configuring the /docker-compose.yml
with the following line:
- "./nginx-cors-permissive.conf:/etc/nginx/conf.d/default.conf"
instead of:
- "./nginx.conf:/etc/nginx/conf.d/default.conf"
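In context, the volume line belongs to the web-ui service declaration of the docker-compose file. The minimal sketch below is illustrative: the image name/tag and port mapping are assumptions and should be checked against your own compose file.

```yaml
web-ui:
  image: lfeoperatorfabric/of-web-ui:SNAPSHOT   # assumed image name/tag
  ports:
    - "2002:80"                                 # UI reachable on http://localhost:2002
  volumes:
    - "./nginx-cors-permissive.conf:/etc/nginx/conf.d/default.conf"
```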
The line customized in the nginx configuration file must end with a semi-colon (';'), otherwise the Nginx server will stop immediately.
16.2.2. Service specific properties
The properties lie in web-ui.json. The following table describes their meaning and how to use them. An example file can be found in the config/docker directory.
name | default | mandatory? | Description
---|---|---|---
operatorfabric.security.realm-url | | yes | The realm name in the keycloak server settings page. This is used during logout to know which realm should be affected.
operatorfabric.security.provider-url | | yes | The keycloak server instance
operatorfabric.security.logout-url | | yes | The keycloak logout URL. It is a composition of: your keycloak instance and the auth keyword (ex: www.keycloakurl.com/auth), though domains without auth are also supported (ex: www.keycloakurl.com/customPath); the realm name (ex: dev); the redirect URL (redirect_uri) used after successful authentication
operatorfabric.security.oauth2.flow.mode | PASSWORD | no | Authentication mode; available options are PASSWORD, CODE and IMPLICIT
operatorfabric.security.oauth2.flow.provider | null | no | Provider name to display on the login button
operatorfabric.security.oauth2.flow.delegate-url | null | no | URL to redirect the browser to for authentication; mandatory for some flow modes (see the flow sections below)
operatorfabric.feed.subscription.timeout | 60000 | no | Milliseconds between card subscription renewals
operatorfabric.feed.card.time.display | BUSINESS | no | Card time display mode in the feed
operatorfabric.feed.timeline.hide | false | no | If set to true, the timeline is not loaded in the feed screen
operatorfabric.feed.card.hideTimeFilter | false | no | Show or hide the time filter in the feed page
operatorfabric.feed.notify | false | no | If set to true, new cards are notified in the OS through web-push notifications
operatorfabric.playSoundForAlarm | false | no | If set to true, a sound is played when Alarm cards are added or updated in the feed
operatorfabric.playSoundForAction | false | no | If set to true, a sound is played when Action cards are added or updated in the feed
operatorfabric.playSoundForCompliant | false | no | If set to true, a sound is played when Compliant cards are added or updated in the feed
operatorfabric.playSoundForInformation | false | no | If set to true, a sound is played when Information cards are added or updated in the feed
operatorfabric.i18n.supported.locales | | no | List of supported locales (only fr and en so far)
operatorfabric.i10n.supported.time-zones | | no | List of supported time zones, for instance 'Europe/Paris'. Values should be taken from the TZ database.
operatorfabric.navbar.thirdmenus.type | BOTH | no | Defines how third-party menu links are displayed in the navigation bar and how they open
operatorfabric.archive.filters.page.size | | no | The page size of archive filters
operatorfabric.archive.filters.page.first | | no | The first page start of the archiving module
operatorfabric.archive.filters.process.list | | no | List of processes to choose from in the corresponding filter in archives
operatorfabric.archive.filters.tags.list | | no | List of tags to choose from in the corresponding filter in archives
operatorfabric.settings.tags.hide | | no | Show or hide the tags filter in the settings and feed pages
operatorfabric.settings.nightDayMode | false | no | Activate the toggle for night/day mode
operatorfabric.settings.styleWhenNightDayModeDesactivated | | no | Style to apply when not using night/day mode; possible values are DAY, NIGHT or LEGACY (black background and white timeline)
operatorfabric.settings.infos.disable | | no | Disable or enable editing the user email and description in the settings page
operatorfabric.settings.infos.email | false | no | Hide (true) or display (false or not specified) the user email in the settings page
operatorfabric.settings.infos.description | false | no | Hide (true) or display (false or not specified) the user description in the settings page
operatorfabric.settings.infos.language | false | no | Hide (true) or display (false or not specified) the language in the settings page
operatorfabric.settings.infos.timezone | false | no | Hide (true) or display (false or not specified) the timezone in the settings page
operatorfabric.settings.infos.timeformat | false | no | Hide (true) or display (false or not specified) the timeformat in the settings page
operatorfabric.settings.infos.dateformat | false | no | Hide (true) or display (false or not specified) the dateformat in the settings page
operatorfabric.settings.infos.datetimeformat | false | no | Hide (true) or display (false or not specified) the datetimeformat in the settings page
operatorfabric.settings.infos.tags | false | no | Hide (true) or display (false or not specified) the tags in the settings page
operatorfabric.settings.infos.sounds | false | no | Hide (true) or display (false or not specified) the checkboxes for sound notifications in the settings page
operatorfabric.settings.about | none | no | Declares application names and their versions in the web-ui about section
operatorfabric.logo.base64 | medium OperatorFabric icon | no | The result of converting the svg logo to Base64 (an online tool can be used to encode the svg). If it is not set, a medium (32px) OperatorFabric icon is displayed.
operatorfabric.logo.height | 32 | no | The height of the logo in px (only taken into account if operatorfabric.logo.base64 is set)
operatorfabric.logo.width | 150 | no | The width of the logo in px (only taken into account if operatorfabric.logo.base64 is set)
operatorfabric.logo.limitSize | true | no | If true, the height is limited to 32px and the width to 200px: any larger value is clamped to those limits. If false, there is no size restriction.
operatorfabric.title | OperatorFabric | no | Title of the application, displayed in the browser
User Settings default values
name | default | mandatory? | Description
---|---|---|---
operatorfabric.settings.timeZone | | no | Default time zone for users
operatorfabric.settings.timeFormat | LT | no | Default user time format (moment)
operatorfabric.settings.dateFormat | LL | no | Default user date format (moment)
operatorfabric.settings.dateTimeFormat | LL LT | no | Default user date-time format (moment)
operatorfabric.settings.locale | en | no | Default user locale (en if not set)
operatorfabric.settings.default-tags | | no | Default list of filtered-in tags for users
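Putting the defaults above together, the corresponding block of web-ui.json might look like the sketch below. The exact nesting (with or without a top-level operatorfabric key) is an assumption here and should be checked against the example file in config/docker; the time zone value reuses the 'Europe/Paris' example from this document.

```json
{
  "settings": {
    "timeZone": "Europe/Paris",
    "timeFormat": "LT",
    "dateFormat": "LL",
    "dateTimeFormat": "LL LT",
    "locale": "en"
  }
}
```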
16.3. Security Configuration
Security is configured through several files:
-
nginx.conf
of the nginx server
-
config/dev/common-dev.yml
or config/docker/common-docker.yml
, called common.yml in the following chapters
-
web-ui.json
served by the web-ui
service
16.3.1. Authentication configuration
There are 3 OAuth2 authentication flows available in the OperatorFabric UI:
-
password grant: referred to as
PASSWORD
mode flow;
-
code flow: referred to as
CODE
mode flow;
-
implicit flow: referred to as
IMPLICIT
mode flow.
16.3.1.1. Nginx Configuration
The UI calls need some mapping to reach the Authentication Provider. In the default OperatorFabric configuration it’s a
docker keycloak instance
, called keycloak
in the project docker-compose.yml
files.
There are 3 properties to configure within nginx.conf
file:
-
$KeycloakBaseUrl
: the base URL of keycloak;
-
$OperatorFabricRealm
: the realm configured within the keycloak instance to provide authentication to OperatorFabric;
-
$ClientPairOFAuthentication
: base64-encoded string of the pair of client credentials used by OperatorFabric to log in to the Authentication Provider (keycloak). The client-id and the client-secret are separated by a colon (':').
Example:
# Url of the Authentication provider
set $KeycloakBaseUrl "http://keycloak:8080";
# Realm associated to OperatorFabric within the Authentication provider
set $OperatorFabricRealm "dev";
# base64 encoded pair of authentication in the form of 'client-id:secret-id'
set $ClientPairOFAuthentication "b3BmYWItY2xpZW50Om9wZmFiLWtleWNsb2FrLXNlY3JldA==";
16.3.1.2. Configuration file common.yml
name | default | mandatory? | Description
---|---|---|---
operatorfabric.security.oauth2.client-id | null | yes | OAuth2 client id used by OperatorFabric; may be specific for each service
operatorfabric.security.jwt.login-claim | sub | no | JWT claim used as the user login or id
operatorfabric.security.jwt.expire-claim | exp | no | JWT claim used as the token expiration timestamp
spring.security.provider-url | null | no | The keycloak instance URL
spring.security.provider-realm | null | no | The realm name within the keycloak instance
spring.security.oauth2.resourceserver.jwt.jwk-set-uri | null | yes | The URL providing the certificate used to verify the JWT signature
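A minimal sketch of these properties in common.yml, assuming the dev keycloak instance used elsewhere in this document; the jwk-set-uri path follows the standard keycloak layout and should be checked against your provider:

```yaml
operatorfabric:
  security:
    oauth2:
      client-id: opfab-client
    jwt:
      login-claim: preferred_username   # use the keycloak username as opfab login
spring:
  security:
    provider-url: http://localhost:89
    provider-realm: dev
    oauth2:
      resourceserver:
        jwt:
          jwk-set-uri: http://localhost:89/auth/realms/dev/protocol/openid-connect/certs
```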
16.3.1.3. Configuration file web-ui.json
The Nginx web server serves this file. OperatorFabric creates and uses a custom Docker image containing an Nginx server with a docker volume containing this file. The two docker-compose environments contain an example of it. Its path in the image is /usr/share/nginx/html/opfab/web-ui.json.
For OAuth2 security concerns in this file, there are two ways to configure it, depending on the chosen OAuth2 flow. There are several common properties:
-
security.realm-url
: OAuth2 provider realm under which the OperatorFabric client is declared;
-
security.provider-url
: URL of the keycloak server instance;
-
security.logout-url
: URL used when a user logs out of the UI;
-
security.oauth2.flow.provider
: name of the OAuth2 provider;
-
security.oauth2.flow.delegate-url
: URL used to connect to the Authentication provider;
-
security.oauth2.flow.mode
: technical way to be authenticated by the Authentication provider.
16.3.1.4. OAuth2 PASSWORD or CODE Flows
These two modes share the same way of declaring the delegate URL.
CODE
is the default authentication mode for the deploy
docker-compose environment.
The following properties need to be set:
-
security.oauth2.flow.mode
to PASSWORD
or CODE
;
-
security.oauth2.flow.delegate-url
with the URL of the OAuth2 endpoint leading to the protocol used for authentication.
Example of Configuration For CODE Flow
{
"security": {
"oauth2": {
"flow": {
"mode": "CODE",
"provider": "Opfab Keycloak",
"delegate-url": "http://localhost:89/auth/realms/dev/protocol/openid-connect/auth?response_type=code&client_id=opfab-client"
},
"logout-url":"http://localhost:89/auth/realms/dev/protocol/openid-connect/logout?redirect_uri=http://localhost:2002/ui/",
"provider-realm": "dev",
"provider-url": "http://localhost:89"
}
}
}
Within the delegate-url
property, dev
is the keycloak realm of OperatorFabric.
For the keycloak instance used for development purposes, this delegate-url
corresponds to the realm under which the client opfab-client
is registered.
Here, the client-id
value is opfab-client
, which is defined as a client under the realm
named dev
on the dev keycloak instance.
16.3.1.5. OAuth2 IMPLICIT Flow
It has its own configuration. To enable IMPLICIT flow authentication, the following properties need to be set:
-
security.oauth2.flow.mode
to IMPLICIT
;
-
security.oauth2.flow.delegate-url
with the URL of the OAuth2 endpoint leading to the .well-known/openid-configuration
end-point used for authentication configuration.
Example of configuration for IMPLICIT Flow
{
"operatorfabric": {
"security": {
"oauth2": {
"flow": {
"mode": "IMPLICIT",
"provider": "Opfab Keycloak",
"delegate-url": "http://localhost:89/auth/realms/dev"
},
"logout-url":"http://localhost:89/auth/realms/dev/protocol/openid-connect/logout?redirect_uri=http://localhost:2002/ui/",
"provider-realm": "dev",
"provider-url": "http://localhost:89"
}
}
}
}
Within the delegate-url
property, dev
is the keycloak realm of OperatorFabric.
For the keycloak instance used for development purposes, this delegate-url
corresponds to the realm under which the client opfab-client
is registered.
The URL looked up by the implicit UI mechanism is localhost:89/auth/realms/dev/.well-known/openid-configuration
.
16.3.2. User creation
Automated user creation can be configured. Creating a user requires a user id; given name and family name are optional.
name | default | mandatory? | Description
---|---|---|---
operatorfabric.security.jwt.login-claim | sub | no | JWT claim used as the user login or id
operatorfabric.security.jwt.given-name-claim | given-name | no | JWT claim used to set the user’s given name
operatorfabric.security.jwt.family-name-claim | family-name | no | JWT claim used to set the user’s family name
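For instance, to map the standard keycloak claims onto these properties (the claim names on the right are those appearing in the JWT example later in this section):

```yaml
operatorfabric:
  security:
    jwt:
      login-claim: preferred_username
      given-name-claim: given_name
      family-name-claim: family_name
```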
16.3.3. Alternative way to manage groups (and/or entities)
By default, OperatorFabric
manages groups (and/or entities) through the users collection in the database.
Another mode, the JWT mode, can be defined, in which the groups (and/or entities) come from the authentication token.
The administrator of the authentication service has to set which claims define a group (and/or entity).
In the OperatorFabric
configuration, the opfab administrator has to set properties to retrieve those groups (and/or entities).
name | default | mandatory? | Description
---|---|---|---
operatorfabric.security.jwt.groups.mode | OPERATOR_FABRIC | no | Set the group mode; possible values are JWT or OPERATOR_FABRIC
operatorfabric.security.jwt.groups.rolesClaim.rolesClaimStandard.path | | no | Path in the JWT to retrieve the claim that defines a group
operatorfabric.security.jwt.groups.rolesClaim.rolesClaimStandardArray.path | | no | Path in the JWT to retrieve the claim that defines an array of groups
operatorfabric.security.jwt.groups.rolesClaim.rolesClaimStandardList.path | | no | Path in the JWT to retrieve the claim that defines a list of groups
operatorfabric.security.jwt.groups.rolesClaim.rolesClaimStandardList.separator | | no | Separator value of the list of groups
operatorfabric.security.jwt.groups.rolesClaim.rolesClaimCheckExistPath.path | | no | Path in the JWT whose existence is checked; if it exists, roleValue is used as a group
operatorfabric.security.jwt.groups.rolesClaim.rolesClaimCheckExistPath.roleValue | | no | Value of the group if the path exists
operatorfabric.security.jwt.entitiesIdClaim | | no | Name of the field in the token
operatorfabric.security.jwt.gettingEntitiesFromToken | | no | Boolean indicating whether the entities of the user should come from the token and not mongoDB (possible values: true/false)
application.yml
operatorfabric:
security:
jwt:
entitiesIdClaim: entitiesId
gettingEntitiesFromToken: true
groups:
mode: JWT # value possible JWT | OPERATOR_FABRIC
rolesClaim:
rolesClaimStandard:
- path: "ATTR1"
- path: "ATTR2"
rolesClaimStandardArray:
- path: "resource_access/opfab-client/roles"
rolesClaimStandardList:
- path: "roleFieldList"
separator: ";"
rolesClaimCheckExistPath:
- path: "resource_access/AAA"
roleValue: "roleAAA"
- path: "resource_access/BBB"
roleValue: "roleBBB"
JWT example
{
"jti": "5ff87583-10bd-4946-8753-9d58171c8b7f",
"exp": 1572979628,
"nbf": 0,
"iat": 1572961628,
"iss": "http://localhost:89/auth/realms/dev",
"aud": [
"AAA",
"BBB",
"account"
],
"sub": "example_user",
"typ": "Bearer",
"azp": "opfab-client",
"auth_time": 0,
"session_state": "960cbec4-fcb2-47f2-a155-975832e61300",
"acr": "1",
"realm_access": {
"roles": [
"offline_access",
"uma_authorization"
]
},
"resource_access": {
"AAA": {
"roles": [
"role_AAA"
]
},
"BBB": {
"roles": [
"role_BBB"
]
},
"opfab-client": {
"roles": [
"USER"
]
},
"account": {
"roles": [
"manage-account",
"manage-account-links",
"view-profile"
]
}
},
"scope": "openid ATTR2 email ATTR1 profile roleFieldList",
"email_verified": false,
"name": "example_firtstname example_lastname",
"ATTR2": "roleATTR2",
"ATTR1": "roleATTR1",
"preferred_username": "example_user",
"given_name": "example_firtstname",
"entitiesId": "ENTITY1",
"family_name": "example_lastname",
"email": "example_user@mail.com",
"roleFieldList": "roleA;roleB;roleC"
}
As a result, the groups will be [ATTR1, ATTR2, roleA, roleB, roleC, USER, roleBBB, roleAAA]
16.3.4. Adding certification authorities or certificates to the Java keystore
If you’re using certificates (for example for Keycloak) that are not from a certification authority trusted by the JVM, this will cause certificate trust errors.
If that is the case, you can pass the additional authorities or certificates that you use to the containers at runtime.
To do so, put the relevant files (*.der files for example) under src/main/docker/certificates.
-
This directory should only contain the files to be added to the keystore.
-
The files can be nested inside directories.
-
Each certificate will be added with its filename as alias. For example, the certificate in file mycertificate.der will be added under alias mycertificate. As a consequence, filenames should be unique or it will cause an error.
-
If you need to add or remove certificates while the container is already running, the container will have to be restarted for the changes to be taken into account.
If you would like certificates to be sourced from a different location, replace the volumes declarations in the deploy docker-compose.yml file with the selected location:
volumes: - "path/to/my/selected/location:/certificates_to_add"
instead of
volumes: - "../../../../src/main/docker/certificates:/certificates_to_add"
The steps described here assume you’re running OperatorFabric in docker mode using the deploy docker-compose, but they can be adapted for single container deployments and development mode.
If you want to check that the certificates were correctly added, you can do so with the following steps:
-
Open a bash shell in the container you want to check
docker exec -it deploy_thirds_1 bash
-
Run the following command
$JAVA_HOME/bin/keytool -list -v -keystore /tmp/cacerts -storepass changeit
You can also look at the default list of authorities and certificates trusted by the JVM with this command:
$JAVA_HOME/bin/keytool -list -v -keystore $JAVA_HOME/jre/lib/security/cacerts -storepass changeit
16.4. OperatorFabric Mongo configuration
We only use URI configuration for mongo, through the spring.data.mongodb.uris
property: it allows us to share the same configuration behavior for simple or cluster
configuration and with both classic and reactive Spring mongo configuration.
See mongo connection string for the complete URI syntax.
16.4.1. Define time to live for archived cards
By default, archived cards will remain stored in the database forever. It is possible to have them automatically removed after a specified duration by using the TTL index feature of mongoDB on their publishDate field.
For example, to have cards expire after 10 days (864000s), enter the following commands in the mongo shell:
use operator-fabric
db.archivedCards.createIndex( { "publishDate": 1 }, { expireAfterSeconds: 864000 } )
You cannot use createIndex() to change the value of expireAfterSeconds of an existing index. Instead, use the collMod database command in conjunction with the index collection flag. Otherwise, to change the value of the option of an existing index, you must drop the index first and recreate it.
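As a sketch, changing the expiry of the existing TTL index with collMod (rather than dropping and recreating it) would look like this in the mongo shell, here moving from 10 to 20 days (1728000s):

```
use operator-fabric
db.runCommand({
  collMod: "archivedCards",
  index: {
    keyPattern: { publishDate: 1 },
    expireAfterSeconds: 1728000   // 20 days
  }
})
```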
17. RabbitMQ
17.1. Docker container
In development mode, the simplest way to deploy a RabbitMQ server is to create a RabbitMQ docker container. A docker-compose file is provided to allow quick setup of a convenient RabbitMQ server.
17.2. Server installation
This section is dedicated to production deployment of RabbitMQ. It is not complete and needs to be tailored to any specific production environment.
17.2.1. Download & Installation
Download and install RabbitMQ following the official procedure for the target environment
17.2.2. Used ports
If RabbitMQ cannot bind to the following ports, it won’t start:
-
4369: epmd, a peer discovery service used by RabbitMQ nodes and CLI tools
-
5672, 5671: used by AMQP 0-9-1 and 1.0 clients without and with TLS
-
25672: used for inter-node and CLI tools communication (Erlang distribution server port) and is allocated from a dynamic range (limited to a single port by default, computed as AMQP port + 20000). Unless external connections on these ports are really necessary (e.g. the cluster uses federation or CLI tools are used on machines outside the subnet), these ports should not be publicly exposed. See networking guide for details.
-
35672-35682: used by CLI tools (Erlang distribution client ports) for communication with nodes and is allocated from a dynamic range (computed as server distribution port + 10000 through server distribution port + 10010). See networking guide for details.
-
15672: HTTP API clients, management UI and rabbitmqadmin (only if the management plugin is enabled)
-
61613, 61614: STOMP clients without and with TLS (only if the STOMP plugin is enabled)
-
1883, 8883: (MQTT clients without and with TLS, if the MQTT plugin is enabled)
-
15674: STOMP-over-WebSockets clients (only if the Web STOMP plugin is enabled)
-
15675: MQTT-over-WebSockets clients (only if the Web MQTT plugin is enabled)
17.2.3. Production configuration
See the guide for production configuration guidelines
18. Users, Groups and Entities Administration
A new operator called John Doe, who has been granted OAuth rights to connect to the current OperatorFabric
instance, needs to receive cards within it. As a user of OperatorFabric, he needs to be added to the system with a login
(john-doe-operator), his firstName
(John) and his lastName
(Doe).
As there is no Administration GUI
for the moment, this must be performed through the command line, as detailed in the Users API.
18.1. Users
18.1.1. List all users
First of all, list the users (who are the recipients in OperatorFabric) of the system with the following commands:
Httpie
http http://localhost:2103/users "Authorization:Bearer $token" "Content-Type:application/json"
cURL
curl -v http://localhost:2103/users -H "Authorization:Bearer $token" -H "Content-Type:application/json"
response
HTTP/1.1 200 OK
[
  {"firstName": null, "groups": ["ADMIN"], "entities": ["ENTITY1", "ENTITY2"], "lastName": null, "login": "admin"},
  {"firstName": null, "groups": ["RTE", "ADMIN", "CORESO", "TRANS", "TEST"], "lastName": null, "login": "rte-operator"},
  {"firstName": null, "groups": ["ELIA"], "lastName": null, "login": "elia-operator"},
  {"firstName": null, "groups": ["CORESO"], "lastName": null, "login": "coreso-operator"},
  {"firstName": null, "groups": ["TSO1", "TRANS", "TEST"], "entities": ["ENTITY1"], "lastName": null, "login": "tso1-operator"},
  {"firstName": null, "groups": ["TSO2", "TRANS"], "entities": ["ENTITY2"], "lastName": null, "login": "tso2-operator"}
]
18.1.2. Create a new User
We are sure that no john-doe-operator user exists in our OperatorFabric instance. We can add him using the following command with httpie:
echo '{"login":"john-doe-operator","firstName":"Jahne","lastName":"Doe"}' | http POST http://localhost:2103/users "Authorization:Bearer $token" "Content-Type:application/json"
Or here cURL:
curl -X POST http://localhost:2103/users -H "Authorization:Bearer $token" -H "Content-Type:application/json" --data '{"login":"john-doe-operator","firstName":"Jahne","lastName":"Doe"}'
response
HTTP/1.1 200 OK { "firstName": "Jahne", "lastName": "Doe", "login": "john-doe-operator" }
18.1.3. Fetch user details
It’s always a good thing to verify if all the information has been correctly recorded in the system:
with httpie:
http -b http://localhost:2103/users/john-doe-operator "Authorization:Bearer $token" "Content-Type:application/json"
or with cURL:
curl http://localhost:2103/users/john-doe-operator -H "Authorization:Bearer $token" -H "Content-Type:application/json"
response
HTTP/1.1 200 OK { "firstName": "Jahne", "groups": [], "entities": [], "lastName": "Doe", "login": "john-doe-operator" }
18.1.4. Update user details
As shown by this result, the firstName of the new operator has been misspelled. We need
to update the existing user
with john-doe-operator
login. To correct this mistake, the following commands can be used :
with httpie:
echo '{"login":"john-doe-operator","lastName":"Doe","firstName":"John"}' | http PUT http://localhost:2103/users/john-doe-operator "Authorization:Bearer $token" "Content-Type:application/json"
or with cURL:
curl -X PUT http://localhost:2103/users/john-doe-operator -H "Authorization:Bearer $token" -H "Content-Type:application/json" --data '{"login":"john-doe-operator","firstName":"John","lastName":"Doe"}'
response
HTTP/1.1 200 OK { "firstName": "John", "lastName": "Doe", "login": "john-doe-operator" }
18.2. Groups/Entities
All the commands below:
-
List groups
-
Create a new group
-
Fetch details of a given group
-
Update details of a group
-
Add a user to a group
-
Remove a user from a group
are available for both groups and entities. In order not to overload the documentation, we will only detail the groups endpoints.
18.2.1. List groups (or entities)
This operator is the first member of a new group called OPERATORS
, which doesn’t exist in the system yet, as shown when we
list the groups
existing on the server.
Httpie
http http://localhost:2103/groups "Authorization:Bearer $token" "Content-Type:application/json"
cURL
curl http://localhost:2103/groups -H "Authorization:Bearer $token" -H "Content-Type:application/json"
response
HTTP/1.1 200 OK
[
  {"description": "The admin group", "name": "ADMIN group", "id": "ADMIN"},
  {"description": "RTE TSO Group", "name": "RTE group", "id": "RTE"},
  {"description": "ELIA TSO group", "name": "ELIA group", "id": "ELIA"},
  {"description": "CORESO Group", "name": "CORESO group", "id": "CORESO"},
  {"description": "TSO 1 Group", "name": "TSO1 group", "id": "TSO1"},
  {"description": "TSO 2 Group", "name": "TSO2 group", "id": "TSO2"},
  {"description": "Transnationnal Group", "name": "TRANS group", "id": "TRANS"}
]
18.2.2. Create a new group (or entity)
Firstly, the group called OPERATORS
has to be
added to the system
using the following command:
using httpie:
echo '{"id":"OPERATORS","decription":"This is the brand new group of operator"}' | http POST http://localhost:2103/groups "Authorization:Bearer $token" "Content-Type:application/json"
using cURL:
curl -X POST http://localhost:2103/groups -H "Authorization:Bearer $token" -H "Content-Type:application/json" --data '{"id":"OPERATORS","decription":"This is the brand new group of operator"}'
response
HTTP/1.1 200 OK { "description": null, "name": null, "id": "OPERATORS" }
18.2.3. Fetch details of a given group (or entity)
The result returned seems strange. To verify whether it’s the correct answer,
display the details of the group
called OPERATORS
using the following command:
using httpie:
http http://localhost:2103/groups/OPERATORS "Authorization:Bearer $token" "Content-Type:application/json"
using cURL:
curl http://localhost:2103/groups/OPERATORS -H "Authorization:Bearer $token" -H "Content-Type:application/json"
response
HTTP/1.1 200 OK { "description": null, "name": null, "id": "OPERATORS" }
18.2.4. Update details of a group (or entity)
The description really is null. After verification, the attribute for the description was misspelled in the command used to create the group. Use the following command to update the group with the correct spelling, so the new group of operators gets a proper description:
with httpie:
echo '{"id":"OPERATORS","description":"This is the brand-new group of operator"}' | http -b PUT http://localhost:2103/groups/OPERATORS "Authorization:Bearer $token" "Content-Type:application/json"
with cURL:
curl -X PUT http://localhost:2103/groups/OPERATORS -H "Authorization:Bearer $token" -H "Content-Type:application/json" --data '{"id":"OPERATORS","description":"This is the brand-new group of operator"}'
response
{ "description": "This is the brand-new group of operator", "name": null, "id": "OPERATORS" }
18.2.5. Add a user to a group (or entity)
As both the new group and the new user are correct, it’s time to make the user a member of the group. To achieve this, use the following command:
with httpie:
echo '["john-doe-operator"]' | http PATCH http://localhost:2103/groups/OPERATORS/users "Authorization:Bearer $token" "Content-Type:application/json"
with cURL:
curl -X PATCH http://localhost:2103/groups/OPERATORS/users -H "Authorization:Bearer $token" -H "Content-Type:application/json" --data '["john-doe-operator"]'
response
HTTP/1.1 200 OK
Let’s verify that the changes are correctly recorded by fetching the user details:
with httpie:
http http://localhost:2103/users/john-doe-operator "Authorization:Bearer $token" "Content-Type:application/json"
with cURL
curl http://localhost:2103/users/john-doe-operator -H "Authorization:Bearer $token" -H "Content-Type:application/json"
response
HTTP/1.1 200 OK { "firstName": "John", "groups": ["OPERATORS"], "entities": [], "lastName": "Doe", "login": "john-doe-operator" }
It’s now possible to send cards either specifically to john-doe-operator
or more generally to the OPERATORS
group.
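As a sketch, a card addressed to the whole OPERATORS group could look like the payload below, to be POSTed to the cards-publication service (port 2102) with the same Authorization:Bearer header as the other calls. The field names and values shown here are illustrative assumptions; see the cards-publication API documentation for the authoritative card schema.

```json
{
  "publisher": "test-publisher",
  "publisherVersion": "1",
  "process": "hello-process",
  "processId": "hello-process-001",
  "state": "initial",
  "severity": "INFORMATION",
  "startDate": 1553186770681,
  "title": {"key": "card.title"},
  "summary": {"key": "card.summary"},
  "recipient": {"type": "GROUP", "identity": "OPERATORS"}
}
```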
18.2.6. Remove a user from a group (or entity)
When John Doe is no longer in charge of supervising cards for the OPERATORS group, this group has to be removed from his profile by using the following command:
with httpie:
http DELETE http://localhost:2103/groups/OPERATORS/users/john-doe-operator "Authorization:Bearer $token"
with cURL:
curl -X DELETE -H "Authorization:Bearer $token" http://localhost:2103/groups/OPERATORS/users/john-doe-operator
response
HTTP/1.1 200 OK { "login":"john-doe-operator", "firstName":"John", "lastName":"Doe", "groups":[], "entities":[] }
One last command to verify that OPERATORS is no longer linked to john-doe-operator:
with httpie:
http http://localhost:2103/users/john-doe-operator "Authorization:Bearer $token" "Content-Type:application/json"
with cURL:
curl http://localhost:2103/users/john-doe-operator -H "Authorization:Bearer $token" -H "Content-Type:application/json"
response
HTTP/1.1 200 OK { "firstName": "John", "groups": [], "entities": [], "lastName": "Doe", "login": "john-doe-operator" }
19. Service port table
By default, all service built artifacts are configured with server.port set to 8080.
If you run the services using the bootRun Gradle task, the run_all.sh script or the full docker-compose file (found under config/docker), the ports used are:
Port | Service | Forwards to | Description |
---|---|---|---|
89 | KeyCloak | 89 | KeyCloak api port |
2002 | web-ui | 8080 | Web ui and gateway (Nginx server) |
2100 | thirds | 8080 | Third party management service http (REST) |
2102 | cards-publication | 8080 | Cards publication service http (REST) |
2103 | users | 8080 | Users management service http (REST) |
2104 | cards-consultation | 8080 | Cards consultation service http (REST) |
4100 | thirds | 5005 | java debug port |
4102 | cards-publication | 5005 | java debug port |
4103 | users | 5005 | java debug port |
4104 | cards-consultation | 5005 | java debug port |
27017 | mongo | 27017 | mongo api port |
5672 | rabbitmq | 5672 | amqp api port |
15672 | rabbitmq | 15672 | rabbitmq api port |
20. Restricted operations (administration)
Some operations are restricted to users with the ADMIN role, either because they are administration operations with the potential to impact the OperatorFabric instance as a whole, or because they give access to information that should be private to a user.
Below is a quick recap of these restricted operations.
Any action (read, create/update or delete) regarding a single user’s data (their personal info such as their first and last name, as well as their settings) can be performed either by the user in question or by a user with the ADMIN role.
Any action on a list of users or on the groups (or entities) (if authorization is managed in OperatorFabric) can only be performed by a user with the ADMIN role.
Any write (create, update or delete) action on bundles can only be performed by a user with the ADMIN role. As such, administrators are responsible for the quality and security of the provided bundles. In particular, as it is possible to use scripts in templates, they should perform a security check to make sure that there is no XSS risk.
The ADMIN role doesn’t grant any special privileges when it comes to card consultation (be they current or archived), so a user with the ADMIN role will only see cards that have been addressed to them (or to one of their groups (or entities)), just like any other user. |
Development environment
21. Requirements
This section describes the projects requirements regardless of installation options. Please see Setting up your environment below for details on:
-
setting up a development environment with these prerequisites
-
building and running OperatorFabric
21.1. Tools and libraries
-
Gradle 6
-
Java 8.0
-
Maven 3.5.3
-
Docker
-
Docker Compose with 2.1+ file format support
-
Chrome (needed for UI tests in build)
the current Jdk used for the project is Java 8.0.252-zulu. |
Once you have installed sdkman and nvm, you can source the following script to set up your development environment (appropriate versions of Gradle, Java, Maven and project variables set):
source bin/load_environment_light.sh
21.2. Software
-
RabbitMQ 3.7.6 +: AMQP messaging layer allows inter service communication
-
MongoDB 4.0 +: Card persistent storage
RabbitMQ is required for :
-
Card AMQP push
-
Multiple service sync
MongoDB is required for :
-
Current Card storage
-
Archived Card storage
-
User Storage
Installing MongoDB and RabbitMQ is not necessary as preconfigured MongoDB and RabbitMQ are available in the form of docker-compose configuration files at src/main/docker |
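As an illustration, a minimal docker-compose file for these two services might look like the sketch below; the image tags and credentials shown are assumptions chosen to match the run_all.sh prerequisites (mongo root/password, rabbitmq default ports), and the project ships ready-made files under src/main/docker.

```yaml
version: "2.1"
services:
  rabbitmq:
    image: rabbitmq:3-management   # AMQP broker with management UI
    ports:
      - "5672:5672"    # amqp api port
      - "15672:15672"  # rabbitmq management port
  mongodb:
    image: mongo:4.0   # card persistent storage
    ports:
      - "27017:27017"
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: password
```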
22. Setting up your development environment
The steps below assume that you have installed and are using sdkman and nvm to manage tool versions ( for java, gradle, node and npm). |
There are several ways to get started with OperatorFabric. Please look into the section that best fits your needs.
If you encounter any issue, see Troubleshooting below. In particular, a command that hangs then fails is often a proxy issue. |
The following steps describe how to launch MongoDB, RabbitMQ and Keycloak
using Docker, build OperatorFabric using gradle and run it using the
run_all.sh
script.
22.1. Clone repository
git clone https://github.com/opfab/operatorfabric-core.git
cd operatorfabric-core
22.2. Set up your environment (environment variables & appropriate versions of gradle, maven, etc…)
source bin/load_environment_light.sh
From now on, you can use the environment variable ${OF_HOME} to go back to the home repository of OperatorFabric. |
22.3. Deploy needed docker containers
22.3.1. Minimal configuration for gradle build
Two docker containers must be available during a gradle build of OperatorFabric:
* RabbitMQ;
* MongoDB.
They can be launched using ${OF_HOME}/src/main/docker/test-environment/docker-compose.yml.
Keep in mind that, during a gradle build, the test task is run before the assemble task, and the unit tests depend on these two pieces of software.
22.3.2. Enabling local quality report generation
To get a SonarQube report, in addition to the two previously listed docker containers, a SonarQube docker container is required. Use ${OF_HOME}/src/main/docker/test-quality-environment/docker-compose.yml to get them all running.
To generate the quality report use the following command:
cd ${OF_HOME}
./gradlew jacocoTestReport
To export the different reports into the SonarQube docker instance you need to install and use SonarScanner.
22.3.3. Development environment
During OperatorFabric development, the running docker images of MongoDB, RabbitMQ, web-ui and Keycloak are needed.
The docker-compose can be run in detached mode:
cd ${OF_HOME}/config/dev
docker-compose up -d
The configuration of the web-ui embeds a grayscale favicon which can be useful to spot the OperatorFabric dev tab in the browser. Sometimes a CTRL+F5 on the tab is required to refresh the favicon.
22.4. Build OperatorFabric with Gradle
Use the Gradle wrapper to ensure the project is built the same way from one machine to another.
To only compile and package the jars:
cd ${OF_HOME}
./gradlew assemble
To launch the Unit Test, compile and package the jars:
cd ${OF_HOME}
docker-compose -f ${OF_HOME}/src/main/docker/test-environment/docker-compose.yml up -d
./gradlew build
22.5. Run OperatorFabric Services using the run_all.sh script
cd ${OF_HOME}
docker-compose -f ${OF_HOME}/config/dev/docker-compose.yml up -d
bin/run_all.sh start
See bin/run_all.sh -h for details.
|
22.7. Log into the UI
URL: localhost:2002/ui/
login: tso1-operator
password: test
The other users available in development mode are rte-operator and admin, both with test as password.
It might take a little while for the UI to load even after all services are running. |
Don’t forget the final slash in the URL, otherwise you will get a 404 error page. |
23. User Interface
The Angular CLI version 6.0.8 has been used to generate this project.
In the following document the variable declared as OF_HOME is the root folder of the operatorfabric-core project .
|
CLI |
stands for Command Line Interface |
SPA |
stands for Single Page Application |
23.1. Run
23.1.1. Front End development
OperatorFabric uses 4 external services to run properly:
-
an event queue: RabbitMQ;
-
a NoSQL database: MongoDB;
-
an authentication provider: Keycloak;
-
a web server: Nginx.
Those instances are available as docker images in the project. Use docker-compose with ${OF_HOME}/config/dev/docker-compose.yml to run them. After launching the docker containers, run $OF_HOME/bin/run_all.sh start to start the application. Once the whole application is ready, you should have the following output in your terminal:
##########################################################
Starting users-business-service, debug port: 5009
##########################################################
pid file: $OF_HOME/services/core/users/build/PIDFILE
Started with pid: 7483
##########################################################
Starting cards-consultation-business-service, debug port: 5011
##########################################################
pid file: $OF_HOME/services/core/cards-consultation/build/PIDFILE
Started with pid: 7493
##########################################################
Starting cards-publication-business-service, debug port: 5012
##########################################################
pid file: $OF_HOME/services/core/cards-publication/build/PIDFILE
Wait a moment before trying to connect to the SPA, leaving time for the OperatorFabric services to boot up completely.
The SPA, on a local machine, is available at the following URL: localhost:2002/ui/.
To log in you need to use a valid user among the following: tso1-operator, rte-operator or admin. The common password for them all is test.
To test the reception of cards, you can use the following script to create dummy cards:
${OF_HOME}/services/core/cards-publication/src/main/bin/push_cards_loop.sh
For more realistic card sending, once the Karate environment is correctly configured, use the Karate script ${OF_HOME}/src/test/api/karate/launchAllCards.sh.
Once logged in, after one of those scripts has been run, you should be able to see some cards displayed in localhost:2002/ui/feed.
23.2. Build
Run ng build to build the project. The build artifacts will be stored in:
${OF_HOME}/ui/main/build/distribution
23.3. Test
23.3.2. Test during UI development
-
if the RabbitMQ, MongoDB and Keycloak docker containers are not running, launch them;
-
set your environment variables with source ${OF_HOME}/bin/load_environment_light.sh;
-
run the micro services using the same command as earlier: ${OF_HOME}/bin/run_all.sh start;
-
if needed, enable a card-operation test flow using the script ${OF_HOME}/services/core/cards-publication/src/main/bin/push_cards_loop.sh;
-
launch an angular server with the command: ng serve;
-
test your changes in your browser using this url: localhost:4200, which leads to localhost:4200/#/feed.
24. Environment variables
These variables are loaded by bin/load_environment_light.sh or bin/load_environment_ramdisk.sh:
-
OF_HOME: OperatorFabric root dir
-
OF_CORE: OperatorFabric business services subroot dir
-
OF_INFRA: OperatorFabric infrastructure services subroot dir
-
OF_CLIENT: OperatorFabric client data definition subroot dir
-
OF_TOOLS: OperatorFabric tooling libraries subroot dir
Additionally, you may want to configure the following variables
-
Docker build proxy configuration (used to configure alpine apk proxy settings)
-
APK_PROXY_URI
-
APK_PROXY_HTTPS_URI
-
APK_PROXY_USER
-
APK_PROXY_PASSWORD
25. Project Structure
25.1. Tree View
project
├──bin
├──CICD
│ └─ travis
├──client
│ ├──cards (cards-client-data)
│ ├──src
│ └──users (users-client-data)
├──config
│ ├──dev
│ ├──docker
│ └──keycloak
├──services
│ ├──core
│ │ ├──cards-consultation (cards-consultation-business-service)
│ │ ├──cards-publication (cards-publication-business-service)
│ │ ├──src
│ │ ├──thirds (third-party-business-service)
│ │ └──users (users-business-service)
├──web-ui
├──src
| ├──docs
| │ └──asciidoc
| |──main
| | ├──docker
| | └──headers
| |──test
| | ├──api
| | ├──cypress
| | └──utils
├──tools
│ ├──generic
│ │ ├──test-utilities
│ │ └──utilities
│ ├── spring
│ │ ├──spring-mongo-utilities
│ │ ├──spring-oauth2-utilities
│ │ ├──spring-test-utilities
│ │ └──spring-utilities
│ └──swagger-spring-generators
└─ui
25.2. Content Details
- bin: contains useful scripts for dev purposes
- CICD
  - travis: scripts used by Travis for the build process
- client: contains REST APIs simple beans definition, may be used by external projects
  - cards (cards-client-data): simple beans regarding cards
  - users (users-client-data): simple beans regarding users
- config: contains external configurations for all services, keycloak and docker-compose files to help with tests and demonstrations
- services: contains the microservices that make up OperatorFabric
  - core: contains core business microservices
    - cards-consultation (cards-consultation-business-service): Card consultation service
    - cards-publication (cards-publication-business-service): Card publication service
    - src: contains swagger templates for core business microservices
    - thirds (third-party-business-service): Third-party information management service
    - users (users-business-service): Users management service
- web-ui: project based on Nginx server to serve the OperatorFabric UI
- tools: OperatorFabric tooling libraries
  - generic: Generic (as opposed to Spring-related) utility code
    - test-utilities: Test-specific utility code
    - utilities: Utility code
  - spring: Spring-related utility code
    - spring-mongo-utilities: Utility code with Spring-specific dependencies, used to share common features across MongoDB-dependent services
    - spring-oauth2-utilities: Utility code with Spring-specific dependencies, used to share common features across OAuth2-dependent services
    - spring-test-utilities: Utility code with Spring-specific dependencies for testing purposes
    - spring-utilities: Utility code with Spring-specific dependencies
  - swagger-spring-generators: Spring Boot generator for swagger, tailored for OperatorFabric needs
- ui: Angular sources for the UI
25.3. Conventions regarding project structure and configuration
Sub-projects must conform to a few rules in order for the configured Gradle tasks to work:
25.3.1. Java
[sub-project]/src/main/java | contains java source code
[sub-project]/src/test/java | contains java tests source code
[sub-project]/src/main/resources | contains resource files
[sub-project]/src/test/resources | contains test resource files
25.3.2. Modeling
Core services projects declaring REST APIs that use Swagger for their definition must declare two files:
[sub-project]/src/main/modeling/swagger.yaml | Swagger API definition
[sub-project]/src/main/modeling/config.json | Swagger generator configuration
25.3.3. Docker
Services projects all have a docker image generated in their build cycle. See Gradle Tasks for details.
Per project configuration :
-
docker file : [sub-project]/src/main/docker/Dockerfile
-
docker-compose file : [sub-project]/src/main/docker/docker-compose.yml
-
runtime data : [sub-project]/src/main/docker/volume is copied to [sub-project]/build/docker-volume/ by the copyWorkingDir task. The latter can then be mounted as a volume in docker containers.
26. Development tools
26.1. Scripts (bin and CICD)
bin/load_environment_light.sh | sets up environment when sourced (java version, gradle version, maven version, node version)
bin/load_environment_ramdisk.sh | sets up environment and links build subdirectories to a ramdisk (mounted at ~/tmp) when sourced
bin/run_all.sh | runs all services (see below)
bin/setup_dockerized_environment.sh | generates docker images for all services
26.1.1. load_environment_ramdisk.sh
There are prerequisites before sourcing load_environment_ramdisk.sh:
-
Logged user needs sudo rights for mount
-
System needs to have enough free ram
Never ever run a gradle clean or ./gradlew clean to avoid deleting those links.
|
26.1.2. run_all.sh
Please see run_all.sh -h
usage before running.
Prerequisites
-
mongo running on port 27017 with user "root" and password "password" (See src/main/docker/mongodb/docker-compose.yml for a pre configured instance).
-
rabbitmq running on port 5672 with user "guest" and password "guest" (See src/main/docker/rabbitmq/docker-compose.yml for a pre configured instance).
Ports configuration

Port | Service | Description
---|---|---
2002 | web-ui | Web ui and gateway (Nginx server)
2100 | thirds | Third party management service http (REST)
2102 | cards-publication | Cards publication service http (REST)
2103 | users | Users management service http (REST)
2104 | cards-consultation | Cards consultation service http (REST)
4100 | thirds | java debug port
4102 | cards-publication | java debug port
4103 | users | java debug port
4104 | cards-consultation | java debug port
26.2. Gradle Tasks
In this section only custom tasks are described. For more information on tasks, refer to the output of the "tasks" gradle task and to gradle and plugins official documentation.
26.2.1. Services
26.2.1.1. Common tasks for all sub-projects
-
Test tasks
-
unitTest: runs unit tests
-
-
Other:
-
copyWorkingDir: copies [sub-project]/src/main/docker/volume to [sub-project]/build/docker-volume/
-
copyDependencies: copy dependencies to build/libs
-
26.2.1.2. Core
-
Swagger Generator tasks
-
debugSwaggerOperations: generate swagger code from /src/main/modeling/config.json to build/swagger-analyse
-
swaggerHelp: display help regarding swagger configuration options for java
-
26.2.1.3. Thirds Service
-
Test tasks
-
prepareTestDataDir: prepare directory (build/test-data) for test data
-
compressBundle1Data, compressBundle2Data: generate tar.gz third party configuration data for tests in build/test-data
-
prepareDevDataDir: prepare directory (build/dev-data) for bootRun task
-
createDevData: prepare data in build/test-data for running bootRun task during development
-
-
Other tasks
-
copyCompileClasspathDependencies: copy compile classpath dependencies, catching lombok that must be sent for sonarqube
-
27. Useful recipes
27.1. Running sub-project from jar file
-
gradle :[sub-projectPath]:bootJar
-
or java -jar [sub-projectPath]/build/libs/[sub-project].jar
27.2. Overriding properties when running from jar file
-
java -jar [sub-projectPath]/build/libs/[sub-project].jar --spring.config.additional-location=file:[filepath]
NB: properties may be set using a ".properties" file or a ".yml" file. See Spring Boot configuration for more info.
-
Generic property list extract :
-
server.port (defaults to 8080) : embedded server port
-
-
:services:core:third-party-service properties list extract :
-
operatorfabric.thirds.storage.path (defaults to "") : where to save/load OperatorFabric Third Party data
-
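For instance, a hypothetical external configuration file overriding the two properties above might contain (file name and values are illustrative):

```properties
# external.properties, passed via --spring.config.additional-location
server.port=2222
operatorfabric.thirds.storage.path=/tmp/thirds-storage
```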
27.3. Generating docker images
To Generate all docker images run bin/setup_dockerized_environment.sh
.
INFORMATION: If you work behind a proxy you need to specify the following properties to configure the alpine apk package manager:
-
apk.proxy.uri: proxy http uri, e.g. "http://somewhere:3128" (defaults to blank)
-
apk.proxy.httpsuri: proxy https uri, e.g. "http://somewhere:3128" (defaults to apk.proxy.uri value)
-
apk.proxy.user: proxy user
-
apk.proxy.password: proxy unescaped password
Alternatively, you may configure the following environment variables :
-
APK_PROXY_URI
-
APK_PROXY_HTTPS_URI
-
APK_PROXY_USER
-
APK_PROXY_PASSWORD
28. Troubleshooting
Proxy error when running third-party docker-compose

Pulling rabbitmq (rabbitmq:3-management)...
ERROR: Get https://registry-1.docker.io/v2/: Proxy Authentication Required

When running docker-compose files using third-party images (such as rabbitmq, mongodb, etc.) for the first time, docker needs to pull these images from their repositories. If the docker proxy isn’t set properly, you will see the above message. To set the proxy, follow these steps from the docker documentation. If your proxy needs authentication, add your user and password as follows:

HTTP_PROXY=http://user:password@proxy.example.com:80/

The password should be URL-encoded.
Gradle Metaspace error

Gradle task (for example gradle build) fails with the following error:

* What went wrong:
Metaspace

This is an issue with the Gradle daemon. Stopping the daemon using

gradle --stop

and re-launching the build should solve this issue.
Java version not available when setting up environment

Stop! java 8.0.192-zulu is not available. Possible causes:
* 8.0.192-zulu is an invalid version
* java binaries are incompatible with Linux64
* java has not been released yet

The java version currently listed in the script might have been deprecated (for security reasons) or might not be available for your operating system (for example, 8.0.192-zulu wasn’t available for Ubuntu). Run

sdk list java

to find out which versions are available. You will get this kind of output:

================================================================================
Available Java Versions
================================================================================
13.ea.16-open 9.0.4-open 1.0.0-rc-11-grl
12.0.0-zulu 8.0.202-zulu 1.0.0-rc-10-grl
12.0.0-open 8.0.202-amzn 1.0.0-rc-9-grl
12.0.0-librca 8.0.202.j9-adpt 1.0.0-rc-8-grl
11.0.2-zulu 8.0.202.hs-adpt
11.0.2-open 8.0.202-zulufx
11.0.2-amzn 8.0.202-librca
11.0.2.j9-adpt 8.0.201-oracle
11.0.2.hs-adpt > + 8.0.192-zulu
11.0.2-zulufx 7.0.211-zulu
11.0.2-librca 6.0.119-zulu
11.0.2-sapmchn 1.0.0-rc-15-grl
10.0.2-zulu 1.0.0-rc-14-grl
10.0.2-open 1.0.0-rc-13-grl
9.0.7-zulu 1.0.0-rc-12-grl
================================================================================
+ - local version
* - installed
> - currently in use
================================================================================

Select the next available version and update load_environment_light accordingly before sourcing it again.
npmInstall task failure

A build fails with the message:

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':ui:main-user-interface:npmInstall'.

This happens when sudo has been used before running ./gradlew assemble.

Don’t use sudo to build OperatorFabric, otherwise unexpected problems could arise.
29. Keycloak Configuration
The configuration needed for development purposes is automatically loaded from the dev-realms.json file. However, the steps below describe how they can be reproduced from scratch on a blank Keycloak instance in case you want to add to it.
The Keycloak Management interface is available here: [host]:89/auth/admin Default credentials are admin/admin.
29.2. Set up at least one client (ideally one per service)
29.2.1. Create client
-
Click Clients in left menu
-
Click Create Button
-
Set client ID to "opfab-client" (or whatever)
-
Select Openid-Connect Protocol
-
Enable Authorization
-
Access Type to Confidential
-
save
29.2.2. Add a Role to Client
-
In client view, click Roles tab
-
Click Add button
-
create a USER role (or whatever)
-
save
29.2.3. Create a Mapper
Used to map the user name to a field that suits services
-
name it sub
-
set mapper type to User Property
-
set Property to username
-
set Token claim name to sub
-
enable add to access token
-
save
29.3. Create Users
-
Click Users in left menu
-
Click Add User button
-
Set username to admin
-
Save
-
Select Role Mappings tab
-
Select "opfab-client" in client roles combo (or whatever id you formerly chose)
-
Select USER as assigned role (or whatever role you formerly created)
-
Select Credentials tab
-
set password and confirmation to "test"
-
repeat the process for the other users: rte-operator, tso1-operator, tso2-operator
29.3.1. Development-specific configuration
To facilitate development, in the configuration file provided in the git repository (dev-realms.json), sessions are set to have a duration of 10 hours (36000 seconds) and SSL is not required. These parameters should not be used in production.
The following parameters are set:
-
accessTokenLifespan: 36000
-
ssoSessionMaxLifespan: 36000
-
accessCodeLifespan: 36000
-
accessCodeLifespanUserAction: 36000
-
sslRequired: none
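As a sketch, the corresponding entries in a Keycloak realm export such as dev-realms.json would look like the fragment below (only these keys are shown; a real export contains many more fields):

```json
{
  "accessTokenLifespan": 36000,
  "ssoSessionMaxLifespan": 36000,
  "accessCodeLifespan": 36000,
  "accessCodeLifespanUserAction": 36000,
  "sslRequired": "none"
}
```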
Using OAuth2 token with the CLI
30. Get a token
End point: localhost:2002/auth/token
Method: POST
Body arguments:
-
client_id: string, constant=clientIdPassword;
-
grant_type: string, constant=password;
-
username: string, any value, must match an OperatorFabric registered user name;
-
password: string, any value.
The following examples will be for admin
user.
30.1. Curl
command:
curl -s -X POST -d "username=admin&password=test&grant_type=password&client_id=clientIdPassword" http://localhost:2002/auth/token
example of expected result:
{"access_token":"eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiJhZG1pbiIsImV4cC I6MTU1MjY1OTczOCwiYXV0aG9yaXRpZXMiOlsiUk9MRV9BRE1JTiIsIlJPTEVfVVNFUiJdLCJqdGkiOi IwMmQ4MmU4NS0xM2YwLTQ2NzgtOTc0ZC0xOGViMDYyMTVhNjUiLCJjbGllbnRfaWQiOiJjbGllbnRJZF Bhc3N3b3JkIiwic2NvcGUiOlsicmVhZCIsInVzZXJfaW5mbyJdfQ.SDg-BEzzonIVXfVBnnfq0oMbs_0 rWVtFGAZzRHj7KPgaOXT3bUhQwPOgggZDO0lv2U1klwB94c8Cb6rErzd3yjJ8wcVcnFLO4KxrjYZZxdK VAz0CkMKqng4kQeQm_1UShsQXGLl48ezbjXyJn6mAl0oS4ExeiVsx_kYGEdqjyb5CiNaAzyx0J-J5jVD SJew1rj5EiSybuy83PZwhluhxq0D2zPK1OSzqiezsd5kX5V8XI4MipDhaAbPYroL94banZTn9RmmAKZC AYVM-mmHbjk8mF89fL9rKf9EUNhxOG6GE0MDqB3LLLcyQ6sYUmpqdP5Z94IkAN-FpC7k93_-RDw","to ken_type":"bearer","refresh_token":"eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWI iOiJhZG1pbiIsInNjb3BlIjpbInJlYWQiLCJ1c2VyX2luZm8iXSwiYXRpIjoiMDJkODJlODUtMTNmMC0 0Njc4LTk3NGQtMThlYjA2MjE1YTY1IiwiZXhwIjoxNTUyNzAxMTM4LCJhdXRob3JpdGllcyI6WyJST0x FX0FETUlOIiwiUk9MRV9VU0VSIl0sImp0aSI6IjMwOWY2ZDllLWNmOGEtNDg0YS05ZjMxLWViOTAxYzk 4YTFkYSIsImNsaWVudF9pZCI6ImNsaWVudElkUGFzc3dvcmQifQ.jnZDt6TX2BvlmdT5JV-A7eHTJz_s lC5fHrJFVI58ly6N7AUUfxebG_52pmuVHYULSKqTJXaLR866r-EnD4BJlzhk476FtgtVx1nazTpLFRLb 8qDCxeLrzClQBkzcxOt6VPxB3CD9QImx3bcsDwjkPxofUDmdg8AxZfGTu0PNbvO8TKLXEkeCztLFvSJM GlN9zDzWhKxr49I-zPZg0XecgE9j4WITkFoDVwI-AfDJ3sGXDi5AN55Sz1j633QoqVjhtc0lO50WPVk5 YT7gU8HLj27EfX-6vjnGfNb8oeq189-NX100QHZM9Wgm79mIm4sRgwhpv-zzdDAkeb3uwIpb8g","exp ires_in":1799,"scope":"read user_info","jti":"02d82e85-13f0-4678-974d-18eb06215a65"}
30.2. Httpie
http --form POST http://localhost:2002/auth/token username=admin password=test grant_type=password client_id=clientIdPassword
example of expected result:
.HTTP/1.1 200 OK Cache-Control: no-store Content-Type: application/json;charset=utf-8 Date: Fri, 15 Mar 2019 13:57:19 GMT Pragma: no-cache X-Content-Type-Options: nosniff X-Frame-Options: DENY X-XSS-Protection: 1; mode=block transfer-encoding: chunked { "access_token": "eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiJhZG1pbiIsImV4cCI6MTU1MjY2MDAzOS wiYXV0aG9yaXRpZXMiOlsiUk9MRV9BRE1JTiIsIlJPTEVfVVNFUiJdLCJqdGkiOiI2MjQzMDliMS03Yz g3LTRjZGMtODQ0My0wMTI0NTE1Zjg3ZjgiLCJjbGllbnRfaWQiOiJjbGllbnRJZFBhc3N3b3JkIiwic2 NvcGUiOlsicmVhZCIsInVzZXJfaW5mbyJdfQ.VO4OZL7ycqNez0cHzM5WPuklr0r6SAOkUdUV2qFa5Bd 3PWx3DFHAHUxkfSX0-R4OO6iG2Zu7abzToAZNVLwk107LH_lWXOMQBriGx3d2aSgCf1yx_wI3lHDd8ST 8fxV7uNeolzywYavSpMGfgz9GXLzmnyeuPH4oy7eyPk9BwWVi0d7a_0d-EfhE1T8eaiDfymzzNXJ4Bge 8scPy-93HmWpqORtJaFq1qy4QgU28N2LgHFEEEWCSzfhYXH-LngTCP3-JSNcox1hI51XBWEqoeApKdfD J6o4szR71SIFCBERxCH9TyUxsFywWL3e-YnXMiP2J08eB8O4YwhYQEFqB8Q", "expires_in": 1799, "jti": "624309b1-7c87-4cdc-8443-0124515f87f8", "refresh_token": "eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiJhZG1pbiIsInNjb3BlIjpbInJlYWQiLC J1c2VyX2luZm8iXSwiYXRpIjoiNjI0MzA5YjEtN2M4Ny00Y2RjLTg0NDMtMDEyNDUxNWY4N2Y4IiwiZX hwIjoxNTUyNzAxNDM5LCJhdXRob3JpdGllcyI6WyJST0xFX0FETUlOIiwiUk9MRV9VU0VSIl0sImp0aS I6ImRiYzMxNTJiLTM4YTUtNGFmZC1hY2VmLWVkZTI4MjJkOTE3YyIsImNsaWVudF9pZCI6ImNsaWVudE lkUGFzc3dvcmQifQ.Ezd8kbfNQHOOvUCNNN4UmOOkncHiT9QVEM63FiW1rq0uXDa3xfBGil8geM5MsP0 7Q2He-mynkFb8sGNDrAXTdO-8r5o4a60zWrktrMg2QH4icC1lyeZpiwZxe6675QpLpSeMlXt9PdYj-pb 14lrRookxXP5xMQuIMteZpbtby7LuuNAbNrjveZ1bZ4WMi7zltUzcYUuqHlP1AYPteGRrJVKXiuPpoDv gwMsEk2SkgyyACI7SdZZs8IT9IGgSsIjjgTMQKzj8P6yYxNLUynEW4o5y1s2aAOV0xKrzkln9PchH9zN qO-fkjTVRjy_LBXGq9zkn0ZeQ3BUe1GuthvGjaA", "scope": "read user_info", "token_type": "bearer" }
31. Extract token
From the previous results, the data needed to authenticate with OperatorFabric services is the content of the "access_token" attribute of the response body.
Once this value is extracted, it needs to be passed at the end of an http header of type Authorization:Bearer. Note that a space is needed between Bearer and the actual token value.
example from previous results:
31.1. Curl
Authorization:Bearer eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiJhZG1pbiIsImV4cCI6MTU1MjY1OTczOCw iYXV0aG9yaXRpZXMiOlsiUk9MRV9BRE1JTiIsIlJPTEVfVVNFUiJdLCJqdGkiOiIwMmQ4MmU4NS0xM2Y wLTQ2NzgtOTc0ZC0xOGViMDYyMTVhNjUiLCJjbGllbnRfaWQiOiJjbGllbnRJZFBhc3N3b3JkIiwic2N vcGUiOlsicmVhZCIsInVzZXJfaW5mbyJdfQ.SDg-BEzzonIVXfVBnnfq0oMbs_0rWVtFGAZzRHj7KPga OXT3bUhQwPOgggZDO0lv2U1klwB94c8Cb6rErzd3yjJ8wcVcnFLO4KxrjYZZxdKVAz0CkMKqng4kQeQm _1UShsQXGLl48ezbjXyJn6mAl0oS4ExeiVsx_kYGEdqjyb5CiNaAzyx0J-J5jVDSJew1rj5EiSybuy83 PZwhluhxq0D2zPK1OSzqiezsd5kX5V8XI4MipDhaAbPYroL94banZTn9RmmAKZCAYVM-mmHbjk8mF89f L9rKf9EUNhxOG6GE0MDqB3LLLcyQ6sYUmpqdP5Z94IkAN-FpC7k93_-RDw
31.2. Httpie
Authorization:Bearer eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiJhZG1pbiIsImV4cCI6MTU1MjY2MDAzOSw iYXV0aG9yaXRpZXMiOlsiUk9MRV9BRE1JTiIsIlJPTEVfVVNFUiJdLCJqdGkiOiI2MjQzMDliMS03Yzg 3LTRjZGMtODQ0My0wMTI0NTE1Zjg3ZjgiLCJjbGllbnRfaWQiOiJjbGllbnRJZFBhc3N3b3JkIiwic2N vcGUiOlsicmVhZCIsInVzZXJfaW5mbyJdfQ.VO4OZL7ycqNez0cHzM5WPuklr0r6SAOkUdUV2qFa5Bd3 PWx3DFHAHUxkfSX0-R4OO6iG2Zu7abzToAZNVLwk107LH_lWXOMQBriGx3d2aSgCf1yx_wI3lHDd8ST8 fxV7uNeolzywYavSpMGfgz9GXLzmnyeuPH4oy7eyPk9BwWVi0d7a_0d-EfhE1T8eaiDfymzzNXJ4Bge8 scPy-93HmWpqORtJaFq1qy4QgU28N2LgHFEEEWCSzfhYXH-LngTCP3-JSNcox1hI51XBWEqoeApKdfDJ 6o4szR71SIFCBERxCH9TyUxsFywWL3e-YnXMiP2J08eB8O4YwhYQEFqB8Q
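When scripting these calls, it is convenient to capture the access_token into a shell variable and build the header from it. As a rough sketch, assuming the JSON response of /auth/token has been saved to a file, a plain sed expression can extract the field without requiring jq (the file name and token value below are illustrative):

```shell
# Illustrative response saved from POST /auth/token (token value is fake)
cat > response.json <<'EOF'
{"access_token":"eyJhbGciOi.fakepayload.fakesig","token_type":"bearer","expires_in":1799}
EOF

# Pull the access_token value out of the JSON with sed
token=$(sed -n 's/.*"access_token":"\([^"]*\)".*/\1/p' response.json)

# Build the header exactly as above: "Bearer", a space, then the token
auth_header="Authorization:Bearer $token"
echo "$auth_header"
```

The same variable can then be reused across the curl and httpie calls shown in the previous sections.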
32. Check a token
32.1. Curl
from previous example
curl -s -X POST -d "token=eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiJhZG1pbiIsImV4cCI6MTU1MjY1 OTczOCwiYXV0aG9yaXRpZXMiOlsiUk9MRV9BRE1JTiIsIlJPTEVfVVNFUiJdLCJqdGkiOiIwMmQ4MmU4 NS0xM2YwLTQ2NzgtOTc0ZC0xOGViMDYyMTVhNjUiLCJjbGllbnRfaWQiOiJjbGllbnRJZFBhc3N3b3Jk Iiwic2NvcGUiOlsicmVhZCIsInVzZXJfaW5mbyJdfQ.SDg-BEzzonIVXfVBnnfq0oMbs_0rWVtFGAZzR Hj7KPgaOXT3bUhQwPOgggZDO0lv2U1klwB94c8Cb6rErzd3yjJ8wcVcnFLO4KxrjYZZxdKVAz0CkMKqn g4kQeQm_1UShsQXGLl48ezbjXyJn6mAl0oS4ExeiVsx_kYGEdqjyb5CiNaAzyx0J-J5jVDSJew1rj5Ei Sybuy83PZwhluhxq0D2zPK1OSzqiezsd5kX5V8XI4MipDhaAbPYroL94banZTn9RmmAKZCAYVM-mmHbj k8mF89fL9rKf9EUNhxOG6GE0MDqB3LLLcyQ6sYUmpqdP5Z94IkAN-FpC7k93_-RDw" http://localhost:2002/auth/check_token
which gives the following example of result:
{
  "sub": "admin",
  "scope": ["read", "user_info"],
  "active": true,
  "exp": 1552659738,
  "authorities": ["ROLE_ADMIN", "ROLE_USER"],
  "jti": "02d82e85-13f0-4678-974d-18eb06215a65",
  "client_id": "clientIdPassword"
}
32.2. Httpie
Using the token from the previous example:
http --form POST http://localhost:2002/auth/check_token token=eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiJhZG1pbiIsImV4cCI6MTU1MjY2MDAzOSwiYXV0aG9yaXRpZXMiOlsiUk9MRV9BRE1JTiIsIlJPTEVfVVNFUiJdLCJqdGkiOiI2MjQzMDliMS03Yzg3LTRjZGMtODQ0My0wMTI0NTE1Zjg3ZjgiLCJjbGllbnRfaWQiOiJjbGllbnRJZFBhc3N3b3JkIiwic2NvcGUiOlsicmVhZCIsInVzZXJfaW5mbyJdfQ.VO4OZL7ycqNez0cHzM5WPuklr0r6SAOkUdUV2qFa5Bd3PWx3DFHAHUxkfSX0-R4OO6iG2Zu7abzToAZNVLwk107LH_lWXOMQBriGx3d2aSgCf1yx_wI3lHDd8ST8fxV7uNeolzywYavSpMGfgz9GXLzmnyeuPH4oy7eyPk9BwWVi0d7a_0d-EfhE1T8eaiDfymzzNXJ4Bge8scPy-93HmWpqORtJaFq1qy4QgU28N2LgHFEEEWCSzfhYXH-LngTCP3-JSNcox1hI51XBWEqoeApKdfDJ6o4szR71SIFCBERxCH9TyUxsFywWL3e-YnXMiP2J08eB8O4YwhYQEFqB8Q
which gives the following example of result:
HTTP/1.1 200 OK
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
Content-Type: application/json;charset=utf-8
Date: Fri, 15 Mar 2019 14:19:31 GMT
Expires: 0
Pragma: no-cache
X-Content-Type-Options: nosniff
X-Frame-Options: DENY
X-XSS-Protection: 1; mode=block
transfer-encoding: chunked

{
  "active": true,
  "authorities": ["ROLE_ADMIN", "ROLE_USER"],
  "client_id": "clientIdPassword",
  "exp": 1552660039,
  "jti": "624309b1-7c87-4cdc-8443-0124515f87f8",
  "scope": ["read", "user_info"],
  "sub": "admin"
}
33. Extract token
The jq utility, which is not always available on every Linux distribution, parses JSON input and can extract the value at a requested JSON path. Here is a way to do so.
curl -d "username=admin&password=test&grant_type=password&client_id=opfab-client&secret=opfab-keycloack-secret" "http://localhost:2002/auth/token" | jq -r .access_token
The -r option (for raw) leaves the output without any quotes.
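When jq is not available, a rough sed-based fallback can extract the token. This is only a sketch under stated assumptions: the response fits on one line and contains a field named exactly access_token; the sample response below is made up for illustration (a real one comes from the /auth/token endpoint shown above), and it is not a real JSON parser.

```shell
#!/bin/sh
# Made-up sample response body standing in for the /auth/token output.
response='{"access_token":"eyJhbGciOi.example.token","token_type":"Bearer","expires_in":36000}'

# Extract the value of "access_token" with sed. Like jq -r, the output
# carries no quotes. Assumes the JSON is on a single line.
token=$(printf '%s' "$response" | sed -n 's/.*"access_token" *: *"\([^"]*\)".*/\1/p')
echo "$token"
```

For anything beyond a quick shell one-liner, installing jq (or using a proper JSON parser) remains the safer choice.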
OperatorFabric Community
The aim of this document is to present the OperatorFabric community, its code of conduct and to welcome contributors!
First of all, thank you for your interest !
We can’t stress enough that feedback, discussions, questions and contributions on OperatorFabric are very much appreciated. However, because the project is still in its early stages, we’re not fully equipped for any of it yet, so please bear with us while the contribution process and tooling are sorted out.
This project and everyone participating in it is governed by the OperatorFabric Code of Conduct . By participating, you are expected to uphold this code. Please report unacceptable behavior to opfab_AT_lists.lfenergy.org.
34. License and Developer Certificate of Origin
OperatorFabric is an open source project licensed under the Mozilla Public License 2.0. By contributing to OperatorFabric, you accept and agree to the terms and conditions for your present and future contributions submitted to OperatorFabric.
The project also uses a mechanism known as a Developer Certificate of Origin (DCO) to ensure that we are legally allowed to distribute all the code and assets for the project. A DCO is a legally binding statement that asserts that you are the author of your contribution, and that you wish to allow OperatorFabric to use your work.
Contributors sign off that they adhere to these requirements by adding a Signed-off-by line to commit messages. All commits to any repository of the OperatorFabric organization have to be signed off like this:
This is my commit message.

Signed-off-by: John Doe <john.doe@email-provider.com>
You can write it manually but Git has a -s command line option to append it automatically to your commit message:
$ git commit -s -m 'This is my commit message'
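The trailer is built from your Git identity (user.name and user.email), so make sure those are set. A quick way to see the result is a throwaway repository; the identity and file name below are placeholders:

```shell
#!/bin/sh
# Create a throwaway repository to demonstrate the -s option.
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
git config user.name "John Doe"
git config user.email "john.doe@email-provider.com"

echo "hello" > file.txt
git add file.txt
git commit -q -s -m 'This is my commit message'

# The commit message now ends with the Signed-off-by trailer
# built from the identity configured above.
git log -1 --format=%B
```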
Note that in the future a check will be performed during the integration, making sure all commits in the Pull Request contain a valid Signed-off-by line.
These processes and templates have been adapted from the ones defined by the PowSyBl project.
36. Contributing Code or Documentation
36.1. Contribution Workflow
The project started out using a Feature Branch workflow, but as the project team grew and we needed to manage support to previous releases we decided to switch to a workflow based on the Git Flow workflow, starting after version 1.3.0.RELEASE.
The principles for this workflow were first described in the blog post linked above, and this document attempts to summarize how we adapted it to our project. Statements in quotes are from the original blog post.
In this document, "repository version" refers to the version defined in the VERSION file at the root of the project, which is a parameter for certain build tasks and for our CICD pipeline.
36.1.1. Principles
36.1.1.1. develop branch
The role of the develop branch is quite similar to that of the master branch in our previous "Feature Branch" workflow.
The develop branch is where feature branches are branched off from, and where they're merged back to. This way, the HEAD of the develop branch "always reflects a state with the latest delivered development changes for the next release".
The repository version on the develop branch should always be SNAPSHOT.
The daily CRON Travis job generating the documentation and docker images for the SNAPSHOT version is run from this branch (see our CICD documentation for details).
36.1.1.2. master branch
"When the source code in the develop branch reaches a stable point and is ready to be released, all of the changes should be merged back into master somehow and then tagged with a release number."
This means that any commit on master is a production-ready release, be it a patch, a minor or a major version.
Any commit on master triggers a Travis build generating and pushing documentation and docker images for the corresponding release version. If the ci_latest keyword is used in the commit message, the docker images are also tagged with latest.
36.1.1.3. Feature branches
Feature branches are used to develop new features or fix bugs for the next release. The version number for this next release is usually not known during the developments as it is its final contents that will determine whether it’s a major or minor version (or even a simple patch). By contrast, hotfix branches fix bugs in existing releases and give rise to new patches.
The lifecycle of feature branches is as follows:
-
A new feature branch is branched off develop before starting work on a feature or bug.
-
Once the developments are deemed ready by the developer(s) working on the feature, a pull request should be created for this branch.
-
New pull requests are discussed during daily meetings to assign someone from the Reviewers group to the issue.
-
The pull request author and reviewer(s) then make use of the GitHub pull requests features (comments, proposed changes, etc.) to make changes to the PR until it meets the project requirements.
-
Once it is ready to be included in the next version, the pull request is merged back into develop.
Naming convention: Feature branches names should always start with the reference of the JIRA issue they’re addressing, optionally followed by additional information if several branches are associated with a given JIRA issue.
Valid examples:
-
OC-123
-
OC-123_Documentation
-
OC-123_1

Invalid examples:
-
123
-
OC123
-
SomeTextDescribingTheBranch
Commit messages should also start with the JIRA issue ID between brackets: [OC-123] My commit message
This allows the branch, PR and commits to be directly accessible from the JIRA issue.
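If you want to enforce this commit message convention locally, a commit-msg hook can reject non-conforming messages. The hook below is hypothetical (the project does not ship it); it only illustrates the check:

```shell
#!/bin/sh
# Hypothetical commit-msg hook body. In a real hook, $1 is the path to
# the file holding the commit message being created.
check_message() {
    grep -qE '^\[OC-[0-9]+\]' "$1" || {
        echo "Commit message must start with a JIRA reference, e.g. [OC-123]" >&2
        return 1
    }
}

# Quick self-test with two sample messages:
good=$(mktemp); bad=$(mktemp)
echo "[OC-123] My commit message" > "$good"
echo "My commit message" > "$bad"
check_message "$good" && echo "accepted"
check_message "$bad" 2>/dev/null || echo "rejected"
```

To actually install such a hook, the script (with a shebang) would go into .git/hooks/commit-msg and be made executable.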
36.1.1.4. Release branches
Once developments are in a release-ready state and have been tested on the develop branch, a release branch should be created off develop to prepare the merge into master.
"All features that are targeted for the release-to-be-built must be merged in to develop at this point in time. All features targeted at future releases may not—they must wait until after the release branch is branched off."
By contrast to what is described in the original blog post, for now we have chosen to only create the release branch once the developments are completely ready and tested on the develop branch, so that no fixes should be made on the release branch. This simplifies things because it means that release branches don't have to be merged back into develop.
Once the X.X.X.release branch has been created, a new commit should be made on this branch to change the repository version from SNAPSHOT to X.X.X.RELEASE.
Then, pushing the branch will trigger a build and a "dry-run" generation of documentation and docker images. The aim is to detect any issue with this generation before moving to master.
Finally, the X.X.X.release branch can be merged into master, triggering the release build described in the master branch section above. The resulting merge commit on master should then be tagged with X.X.X.RELEASE.
All commits on master should be merge commits from release branches; direct pushes on master will be disabled.
Naming convention: The name of a release branch should match the repository version it is meant to merge into master, but in lower case to avoid confusion with release tags on master.
Example: The valid branch name for the branch bringing 1.3.0.RELEASE into master is 1.3.0.release
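The release-branch steps above can be sketched with plain git commands. The demo below runs them in a throwaway repository, using 1.3.0 as the example version; the pushes (and the Travis builds they would trigger) are omitted:

```shell
#!/bin/sh
# Throwaway repository standing in for operatorfabric-core.
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
git symbolic-ref HEAD refs/heads/master
git config user.name "Demo" && git config user.email "demo@example.org"
echo "SNAPSHOT" > VERSION
git add VERSION && git commit -q -m "[OC-0] Initial commit"
git branch develop

# 1. Branch the release branch off develop (lower-case name, per the
#    naming convention above).
git checkout -q develop
git checkout -q -b 1.3.0.release

# 2. Change the repository version from SNAPSHOT to X.X.X.RELEASE.
echo "1.3.0.RELEASE" > VERSION
git commit -q -am "[RELEASE] Repository version 1.3.0.RELEASE"

# 3. After the dry-run build passes, merge into master and tag the
#    resulting merge commit.
git checkout -q master
git merge -q --no-ff -m "Merge branch 1.3.0.release" 1.3.0.release
git tag 1.3.0.RELEASE

cat VERSION
```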
36.1.1.5. Hotfix branches
Work in progress: detail hotfix branches lifecycle and constraints.
Naming convention: Hotfix branches names should always start with "HF_" and the reference of the JIRA issue they’re addressing, optionally followed by additional information if several branches are associated with a given JIRA issue.
Valid examples:
-
HF_OC-123
-
HF_OC-123_Documentation
-
HF_OC-123_1

Invalid examples:
-
123
-
OC-123
-
OC123
-
HF_SomeTextDescribingTheFix
-
SomeTextDescribingTheFix
36.1.2. Examples and commands
The aim of this section is to illustrate how our workflow works on a concrete example, complete with the required git commands.
36.1.2.1. Initial state
In the initial state of our example, only develop and master exist.
The repository version in master is 1.3.0.RELEASE, and the develop branch has just been branched off it. Commits have been added to develop to change the repository version to SNAPSHOT and implement the changes necessary for Git flow.
36.1.2.2. Starting work on a new feature
Let’s say we want to start working on feature OC-Feature1 described in our JIRA.
git checkout develop (1)
git pull (2)
git checkout -b OC-Feature1 (3)

(1) Check out the develop branch
(2) Make sure it is up to date with the remote (= GitHub repository)
(3) Create an OC-Feature1 branch off the develop branch
Then, you can start working on the feature and commit your work to the branch.
git commit -m "[OC-Feature1] Developments for OC-Feature1"
At any point during your work you can push your feature branch to the GitHub repository, to back your work up, let others look at your work or contribute to the feature, and also to trigger a build (see above). To do this, just run:
git push
If it's your first push to this branch, Git will prompt you to define the remote branch to be associated with your local branch with the following command: git push --set-upstream origin OC-Feature1
You can re-work, squash your commits and push as many times as you want on a feature branch, but try limiting pushes so as to make good use of the build resources provided by Travis. Force pushes are allowed on feature branches.
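As an illustration of squashing before pushing, here is a non-interactive sketch using git reset --soft in a throwaway repository (git rebase -i is the usual interactive route; branch and file names below are made up):

```shell
#!/bin/sh
# Throwaway repo with two work-in-progress commits on a feature branch.
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
git symbolic-ref HEAD refs/heads/develop
git config user.name "Demo" && git config user.email "demo@example.org"
echo base > a.txt && git add a.txt && git commit -q -m "[OC-0] base"

git checkout -q -b OC-Feature1
echo one > b.txt && git add b.txt && git commit -q -m "[OC-123] wip 1"
echo two >> b.txt && git commit -q -am "[OC-123] wip 2"

# Squash the two wip commits into a single, well-described commit:
# reset the branch pointer (keeping the work staged), then re-commit.
git reset -q --soft develop
git commit -q -m "[OC-123] My feature, squashed"

# The feature branch now carries exactly one commit on top of develop.
git rev-list --count develop..OC-Feature1
```

After a squash like this, the branch history has diverged from the remote copy, which is why force pushes are allowed on feature branches.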
To see your branch (and the status of the associated builds):
-
Go to the operatorfabric-core repository on GitHub
-
Click the
branches
tab

You can see your OC-Feature1 branch, and the green check mark next to it indicates that the associated build(s) is/are passing.
Clicking this check mark displays a pop-up describing the associated build(s), and clicking "Details" redirects you to the build report on Travis.

Feel free to add a copyright header (on top of the existing ones) to files you create or amend. See src/main/headers for examples.
36.1.2.3. Submitting a pull request
Once you are satisfied with the state of your developments, you can submit it as a pull request.
Before submitting your branch as a pull request, please squash/fix your commits so as to reduce the number of commits and comment them accordingly. In the end, the division of changes into commits should make the PR easier to understand and review.
You should also take a look at the review check list below to make sure your branch meets its criteria.
Once you feel your branch is ready, submit a pull request. Open pull requests are then reviewed by the core maintainers to assign a reviewer to each of them.
To do so, go to the branches tab of the repository as described above.
Click the "New Pull Request" button for your branch.

If you haven’t done so before, read through the PR checklist pasted in the comment block, and then replace it with your own comment containing a short summary of the PR goal and any information that should go into the release notes. It’s especially important for PRs that have a direct impact on existing OperatorFabric deployments, to alert administrators of the impacts of deploying a new version and help them with the migration. Whenever possible/relevant, a link to the corresponding documentation is appreciated.

Make sure that the base branch for the PR is develop.
At this point, GitHub will tell you whether your branch could be automatically merged into develop or whether there are conflicts to be fixed.
Case 1: GitHub is able to automatically merge your branch

This means that either your branch was up to date with develop or there were no conflicts. In this case, just go ahead and fill in the PR title and message, then click "Create pull request".
Case 2: GitHub can’t merge your branch automatically

This means that there are conflicts with the current state of the develop branch on GitHub.
To fix these conflicts, you need to update your local copy of the develop branch and merge it into your feature branch.
git checkout develop (1)
git pull (2)
git checkout OC-Feature1 (3)
git merge develop (4)

(1) Check out the develop branch
(2) Make sure it is up to date with the remote (= GitHub repository)
(3) Check out the OC-Feature1 branch
(4) Merge the new commits from develop into the feature branch
Then, handle any conflicts from the merge. For example, let's say there is a conflict on file dummy1.txt:
Auto-merging dummy1.txt
CONFLICT (add/add): Merge conflict in dummy1.txt
Automatic merge failed; fix conflicts and then commit the result.
Open file dummy1.txt:
<<<<<<< HEAD
Some content from feature 1.
=======
Some content that has been changed on develop since Feature 1 branched off.
>>>>>>> develop
Update the content to reflect the changes that you want to keep:
Some content from feature 1 and some content that has been changed on develop since Feature 1 branched off.
git add dummy1.txt (1)
git commit (2)
git push (3)

(1) Add the manually merged file to the changes to be committed
(2) Commit the changes to finish the merge
(3) Push the changes to GitHub
Now, if you go back to GitHub and try to create a pull request again, GitHub should indicate that it is able to merge automatically.

36.1.2.4. Reviewing a Pull Request
As of today, only developers from the reviewers group can merge pull requests into develop, but this shouldn't stop anyone interested in the topic of a PR from commenting on and reviewing it.
If the PR doesn’t meet some of these criteria, use the GitHub review process to suggest changes and discuss problems with the PR author.
36.1.2.5. Merging a Pull Request
Once the pull request meets all the criteria from the above check list, you can merge it into the develop branch.
-
Go to the pull request page on GitHub
-
Check that the base branch for the pull request is develop. This information is visible at the top of the page.
-
If that is not the case, you can edit the base branch by clicking the Edit button in the top right corner.
-
Click the merge pull request button at the bottom of the PR page
-
Go to the JIRA page for the corresponding issue and:
-
Set the Fix version field to Next Version
-
Set the Status field to Done
-
Go to the release-notes repository and add the issue to the list with the information provided in the PR comments.
36.1.2.6. Creating a release or hotfix
See the release process described in our CICD documentation for details.
36.2. Code Guidelines
-
We don’t mention specific authors by name in each file (in Javadoc or in the documentation for example), so as not to have to maintain these mentions (since this information is tracked by git anyway).
36.3. Documentation Guidelines
The aim of this section is to explain how the documentation is currently organized and to give a few guidelines on how it should be written.
36.3.1. Structure
All the sources for the AsciiDoc documentation published on our website are found under the src/docs/asciidoc folder in the operatorfabric-core repository.
It is organized into several folders (architecture documentation, deployment documentation, etc.). Each of these folders represents a document and contains an index.adoc file, optionally referencing other adoc files (to break the document down into sections).
In addition, an images folder contains images for all documents and a resources folder contains various appendices that might be of use to some people but that we felt weren’t part of the main documentation.
The table below gives a short summary of the content of each document as well as advice on which ones you should focus on depending on your profile.
Contributor: A developer who contributes (or wishes to) to the OperatorFabric project
Developer: A developer working on an application using OperatorFabric or a third-party application posting content to an OperatorFabric instance
Admin: Someone who is in charge of deploying and maintaining OperatorFabric in production as part of an integrated solution
Product Owner: Project managers, anyone interested in using OperatorFabric for their business requirements.
Folder | Content | Contributor | Developer | Admin | Product Owner
---|---|---|---|---|---
architecture | Architecture documentation: describes the business objects and concepts handled by OperatorFabric as well as the microservices architecture behind it. | Yes | Yes | Yes |
CICD | CICD Pipeline documentation: describes our CICD pipeline and release process. | Yes | | |
community | OF Community documentation: everything about the OperatorFabric Community: code of conduct, governance, contribution guidelines, communication channels. | Yes | | |
deployment | Deployment documentation: explains how to deploy and configure an OperatorFabric instance. | Yes | Yes | Yes |
dev_env | Development Environment documentation: explains how to set up a working development environment for OperatorFabric with all the appropriate tooling and how to run OperatorFabric in development mode. | Yes | | |
docs | Documentation that should be archived for previous releases (as of today, the release notes and single page documentation - see below). | Yes | Yes | Yes | Yes
getting_started | Getting Started Guide: guides you through setting up OperatorFabric and experimenting with its main features. | Yes | Yes | Maybe |
reference_doc | Reference Documentation: contains the reference documentation for each microservice. It starts off with a high-level functional documentation and then goes into more technical details when needed. | Yes | Yes | Yes |
In addition to this asciidoctor documentation, API documentation is available in the form of SwaggerUI-generated html pages. It is generated by the generateSwaggerUI Gradle task, using the swagger.yaml files from each service (for example for the Actions API). It can be found under the build/docs/api folder for each client or service project.
36.3.2. Conventions
-
In addition to the "visible" structure described above, documents are broken down into coherent parts using the "include" feature of AsciiDoc. This is done mostly to avoid long files that are harder to edit, but it also allows us to reuse some content in different places.
-
Given the number of files this creates, we try to keep header attributes in files to a minimum. Instead, they’re set in the configuration of the asciidoctor gradle task:
build.gradle:

asciidoctor {
    sources {
        include '*/index.adoc', 'docs/*'
    }
    resources {
        from('src/docs/asciidoc') {
            include 'images/*'
        }
    }
    attributes nofooter            : '',
               revnumber           : operatorfabric.version,
               revdate             : operatorfabric.revisionDate,
               sectnums            : '',
               sectnumlevels       : '4',
               sectanchors         : '',
               toc                 : 'left',
               toclevels           : '4',
               icons               : 'font',
               imagesdir           : '../images',
               "hide-uri-scheme"   : '',
               "source-highlighter": 'coderay'
}
In particular, the version and revision date are set automatically from the version defined in the VERSION file at the root of the project and the current date.
-
All files are created starting with level 0 titles so:
-
They can be generated on their own if need be.
-
They can be included at different levels in different documents using leveloffset.
-
In addition to being available as separate documents (architecture, reference, etc.) for the current release, the documentation is also generated as a single page document available for all releases from the releases page. This is also a way to make searching the documentation for specific terms easier, and could be used to generate a single page pdf documentation.
-
Unfortunately, this entails a little complexity for cross-references and relative links, because the path to the content is a little different depending on whether the content is generated as different pages or as a single page document.
For example, to link to the "Card Structure" section of the reference document from the architecture document, one needs to use the following external cross-reference:
<<{gradle-rootdir}/documentation/current/reference_doc/index.adoc#card_structure, Card Structure>>
In the case of the single-page documentation however, both the architecture content and the reference content are part of the same document, so the cross-reference becomes a simple internal cross-reference:
<<card_structure, Card Structure>>
This is managed by using the ifdef and ifndef directives to define which link syntax should be used:

ifdef::single-page-doc[<<card_structure, Card Structure>>]
ifndef::single-page-doc[<<{gradle-rootdir}/documentation/current/reference_doc/index.adoc#card_structure, Card Structure>>]

The label ("Card Structure" in this example) is defined with each link because it seems that defining it in the target file along with the ID ([[my_section_id, text to display]]) doesn't work with relative links.

In the same way, for relative links to external files (mostly the API documentation):

ifdef::single-page-doc[link:../api/cards/index.html#/archives[here]]
ifndef::single-page-doc[link:{gradle-rootdir}/documentation/current/api/cards/index.html#/archives[here]]

For this to work, the single_page_doc.adoc file needs to have :single-page-doc: as a header attribute.
-
As you can see in the examples above, we are using custom-defined section ids as anchors rather than taking advantage of generated ones (see documentation). This is cumbersome but:
-
Generation will give a warning if duplicate ids are defined, whereas with generated ids it will silently link to the wrong section.
-
Restructuring the document might change the generated section ID, creating broken links.
-
It's easier to find referenced text (Ctrl-F on the id).
-
The presence of a custom-defined ID is a sign that the content is referenced somewhere else, which you should take into account if you’re thinking of deleting or moving this content.
-
The :imagesdir: attribute is set globally as ../images, because all images are stored under src/docs/asciidoc/images.
-
In addition to links, it is sometimes necessary to display the actual content of files (or part of it) in the documentation (in the case of configuration examples, for instance). Whenever possible, this should be done by using the include directive rather than copying the content into the adoc file. This way the documentation will always be up to date with the file content at the time of generation.
See the build.gradle include above for an example using tags to include only part of a file.
-
Source-highlighting is done using Coderay. See their documentation for supported languages, and the AsciiDoctor documentation on how to apply source-highlighting.
-
Avoid long lines whenever possible (for example, try not to go beyond 120 characters). This makes editing the documentation easier and diffs more readable.
-
Most links to other OperatorFabric documents should be relative (see above) so they automatically point to the document in the same version rather than the latest version. Links starting with opfab.github.io/documentation/current/ should only be used when we want to always refer to the latest (release) version of the documentation.
-
If necessary, add the relevant copyright at the top of the file.
36.4. Copyright Headers
All source files and documentation files for the project should bear copyright headers.
36.4.1. Header templates
In the case of source files (*.java, *.css or *.scss, *.html, *.ts, etc.), we are working with the Mozilla Public License, v. 2.0, so the header should be something like this:
Copyright (c) YYYY-YYYY, Entity Name (website or contact info)
See AUTHORS.txt
This Source Code Form is subject to the terms of the Mozilla Public
License, v. 2.0. If a copy of the MPL was not distributed with this
file, You can obtain one at http://mozilla.org/MPL/2.0/.
SPDX-License-Identifier: MPL-2.0
This file is part of the OperatorFabric project.
In the case of documentation files (*.adoc), we use the Creative Commons Attribution 4.0 International license, so the header should be:
Copyright (c) YYYY-YYYY, Entity Name (website or contact info)
See AUTHORS.txt
This document is subject to the terms of the Creative Commons Attribution 4.0 International license.
If a copy of the license was not distributed with this
file, You can obtain one at https://creativecommons.org/licenses/by/4.0/.
SPDX-License-Identifier: CC-BY-4.0
These templates should of course be converted to comments depending on the file type. See src/main/headers for examples.
Please make sure to include the appropriate header when creating new files and to update the existing one when making changes to a file.
In the case of a first time contribution, the GitHub username of the person making the contribution should also be added to the AUTHORS file.
36.4.2. Examples
36.4.2.1. Creating a new file
Let’s say a developer from entity Entity X creates a new java file in 2020. The header should read:
Copyright (c) 2020, Entity X (http://www.entityX.org)
See AUTHORS.txt
This Source Code Form is subject to the terms of the Mozilla Public
License, v. 2.0. If a copy of the MPL was not distributed with this
file, You can obtain one at http://mozilla.org/MPL/2.0/.
SPDX-License-Identifier: MPL-2.0
This file is part of the OperatorFabric project.
36.4.2.2. Updating a file
Given an existing java file with the following header:
Copyright (c) 2020, Entity X (http://www.entityX.org)
See AUTHORS.txt
This Source Code Form is subject to the terms of the Mozilla Public
License, v. 2.0. If a copy of the MPL was not distributed with this
file, You can obtain one at http://mozilla.org/MPL/2.0/.
SPDX-License-Identifier: MPL-2.0
This file is part of the OperatorFabric project.
If a developer from entity Entity X edits it in 2021, the header should now read:
Copyright (c) 2020-2021, Entity X (http://www.entityX.org)
See AUTHORS.txt
This Source Code Form is subject to the terms of the Mozilla Public
License, v. 2.0. If a copy of the MPL was not distributed with this
file, You can obtain one at http://mozilla.org/MPL/2.0/.
SPDX-License-Identifier: MPL-2.0
This file is part of the OperatorFabric project.
However, if a developer from entity Entity X edits it in 2022, but no one from Entity X had touched it in 2021, the header should now read:
Copyright (c) 2020, 2022, Entity X (http://www.entityX.org)
See AUTHORS.txt
This Source Code Form is subject to the terms of the Mozilla Public
License, v. 2.0. If a copy of the MPL was not distributed with this
file, You can obtain one at http://mozilla.org/MPL/2.0/.
SPDX-License-Identifier: MPL-2.0
This file is part of the OperatorFabric project.
36.4.2.3. Multiple contributors
Given an existing java file with the following header:
Copyright (c) 2020-2021, Entity X (http://www.entityX.org)
See AUTHORS.txt
This Source Code Form is subject to the terms of the Mozilla Public
License, v. 2.0. If a copy of the MPL was not distributed with this
file, You can obtain one at http://mozilla.org/MPL/2.0/.
SPDX-License-Identifier: MPL-2.0
This file is part of the OperatorFabric project.
If a developer from entity Entity Y edits it in 2021, the header should now read:
Copyright (c) 2020-2021, Entity X (http://www.entityX.org)
Copyright (c) 2021, Entity Y (http://www.entityY.org)
See AUTHORS.txt
This Source Code Form is subject to the terms of the Mozilla Public
License, v. 2.0. If a copy of the MPL was not distributed with this
file, You can obtain one at http://mozilla.org/MPL/2.0/.
SPDX-License-Identifier: MPL-2.0
This file is part of the OperatorFabric project.
37. Project Governance
37.1. Project Owner
OperatorFabric is part of the LF Energy Foundation, a project of the Linux Foundation that supports open source innovation projects within the energy and electricity sectors.
37.2. Committers
Committers are contributors who have made several valuable contributions to the project and are now relied upon to both write code directly to the repository and screen the contributions of others. In many cases they are programmers but it is also possible that they contribute in a different role. Typically, a committer will focus on a specific aspect of the project, and will bring a level of expertise and understanding that earns them the respect of the community and the project owner.
37.3. Technical Steering Committee
The Technical Steering Committee (TSC) is composed of voting members elected by the active Committers as described in the project’s Technical Charter.
OperatorFabric TSC voting members are:
Boris Dolley will chair the TSC, with Hanae Safi as his deputy.
37.4. Contributors
Contributors include anyone in the technical community that contributes code, documentation, or other technical artifacts to the project.
Anyone can become a contributor. There is no expectation of commitment to the project, no specific skill requirements and no selection process. To become a contributor, a community member simply has to perform one or more actions that are beneficial to the project.
38. Communication channels
In addition to GitHub we have set up:
38.1. Website: opfab.org
Our website contains all the documentation and resources we’re currently working on. Here is what we aim to provide:
-
Architecture documentation
-
REST API documentation
-
Reference documentation for each component
-
Javadoc/Compodoc for each component
-
Tutorials and QuickStart guides and videos
This documentation is our priority right now so future contributors can quickly find their way around the project. Needless to say, it’s a work in progress so feel free to tell us what you feel is missing or what type of documentation you would be interested in as a contributor.
We also use this website to broadcast any news we have about the project so don't hesitate to subscribe to the RSS feed on the home page to be informed of any update.
38.2. Spectrum Community: spectrum.chat/opfab
If you would like to join the discussions regarding OperatorFabric, please join our community on Spectrum!
Regarding issue tracking, our Jira platform should be open soon.
38.3. LF Energy Mailing Lists
Several mailing lists have been created by LF Energy for the project, please feel free to subscribe to the ones you could be interested in:
-
OperatorFabric Announcements (such as new releases)
-
OperatorFabric Developers for project development discussions
And if you’re interested in LF Energy in general: LF Energy General Discussion
39. Code of Conduct
The Code of Conduct for the OperatorFabric community is version 2.0 of the Contributor Covenant.
39.1. Our Pledge
We as members, contributors, and leaders pledge to make participation in our community a harassment-free experience for everyone, regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation. We pledge to act and interact in ways that contribute to an open, welcoming, diverse, inclusive, and healthy community.
39.2. Our Standards
Examples of behavior that contributes to a positive environment for our community include:
-
Demonstrating empathy and kindness toward other people
-
Being respectful of differing opinions, viewpoints, and experiences
-
Giving and gracefully accepting constructive feedback
-
Accepting responsibility and apologizing to those affected by our mistakes, and learning from the experience
-
Focusing on what is best not just for us as individuals, but for the overall community
Examples of unacceptable behavior include:
-
The use of sexualized language or imagery, and sexual attention or advances of any kind
-
Trolling, insulting or derogatory comments, and personal or political attacks
-
Public or private harassment
-
Publishing others’ private information, such as a physical or email address, without their explicit permission
-
Other conduct which could reasonably be considered inappropriate in a professional setting
39.3. Enforcement Responsibilities
Community leaders are responsible for clarifying and enforcing our standards of acceptable behavior and will take appropriate and fair corrective action in response to any behavior that they deem inappropriate, threatening, offensive, or harmful. Community leaders have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, and will communicate reasons for moderation decisions when appropriate.
39.4. Scope
This Code of Conduct applies within all community spaces, and also applies when an individual is officially representing the community in public spaces. Examples of representing our community include using an official e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event.
39.5. Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be reported to the community leaders responsible for enforcement at opfab-tsc_AT_lists.lfenergy.org. All complaints will be reviewed and investigated promptly and fairly. All community leaders are obligated to respect the privacy and security of the reporter of any incident.
Enforcement Guidelines
Community leaders will follow these Community Impact Guidelines in determining the consequences for any action they deem in violation of this Code of Conduct:
-
Correction
Community Impact: Use of inappropriate language or other behavior deemed unprofessional or unwelcome in the community.
Consequence: A private, written warning from community leaders, providing clarity around the nature of the violation and an explanation of why the behavior was inappropriate. A public apology may be requested.
-
Warning
Community Impact: A violation through a single incident or series of actions.
Consequence: A warning with consequences for continued behavior. No interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, for a specified period of time. This includes avoiding interactions in community spaces as well as external channels like social media. Violating these terms may lead to a temporary or permanent ban.
-
Temporary Ban
Community Impact: A serious violation of community standards, including sustained inappropriate behavior.
Consequence: A temporary ban from any sort of interaction or public communication with the community for a specified period of time. No public or private interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, is allowed during this period. Violating these terms may lead to a permanent ban.
-
Permanent Ban
Community Impact: Demonstrating a pattern of violation of community standards, including sustained inappropriate behavior, harassment of an individual, or aggression toward or disparagement of classes of individuals.
Consequence: A permanent ban from any sort of public interaction within the community.
39.6. Attribution
This Code of Conduct is adapted from the Contributor Covenant, version 2.0, available at www.contributor-covenant.org/version/2/0/code_of_conduct.html. Community Impact Guidelines were inspired by Mozilla’s code of conduct enforcement ladder. For answers to common questions about this code of conduct, see the FAQ at www.contributor-covenant.org/faq. Translations are available at www.contributor-covenant.org/translations. www.contributor-covenant.org/version/2/0/code_of_conduct/code_of_conduct.txt
OperatorFabric CICD
40. Pipeline Configuration
This section briefly describes the organization of our CICD pipeline. If you are looking for more detailed information, see this document describing the steps that were necessary to create our mock pipeline as well as the issues we ran into.
Most of the access and permissions required by our CICD platform (Travis) are managed by tokens that are created on each of the required services (SonarCloud, DockerHub, GitHub). A technical user account (opfabtech) has been created for each of these services so that these tokens are not linked to the account of any member of the team.
40.1. CICD Pipeline
40.1.2. Travis CI
We use Travis CI to manage our pipeline. As of today, it is composed of 7 stages:
test-sonar: Builds the commit, runs the tests and the sonar analysis.
test: Similar to test-sonar, but without the sonar analysis; it is run for external pull requests (see below).
doc: Generates the documentation (from asciidoc sources and API documentation) and pushes it to the opfab.github.io repository to update the website.
doc-dry-run: Generates the documentation without pushing it.
docker-push-version: Builds Docker images, tags them with the current version (either SNAPSHOT or X.X.X.RELEASE, as read from the VERSION file) and pushes them to DockerHub.
docker-push-latest: Builds Docker images, tags them with latest and pushes them to DockerHub.
docker-tag-version: Builds Docker images and tags them with the current version, without pushing them anywhere. This stage can be triggered when we just want to check that the images can be built without actually updating them on DockerHub.
Among these stages, four can be considered "sensitive" because they push public content that serves as a reference for the project (Docker images, documentation and, to a lesser extent, sonar analysis), meaning we don’t want it tampered with. These stages are test-sonar, doc and the two docker-push stages.
These stages are triggered depending on:
-
branch type
-
event type (CRON job, push or pull request)
-
commit message hooks
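A commit-message hook boils down to a substring test on the commit message. Here is a minimal sketch of that check; the has_hook helper is our own illustration, not part of the pipeline scripts, and the actual keyword strings are not reproduced here:

```shell
# Illustrative helper (not a project script): report whether a commit
# message contains a given hook keyword.
has_hook() {
    # $1: hook keyword, $2: commit message
    case "$2" in
        *"$1"*) return 0 ;;  # keyword found: trigger the stage
        *)      return 1 ;;  # keyword absent: skip the stage
    esac
}
```

In the real pipeline this kind of test is expressed in the Travis build conditions rather than in a shell function.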
In the table below, doc hook, docker hook and latest hook stand for adding the corresponding keyword to the commit message.
Stage | develop CRON | develop push | develop doc hook | develop docker hook | release push | master push | master latest hook | feature/hotfix push | feature/hotfix doc hook | feature/hotfix docker hook | pull request push
---|---|---|---|---|---|---|---|---|---|---|---
test-sonar | X | X | | | X | X | | X | | | X (internal PRs only)
test | | | | | | | | | | | X (external PRs only)
doc | X | | X | | | X | | | | |
doc-dry-run | | | | | X | | | | X | |
docker-push-version | X | | | X | | X | | | | |
docker-push-latest | | | | | | X | | | | |
docker-tag-version | | | | | X | | | | | X |
-
The test-sonar phase is run for every build except those triggered by external PRs (i.e. originating from a fork of the repository). This is because the sonar-scanner step it comprises requires access to an encrypted token (to be able to push the analysis to SonarCloud, see below for details) that is not shared with external PRs for security reasons, which would cause the stage (and the build) to fail. This is why in the case of external PRs the test phase is run instead (leaving out sonar-scanner).
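The resulting selection logic can be sketched as a simple shell test on the token’s presence. This is only an illustration of the behavior described above (the select_test_stage name is ours; the real check lives in the Travis configuration):

```shell
# Illustrative sketch: pick the test stage to run depending on whether
# the SonarCloud token is available to the build.
select_test_stage() {
    # $1: SONAR_TOKEN value (empty or unset for external PRs)
    if [ -n "${1:-}" ]; then
        echo "test-sonar"   # internal build: tests plus sonar analysis
    else
        echo "test"         # external PR: same tests, no sonar-scanner
    fi
}
```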
40.1.3. SonarCloud
To be allowed to push results to SonarCloud, Travis needs to be authenticated. This is done by generating a token on SonarCloud with an account (opfabtech) that has admin rights to the organization, and then providing this token to Travis either through the .travis.yml file or as an environment variable through Travis settings.
40.1.4. GitHub (documentation)
To be allowed to push the generated documentation to the opfab.github.io repository, Travis needs write access to the repository. This is done by setting up a Personal Access Token in GitHub using the technical account. This token is then passed to Travis as an environment variable through Travis settings, and is used in the .travis.yml file. Right now the scope of this token is maximal; it can probably be reduced (see OC-755).
After new content is pushed to the opfab.github.io repository, it can take a few minutes before it is visible on the website, because the site first needs to be built by GitHub Pages and this depends on how busy the service is.
41. Release process
41.1. Version numbers
We work with two types of versions:
-
X.Y.Z.RELEASE versions are stable versions
-
The SNAPSHOT version represents the current state of merged developments
Version numbers for X.Y.Z.RELEASE should be understood like this:
-
X: Major version, a major version adds new features and breaks compatibility with previous major and minor versions.
-
Y: Minor version, a minor version adds new features and does not break compatibility with previous minor versions for the same major version.
-
Z: Patch, a patch version only contains bug fixes of the current minor version
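As a concrete illustration of this scheme, a version string can be decomposed as follows (a sketch; parse_version is our own illustrative helper, not a script from the repository):

```shell
# Illustrative helper (not part of the repository): split an
# OperatorFabric X.Y.Z.RELEASE version string into its components.
parse_version() {
    ver="${1%.RELEASE}"     # drop the .RELEASE suffix
    major="${ver%%.*}"      # X: breaks compatibility when incremented
    rest="${ver#*.}"
    minor="${rest%%.*}"     # Y: new features, backward compatible
    patch="${rest#*.}"      # Z: bug fixes only
    echo "$major $minor $patch"
}
```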
41.2. Releasing a Version
To release a version we use dedicated Travis jobs. These jobs are triggered by specific commit keywords and rely on the VERSION file at the root of this repository to know which version is being produced. It is thus crucial to double-check the content of this file before any push (triggering the Travis jobs) is made.
Before releasing a version, you need to prepare the release.
41.2.1. Checking the release notes
-
Click the Next Release version in the JIRA release list to get the release notes (click "Release notes" under the version name at the top), listing new features, fixed bugs, etc.
-
Make sure that the release_notes.adoc file lists all the issues, bugs, tags or feature requests that are relevant for OperatorFabric users along with explanations if need be.
-
Based on the content of this version and the rules listed above, determine the version number for next version.
41.2.2. Creating a release branch and preparing the release
-
Create a branch off the develop branch named X.X.X.release (note the lowercase release to distinguish it from X.X.X.RELEASE tags):
git checkout -b X.X.X.release
-
Cut the contents of the release_notes.adoc file from the release-notes repository and paste them into the release_notes.adoc file found under src/docs/asciidoc/docs.
-
Replace the Version SNAPSHOT title by Version X.X.X.RELEASE
-
On the release page, change the name from "Next Version" to "X.X.X.RELEASE"
-
Use the ./CICD/prepare_release_version.sh script to automatically perform all the necessary changes:
./CICD/prepare_release_version.sh -v X.X.X.RELEASE
You should get the following output:
Current version is SNAPSHOT (based on VERSION file)
Preparing X.X.X.RELEASE
Updating version for pipeline in VERSION file
Replacing SNAPSHOT with X.X.X.RELEASE in swagger.yaml files
Using X.X.X.RELEASE for lfeoperatorfabric images in dev and docker environment docker-compose files
The following files have been updated:
M VERSION
M config/dev/docker-compose.yml
M config/docker/docker-compose.yml
M services/core/cards-publication/src/main/modeling/swagger.yaml
M services/core/thirds/src/main/modeling/swagger.yaml
M services/core/users/src/main/modeling/swagger.yaml
This script performs the following changes:
-
Replace SNAPSHOT with X.X.X.RELEASE in the swagger.yaml files and in the VERSION file at the root of the operator-fabric folder
-
Change the version from SNAPSHOT to X.X.X.RELEASE in the docker-compose files for dev and docker deployments
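The substitution the script performs boils down to something like the following simplified sketch (the actual logic is in ./CICD/prepare_release_version.sh; the example file path here is a stand-in, not the real VERSION file):

```shell
# Simplified sketch of what prepare_release_version.sh does; the real
# script also updates the docker-compose and swagger.yaml files.
release="X.X.X.RELEASE"
printf 'SNAPSHOT\n' > /tmp/VERSION.example   # stand-in for the VERSION file
sed -i "s/SNAPSHOT/${release}/g" /tmp/VERSION.example
```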
-
Commit the changes with the template message:
git add .
git commit -m "[RELEASE] X.X.X.RELEASE"
-
Push the commit
git push --set-upstream origin X.X.X.release
-
Check that the build is correctly triggered
You can check the status of the build job triggered by the commit on Travis CI. The build job should have the following three stages:
Wait for the build to complete (around 20 minutes) and check that all stages have been successful. This ensures that the code builds, tests are OK and there is no error preventing documentation or Docker images generation.
41.2.3. Merging the release branch into master
Once the release branch build is passing, you should merge the release branch into master
to bring the new
developments into master
and trigger the CICD tasks associated with a release (Docker images for DockerHub and
documentation).
git checkout master (1)
git pull (2)
git merge X.X.X.release (3)
git tag X.X.X.RELEASE (4)
git push (5)
git push origin X.X.X.RELEASE (6)
1 | Check out the master branch |
2 | Make sure your local copy is up to date |
3 | Merge the X.X.X.release branch into master |
4 | Tag the commit with the X.X.X.RELEASE tag |
5 | Push the commits to update the remote master branch |
6 | Push the tag |
-
Check that the build is correctly triggered
You can check the status of the build job triggered by the commit on Travis CI. The build job should have the following four stages:
Wait for the build to complete (around 20 minutes) and check that all stages have been successful.
-
Check that the X.X.X.RELEASE images have been generated and pushed to DockerHub.
-
Check that the latest images have been updated on DockerHub.
-
Check that the documentation has been generated and pushed to the GitHub pages website
-
Check the version and revision date at the top of the documents in the current documentation (for example the architecture documentation)
-
Check that you see the X.X.X.RELEASE under the releases page and that the links work.
-
Check that the tag was correctly pushed to GitHub and is visible under the releases page for the repository.
41.2.4. Checking deploy docker-compose
The deploy docker-compose file should always rely on the latest RELEASE version available on DockerHub. Once the CI pipeline triggered by the previous steps has completed successfully, and you can see X.X.X.RELEASE images for all services on DockerHub, you should:
-
Remove your locally built X.X.X.RELEASE images if any
-
Run the deploy docker-compose file to make sure it pulls the images from DockerHub and behaves as intended.
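To spot the locally built release images mentioned in the first step, the output of docker images can be filtered. A sketch, assuming the images follow the lfeoperatorfabric naming seen elsewhere in this document (select_release_images is our own helper, not a project script):

```shell
# Illustrative filter (not a project script): given "repository:tag"
# lines such as those produced by
#   docker images --format '{{.Repository}}:{{.Tag}}'
# keep only the lfeoperatorfabric images tagged with a RELEASE version,
# i.e. the locally built images to remove before re-pulling.
select_release_images() {
    grep '^lfeoperatorfabric/' | grep '\.RELEASE$'
}
```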
41.3. Advertising the new release on the LFE mailing list
-
Send an email to the opfab-announce@lists.lfenergy.org mailing list with a link to the release notes on the website.
41.4. Preparing the next version
You should wait for all the tasks associated with creating the X.X.X.RELEASE version to finish and make sure that they’ve had the expected output before starting the preparation of the next version. This is because any committed/pushed changes preparing the new version will make rolling back or correcting any mistake on the release more complicated.
Resources
42. Appendix A: Mock CICD Pipeline
We wanted to be able to test changes to this configuration or to the scripts used by the pipeline without risks to the real master branch, to our docker images or to the documentation. We didn’t find any such "dry-run" options on Travis CI so we decided to create a complete mock pipeline replicating the actual pipeline. This is a short summary of the necessary steps.
42.1. GitHub
-
Create organization opfab-mock
-
Create user account opfabtechmock
-
Invite opfabtechmock to opfab-mock organization as owner (see later if this can/should be restricted)
-
Fork operatorfabric-core and opfab.github.io repos to opfab-mock organization
42.2. Travis CI
-
Go to travis-ci.org (not travis-ci.com)
-
Sign in with GitHub (using opfabtechmock). This redirects to a page offering to grant Travis CI for Open Source access to your account; click OK.
Note: This page (github.com/settings/applications) lets you review the apps that have been granted this kind of access and when it was last used.
-
Looking at travis-ci.org/account/repositories, the opfab-mock organization didn’t appear even after syncing the account, so we had to click "review and add your authorized organizations" (this redirects to github.com/settings/connections/applications/f244293c729d5066cf27).
-
Click "grant" next to the organization.
-
After that, Travis CI for Open Source is visible here: github.com/organizations/opfab-mock/settings/oauth_application_policy.
This allowed the opfab-mock organization to appear in the account: travis-ci.org/account/repositories. Click on opfab-mock to get to the list of repositories for this organization and toggle "operatorfabric-core" on.
-
Under the Travis settings for the operatorfabric-core repository, create a Cron Job on the develop branch that runs daily and is always run.
42.3. SonarCloud
-
Go to SonarCloud.io
-
Sign in with GitHub.
When switching between accounts (your own and the technical account for example), make sure to log out of SonarCloud when you’re done using an account, because otherwise it will keep the existing session (even in a new tab) rather than sign you back in with the account you’re currently using on GitHub.
-
Authorize SonarCloud. Then create a new organization from the "+" dropdown menu to the left of your profile picture.
Click on "Just testing? You can create manually" on the bottom right, and not "Choose an organization on GitHub". This is because the "opfab" SonarCloud organization that we are trying to replicate is not currently linked to its GitHub counterpart (maybe the option didn’t exist at the time), so we’re doing the same here. In the future it might be a good idea to link them (see OC-751), as SonarCloud states that
Binding an organization from SonarCloud with GitHub is an easy way to keep them synchronized. To bind this organization to GitHub, the SonarCloud application will be installed.
And then it warns again that
Manual setup is not recommended, and leads to missing features like appropriate setup of your project or analysis feedback in the Pull Request.
-
Click "Analyze new project" then "Create manually" (for the same reasons as the organization).
Project key and display name: org.lfenergy.operatorfabric:operatorfabric-core-mock. Then choose "Public" and click "Set Up".
-
This redirects to the following message: We initialized your project on SonarCloud, now it’s up to you to launch analyses!
-
Now we need to make Sonar aware of our different branch types and of the fact that we have develop and not master as our default branch on GitHub. Under branches/administration:
-
Change the "long living branches pattern" to (master|develop)
-
Delete the develop branch if one had been created
-
Rename the main branch from master to develop
-
Now we need to provide Travis with a token to use to access SonarCloud. To generate a token, go to Account/Security
There are two options to pass this token to Travis:
-
Option A: Define a new SONAR_TOKEN environment variable in the repository’s settings in Travis, then use it in the .travis.yml file as follows:
addons:
  sonarcloud:
    organization: "opfab-mock"
    token:
      secure: ${SONAR_TOKEN}
-
Option B: Encrypt this token using the travis gem:
travis encrypt XXXXXXXXXXXXXXXXXXXXXXXXXXXX
This has to be run at the root of the repository, otherwise you get the following error: Can’t figure out GitHub repo name. Ensure you’re in the repo directory, or specify the repo name via the -r option (e.g. travis <command> -r <owner>/<repo>)
Do not use the --add option (to add the encrypted value directly to the .travis.yml file) as it changes a lot of things in the file (remove comments, change indentation and quotes, etc.). Paste the result (YYYY) in the .travis.yml file:
addons:
  sonarcloud:
    organization: "opfab-mock"
    token:
      secure: "YYYY"
Option A would be better, as it makes a new commit unnecessary when the token needs to be changed, but it stopped working suddenly, maybe as a result of a Travis policy change regarding encryption. OC-752 was created to investigate.
-
There is still a SONAR_TOKEN environment variable defined in the Travis settings (with a dummy value) because there is a test on its presence to decide whether sonar-scanner should be launched or not (in the case of external PRs) (see OC-700 / OC-507).
-
Finally change the organization in .travis.yml file and the project key in sonar-project.properties (replace the actual values with mock values).
In .travis.yml we launch the sonar-scanner command whereas the tutorials mention gradle sonarqube. It looks like we’re following the guidance which says that "The SonarScanner is the scanner to use when there is no specific scanner for your build system." But there is a specific scanner for Gradle:
The SonarScanner for Gradle provides an easy way to start SonarCloud analysis of a Gradle project. The ability to execute the SonarCloud analysis via a regular Gradle task makes it available anywhere Gradle is available (CI service, etc.), without the need to manually download, setup, and maintain a SonarScanner installation. The Gradle build already has much of the information needed for SonarCloud to successfully analyze a project. By configuring the analysis based on that information, the need for manual configuration is reduced significantly.
→ This could make sonar easier to run locally and reduce the need for configuration (see OC-754).
42.4. GitHub (documentation)
-
Create a personal access token for GitHub (for the documentation). Its name is not important.
See GitHub documentation.
-
Create a GH_DOC_TOKEN env variable in Travis settings for the operatorfabric-core repository, making it available to all branches.
42.5. DockerHub
-
Create account opfabtechmock
-
Create organization lfeoperatorfabricmock
-
Change organization name in docker config in services.gradle
docker {
    name "lfeoperatorfabricmock/of-${project.name.toLowerCase()}"
    tags 'latest', dockerVersionTag
    labels (['project':"${project.group}"])
    files( jar.archivePath
         , 'src/main/resources/bootstrap-docker.yml'
         , '../../../src/main/docker/java-config-docker-entrypoint.sh')
    buildArgs(['JAR_FILE'       : "${jar.archiveName}",
               'http_proxy'     : apk.proxy.uri,
               'https_proxy'    : apk.proxy.uri,
               'HTTP_PROXY_AUTH': "basic:*:$apk.proxy.user:$apk.proxy.password"])
    dockerfile file("src/main/docker/Dockerfile")
}
-
Add the opfabtechmock dockerhub account credentials as DOCKER_CLOUD_USER / DOCKER_CLOUD_PWD in Travis env variables in settings (see GH_DOC_TOKEN above).
42.6. Updating the fork
To make the mock repositories catch up with the upstream (the real repositories) from time to time, follow this procedure (the command line version), except you should do a rebase instead of a merge: rick.cogley.info/post/update-your-forked-repository-directly-on-github/