1. Appendix A: Mock CI/CD Pipeline

We wanted to be able to test changes to this configuration or to the scripts used by the pipeline without risking the real master branch, our Docker images or the documentation. We didn’t find any "dry-run" option on Travis CI, so we decided to create a complete mock pipeline replicating the actual one. This is a short summary of the necessary steps.

1.1. GitHub

  1. Create organization opfab-mock

  2. Create user account opfabtechmock

  3. Invite opfabtechmock to the opfab-mock organization as owner (to be determined later whether this can/should be restricted)

  4. Fork operatorfabric-core and opfab.github.io repos to opfab-mock organization
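
    If you prefer the command line, the forks can presumably also be created with the GitHub CLI (gh); this is only a sketch (the upstream organization name and the use of gh are assumptions, forking from the GitHub web UI works just as well):

      # sketch: fork the two upstream repositories into the mock organization
      # (assumes gh is installed and authenticated as opfabtechmock)
      gh repo fork opfab/operatorfabric-core --org opfab-mock --clone=false
      gh repo fork opfab/opfab.github.io --org opfab-mock --clone=false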

1.2. Travis CI

  1. Go to travis-ci.org (not travis-ci.com)

  2. Sign in with GitHub (using opfabtechmock). This redirects to a page asking you to grant Travis CI for Open Source access to your account; click OK.

    Note: This page (github.com/settings/applications) lets you review the apps that have been granted this kind of access and when each was last used.

  3. On travis-ci.org/account/repositories, the opfab-mock organization did not appear even after syncing the account, so click "review and add your authorized organizations" (this redirects to github.com/settings/connections/applications/f244293c729d5066cf27).

  4. Click "grant" next to the organization.

  5. After that, Travis CI for Open Source is visible here: github.com/organizations/opfab-mock/settings/oauth_application_policy.

    This allowed the opfab-mock organization to appear under travis-ci.org/account/repositories. Click on opfab-mock to get to the list of repositories for this organization and toggle "operatorfabric-core" on.

  6. Under the Travis settings for the operatorfabric-core repository, create a Cron Job on the develop branch that runs daily and is set to always run.

1.3. SonarCloud

  1. Go to SonarCloud.io

  2. Sign in with GitHub

    When switching between accounts (your own and the technical account, for example), make sure to log out of SonarCloud when you’re done with an account. Otherwise it will keep the existing session (even in a new tab) rather than sign you back in with the account you’re currently using on GitHub.

  3. Authorize SonarCloud by sonarcloud.

  4. Create a new organization from the "+" dropdown menu to the left of your profile picture.

    Click on "Just testing? You can create manually" on the bottom right, and not "Choose an organization on GitHub".

    This is because the "opfab" SonarCloud organization that we are trying to replicate is not currently linked to its GitHub counterpart (maybe the option didn’t exist at the time), so we’re doing the same here. In the future it might be a good idea to link them (see OC-751), as SonarCloud states that

Binding an organization from SonarCloud with GitHub is an easy way to keep them synchronized. To bind this organization to GitHub, the SonarCloud application will be installed.

And then it warns again that

Manual setup is not recommended, and leads to missing features like appropriate setup of your project or analysis feedback in the Pull Request.

  1. Click "Analyze new project" then "Create manually" (for the same reasons as the organization).

    Set the project key and display name to org.lfenergy.operatorfabric:operatorfabric-core-mock, then choose "Public" and click "Set Up".

  6. This redirects to the following message: "We initialized your project on SonarCloud, now it’s up to you to launch analyses!"

  7. Now we need to make Sonar aware of our different branch types and of the fact that we have develop and not master as our default branch on GitHub. Under branches/administration:

    • Change the "long living branches pattern" to (master|develop)

    • Delete the develop branch if one had been created

    • Rename the main branch from master to develop

  8. Now we need to provide Travis with a token to access SonarCloud. To generate a token, go to Account/Security.

    There are two options to pass this token to Travis:

    1. Option A: Define a new SONAR_TOKEN environment variable in the repository’s settings in Travis, then use it in the .travis.yml file as follows:

      addons:
        sonarcloud:
          organization: "opfab-mock"
          token:
            secure: ${SONAR_TOKEN}
    2. Option B: Encrypt this token using the travis gem:

      travis encrypt XXXXXXXXXXXXXXXXXXXXXXXXXXXX
      This has to be run at the root of the repository, otherwise you get the following error: Can’t figure out GitHub repo name. Ensure you’re in the repo directory, or specify the repo name via the -r option (e.g. travis <command> -r <owner>/<repo>)
      Do not use the --add option (which adds the encrypted value directly to the .travis.yml file) as it changes a lot of things in the file (removes comments, changes indentation and quotes, etc.).

      Paste the result (YYYY) in the .travis.yml file:

      addons:
        sonarcloud:
          organization: "opfab-mock"
          token:
               secure: "YYYY"

      Option A would be better as it doesn’t require a new commit if the token needs to be changed, but it suddenly stopped working, maybe as a result of a Travis policy change regarding encryption. OC-752 was created to investigate.

There is still a SONAR_TOKEN environment variable defined in the Travis settings (with a dummy value) because there is a test on its presence to decide whether sonar-scanner should be launched or not (it is skipped in the case of external PRs) (see OC-700 / OC-507).
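
The check is roughly equivalent to the following sketch (the actual test in the pipeline scripts may differ):

# sketch: only launch the Sonar analysis when a token is available
# (secure variables are not injected for external PRs)
if [ -n "$SONAR_TOKEN" ]; then
  sonar-scanner
else
  echo "SONAR_TOKEN not set, skipping Sonar analysis"
fi
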
  9. Finally, change the organization in the .travis.yml file and the project key in sonar-project.properties (replace the actual values with mock values).

In .travis.yml we launch the sonar-scanner command, whereas the tutorials mention gradle sonarqube. It looks like we’re following the generic SonarScanner documentation, which says that "The SonarScanner is the scanner to use when there is no specific scanner for your build system." But there is a specific scanner for Gradle:

The SonarScanner for Gradle provides an easy way to start SonarCloud analysis of a Gradle project. The ability to execute the SonarCloud analysis via a regular Gradle task makes it available anywhere Gradle is available (CI service, etc.), without the need to manually download, setup, and maintain a SonarScanner installation. The Gradle build already has much of the information needed for SonarCloud to successfully analyze a project. By configuring the analysis based on that information, the need for manual configuration is reduced significantly.

→ This could make sonar easier to run locally and reduce the need for configuration (see OC-754).
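
For reference, running the analysis through the SonarScanner for Gradle would look roughly like the sketch below, assuming the org.sonarqube plugin were applied to the build (which is not the case today):

# sketch: Sonar analysis via the Gradle plugin instead of a standalone sonar-scanner
./gradlew sonarqube \
  -Dsonar.host.url=https://sonarcloud.io \
  -Dsonar.organization=opfab-mock \
  -Dsonar.login=$SONAR_TOKEN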

1.4. GitHub (documentation)

  1. Create a personal access token for GitHub (for the documentation). Its name is not important.

    (screenshot: creating the personal access token)
  2. Create a GH_DOC_TOKEN environment variable in the Travis settings for the operatorfabric-core repository, making it available to all branches.

    (screenshot: adding GH_DOC_TOKEN to the Travis settings)
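
    As an alternative to the web UI, the variable can presumably also be set with the travis gem (a sketch; run it from the repository directory or pass -r as noted above):

      # sketch: <token> is the GitHub personal access token created in the previous step
      travis env set GH_DOC_TOKEN <token>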

1.5. DockerHub

  1. Create account opfabtechmock

  2. Create organization lfeoperatorfabricmock

  3. Change the organization name in the docker configuration in services.gradle:

    docker {
        name "lfeoperatorfabricmock/of-${project.name.toLowerCase()}"
        tags 'latest', dockerVersionTag
        labels (['project':"${project.group}"])
        files( jar.archivePath
            , 'src/main/resources/bootstrap-docker.yml'
            , '../../../src/main/docker/java-config-docker-entrypoint.sh')
        buildArgs(['JAR_FILE'       : "${jar.archiveName}",
                   'http_proxy'     : apk.proxy.uri,
                   'https_proxy'    : apk.proxy.uri,
                   'HTTP_PROXY_AUTH': "basic:*:$apk.proxy.user:$apk.proxy.password"])
        dockerfile file("src/main/docker/Dockerfile")
    }
  4. Add the opfabtechmock DockerHub account credentials as DOCKER_CLOUD_USER / DOCKER_CLOUD_PWD environment variables in the Travis settings (see GH_DOC_TOKEN above).
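
    The pipeline can then log in to DockerHub using these variables, along the lines of the following sketch (the actual login step in the existing scripts may differ):

      # sketch: non-interactive DockerHub login with the Travis environment variables
      echo "$DOCKER_CLOUD_PWD" | docker login --username "$DOCKER_CLOUD_USER" --password-stdin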

1.6. Updating the fork

To make the mock repositories catch up with the upstream (the real repositories) from time to time, follow this procedure (the command line version), except you should do a rebase instead of a merge: rick.cogley.info/post/update-your-forked-repository-directly-on-github/
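
In practice this boils down to something like the following sketch (the upstream URL and branch name are assumptions):

# one-time setup: add the real repository as the "upstream" remote
git remote add upstream https://github.com/opfab/operatorfabric-core.git

# update the fork's develop branch with a rebase instead of a merge
git fetch upstream
git checkout develop
git rebase upstream/develop
git push --force-with-lease origin develop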

2. Migration Guide from release 1.4.0 to release 1.5.0

2.1. Refactoring of configuration management

2.1.1. Motivation for the change

The initial situation was to have a Third concept that was meant to represent third-party applications that publish content (cards) to OperatorFabric. As such, a Businessconfig was both the sender of the message and the unit of configuration for resources for card rendering.

Because of that mix of concerns, naming was not consistent across the different services in the backend and frontend, as this object could be referred to using the following terms:

  • Third

  • ThirdParty

  • Bundle

  • Publisher

But now that we’re aiming for cards to be sent by entities, users (see Free Message feature) or external services, it doesn’t make sense to tie the rendering of the card ("Which configuration bundle should I take the templates and details from?") to its publisher ("Who/What emitted this card and who/where should I reply?").

2.1.2. Changes to the model

To do this, we decided that the publisher of a card would now have the sole meaning of emitter, and that the link to the configuration bundle to use to render a card would now be based on its process field.

2.1.2.1. On the Businessconfig model

We used to have a Businessconfig object which had an array of Process objects as one of its properties. Now, the Process object replaces the Businessconfig object and this new object combines the properties of the old Businessconfig and Process objects (menuEntries, states, etc.).

In particular, this means that while in the past one bundle could "contain" several processes, now there can be only one process by bundle.

The Businessconfig object used to have a name property that was actually its unique identifier (used to retrieve it through the API, for example). It also had an i18nLabelKey property that was meant to be the i18n key determining the display name of the corresponding businessconfig, but so far it was only used to determine the display name of the associated menu in the navbar in case there were several menu entries associated with this businessconfig.

Below is a summary of the changes to the config.json file that all this entails:

Field before | Field after | Usage

name | id | Unique identifier of the bundle. Used to match the publisher field in associated cards; should now match process.

(new) | name | I18n key for the process display name.

(new) | states.mystate.name | I18n key for the state display name.

i18nLabelKey | menuLabel | I18n key for the menu display name in case there are several menu entries attached to the process.

processes array is a root property, states array being a property of a given process | states array is a root property | (see example below)

Here is an example of a simple config.json file:

Before
{
  "name": "TEST",
  "version": "1",
  "defaultLocale": "fr",
  "menuEntries": [
    {"id": "uid test 0","url": "https://opfab.github.io/","label": "menu.first"},
    {"id": "uid test 1","url": "https://www.la-rache.com","label": "menu.second"}
  ],
  "i18nLabelKey": "businessconfig.label",
  "processes": {
    "process": {
      "states": {
        "firstState": {
          "details": [
            {
              "title": {
                "key": "template.title"
              },
              "templateName": "operation"
            }
          ]
        }
      }
    }
  }
}
After
{
  "id": "TEST",
  "version": "1",
  "name": "process.label",
  "defaultLocale": "fr",
  "menuLabel": "menu.label",
  "menuEntries": [
    {"id": "uid test 0","url": "https://opfab.github.io/","label": "menu.first"},
    {"id": "uid test 1","url": "https://www.la-rache.com","label": "menu.second"}
  ],
  "states": {
    "firstState": {
      "name" :"mystate.label",
      "details": [
        {
          "title": {
            "key": "template.title"
          },
          "templateName": "operation"
        }
      ]
    }
  }
}
You should also make sure that the new i18n label keys that you introduce match what is defined in the i18n folder of the bundle.
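
For example, with the config.json above, the bundle's i18n file for the fr locale would need to define keys along these lines (a sketch; the exact file layout and the label values are illustrative, not taken from a real bundle):

{
  "process": { "label": "Processus de test" },
  "mystate": { "label": "Premier état" },
  "menu": {
    "label": "Menu de test",
    "first": "Première entrée",
    "second": "Deuxième entrée"
  },
  "template": { "title": "Titre de la carte" }
}
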
2.1.2.2. On the Cards model

Field before | Field after | Usage

publisherVersion | processVersion | Identifies the version of the bundle. It was renamed for consistency now that bundles are linked to processes, not publishers.

process | process | This field is now required and should match the id field of the process (bundle) to use to render the card.

processId | processInstanceId | This field is just renamed; it represents the id of an instance of the process.

These changes impact both current cards from the feed and archived cards.

The id of the card is now built as process.processInstanceId and no longer as publisherID_process.

2.2. Changes to web-ui.json

The navbar.thirdmenus.type parameter has been removed from this file. Starting from this release, the related functionality is configured per bundle and is no longer global. See "Changes to the bundle config.json" for more information.

2.3. Changes to the bundle config.json

Under menuEntries, a new subproperty has been added: linkType. It replaces the old navbar.thirdmenus.type property from web-ui.json, allowing finer control of the related behaviour.
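
For illustration, a menu entry in config.json could now look like this (a sketch; the linkType value shown is only an assumption to be checked against the reference documentation):

{
  "menuEntries": [
    {"id": "uid test 0", "url": "https://opfab.github.io/", "label": "menu.first", "linkType": "BOTH"}
  ]
}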

2.4. Component name

We also changed the name of the third component, which is now named businessconfig.

2.5. Changes to the endpoints

The /third endpoint becomes /businessconfig/processes.
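
For instance, a script pushing a bundle could now look roughly like the following sketch (host, port and authentication are assumptions that depend on your deployment):

# sketch: push a bundle archive to the renamed endpoint
# (it would previously have targeted .../third)
curl -X POST "http://localhost:2100/businessconfig/processes" \
  -H "Authorization: Bearer $token" \
  -F "file=@bundle.tar.gz"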

2.6. Migration steps

This section outlines the necessary steps to migrate existing data.

You need to perform these steps before starting up the OperatorFabric instance because starting up services with the new version while there are still "old" bundles in the businessconfig storage will cause the businessconfig service to crash.
  1. Backup your existing bundles and existing Mongo data.

  2. Edit your bundles as detailed above. In particular, if you had bundles containing several processes, you will need to split them into several bundles. The id of the bundles should match the process field in the corresponding cards.

  3. If you use navbar.thirdmenus.type in web-ui.json, rename it to navbar.businessmenus.type

  4. Run the following scripts in the mongo shell to rename the publisherVersion field to processVersion and the processId field to processInstanceId for all cards (current and archived):

Current cards:

db.cards.updateMany(
  {},
  { $rename: { "publisherVersion": "processVersion", "processId": "processInstanceId" } }
)

Archived cards:

db.archivedCards.updateMany(
  {},
  { $rename: { "publisherVersion": "processVersion", "processId": "processInstanceId" } }
)
  5. Make sure you have no cards without a process, using the following mongo shell commands:

    db.cards.find({ process: null})
    db.archivedCards.find({ process: null})
  6. If there are such cards, you will need to set a process value for all of them to finish the migration. You can do it either manually through Compass or using a mongo shell command. For example, to set the process to "SOME_PROCESS" for all cards with an empty process, use:

    db.cards.updateMany(
      { process: null },
      { $set: { "process": "SOME_PROCESS" } }
    )
    db.archivedCards.updateMany(
      { process: null },
      { $set: { "process": "SOME_PROCESS" } }
    )
  7. If you have any code or scripts that push bundles, you should update them to point to the new endpoint.