The aim of this document is to provide all the necessary information to developers who would like to start working on OperatorFabric. It will walk you through setting up the necessary tooling to be able to launch OperatorFabric in development mode, describe the structure of the project and point out useful tools (Gradle tasks, scripts, etc.) for development purposes.
1. Requirements
To install a dev environment for OperatorFabric, you need:
- A Linux physical or virtual machine
- A git client
- Docker
- Docker Compose with 2.1+ file format support
- Chrome (needed for UI tests in build)
2. Setting up your development environment
2.1. Clone repository
git clone https://github.com/opfab/operatorfabric-core.git
cd operatorfabric-core
Do not forget to set the proxy if needed, for example:
git config --global http.proxy http://LOGIN:PWD@PROXY_URL:PORT
2.2. Install sdkman and nvm
sdkman is used to manage java versions, see sdkman.io for installation.
nvm is used to manage node and npm versions, see github.com/nvm-sh/nvm for installation.
Do not forget to set the proxy if needed, for example:
export https_proxy=http://LOGIN:PWD@PROXY_URL:PORT
Once you have installed sdkman and nvm, you can source the following script to set up your development environment (appropriate versions of Node and Java, and project variables set):
source bin/load_environment_light.sh
From now on, you can use the environment variable ${OF_HOME} to go back to the root directory of OperatorFabric.
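To check that the environment is correctly set, you can for example verify the variables and tool versions (the exact versions displayed depend on what load_environment_light.sh selects):
echo $OF_HOME     # should print the operatorfabric-core root folder
java -version     # should match the java version selected via sdkman
node --version    # should match the node version selected via nvm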
2.3. Setting up your proxy configuration
If you use a proxy to access the internet, you must configure it for all the tools needed to build OpFab.
2.3.1. Docker
To download images, you need to set the proxy for the docker daemon. See the docker documentation for the detailed procedure.
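As a sketch, on a systemd-based distribution this usually means creating a drop-in file for the docker service (the proxy URL below is a placeholder, as elsewhere in this section):
sudo mkdir -p /etc/systemd/system/docker.service.d
sudo tee /etc/systemd/system/docker.service.d/http-proxy.conf <<'EOF'
[Service]
Environment="HTTP_PROXY=http://LOGIN:PWD@PROXY_URL:PORT"
Environment="HTTPS_PROXY=http://LOGIN:PWD@PROXY_URL:PORT"
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker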
To build docker images, you need to set a proxy via the https_proxy variable:
export https_proxy=http://LOGIN:PWD@PROXY_URL:PORT
You may encounter DNS errors when building docker images; in this case, use the IP address of your proxy instead of the FQDN.
2.4. Deploy needed docker containers
OperatorFabric development needs docker images of MongoDB, RabbitMQ, web-ui and Keycloak running. For this, use:
cd ${OF_HOME}/config/dev
./docker-compose.sh
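You can then check that the containers are up; the exact container names below are an assumption and may differ depending on your docker compose version and project name:
docker ps --format '{{.Names}}\t{{.Status}}'
# expect entries for mongodb, rabbitmq, keycloak and web-ui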
2.5. Build OperatorFabric with Gradle
Use the Gradle wrapper to ensure the project is built the same way from one machine to another.
To fully build opfab :
cd ${OF_HOME}
./gradlew buildDocker
2.6. Run OperatorFabric Services using the run_all.sh script
cd ${OF_HOME}
bin/run_all.sh -w start
See bin/run_all.sh -h for details.
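When you are done, the services can presumably be stopped with the same script (check bin/run_all.sh -h for the authoritative list of commands):
bin/run_all.sh stop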
2.7. Log into the UI
URL: localhost:2002
login: operator1_fr
password: test
Other users available in development mode are operator2_fr, operator3_fr, operator4_fr and admin, with test as password.
2.8. Push cards to the feed
You can check that cards appear in the feed by running the following scripts:
./src/test/resources/loadTestConf.sh
./src/test/resources/send6TestCards.sh
2.9. Enabling local quality report generation
This step is optional and is generally not needed (only if you want a SonarQube report locally).
SonarQube reporting needs a SonarQube docker container. Use ${OF_HOME}/src/main/docker/test-quality-environment/docker-compose.yml to get them all running.
To generate the quality report, run the following commands:
cd ${OF_HOME}
./gradlew jacocoTestReport
To export the reports into the SonarQube docker instance, install and use SonarScanner.
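A minimal invocation could look like the following, assuming sonar-scanner is on your PATH and SonarQube listens on its default port; the project key and sources are illustrative assumptions:
cd ${OF_HOME}
sonar-scanner -Dsonar.host.url=http://localhost:9000 -Dsonar.projectKey=operatorfabric-core -Dsonar.sources=.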
3. User Interface
In the following document the variable declared as OF_HOME is the root folder of the operatorfabric-core project.
CLI stands for Command Line Interface.
3.1. Build
Within the folder ${OF_HOME}/ui/main, run ng build to build the project.
The build artifacts will be stored in ${OF_HOME}/ui/main/build/distribution.
The previous command could lead to the following error:
Generating ES5 bundles for differential loading...
An unhandled exception occurred: Call retries were exceeded
See "/tmp/ng-<random-string>/angular-errors.log" for further details.
where ng-<random-string> is a temporary folder created by Angular to build the front-end. Use node --max_old_space_size=4096 node_modules/@angular/cli/bin/ng build instead to solve this problem.
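Alternatively, the memory limit can be set once per shell via the standard NODE_OPTIONS environment variable (an equivalent, not project-specific, approach):
export NODE_OPTIONS=--max_old_space_size=4096
ng build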
3.2. Test
3.2.2. Test during UI development
- if the RabbitMQ, MongoDB and Keycloak docker containers are not running, launch them;
- set your environment variables with source ${OF_HOME}/bin/load_environment_light.sh;
- run the micro services using the same command as earlier: ${OF_HOME}/bin/run_all.sh start;
- launch an angular server with the command: ng serve;
- test your changes in your browser using this url: localhost:4200 which leads to localhost:4200/#/feed.
A condensed recap of these steps is sketched below.
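Assuming the docker containers from config/dev are already running:
source ${OF_HOME}/bin/load_environment_light.sh
${OF_HOME}/bin/run_all.sh start
cd ${OF_HOME}/ui/main
ng serve
# then open http://localhost:4200/#/feed in your browser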
3.2.2.1. Troubleshooting
If ng serve returns the error Command 'ng' not found, install the Angular CLI globally with the following command:
npm install -g @angular/cli
This will install the latest version of the Angular command line, which might not be in line with the one used by the project, but that's not an issue: when you run ng serve, the local version of the Angular CLI (as defined in the package.json file) will be used.
If it is still not working, run the following in the ui/main directory:
npm link @angular/cli
4. Environment variables
These variables are loaded by bin/load_environment_light.sh:
- OF_HOME: OperatorFabric root dir
- OF_VERSION: OperatorFabric version, as defined in the $OF_HOME/VERSION file
- OF_CLIENT_REL_COMPONENTS: list of modules for the client libraries
5. Project Structure
- bin: contains useful scripts for dev purposes
- CICD/github: scripts used by Github for the build process
- client: contains REST APIs simple beans definition, may be used by external projects
- config: contains external configurations for all services, keycloak and docker compose files to help with tests and demonstrations
- node-services: contains the node microservices that make up OperatorFabric
- services: contains the java microservices that make up OperatorFabric
  - cards-consultation (cards-consultation-service): Card consultation service
  - cards-publication (cards-publication-service): Card publication service
  - external-devices (external-devices-service): External Devices service
  - src: contains swagger templates for services
  - businessconfig (businessconfig-service): Businessconfig-party information management service
  - users (users-service): Users management service
  - web-ui: project based on Nginx server to serve the OperatorFabric UI
- src
  - docs
    - asciidoc: General documentation (Architecture, Getting Started Guide, etc.)
  - main
    - docker: contains docker compose files to help with tests and demonstrations
  - test
    - api: karate code for automatic api testing (non-regression tests)
    - clientApp: test application that sends cards (used to test the client lib)
    - cypress: cypress code for automatic ui testing
    - dummyModbusDevice: application emulating a Modbus device for test purposes
    - externalApp: external test application that receives cards
    - externalWebAppExample: example application to show the integration of an external application in the opfab UI
    - resources: scripts and data for manual testing
- tools: code common to every back service
- ui: Angular sources for the UI
5.1. Conventions regarding project structure and configuration
Sub-projects must conform to a few rules in order for the configured Gradle tasks to work:
5.1.1. Java
- [sub-project]/src/main/java: contains java source code
- [sub-project]/src/test/java: contains java tests source code
- [sub-project]/src/main/resources: contains resource files
- [sub-project]/src/test/resources: contains test resource files
5.1.2. Modeling
Core services projects declaring REST APIs that use Swagger for their definition must declare two files:
- [sub-project]/src/main/modeling/swagger.yaml: Swagger API definition
- [sub-project]/src/main/modeling/config.json: Swagger generator configuration
5.1.3. Docker
Services projects all have a docker image generated in their build cycle. See Gradle Tasks for details.
Per project configuration:
- docker file: [sub-project]/Dockerfile
6. Development tools
6.1. Scripts (bin and CICD)
- bin/load_environment_light.sh: sets up the environment when sourced (java version & node version)
- bin/run_all.sh: runs all services (see below)
6.1.1. run_all.sh
Please see run_all.sh -h
usage before running.
Prerequisites:
- mongo running on port 27017 with user "root" and password "password" (see src/main/docker/mongodb/docker-compose.yml for a pre-configured instance)
- rabbitmq running on port 5672 with user "guest" and password "guest" (see src/main/docker/rabbitmq/docker-compose.yml for a pre-configured instance)
Ports configuration:
Port | Service | Description
2002 | web-ui | Web ui and gateway (Nginx server)
2100 | businessconfig | Businessconfig service http (REST)
2102 | cards-publication | card publication service http (REST)
2103 | users | Users management service http (REST)
2104 | cards-consultation | card consultation service http (REST)
2105 | external-devices | External devices service http (REST)
4100 | businessconfig | java debug port
4102 | cards-publication | java debug port
4103 | users | java debug port
4104 | cards-consultation | java debug port
4105 | external-devices | java debug port
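To quickly check which of these ports are actually listening once the services are up, something like the following works on most Linux distributions:
ss -tln | grep -E ':(2002|2100|2102|2103|2104|2105)'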
6.2. Gradle Tasks
In this section only custom tasks are described. For more information on tasks, refer to the output of the "tasks" gradle task and to gradle and plugins official documentation.
6.2.1. Services
6.2.1.1. Common tasks for all sub-projects
- Test tasks
- Other:
  - copyDependencies: copy dependencies to build/support_libs (for Sonar)
6.2.1.2. Businessconfig Service
- Test tasks:
  - prepareTestDataDir: prepare directory (build/test-data) for test data
  - compressBundle1Data, compressBundle2Data: generate tar.gz businessconfig party configuration data for tests in build/test-data
  - prepareDevDataDir: prepare directory (build/dev-data) for bootRun task
  - createDevData: prepare data in build/test-data for running bootRun task during development
- Other tasks:
  - copyCompileClasspathDependencies: copy compile classpath dependencies, catching lombok that must be sent for sonarqube
6.2.2. Client Library
The jars produced by the projects under "client" will now be published to Maven Central after each release to make integration in client applications more manageable. See the official Sonatype documentation for more information about the requirements and publishing process.
To that end, we are using:
- the Maven Publish Gradle plugin to take care of the metadata (producing the required pom.xml for example) and publishing the artifacts to a staging repository
- the Signing Gradle plugin to sign the produced artifacts using a GPG key.
6.2.2.1. Configuration
For the signing task to work, you need to set the signing configuration in your gradle.properties file.
Add to your gradle.properties:
signing.gnupg.keyName=ID_OF_THE_GPG_KEY_TO_USE
signing.secretKeyRingFile=LOCATION_OF_THE_KEYRING_HOLDING_THE_GPG_KEY
To get the keyName (ID_OF_THE_GPG_KEY_TO_USE), use:
gpg2 --list-secret-keys
LOCATION_OF_THE_KEYRING_HOLDING_THE_GPG_KEY is usually /YOUR_HOME_DIRECTORY/.gnupg/pubring.kbx
Set the credentials for the publication.
For the publication to the staging repository (OSSRH) to work, you need to set the credentials in your gradle.properties file:
ossrhUsername=SONATYPE_JIRA_USERNAME
ossrhPassword=SONATYPE_JIRA_PASSWORD
The credentials need to belong to an account that has been granted the required privileges on the project (this is done by Sonatype on request via the same JIRA).
More information
See this link for more information about importing a GPG key to your machine and getting its id.
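Putting it together, a gradle.properties set up for publishing could look like this (all values are placeholders to substitute with your own):
# GPG signing configuration
signing.gnupg.keyName=ID_OF_THE_GPG_KEY_TO_USE
signing.secretKeyRingFile=/YOUR_HOME_DIRECTORY/.gnupg/pubring.kbx
# OSSRH (Sonatype) credentials
ossrhUsername=SONATYPE_JIRA_USERNAME
ossrhPassword=SONATYPE_JIRA_PASSWORD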
6.2.2.2. Relevant tasks
These plugins and the associated configuration in the client.gradle file make the following tasks available:
- publishClientLibraryMavenPublicationToOssrhRepository: publishes the client jars to the OSSRH repository (in the case of a X.X.X.RELEASE version) or to a repos directory in the build directory (in the case of a SNAPSHOT version).
- publishClientLibraryMavenPublicationToMavenLocal: publishes the client jars to the local Maven repository.
The publication tasks will call the signing task automatically.
See the plugins documentations for more details on the other tasks available and the dependencies between them.
As the client library publication is currently the only configured publication in our build, it is also possible to use the corresponding aggregate tasks as shortcuts: publish instead of publishClientLibraryMavenPublicationToOssrhRepository, and publishToMavenLocal instead of publishClientLibraryMavenPublicationToMavenLocal.
6.3. API testing with Karate DSL
If your OperatorFabric instance is not running on localhost, you need to replace localhost with the address of your running instance within the karate-config.js file.
All the scripts and test files are in src/test/api/karate.
6.3.1. Run a feature
To launch a specific test, run the following from src/test/api/karate:
$OF_HOME/gradlew karate --args=myfeature.feature
The result will be available in the target directory.
6.3.2. Non regression tests
You can launch OperatorFabric non-regression tests via the script launchAll.sh in src/test/api/karate.
For the tests to pass, you need a clean MongoDB database. To achieve that, you can use the scripts:
- src/test/resources/deleteAllCards.sh
- src/test/resources/deleteAllArchivedCards.sh
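In practice, a full non-regression run could be chained as follows (a sketch, assuming an OpFab instance is already up):
cd $OF_HOME
./src/test/resources/deleteAllCards.sh
./src/test/resources/deleteAllArchivedCards.sh
cd src/test/api/karate
./launchAll.sh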
6.4. Cypress Tests
Automatic UI testing
All paths for cd are given assuming you're starting from $OF_HOME.
6.4.1. Installation
Before running Cypress tests for the first time you need to install it using NPM.
cd src/test/cypress
npm install
6.4.2. Cypress file structure
By default, all test files are located in cypress/cypress/integration, but it is possible to put them in another directory.
The commands.js file under cypress/cypress/support is used to create custom commands and overwrite existing commands.
6.4.3. Launching the OpFab instance to test
6.4.3.1. Commands
You can launch the OpFab instance for Cypress tests either in dev or docker mode. The following commands launch the instance in docker mode; just substitute dev for docker to launch it in dev mode.
cd config/docker
docker compose down (1)
./startOpfabForCypress.sh (2)
1. Remove existing config/docker containers to avoid conflicts
2. Start "Cypress-flavoured" containers
After you’re done with your tests, you can stop and destroy containers (as it is better to start with fresh containers to avoid side-effects from previous tests) with the following commands:
cd config/docker
docker compose down
6.4.3.2. Explanation
The Cypress tests rely on a running OpFab instance that is an adaptation from the config/docker docker compose file (environment name, shorter time before lttd clock display, etc.).
The generateUIConfigForCypress.sh script performs this adaptation to create this base Cypress configuration. This will create the following files under config/cypress/ui-config:
- ui-menu.json
- web-ui.json
- web-ui-base.json
where XXX-base.json and XXX.json are created by copying the corresponding XXX.json file from the standard docker configuration (found under config/docker/ui-config) and making the adaptations needed for the cypress instance to work well for the tests (changing the authentication mode, making all features visible, etc.).
Then, during the course of the cypress tests, the web-ui.json file will be modified to test specific features (for example, hiding a feature, defining a new menu, etc.). It is reset with the content of web-ui-base.json before each test or series of tests.
The docker container relies on the XXX.json files under config/cypress/ui-config.
For convenience, generateUIConfigForCypress.sh is launched as part of the startOpfabForCypress.sh script.
6.4.4. Running existing tests
To launch the Cypress test runner:
cd src/test/cypress
./node_modules/.bin/cypress open
This will open the Cypress test runner. Either click on the test you want to run, or click "run all X tests" on the right to run all existing tests.
You can select the browser (and version) that you want to use from a dropdown list on the right. The list will display the browsers that are currently installed on your computer (and supported by Cypress).
6.4.5. Running tests on 4200 (ng serve)
Follow the steps described above in "Dev mode" to start a Cypress-flavoured OpFab instance in development mode, then run ng serve to start a dynamically generated ui on port 4200:
cd ui/main
ng serve
Then launch the Cypress test runner as follows:
cd src/test/cypress
./node_modules/.bin/cypress open --config baseUrl=http://localhost:4200
6.4.6. Running tests with Gradle
The tests can also be run in command line mode using a Gradle task:
./gradlew runCypressTests
You can run a subset of the tests, for example if you want to run all the tests starting with 'User':
./gradlew runSomeCypressTests -PspecFiles=User*
6.4.7. Clearing MongoDB
If you want to start with a clean database (from the cards and archived cards point of view), you can purge the associated collections through the MongoDB shell with the following commands:
docker exec -it docker-mongodb-1 bash
mongo "mongodb://root:password@localhost:27017/?authSource=admin"
use operator-fabric
db.cards.remove({})
db.archivedCards.remove({})
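Note: on recent MongoDB images the legacy mongo shell is replaced by mongosh, and remove({}) is deprecated in favour of deleteMany({}). An equivalent session, assuming your image ships mongosh, would be:
docker exec -it docker-mongodb-1 mongosh "mongodb://root:password@localhost:27017/?authSource=admin"
use operator-fabric
db.cards.deleteMany({})
db.archivedCards.deleteMany({})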
6.4.8. Current status of tests
All tests should be passing when run alone (i.e. not with run all specs) against empty card/archived cards collections. However, tests in the "Flaky" folder can sometimes fail because they involve dates (round up errors for example).
6.4.9. Creating new tests
Create a new XXXX.spec.js file under cypress/cypress/integration.
We will need to define a convention for naming and organizing tests.
6.4.9.2. Guidelines and tips
- Use the find or within commands rather than complex CSS selectors to target descendant elements.
- If you want to access aliases using the this keyword, make sure you are using anonymous functions rather than fat arrow functions, otherwise use cy.get('@myAlias') to access it asynchronously (the documentation has recently been updated on this topic).
- When running tests, make sure that you are not connected to OpFab, as it can cause unexpected behaviour with read cards for example.
- When chaining a should assertion to a cy.get command that returns several elements, it will pass if it is true for ANY of these elements. Use each + callback to check that an assertion is true for every element.
- cy.contains is a command, not an assertion. If you want to test the attribute, classes, content etc. of an element, it's better to target the element by id or data attribute using a cy.get() command for example, and then chain an assertion with should(). This way, you will get an expected/actual error message if the assertion fails, you will avoid false positives (text found in another sibling element) and hard-to-debug behaviour with retries.
- Be careful with find() (see #1751 for an example of an issue it can cause). See the Cypress documentation for an explanation and a less flaky alternative.
6.4.10. Configuration
In cypress.config.js:
- e2e.baseUrl: the base url of the OperatorFabric instance you're testing against. It will be appended in front of any visit call.
- e2e.env.host: the host corresponding to the OperatorFabric instance you're testing against. It will be used for API calls.
- e2e.env.defaultWaitTime: using the custom-defined command cy.waitDefaultTime() instead of cy.wait(XXX) allows the wait time to be changed globally for all steps to the value defined by this property.
6.5. Load testing with Gatling
Load tests using Gatling are written in Java. All test java sources are in src/test/gatling/src/java.
If your OperatorFabric instance is not running on localhost, you need to edit the test classes and replace localhost with the address of your running instance.
6.6. Node services
When developing Node services, you have the option to run the service outside of the Docker environment. To do this, follow these steps:
Stop the Docker container running the Node service. For example:
docker stop cards-reminder
After stopping the Docker container, you can start the service in development mode with hot reload:
cd node-services/cards-reminder
npm run start:dev
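Once you are done developing, stop the dev instance (Ctrl-C) and restore the dockerized service (a sketch; the container name mirrors the stop example above):
docker start cards-reminder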
7. Useful recipes
7.2. Overriding properties when running from jar file
- java -jar [sub-projectPath]/build/libs/[sub-project].jar --spring.config.additional-location=file:[filepath]
  NB: properties may be set using a ".properties" file or a ".yml" file. See the Spring Boot configuration documentation for more info. A concrete invocation is sketched after this list.
- Generic property list extract:
  - server.port: embedded server port
- :services:core:businessconfig-party-service properties list extract:
  - operatorfabric.businessconfig.storage.path (defaults to ""): where to save/load OperatorFabric Businessconfig data
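For illustration, overriding the port of the businessconfig service might look like this (the jar name and config file path are assumptions that depend on your build and version):
java -jar services/businessconfig/build/libs/businessconfig.jar --spring.config.additional-location=file:config/dev/businessconfig.yml --server.port=2110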
7.4. Generating documentation (from AsciiDoc sources)
The sources for the documentation are located under src/docs/asciidoc. To generate HTML pages from these sources, use the asciidoctor gradle task from the project root:
cd $OF_HOME
./gradlew asciidoctor
The task output can be found under $OF_HOME/build/docs/asciidoc.
7.5. Generating API documentation
The documentation for the API is generated from the swagger.yaml definitions using SwaggerUI. To generate the API documentation, use the generateSwaggerUI gradle task, either from the project root or from one of the services:
cd $OF_HOME
./gradlew generateSwaggerUI
The task output can be found for each service under [service-path]/build/docs/api (for example services/businessconfig/build/docs/api). Open the index.html file in a browser to have a look at the generated SwaggerUI.
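For example, on a Linux desktop you could open the generated SwaggerUI directly (xdg-open is the usual freedesktop helper; substitute your browser of choice):
xdg-open services/businessconfig/build/docs/api/index.html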
8. Troubleshooting
Proxy error when running businessconfig-party docker-compose

When running docker compose files using businessconfig-party images (such as rabbitmq, mongodb etc.) for the first time, docker needs to pull these images from their repositories. If the docker proxy isn't set properly, you will see the following message:

Pulling rabbitmq (rabbitmq:3-management)...
ERROR: Get https://registry-1.docker.io/v2/: Proxy Authentication Required

To set the proxy, follow these steps from the docker documentation. If your proxy needs authentication, add your user and password as follows:

HTTP_PROXY=http://user:password@proxy.example.com:80/

The password should be URL-encoded.
Gradle Metaspace error

A Gradle task (for example gradle build) fails with the following error:

* What went wrong:
Metaspace

This is an issue with the Gradle daemon. Stopping the daemon using ./gradlew --stop and re-launching the build should solve this issue.
Java version not available when setting up environment

When sourcing load_environment_light.sh, the following error appears:

Stop! java 8.0.192-zulu is not available. Possible causes:
* 8.0.192-zulu is an invalid version
* java binaries are incompatible with Linux64
* java has not been released yet

The java version currently listed in the script might have been deprecated (for security reasons) or might not be available for your operating system (for example, 8.0.192-zulu wasn't available for Ubuntu). Run sdk list java to find out which versions are available. You will get this kind of output:

================================================================================
Available Java Versions
================================================================================
13.ea.16-open        9.0.4-open           1.0.0-rc-11-grl
12.0.0-zulu          8.0.202-zulu         1.0.0-rc-10-grl
12.0.0-open          8.0.202-amzn         1.0.0-rc-9-grl
12.0.0-librca        8.0.202.j9-adpt      1.0.0-rc-8-grl
11.0.2-zulu          8.0.202.hs-adpt
11.0.2-open          8.0.202-zulufx
11.0.2-amzn          8.0.202-librca
11.0.2.j9-adpt       8.0.201-oracle
11.0.2.hs-adpt   > + 8.0.192-zulu
11.0.2-zulufx        7.0.211-zulu
11.0.2-librca        6.0.119-zulu
11.0.2-sapmchn       1.0.0-rc-15-grl
10.0.2-zulu          1.0.0-rc-14-grl
10.0.2-open          1.0.0-rc-13-grl
9.0.7-zulu           1.0.0-rc-12-grl
================================================================================
+ - local version
* - installed
> - currently in use
================================================================================

Select the next available version and update load_environment_light accordingly before sourcing it again.
BUILD FAILED with message "Execution failed for task ':ui:main-user-interface:npmInstall'."

FAILURE: Build failed with an exception.
What went wrong:
Execution failed for task ':ui:main-user-interface:npmInstall'.

This happens when sudo has been used before ./gradlew assemble. Don't use sudo to build OperatorFabric, otherwise unexpected problems could arise.
curl: Failed to connect to localhost:2002: Connection refused

When using the following command line:

curl http://localhost:2002/

The following error appears:

curl: (7) Failed to connect to localhost port 2002: Connection refused

The web-ui docker container has stopped running. Check its configuration.
curl 404 status returned by nginx

When using the following command line:

curl http://localhost:2002/thirds/

The following error appears:

<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx/1.17.10</center>
</body>
</html>

The requested page is not (or no longer) mapped by the nginx.conf of web-ui. Update it or check for the new end-point of the desired page. For this example, businessconfig now replaces the former thirds end-point.
curl 404 status returned by OperatorFabric

When using the following command line:

curl http://localhost:2002/businessconfig/ -H "Authorization: Bearer ${token}"

where ${token} is a valid OAuth2 JWT, the following error appears:

{"timestamp":"XXXX-XX-XXTXX:XX:XX.XXX+00:00","status":404,"error":"Not Found","message":"","path":"/businessconfig"}

where XXXX-XX-XXTXX:XX:XX.XXX+00:00 is a timestamp corresponding to the moment when the request was sent. The requested end-point is not (or no longer) valid in OperatorFabric. Check the API documentation for the correct path. For this example, businessconfig/processes is a correct end-point whereas businessconfig alone is not.
ERROR: for web-ui when running docker compose in ${OF_HOME}/config/dev

When using the following commands:

cd ${OF_HOME}/config/dev
docker compose up -d

The following error appears:

ERROR: for web-ui Cannot start service web-ui: OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:430: container init caused \"rootfs_linux.go:58: mounting \\\"/home/legallron/projects/operatorfabric-core/config/dev/nginx.conf\\\" to rootfs …

where … is specific to the runtime environment.

There is no nginx.conf file in the ${OF_HOME}/config/dev directory. A first run of docker compose in dev config needs a nginx.conf file. To create it and run a docker compose environment, use:

cd ${OF_HOME}/config/dev
./docker-compose.sh

If docker compose has created a nginx.conf directory, delete it before running the previous commands. Once this nginx.conf file is created, a simple docker compose up -d is enough to run a dev docker compose environment.
Sometimes a nginx.conf folder has been created as an attempt to launch the web-ui docker container. See the following section to resolve this.
./docker-compose.sh: ligne 7: ./nginx.conf: is a folder when running ${OF_HOME}/config/dev/docker-compose.sh

When using the following commands:

cd ${OF_HOME}/config/dev
./docker-compose.sh

The following error appears:

./docker-compose.sh: ligne 7: ./nginx.conf: is a folder

A docker compose up has previously been run without a nginx.conf file, and a folder named nginx.conf has been created by docker-compose.

If you have the rights to delete the folder:

cd ${OF_HOME}/config/dev
rm -rf nginx.conf
./docker-compose.sh # if you want to run OperatorFabric directly after
cd ${OF_HOME}
bin/run_all.sh start

If you don't have the rights to delete the folder:

cd ${OF_HOME}/config/dev
docker run -ti --rm -v $(pwd):/current alpine # if there is no `alpine` docker image available locally it will be pulled from dockerHub
# you are now in the alpine docker container
cd /current
rm -rf nginx.conf
<ctrl-d> # to exit the `alpine` container shell environment
./docker-compose.sh # if you want to run OperatorFabric directly after
cd ${OF_HOME}
bin/run_all.sh start
When using the following command line: The following error appears: where There is not enough allocated memory space to build the front-end. Use the following command to solve the problem:
An unhandled exception occurred: Call retries were exceeded
occurs when using ng build
cd ${OF_HOME}/ui/main
ng build
Generating ES5 bundles for differential loading...
An unhandled exception occurred: Call retries were exceeded
See "/tmp/ng-<random-string>/angular-errors.log" for further details.
ng-<random-string>
is a temporary folder created by Angular to build the front-end.node --max_old_space_size=4096 node_modules/@angular/cli/bin/ng build
9. Keycloak Configuration
The configuration needed for development purposes is automatically loaded from the dev-realms.json file. However, the steps below describe how it can be reproduced from scratch on a blank Keycloak instance, in case you want to add to it.
The Keycloak Management interface is available here: [host]:89/auth/admin. Default credentials are admin/admin.
9.2. Setup at least one client (or best one per service)
9.2.1. Create client
- Click Clients in left menu
- Click Create button
- Set client ID to "opfab-client" (or whatever)
- Select Openid-Connect Protocol
- Click Next
- Enable client authentication
- Enable authorization
- Select Authentication flows: Standard flow, Direct access grants, Implicit flow
- Click Next
- Enter Valid redirect URIs: localhost:2002/*
- Add Valid redirect URIs: localhost:4200/*
- Click Save
- Remove Web origins settings
- Click Save
- Select Client scopes tab
- Click on opfab-client-dedicated
- From Mappers tab, click Add Mapper:
  - Select by configuration
  - Select User Property
  - name it sub
  - set Property to username
  - set Token claim name to sub
  - enable add to access token
  - save
- From Mappers tab, click Add Mapper:
  - Select by configuration
  - Select User Attribute
  - name it groups
  - set Property to groups
  - set User attribute to groups
  - set Token claim name to groups
  - enable add to access token
  - save
- From Mappers tab, click Add Mapper:
  - Select by configuration
  - Select User Attribute
  - name it entitiesId
  - set Property to entitiesId
  - set User attribute to entitiesId
  - set Token claim name to entitiesId
  - enable add to access token
  - save
9.3. Create Users
- Click Users in left menu
- Click Add User button
- Set username to admin
- Click Create
- Select Credentials tab:
  - set password and confirmation to "test"
  - disable Temporary flag
- Select Attributes tab:
  - add groups or entitiesId attributes if needed
Repeat the process for the other users: operator3_fr, operator1_fr, operator2_fr, etc.
9.3.1. Development-specific configuration
To facilitate development, in the configuration file provided in the git repository (dev-realms.json), sessions are set to have a duration of 10 hours (36000 seconds) and SSL is not required. These parameters should not be used in production.
The following parameters are set:
accessTokenLifespan: 36000
ssoSessionMaxLifespan: 36000
accessCodeLifespan: 36000
accessCodeLifespanUserAction: 36000
sslRequired: none
10. Using OAuth2 token with the CLI
10.1. Get a token
End point: localhost:2002/auth/token
Method: POST
Body arguments:
- grant_type: string, constant=password
- username: string, any value, must match an OperatorFabric registered user name
- password: string, any value
The following examples will be for the admin user.
10.1.1. Curl
command:
curl -s -X POST -d "username=admin&password=test&grant_type=password" http://localhost:2002/auth/token
example of expected result:
{"access_token":"eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiJhZG1pbiIsImV4cC I6MTU1MjY1OTczOCwiYXV0aG9yaXRpZXMiOlsiUk9MRV9BRE1JTiIsIlJPTEVfVVNFUiJdLCJqdGkiOi IwMmQ4MmU4NS0xM2YwLTQ2NzgtOTc0ZC0xOGViMDYyMTVhNjUiLCJjbGllbnRfaWQiOiJjbGllbnRJZF Bhc3N3b3JkIiwic2NvcGUiOlsicmVhZCIsInVzZXJfaW5mbyJdfQ.SDg-BEzzonIVXfVBnnfq0oMbs_0 rWVtFGAZzRHj7KPgaOXT3bUhQwPOgggZDO0lv2U1klwB94c8Cb6rErzd3yjJ8wcVcnFLO4KxrjYZZxdK VAz0CkMKqng4kQeQm_1UShsQXGLl48ezbjXyJn6mAl0oS4ExeiVsx_kYGEdqjyb5CiNaAzyx0J-J5jVD SJew1rj5EiSybuy83PZwhluhxq0D2zPK1OSzqiezsd5kX5V8XI4MipDhaAbPYroL94banZTn9RmmAKZC AYVM-mmHbjk8mF89fL9rKf9EUNhxOG6GE0MDqB3LLLcyQ6sYUmpqdP5Z94IkAN-FpC7k93_-RDw","to ken_type":"bearer","refresh_token":"eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWI iOiJhZG1pbiIsInNjb3BlIjpbInJlYWQiLCJ1c2VyX2luZm8iXSwiYXRpIjoiMDJkODJlODUtMTNmMC0 0Njc4LTk3NGQtMThlYjA2MjE1YTY1IiwiZXhwIjoxNTUyNzAxMTM4LCJhdXRob3JpdGllcyI6WyJST0x FX0FETUlOIiwiUk9MRV9VU0VSIl0sImp0aSI6IjMwOWY2ZDllLWNmOGEtNDg0YS05ZjMxLWViOTAxYzk 4YTFkYSIsImNsaWVudF9pZCI6ImNsaWVudElkUGFzc3dvcmQifQ.jnZDt6TX2BvlmdT5JV-A7eHTJz_s lC5fHrJFVI58ly6N7AUUfxebG_52pmuVHYULSKqTJXaLR866r-EnD4BJlzhk476FtgtVx1nazTpLFRLb 8qDCxeLrzClQBkzcxOt6VPxB3CD9QImx3bcsDwjkPxofUDmdg8AxZfGTu0PNbvO8TKLXEkeCztLFvSJM GlN9zDzWhKxr49I-zPZg0XecgE9j4WITkFoDVwI-AfDJ3sGXDi5AN55Sz1j633QoqVjhtc0lO50WPVk5 YT7gU8HLj27EfX-6vjnGfNb8oeq189-NX100QHZM9Wgm79mIm4sRgwhpv-zzdDAkeb3uwIpb8g","exp ires_in":1799,"scope":"read user_info","jti":"02d82e85-13f0-4678-974d-18eb06215a65"}
10.1.2. Httpie
http --form POST http://localhost:2002/auth/token username=admin password=test grant_type=password
example of expected result:
.HTTP/1.1 200 OK Cache-Control: no-store Content-Type: application/json;charset=utf-8 Date: Fri, 15 Mar 2019 13:57:19 GMT Pragma: no-cache X-Content-Type-Options: nosniff X-Frame-Options: DENY X-XSS-Protection: 1; mode=block transfer-encoding: chunked { "access_token": "eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiJhZG1pbiIsImV4cCI6MTU1MjY2MDAzOS wiYXV0aG9yaXRpZXMiOlsiUk9MRV9BRE1JTiIsIlJPTEVfVVNFUiJdLCJqdGkiOiI2MjQzMDliMS03Yz g3LTRjZGMtODQ0My0wMTI0NTE1Zjg3ZjgiLCJjbGllbnRfaWQiOiJjbGllbnRJZFBhc3N3b3JkIiwic2 NvcGUiOlsicmVhZCIsInVzZXJfaW5mbyJdfQ.VO4OZL7ycqNez0cHzM5WPuklr0r6SAOkUdUV2qFa5Bd 3PWx3DFHAHUxkfSX0-R4OO6iG2Zu7abzToAZNVLwk107LH_lWXOMQBriGx3d2aSgCf1yx_wI3lHDd8ST 8fxV7uNeolzywYavSpMGfgz9GXLzmnyeuPH4oy7eyPk9BwWVi0d7a_0d-EfhE1T8eaiDfymzzNXJ4Bge 8scPy-93HmWpqORtJaFq1qy4QgU28N2LgHFEEEWCSzfhYXH-LngTCP3-JSNcox1hI51XBWEqoeApKdfD J6o4szR71SIFCBERxCH9TyUxsFywWL3e-YnXMiP2J08eB8O4YwhYQEFqB8Q", "expires_in": 1799, "jti": "624309b1-7c87-4cdc-8443-0124515f87f8", "refresh_token": "eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiJhZG1pbiIsInNjb3BlIjpbInJlYWQiLC J1c2VyX2luZm8iXSwiYXRpIjoiNjI0MzA5YjEtN2M4Ny00Y2RjLTg0NDMtMDEyNDUxNWY4N2Y4IiwiZX hwIjoxNTUyNzAxNDM5LCJhdXRob3JpdGllcyI6WyJST0xFX0FETUlOIiwiUk9MRV9VU0VSIl0sImp0aS I6ImRiYzMxNTJiLTM4YTUtNGFmZC1hY2VmLWVkZTI4MjJkOTE3YyIsImNsaWVudF9pZCI6ImNsaWVudE lkUGFzc3dvcmQifQ.Ezd8kbfNQHOOvUCNNN4UmOOkncHiT9QVEM63FiW1rq0uXDa3xfBGil8geM5MsP0 7Q2He-mynkFb8sGNDrAXTdO-8r5o4a60zWrktrMg2QH4icC1lyeZpiwZxe6675QpLpSeMlXt9PdYj-pb 14lrRookxXP5xMQuIMteZpbtby7LuuNAbNrjveZ1bZ4WMi7zltUzcYUuqHlP1AYPteGRrJVKXiuPpoDv gwMsEk2SkgyyACI7SdZZs8IT9IGgSsIjjgTMQKzj8P6yYxNLUynEW4o5y1s2aAOV0xKrzkln9PchH9zN qO-fkjTVRjy_LBXGq9zkn0ZeQ3BUe1GuthvGjaA", "scope": "read user_info", "token_type": "bearer" }
10.2. Extract token
From the previous results, the data needed to be authenticated by OperatorFabric services is the content of the "access_token" attribute of the body response.
Once this value is extracted, it needs to be passed at the end of the value of an http header of type Authorization: Bearer. Note that a space is needed between Bearer and the actual token value.
example from previous results:
10.2.1. Curl
Authorization:Bearer eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiJhZG1pbiIsImV4cCI6MTU1MjY1OTczOCw iYXV0aG9yaXRpZXMiOlsiUk9MRV9BRE1JTiIsIlJPTEVfVVNFUiJdLCJqdGkiOiIwMmQ4MmU4NS0xM2Y wLTQ2NzgtOTc0ZC0xOGViMDYyMTVhNjUiLCJjbGllbnRfaWQiOiJjbGllbnRJZFBhc3N3b3JkIiwic2N vcGUiOlsicmVhZCIsInVzZXJfaW5mbyJdfQ.SDg-BEzzonIVXfVBnnfq0oMbs_0rWVtFGAZzRHj7KPga OXT3bUhQwPOgggZDO0lv2U1klwB94c8Cb6rErzd3yjJ8wcVcnFLO4KxrjYZZxdKVAz0CkMKqng4kQeQm _1UShsQXGLl48ezbjXyJn6mAl0oS4ExeiVsx_kYGEdqjyb5CiNaAzyx0J-J5jVDSJew1rj5EiSybuy83 PZwhluhxq0D2zPK1OSzqiezsd5kX5V8XI4MipDhaAbPYroL94banZTn9RmmAKZCAYVM-mmHbjk8mF89f L9rKf9EUNhxOG6GE0MDqB3LLLcyQ6sYUmpqdP5Z94IkAN-FpC7k93_-RDw
10.2.2. Httpie
Authorization:Bearer eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiJhZG1pbiIsImV4cCI6MTU1MjY2MDAzOSw iYXV0aG9yaXRpZXMiOlsiUk9MRV9BRE1JTiIsIlJPTEVfVVNFUiJdLCJqdGkiOiI2MjQzMDliMS03Yzg 3LTRjZGMtODQ0My0wMTI0NTE1Zjg3ZjgiLCJjbGllbnRfaWQiOiJjbGllbnRJZFBhc3N3b3JkIiwic2N vcGUiOlsicmVhZCIsInVzZXJfaW5mbyJdfQ.VO4OZL7ycqNez0cHzM5WPuklr0r6SAOkUdUV2qFa5Bd3 PWx3DFHAHUxkfSX0-R4OO6iG2Zu7abzToAZNVLwk107LH_lWXOMQBriGx3d2aSgCf1yx_wI3lHDd8ST8 fxV7uNeolzywYavSpMGfgz9GXLzmnyeuPH4oy7eyPk9BwWVi0d7a_0d-EfhE1T8eaiDfymzzNXJ4Bge8 scPy-93HmWpqORtJaFq1qy4QgU28N2LgHFEEEWCSzfhYXH-LngTCP3-JSNcox1hI51XBWEqoeApKdfDJ 6o4szR71SIFCBERxCH9TyUxsFywWL3e-YnXMiP2J08eB8O4YwhYQEFqB8Q
10.3. Check a token
10.3.1. Curl
from previous example
curl -s -X POST -d "token=eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiJhZG1pbiIsImV4cCI6MTU1MjY1 OTczOCwiYXV0aG9yaXRpZXMiOlsiUk9MRV9BRE1JTiIsIlJPTEVfVVNFUiJdLCJqdGkiOiIwMmQ4MmU4 NS0xM2YwLTQ2NzgtOTc0ZC0xOGViMDYyMTVhNjUiLCJjbGllbnRfaWQiOiJjbGllbnRJZFBhc3N3b3Jk Iiwic2NvcGUiOlsicmVhZCIsInVzZXJfaW5mbyJdfQ.SDg-BEzzonIVXfVBnnfq0oMbs_0rWVtFGAZzR Hj7KPgaOXT3bUhQwPOgggZDO0lv2U1klwB94c8Cb6rErzd3yjJ8wcVcnFLO4KxrjYZZxdKVAz0CkMKqn g4kQeQm_1UShsQXGLl48ezbjXyJn6mAl0oS4ExeiVsx_kYGEdqjyb5CiNaAzyx0J-J5jVDSJew1rj5Ei Sybuy83PZwhluhxq0D2zPK1OSzqiezsd5kX5V8XI4MipDhaAbPYroL94banZTn9RmmAKZCAYVM-mmHbj k8mF89fL9rKf9EUNhxOG6GE0MDqB3LLLcyQ6sYUmpqdP5Z94IkAN-FpC7k93_-RDw" http://localhost:2002/auth/check_token
which gives the following example of result:
{ "sub":"admin", "scope":["read","user_info"], "active":true,"exp":1552659738, "authorities":["ROLE_ADMIN","ROLE_USER"], "jti":"02d82e85-13f0-4678-974d-18eb06215a65", "client_id":"clientIdPassword" }
10.3.2. Httpie
from previous example:
http --form POST http://localhost:2002/auth/check_token token=eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiJhZG1pbiIsImV4cCI6MTU1MjY2M DAzOSwiYXV0aG9yaXRpZXMiOlsiUk9MRV9BRE1JTiIsIlJPTEVfVVNFUiJdLCJqdGkiOiI2MjQzMDliM S03Yzg3LTRjZGMtODQ0My0wMTI0NTE1Zjg3ZjgiLCJjbGllbnRfaWQiOiJjbGllbnRJZFBhc3N3b3JkI iwic2NvcGUiOlsicmVhZCIsInVzZXJfaW5mbyJdfQ.VO4OZL7ycqNez0cHzM5WPuklr0r6SAOkUdUV2q Fa5Bd3PWx3DFHAHUxkfSX0-R4OO6iG2Zu7abzToAZNVLwk107LH_lWXOMQBriGx3d2aSgCf1yx_wI3lH Dd8ST8fxV7uNeolzywYavSpMGfgz9GXLzmnyeuPH4oy7eyPk9BwWVi0d7a_0d-EfhE1T8eaiDfymzzNX J4Bge8scPy-93HmWpqORtJaFq1qy4QgU28N2LgHFEEEWCSzfhYXH-LngTCP3-JSNcox1hI51XBWEqoeA pKdfDJ6o4szR71SIFCBERxCH9TyUxsFywWL3e-YnXMiP2J08eB8O4YwhYQEFqB8Q
which gives the following example of result:
HTTP/1.1 200 OK Cache-Control: no-cache, no-store, max-age=0, must-revalidate Content-Type: application/json;charset=utf-8 Date: Fri, 15 Mar 2019 14:19:31 GMT Expires: 0 Pragma: no-cache X-Content-Type-Options: nosniff X-Frame-Options: DENY X-XSS-Protection: 1; mode=block transfer-encoding: chunked { "active": true, "authorities": [ "ROLE_ADMIN", "ROLE_USER" ], "client_id": "clientIdPassword", "exp": 1552660039, "jti": "624309b1-7c87-4cdc-8443-0124515f87f8", "scope": [ "read", "user_info" ], "sub": "admin" }
10.4. Extracting the token with jq
The utility jq, sadly not always available on some Linux distros, parses json input and extracts the requested json path value (access_token here). Here is a way to do so:
curl -d "username=${user}&password=${password}&grant_type=password" "http://localhost:2002/auth/token" | jq -r .access_token
where:
- ${user}: login existing in keycloak for operatorfabric;
- ${password}: password for the previous login in keycloak;
- opfab-client: the id of the client for OperatorFabric associated to the dev realm in Keycloak, in a dev (${OF_HOME}/config/dev) or docker (${OF_HOME}/config/docker) configuration of operatorFabric.
The -r option, for raw, leaves the output without any quotes.
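Putting it together, a typical session could look like this (user and endpoint follow the examples given earlier in this document):
token=$(curl -s -d "username=admin&password=test&grant_type=password" "http://localhost:2002/auth/token" | jq -r .access_token)
curl http://localhost:2002/businessconfig/processes -H "Authorization: Bearer ${token}"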
11. Kafka Implementation
Next to publishing cards to OperatorFabric using the REST API, OperatorFabric also supports publishing cards via a Kafka Topic. In the default configuration Kafka is enabled.
11.1. Setup Kafka environment
Opfab by default starts a bitnami Kafka docker container and a zookeeper container, as defined in docker-compose.yml.
If you already have a Kafka environment running, you need to configure Opfab to connect to your Kafka server by setting the spring.kafka.bootstrap-servers property in cards-publication.yml:
spring:
  kafka:
    bootstrap-servers: <kafka-server-host>:<port>
If you want to set up an easy-to-use broker with a graphical interface, you can for example download lenses.io.
11.2. Disabling Kafka
To disable Kafka support, you need to comment out the kafka.* properties in the cards-publication.yml file:
# kafka:
# consumer:
# group-id: opfab-command
# bootstrap-servers: kafka:9092
11.3. Configuration
The default topic from which the messages are consumed is called opfab. This setting can be modified by setting opfab.kafka.card.topics.topicname. Messages are encoded in the CardCommand.card field.
The default topic to which messages are produced is called opfab-response. This setting can be modified by setting opfab.kafka.topics.response-card, see below. Messages produced by OperatorFabric are encoded in the CardCommand.responseCard field.
By default, Opfab uses the provided KafkaAvroWithoutRegistrySerializer and KafkaAvroWithoutRegistryDeserializer, so no schema registry setting is needed.
If you want to use a schema registry, you need to configure opfab to use the standard Kafka Avro serializers and deserializers, and make sure the registry service setting is provided in the cards-publication.yml file.
Example settings for the cards-publication.yml file:
spring:
deserializer:
value:
delegate:
class: io.confluent.kafka.serializers.KafkaAvroDeserializer
serializer:
value:
delegate:
class: io.confluent.kafka.serializers.KafkaAvroSerializer
operatorfabric:
cards-publication:
kafka:
topics:
card:
topicname: opfab
response-card:
topicname: opfab-response
schema:
registry:
url: http://localhost:8081
See the Cards-publication service documentation for more settings.
See Schema management for detailed information on using a schema registry and its benefits.
11.5. Listener / deserializer
Most of the OperatorFabric Kafka implementation can be found at:
- org.opfab.cards.publication.kafka: the implementation of the deserializers and the mapping of Kafka topics to OperatorFabric cards
- org.opfab.autoconfigure.kafka: the various Kafka configuration options
11.5.1. Kafka OperatorFabric AVRO schema
The AVRO schema, the byte format in which messages are transferred using Kafka topics, can be found at client/src/main/avro.
Messages are wrapped in a CardCommand object before being sent to a Kafka topic. The CardCommand consists of some additional information and the OperatorFabric card itself, see also [card_structure]. The additional information, for example CommandType, consists mostly of information present in a REST operation but not in Kafka, for example the http method (POST, DELETE, UPDATE) used.
11.6. Configure Kafka
11.6.1. Setting a new deserializer
By default, OperatorFabric uses the provided org.opfab.cards.publication.kafka.consumer.KafkaAvroWithoutRegistryDeserializer. However, you can use the standard io.confluent.kafka.serializers.KafkaAvroDeserializer from Confluent, or write your own deserializer. To use your own deserializer, make sure spring.deserializer.value.delegate.class points to your deserializer.
11.6.2. Configuring a broker
When you have a broker running on localhost port 9092, you do not need to set the bootstrap servers. If this is not the case, you need to tell OperatorFabric where the broker can be found. You can do so by setting the bootstrap-servers property in the cards-publication.yml file:
spring:
kafka:
bootstrap-servers: 172.17.0.1:9092
11.7. Kafka card producer
To send a CardCommand to OperatorFabric, start by implementing a simple Kafka producer, for example by following Spring for Apache Kafka. Note that some properties of CardCommand or its embedded Card are required; if not set, the card will be rejected by OperatorFabric.
When you dump the card (which is about to be put on a topic) to stdout, you should see something like the lines below. Ignore the actual values in the dump.
{
"command": "CREATE_CARD",
"process": "integrationTest",
"processInstanceId": "fa6ce61f-192f-11eb-a6e3-eea952defe56",
"card": {
"parentCardUid": null,
"publisher": "myFirstPublisher",
"processVersion": "2",
"state": "FirstUserTask",
"publishDate": null,
"lttd": null,
"startDate": 1603897942000,
"endDate": 1604070742000,
"severity": "ALARM",
"tags": null,
"timeSpans": null,
"details": null,
"title": {
"key": "FirstUserTask.title",
"parameters": null
},
"summary": {
"key": "FirstUserTask.summary",
"parameters": null
},
"userRecipients": [
"tso1-operator",
"tso2-operator"
],
"groupRecipients": null,
"entitiesAllowedToRespond": [
"ENTITY1_FR"
],
"entityRecipients": null,
"hasBeenAcknowledged": null,
"data": "{\"action\":\"Just do something\"}"
}
}
11.8. Response Cards
OperatorFabric response cards can be sent by REST or put on a Kafka topic. The Kafka response card configuration follows the convention used to configure a REST endpoint: instead of setting the URL to 'http://host/api', you set it to 'kafka:response-topic' in the external-recipients section of the cards-publication.yml file:
operatorfabric:
cards-publication:
external-recipients:
recipents:
- id: "processAction"
url: "http://localhost:8090/test"
propagateUserToken: false
- id: "mykafka"
url: "kafka:topicname"
propagateUserToken: false
Note that topicname is a placeholder for now: all response cards are returned via the same Kafka response topic, as specified in the opfab.kafka.topics.response-card field.
12. MailHog SMTP server
MailHog is an SMTP mail server suitable for testing. Opfab uses MailHog to test the mail notification service. MailHog allows you to view sent messages in its web UI, or to retrieve them with a JSON API.
12.1. MailHog docker container
A MailHog docker container is configured in Opfab docker-compose.yaml:
mailhog:
image: mailhog/mailhog:v1.0.1
ports:
- 1025:1025
- 8025:8025
The container exposes port 1025 for SMTP protocol and port 8025 for web UI and HTTP REST API.
The MailHog web interface is accessible at localhost:8025.
The MailHog REST API allows you to list, retrieve and delete messages. For example, to retrieve the list of received messages, send an http GET request to localhost:8025/api/v2/messages.
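For instance, using curl (the v2 list endpoint is the one mentioned above; the v1 delete-all endpoint is part of MailHog's documented API):
# list received messages
curl -s http://localhost:8025/api/v2/messages
# delete all messages
curl -s -X DELETE http://localhost:8025/api/v1/messages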
13. Dependency analysis
Inside the bin/dependencies directory, you can find scripts to help analyze the dependencies used by the project. These scripts should be executed from within the same directory.
The first script, named generateDependencyReport.sh, aggregates all dependency trees (including Java and npm dependencies) into a single file. The generated file includes the name of the current branch, making it easier to compare dependencies between branches.
The second script, searchForDependency.sh, allows you to search for dependencies based on the generated file. For example, if you are searching for the "rabbit" dependency:
cd bin/dependencies
./searchForDependency.sh rabbit
The analysis is conducted on the current branch by default. If you wish to use a report from another branch, you can specify it as follows:
./searchForDependency.sh rabbit myBranch