User's Guide


Installation and Deployment (via Docker)


Requirements

The Operational Tools have been tested on Ubuntu Server 18.04 LTS. Basically, you will need to install docker and docker-compose on your host operating system. It should probably also work under Windows if you have Docker (with the WSL 2 backend) installed.
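
As a sketch, one possible way to install these prerequisites on Ubuntu 18.04 (the package names below are the ones from the standard Ubuntu repositories; other installation methods work equally well):

# Install Docker and docker-compose from the Ubuntu repositories
sudo apt-get update
sudo apt-get install -y docker.io docker-compose

# Make sure the Docker daemon is running and starts on boot
sudo systemctl enable --now docker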


Installation

Download the installation code, which is composed of a docker-compose.yaml file and a configuration directory with 3 files:

OT_DOCKER_1

The docker-compose.yaml file uses just two images: a MongoDB database and the OT core deployed on a Tomcat 8 image:

ot-mongo:
  image: mongo:3.6
  container_name: ot-mongo
  command: --nojournal
  volumes:
    - ./ot-mongodata:/data/db
ot-core:
  image: pixelh2020/ot:0.2
  container_name: ot-core
  links:
    - ot-mongo
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
    - ./ot-config/tomcat-users.xml:/usr/local/tomcat/conf/tomcat-users.xml:ro
    - ./ot-config/context.xml:/usr/local/tomcat/conf/Catalina/localhost/manager.xml:ro
    - ./ot-config/context.xml:/usr/local/tomcat/conf/Catalina/localhost/host-manager.xml:ro
    - ./ot-config/default.configuration.xml:/usr/local/tomcat/default.configuration.xml
    - ./ot-logs:/usr/local/tomcat/logs
  ports:
    - "8080:8080"

The ot-mongo Docker instance persists its data in the ot-mongodata folder, and the logs of the ot-core instance are persisted in the ot-logs folder. Therefore, you will have to create both folders in your installation directory.
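
For example, from the installation directory:

# Create the folders used by the bind mounts in docker-compose.yaml
mkdir -p ot-mongodata ot-logs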

OT_DOCKER_2

Before running the service, you will have to edit the configuration files under the ot-config directory. The files tomcat-users.xml and context.xml can be left as they are if you intend to run everything via Docker; they are only relevant if you plan to update applications (WAR files) via the Tomcat management interface. Therefore, only the default.configuration.xml file needs to be edited:

OT_DOCKER_3

Here you will have to edit/change some parameters, such as the location of the Elasticsearch server (elastic element) and some server elements:

  • frontHost: this is the host giving access to the service from the outside. This may represent the host running the Docker daemon or the proxy endpoint in case there is one.

  • frontPort: the port associated with the frontHost.

  • dockerSubnet: as the OT typically run within the PIXEL platform, several internal (Docker) networks are created to isolate management and increase security. All models and predictive algorithms that are launched from the OT as Docker instances will run inside this network.

  • createDockerSubnet: you may leave this as it is. It allows the OT to create the (Docker) network, but this task is usually performed by the PIXEL platform installation scripts in charge of deploying the whole platform. In any case, if you set it to no, make sure that the network is created before launching the service; you can do that with the docker network create command (see the sketch below).

The location of the MongoDB server (datasource element) maps to the docker-compose.yaml file, so you can leave it as it is. You can also leave the other parameters as they are (e.g. frequency).
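
If you need to create the network manually, a minimal sketch could look as follows; the network name and subnet here are placeholders and must match the dockerSubnet settings in your default.configuration.xml:

# Placeholder name and subnet: use the values from default.configuration.xml
docker network create --subnet 172.30.0.0/24 ot-network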

After the proper configuration you are able to run the service:

OT_DOCKER_4
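
If you prefer to type the commands directly, a typical invocation from the installation directory would be:

# Start the MongoDB and OT core containers in the background
docker-compose up -d

# Verify that both containers are running
docker ps --filter "name=ot-"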

If everything goes well, you should be able to access the following URLs with your browser:

  • Swagger UI: http://frontHost:frontPort/otpixel/doc

  • Basic UI: http://frontHost:frontPort/otpixel/ui

Note: the current code in the Docker container (pixelh2020/ot:0.2) is intended to be launched within PIXEL, which includes a proxy that takes care of the '/otpixel' path in the URL translation. Therefore, you might check the eu.pixel.otpixel.api.utils.DockerUtils Java class to see the adaptations made. Typically you will access these tools via:

  • Swagger UI: https://pixel-frontHost/doc

  • Basic UI: https://pixel-frontHost/ui



Initial Check and Validation

  • Tomcat OT application - UI: open a web browser and go to http://your-server-ip:8080/otpixel/ui. You should be able to see the UI of the application. Even if you see neither models nor predictive algorithms (none deployed yet), you should not see any error in the developer's panel of the browser.

OT_UI_CHECK

If you want to perform a basic test, go to Models in the left menu and click on the Add a new Model button. Just enter the following:

Docker name               Label
pixelh2020/dummypas:0.1   getInfo

OT_UI_CHECK2

You will see that the new model has been entered in the list of models, with a status of created; the status then changes to pulling. Just wait a couple of minutes (the Docker image needs to be pulled from the Dockerhub repository, and this could take a while) and refresh the screen. Now the status should have changed to one of:

Status     Description
deployed   Everything went properly. By clicking on the 'Edit' icon of this model, you may see the details.
error      There has been an error. More information may be obtained by checking the log file (otpixelEngineCreateModels.log); this is commented in the next section.

  • Tomcat OT application - Swagger: open a web browser and go to http://your-server-ip:8080/otpixel/doc. You should be able to see the Swagger UI of the application. You can click on Authorize, enter your apiKey and start testing the API. You can use GET /models/list to retrieve all available published models. Only the dummypas model will appear, if you performed the actions in the previous step. As there are no instances yet, listing instances will return an empty array.

OT_SWAGGER_CHECK
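
For reference, the same check can be performed with cURL; the API key header name below (X-API-Key) is an assumption, so check the Authorize dialog in Swagger for the exact scheme used by your deployment:

# List all published models (empty array on a fresh installation)
curl -H "X-API-Key: <your-apiKey>" \
     "http://your-server-ip:8080/otpixel/models/list"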



Redeployments and Monitoring


Deploy/update a new OT version

In order to generate a new version, you will need to perform two steps:

  • Generate the Tomcat application (otpixel.war) from the pom.xml in the code.
  • Copy the WAR file under the docker-build folder and regenerate the image (docker-compose up --build).

There is a README.md file in the code.
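
As a sketch, assuming a standard Maven build (see the README.md in the code for the exact steps):

# 1. Build the WAR from the project sources
mvn package                         # produces otpixel.war under target/

# 2. Copy it into the docker-build folder and rebuild the image
cp target/otpixel.war docker-build/
docker-compose up --build -d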


Logs and Monitoring

The Operational Tools include a series of log files to monitor the activity of different tasks independently:

  • otpixelAPI.log: general log file for OT.
  • otpixelEngineCreateModels.log: management thread of the OT Engine to manage the creation of models and predictive algorithms.
  • otpixelEngineDeleteModels.log: management thread of the OT Engine to manage the deletion of models and predictive algorithms.
  • otpixelEngineCreateInstances.log: management thread of the OT Engine to manage the creation of instances.
  • otpixelEngineCreateScheduledInstances.log: management thread of the OT Engine to manage the creation of scheduled instances.

OT_LOGS_CHECK

Since the new version (0.2), logs are also written to the console, so you can get all the information from a docker logs command. Additionally, the API incorporates functionality to show the logs from the executions of instances and scheduledInstances (in the latter case, from the last execution).
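
For example, to follow the consolidated console output of the OT core container (the container name ot-core comes from the docker-compose.yaml shown above):

docker logs -f ot-core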



API - Swagger Interface

The Operational Tools are able to publish models and predictive algorithms and schedule them. Furthermore, there is also support for KPIs and events. The API has been specified as a REST API that includes a Swagger (Open API) interface to be tested. You can also use other developer tools such as Postman. The Swagger UI is very user friendly and allows you to easily check all possible requests, their input parameters and their outputs. We will just provide a basic example for a dummy model in order to illustrate the process, which can be considered a template for all other requests (the process is analogous).

Open a web browser and go to http://your-server-ip:8080/otpixel/doc. You should be able to see the Swagger UI of the application. Click first on Authorize, and enter your apiKey.

ot-user-swagger-auth

At the very beginning, after installing the OT component, there is no data in Mongo (the database), so any request will return an empty response. Let's check, taking the models resource as an example. Click on /models/list and the options will expand.

ot-user-swagger-list1

Note here some optional parameters to be included in the request:

Parameter   Description
otStatus    Status of the models to be retrieved; one of: created, pulling, deployed, error, deleted. If not given, all are provided.
type        Type to be considered: model, pa. If not given, all are provided.

Note also that you have an example of a cURL request. Finally, note that the response is an empty array, as there are no models there (yet).

Let's create a new one. Click on /models/create and the options will expand.

ot-user-swagger-create1

You can see a rather complex body, but don't worry, because there is no need to understand all the info. You can just insert the following JSON as the body (we will use a dummy model):

{  
  "dockerInfo": {
    "dockerName": "pixelh2020/dummysei:0.1",
    "label": "getInfo"    
  }  
}

After pressing the Execute button, you should see the following response:

{
  "id": "5ed7784971409d0623b6c57a",
  "generalInfo": null,
  "dockerInfo": {
    "dockerName": "pixelh2020/dummysei:0.1",
    "label": "getInfo",
    "dockerRepo": null
  },
  "creation": 1591179337033,
  "otStatus": "created"
}
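
The same request can be issued outside Swagger, e.g. with cURL; again, the API key header name is an assumption:

curl -X POST "http://your-server-ip:8080/otpixel/models/create" \
     -H "X-API-Key: <your-apiKey>" \
     -H "Content-Type: application/json" \
     -d '{"dockerInfo": {"dockerName": "pixelh2020/dummysei:0.1", "label": "getInfo"}}'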

Right now the model has been created in the Operational Tools. A backend process will retrieve the Docker image from Dockerhub and extract all description information. We can see this if we list the models again:

ot-user-swagger-list1

You can now see all the information related to this model, which has been imported from Dockerhub. The other CRUD operations related to models are straightforward: deleting a model, updating a model, getting a model (by UUID). The process with the other resources (instance, scheduledInstance and KPI) is also straightforward in terms of CRUD operations. The KPI resource includes two additional functions:

  • /kpis/get/{id}/lastKPI: gets the last value of a KPI by id. The KPI is assumed to be a time series changing over time. This data is stored in the Information Hub (Elasticsearch).
  • /kpis/get/{id}/stats: gets statistical info for a KPI within a given time interval (optional), such as min, max, average and standard deviation. It also includes an array of KPI values (useful for the dashboard to plot them on a graph).
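
Hedged cURL sketches for these two endpoints; the id below is just a placeholder (in the same format as the model id above), and the API key header name and the exact names of the optional interval parameters are assumptions, so check the Swagger UI:

# Last value of a KPI
curl -H "X-API-Key: <your-apiKey>" \
     "http://your-server-ip:8080/otpixel/kpis/get/5ed7784971409d0623b6c57a/lastKPI"

# Statistics (min, max, average, std) for a KPI
curl -H "X-API-Key: <your-apiKey>" \
     "http://your-server-ip:8080/otpixel/kpis/get/5ed7784971409d0623b6c57a/stats"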



Graphical User Interface

Models

The Operational Tools include a small, basic UI that supports most of the functionality of the API. It may serve as a basis for your own development in case you intend to build your own project considering only this component of the PIXEL architecture, though the PIXEL Dashboard is intended to provide many more options and much more functionality.

  • Creating a model: if you want to create a new model, just click on Models in the main (left) panel. You should see a list of already published models, unless it is a fresh installation.

ot-user-crM1

Just click on Add a New Model. A basic form will appear asking for the name of the model in your Docker repository, as well as the label under which all the descriptive information is included (inside the Docker image). If you don't have one available, let's follow the process with a dummy example. Just enter the following values:

ot-user-crM2

As you may deduce, pixelh2020 is an open (public) repository on Dockerhub, dummysei:0.1 is the name and version of the Docker image to be used, and getInfo is the label included in the Docker image that describes the model with a specific format defined in PIXEL. In a certain way, it is similar to a WSDL for web services. The web form also includes the option to point to a private Docker repository; in that case, you will have to enter the credentials to access it. After clicking the Save button in the top right corner, you will see the model on the list as created:

ot-user-crM3

Note that there is still no name nor category for the model, as it first needs to be obtained (pulled) from the Docker repository. You can track this activity by monitoring the otpixelEngineCreateModels.log file:

ot-user-crM4

If you refresh your browser now, you should see that the model has changed its status to deployed. Now there is a name and a category, which have been extracted from the given label of the Docker image.

ot-user-crM5

You should have noticed a list of actions represented by 4 icons: edit, delete, run and schedule. Clicking on the Edit Model icon will allow you to see the complete description of the model. We will not discuss the format, but basically it describes basic fields, connectors, inputs, outputs and logging configuration.

ot-user-crM6

By clicking on the Delete Model icon, the model enters a deleted status. After a short while, if you refresh the browser, the model will have disappeared. The other options (run, schedule) are covered in the next subsections.

  • Running a model (creating an instance): once you have published and deployed a model (see the previous step), you should be able to run it. For that, just click on the Run model action button, and you should see a new page with a list of executions associated with that model. After a fresh installation, there will be no items in the list.

ot-user-rM1

Let's create a new execution by clicking on the New Instance button. A modal dialog appears where you will have to enter a JSON file describing the details of the execution.

ot-user-rM2

The data entered here is a particularization of the description of the model, with specific inputs and outputs, and varies from model to model. You should look at the specific model to enter valid data here. Once you do, just press the Save button in the modal. The new instance appears in the list with status created.

ot-user-rM3

There is a backend process that periodically reads this table and runs the pending instances. You can track this activity by monitoring the otpixelEngineCreateInstances.log:

ot-user-rM4

After the execution, if you refresh your browser, you will see the details of the execution (instance) in the list.

ot-user-rM5

Here you have two action icons. Delete instance is self-explanatory, whereas View instance allows you to visualize the details of the instance. It is pretty much the same as the input data provided when the instance was created, with some additional information added by the backend process (creation time, start, otStatus, dockerId). Note that the result of the execution is stored in Elasticsearch; the visualization of such a result is model dependent and is provided by the PIXEL Dashboard.

ot-user-rM6


  • Schedule a model (creating a scheduledInstance): some models are useful every day, every week, etc., and can be run automatically (scheduled), without requiring the user's presence. The process of scheduling a model is analogous to the previous one (running a model): just click on the Schedule model action icon in the models list and you should be able to follow a similar process.

ot-user-schM1


The only difference here is that the model is going to be launched periodically, not just once. Therefore, when we enter the JSON data of a scheduled instance, we need to include the scheduling information, which follows this structure:

"scheduleInfo": {
        "start": "2021-01-20T11:11:11+02:00",
        "unit": "minute",
        "value": 1
}

The start field indicates (in ISO 8601 format) when the model must first be launched, the unit field represents the possible units (second, minute, hour, day), and the value field represents the number of units to wait between consecutive executions. In the example above, the model will be run every minute. The given start time should typically be a timestamp in the future. However, if the given start time is in the past, the OT engine will recalculate the nearest point in the future as the N-th multiple of the given amount of time (here, multiples are counted every minute). You can trace the backend process that periodically reads the corresponding table and runs the pending scheduled instances. The log is in otpixelEngineCreateScheduledInstances.log:

ot-user-schM2
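
To make the recalculation rule above concrete, here is a small illustrative sketch (assuming GNU date); it mimics the described behaviour and is not the engine's actual code:

# Advance a past start time to the next future multiple of the interval
start=$(date -d "2021-01-20T11:11:11+02:00" +%s)   # scheduled start (in the past)
now=$(date +%s)
interval=60                                        # unit=minute, value=1
elapsed=$(( now - start ))
n=$(( (elapsed + interval - 1) / interval ))       # ceiling division
next=$(( start + n * interval ))
date -d "@$next" --iso-8601=seconds                # first execution in the future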


Now in the list of scheduled instances you should see the added scheduled instance. The Last status column should say running, unless there is an error (error trying to execute the Docker instance) in any of the executions.

ot-user-schM3


One final comment relates to timing issues. If one of the inputs for the execution of the model is a time-dependent parameter, e.g. the current day of the execution, then it should be parametrized and interpreted by the OT engine. The user cannot provide a fixed timestamp here (otherwise every execution would produce the same result). As an example, let's suppose a model that requires as inputs a start time and an end time for its internal calculation; this could be the case of getting vessel calls in a time window. If we want to run the model every day, then we need to parametrize this somehow in the JSON data structure. An example could be:

{
        "name": "start",
        "type": "date-time (ISO 8601)",
        "description": "start of calculation period",
        "value": "${DATE_DAY_INIT}"
}, {
        "name": "end",
        "type": "date-time (ISO 8601)",
        "description": "end of calculation period",
        "value": "${DATE_DAY_INIT}"
}

Here, every time the model is executed, the OT engine first interprets the parametrized date values (${}) and replaces them with the result of the corresponding operation. Currently the OT engine supports the following:

Format                   Description (ISO format)                                        Potential use
${DATE_current}          Current date                                                    Models started by triggers?
${DATE_MINUTE_INIT}      Date of the first second of the current minute                  test, RT data
${DATE_MINUTE_LAST}      Date of the last second of the current minute                   test, RT data
${DATE_HOUR_INIT}        Date of the first second of the current hour                    traffic, weather
${DATE_HOUR_LAST}        Date of the last second of the current hour                     traffic, weather
${DATE_DAY_INIT}         Date of the first second of the current day                     PAS
${DATE_DAY_LAST}         Date of the last second of the current day                      PAS
${DATE_WEEK_INIT}        Date of the first second of the current week                    PEI
${DATE_WEEK_LAST}        Date of the last second of the current week                     PEI
${DATE_WEEK_AGO_INIT}    Date of the first second of the last week (starts on Sunday)    PEI, PAS
${DATE_WEEK_AGO_LAST}    Date of the last second of the last week (ends on Saturday)     PEI, PAS
${DATE_MONTH_INIT}       Date of the first second of the current month                   PEI
${DATE_MONTH_LAST}       Date of the last second of the current month                    PEI