Add an Infra
This section explains how to add an infra to an existing UI deployment. Repeat these steps for each UI deployment in which you want the infra to appear.
The current standard deployments are cloud and staging.
Adding an infra involves deploying two new services, an Infra API and a Dashboard Generator, and redeploying the UI and the API Gateway.
> This only details how to add an infra to a UI stack. If you need to create the infra as well, first go through Creating a New Infra Platform.
References:
- `<infra_id>` is the id of your infra
- `<hostname>` is the hostname of a server
Create an Infra
If you are building a completely new Infra, you will first need to create it.
Grant Users Access
Users are created in Keycloak and granted permission to access an infra via an infra key, detailed in the Grant User Access docs.
You will need the infraKey from Keycloak in the sections below.
Create the Grafana Organisations
If you do not already have a Grafana organisation for this infra, see the Grafana Add An Infra docs.
You will need the organisation IDs and api keys for the steps below.
Adding the Infra API and Dashboard Generator services to the UI deployment
First decide which server you will deploy your Infra API and Dashboard Generator to.
You may wish to check the Netdata panel at netdata.<hostname>.dgcsdev.com for current resource availability.
Now, in the Ansible hosts file, add the following to each section:

```ini
[<Hostname>]
<hostname>.dgcsdev.com

...

; Infra Services
[<infra_id>_dashboard_generator_server:children]
<Hostname>

[<infra_id>_api_server:children]
<Hostname>

...

; Services
...

[api_server:children]
<infra_id>_api_server

...

[dashboard_generator_server:children]
<infra_id>_dashboard_generator_server

...

; Infras
[<infra_id>:children]
<infra_id>_dashboard_generator_server
<infra_id>_api_server
```
Once your hosts file has been set up, you will need to add the details to Ansible Vars.
Update Ansible vars with the infra details
You should only need to add your infra declarations in the group_vars files.
There are tasks and scripts which loop through these infra declarations and generate the relevant configurations for specific services automatically.
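As a rough illustration of how those loops consume the declarations, such a task might look like the following. This is a hypothetical sketch: the task name, template, and destination path are invented, and only the `infra_apis` loop variable comes from this document.

```yaml
# Hypothetical illustration only: iterate over the declared infra APIs
# and render one config file per infra. Real task and template names differ.
- name: Generate Infra API configuration for each declared infra
  ansible.builtin.template:
    src: infra_api.conf.j2                      # hypothetical template
    dest: "/etc/infra/{{ item.key }}/api.conf"  # hypothetical destination
  loop: "{{ infra_apis | dict2items }}"
```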
Create the group_vars file
Each infra has its own group_vars file for convenience, conventionally group_vars/<infra_id>.yml.
Here is an example file, which we'll break down below. Once you've created this file, you'll also need to update the deployment.yml as detailed below.
```yaml
<infra_id>_infra:
  mqtt:
    host: mqtt.<infra_id>.<deployment_id>.smartermicrogrid.com
    port: 20287
    username: "<infra_id>-mqtt"
    password: "******"
  grafana:
    id: '<infra_id>'
    custom:
      org_id: 1234
      api_key: *******
      default_time_window: '7d'
    auto:
      automagic: 'true'
      org_id: 1234
      api_key: *******
      default_time_window: '7d'

<infra_id>_api:
  id: '<infra_id>'
  infra: '<infra_id>'
  deploy_level: "prod"
  label: '<infra_id>'
  ports:
    container: 8085
    inspect: 18085
    healthcheck: 18485
  mqtt: "{{<infra_id>_infra.mqtt}}"
  infraKey: ABCDEF123456
  grafana: "{{<infra_id>_infra.grafana}}"
  feature_flags:
    exclude: ["monitoring", "policy"]

<infra_id>_dashboard_generators:
  - id: "{{<infra_id>_infra.grafana.id}}.auto"
    grafana: "{{<infra_id>_infra.grafana.auto}}"
    mqtt: "{{<infra_id>_infra.mqtt}}"
    ports:
      inspect: 19232
      healthcheck: 19432
  - id: "{{<infra_id>_infra.grafana.id}}.custom"
    grafana: "{{<infra_id>_infra.grafana.custom}}"
    mqtt: "{{<infra_id>_infra.mqtt}}"
    ports:
      inspect: 19239
      healthcheck: 19439
```
Let’s look at each section:
Broker Details
```yaml
<infra_id>_infra:
  mqtt:
    host: mqtt.<infra_id>.<deployment_id>.smartermicrogrid.com
    port: 20287
    username: "<infra_id>-mqtt"
    password: "******"
```
This defines the connection to the MQTT broker that you set up above. In this example, we follow the conventional DNS setup for the host.
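Before deploying, it can be worth confirming that the broker credentials actually work. Assuming the `mosquitto_sub` client is available, a quick smoke test might look like the following; the topic is a placeholder, and any TLS options depend on your broker setup:

```shell
# Subscribe for a single message, timing out after 10 seconds.
# Host, port, and credentials come from the vars above.
mosquitto_sub \
  -h mqtt.<infra_id>.<deployment_id>.smartermicrogrid.com \
  -p 20287 \
  -u "<infra_id>-mqtt" -P "******" \
  -t '#' -C 1 -W 10
```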
Grafana Details
```yaml
<infra_id>_infra:
  grafana:
    id: '<infra_id>'
    custom:
      org_id: 1234
      api_key: *******
      default_time_window: '7d'
    auto:
      automagic: 'true'
      org_id: 1234
      api_key: *******
      default_time_window: '7d'
```
The org_id and api_key come from the organisations you set up earlier.
You need both custom and auto details for all UI embeds to work, and the auto section must have automagic: 'true'.
Infra API
```yaml
<infra_id>_api:
  id: '<infra_id>'
  infra: '<infra_id>'
  deploy_level: "prod"
  label: '<infra_id>'
  infraKey: ABCDEF123456
  ports:
    container: 8085
    inspect: 18085
    healthcheck: 18485
  mqtt: "{{<infra_id>_infra.mqtt}}"
  grafana: "{{<infra_id>_infra.grafana}}"
  feature_flags:
    exclude: ["monitoring", "policy"]
```
| Var | Details |
|---|---|
| `id` | The service id for the API. Conventionally the `<infra_id>`. |
| `infra` | The infra id. |
| `label` | Human readable name. |
| `deploy_level` | |
| `infraKey` | The key from Keycloak created earlier, granting access to auth-restricted endpoints. |
| `ports.container` | The docker container port for the API. |
| `ports.inspect` | The nodejs inspector port. |
| `ports.healthcheck` | The healthcheck port for the Uptime Robot monitor. |
| `mqtt` | Reference to the mqtt vars object created earlier. |
| `grafana` | Reference to the grafana vars object created earlier. |
| `feature_flags` | Intended to be passed to the UI to enable/disable UI sections. |
DNS
The nginx config for this service, if following conventions, will route requests to `<infra_id>.api.<deployment_id>.<tld>`.
For the time being, you will need to manually create the DNS records pointing this URL at the right server. If you are deploying multiple Infra APIs on the same server, you can create one wildcard record to cover them all.
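For example, a wildcard record covering every Infra API on one server could be expressed as the following Route53 change batch (usable with `aws route53 change-resource-record-sets`). The record type, TTL, and target hostname here are placeholder assumptions to substitute with your own values:

```json
{
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "*.api.<deployment_id>.smartermicrogrid.com.",
        "Type": "CNAME",
        "TTL": 300,
        "ResourceRecords": [{ "Value": "<hostname>.dgcsdev.com." }]
      }
    }
  ]
}
```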
Dashboard Generators
```yaml
<infra_id>_dashboard_generators:
  - id: "{{<infra_id>_infra.grafana.id}}.auto"
    grafana: "{{<infra_id>_infra.grafana.auto}}"
    mqtt: "{{<infra_id>_infra.mqtt}}"
    ports:
      inspect: 19232
      healthcheck: 19432
  - id: "{{<infra_id>_infra.grafana.id}}.custom"
    grafana: "{{<infra_id>_infra.grafana.custom}}"
    mqtt: "{{<infra_id>_infra.mqtt}}"
    ports:
      inspect: 19239
      healthcheck: 19439
```
You need both a custom and an auto generator for UI embeds to work properly.
The former picks up dashboards based on custom configuration in MQTT; the latter auto-generates dashboards from signal properties.
This configuration is self-explanatory enough that you can use the template above as-is.
Update the deployment.yml
Once you have your infra’s group_vars file set up to your satisfaction, you will need to add it to the deployment.yml for this deployment.
First, find the use_infras variable, and add your infra id to the list. e.g.
`use_infras: [..., <infra_id>]`
Next, add your infra details to the infras, infra_apis and dashboard_generators objects.
These are the looped variables that will automatically generate the services and configuration from your infra details.
```yaml
infras:
  ...
  <infra_id>: "{{<infra_id>_infra}}"

infra_apis:
  ...
  <infra_id>: "{{<infra_id>_api}}"

dashboard_generators:
  ...
  <infra_id>: "{{<infra_id>_dashboard_generators}}"
```
Uptime Robot monitor
You can add automatic healthcheck monitors for these services to Uptime Robot.
If you have followed the conventions above, you should be able to use the following URLs:
- `http(s)://healthcheck.<infra_id>.custom.dashboard-generator.<deployment_id>.smartermicrogrid.com`
- `http(s)://healthcheck.<infra_id>.auto.dashboard-generator.<deployment_id>.smartermicrogrid.com`
- `http(s)://healthcheck.<infra_id>.api.<deployment_id>.smartermicrogrid.com`
The NGINX configuration that honours these URLs is created automatically, but you may have to update the Route53 DNS records so that they point at the right server in the first place.
If you have colocated services on the same server by deployment_id or service_id, you should be able to add just the one wildcard DNS entry.
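If you prefer to create the monitors via the Uptime Robot API rather than the dashboard, a request along these lines should work. This uses the v2 `newMonitor` endpoint; the API key and friendly name are placeholders, and `type=1` is an HTTP monitor:

```shell
curl -X POST https://api.uptimerobot.com/v2/newMonitor \
  -d "api_key=YOUR_API_KEY" \
  -d "format=json" \
  -d "type=1" \
  -d "friendly_name=<infra_id> api healthcheck" \
  -d "url=https://healthcheck.<infra_id>.api.<deployment_id>.smartermicrogrid.com"
```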
Redeploy the UI
Check the Deploy a UI Stack docs.