Integration Hub

📘

Information

This document helps developers at LeanIX partners or customers understand the concept of the Integration Hub (iHub) and walks through the steps to register a first connector using the iHub API.

Working through the guide requires enough technical knowledge to communicate with REST APIs in a programming language of your choice.

What is the Integration Hub?

Integration Hub, or iHub, is a central capability of the LeanIX platform to manage custom integrations from inside the platform. iHub reduces the effort required for each custom integration by taking care of the many cross-cutting functions that every integration would otherwise have to provide to an administrator operating it.

iHub orchestrates the data transfer between other systems and LeanIX. To achieve this, iHub calls an external software component that is responsible for connecting to a specific external system, and ensures that data read from it is properly transferred to the LeanIX platform using the Integration API (Load data into the LeanIX platform). Information from LeanIX can be moved to an external system using the same mechanisms. Both directions can be used separately or combined in a single execution.

Please read the Integration API documentation on how to create a configuration on a workspace that can be used to process an incoming LDIF data file (see the Integration API documentation for the format description). A configuration can be created from the LeanIX admin UI or via the Integration API REST endpoints.

📘

What is not part of iHub functionality:

iHub does not reach out to external systems directly; it always relies on connector-specific code that knows the details of the external system.

How does iHub reduce the cost to build and maintain custom integrations with the LeanIX platform?

With iHub, administrators can manage all custom integrations (whether they run on-premises or in any cloud environment) from a central place.

iHub offers the following features to the administrator, removing the need to build them for each integration separately:

  • Configuration management for parameters the connector needs, such as credentials, filter options to read a subset of data from the external system, or a location in the external system to write LeanIX data to
  • Manually triggered and scheduled executions of data exchange in both directions
  • End-to-end tracking of the progress and status of such executions
  • Central access to log files
  • Auditing and alerting of executions
  • A runtime environment to execute custom code built to communicate with an external system (Q4/2021)
  • Publishing integrations in the LeanIX Store to offer them to other users of the LeanIX platform

Glossary

  • iHub: Integration Hub. The LeanIX platform capability that simplifies data exchange with external systems
  • Connector: The piece of software that knows how to exchange data with a specific external system. A connector needs to be provided separately for each integration of a system with the LeanIX platform.
  • Integration: iHub plus the connector form the “Integration” with an external system.
  • Data Source: A description of how and when to start a connector from iHub. The data source contains all detailed configuration to send to the connector and specifies which Integration API configuration to use after the connector has provided data from the other system. Multiple data sources can be created for each connector on a workspace. Each data source is configured to read exactly the needed information from the external system and write it to LeanIX, or the other way round. Data sources are fully managed by iHub.
  • Data Source Run: An execution of a data source, transferring data from and/or to LeanIX
  • iAPI: Integration API. The Integration API is a separate LeanIX offering used by iHub to process the data sent by the connector and ensure the data is pushed into the LeanIX workspace.

How Integration Hub compares to direct Integration API usage

Integration Hub makes accessing the LeanIX platform even simpler than using the Integration API alone. You benefit from all features of the Integration API for any batched data transfer and further reduce connector complexity by using Integration Hub. This covers build time, maintenance, and operations.
The following tables provide an overview of Integration API and Integration Hub features from a use-case perspective.

Use case support for administrators using an integration based on a custom connector:

| Use case | iAPI | iHub |
| --- | --- | --- |
| See all incoming data | :heavy-check-mark: | :heavy-check-mark: |
| Change mapping and processing defaults to specific needs | :heavy-check-mark: | :heavy-check-mark: |
| Full auditing of all data processed into LeanIX and exported from LeanIX | :heavy-check-mark: | :heavy-check-mark: |
| Full audit of the process reading data from the foreign system or writing data to it | :x: | :heavy-check-mark: |
| Ability to see all integrations on my workspace in the admin UI | :x: | :heavy-check-mark: |
| Configure the connector from the admin UI without having to change the connector code | :x: | :heavy-check-mark: |
| Set the schedule for integration runs in the admin UI | :x: | :heavy-check-mark: |
| See detailed logs and pass them to the team creating the integration instead of just telling them "Some runs did not work at 8am" | :x: | :heavy-check-mark: |
| Test and change the credentials needed to load data from an internal system in the admin UI | :x: | :heavy-check-mark: |
| See documentation for my integration linked right from the admin UI | :x: | :heavy-check-mark: |

Use case support for developers creating an integration based on a custom connector:

| Use case | iAPI | iHub |
| --- | --- | --- |
| Work with one API to send and receive data without having to talk to different APIs | :heavy-check-mark: | :heavy-check-mark: |
| Not care about short outages, as jobs sent to LeanIX are processed in a robust and fail-safe way | :heavy-check-mark: | :heavy-check-mark: |
| Not have to implement mapping and processing logic for each connector, but communicate plain JSON data | :heavy-check-mark: | :heavy-check-mark: |
| Easy debugging of mapping and processing with UI support in the LeanIX admin area | :heavy-check-mark: | :heavy-check-mark: |
| Easy API with few calls to load or retrieve data | :heavy-check-mark: | :heavy-check-mark: |
| No knowledge of the LeanIX data model required in the connector | :heavy-check-mark: | :heavy-check-mark: |
| Connectors can call LeanIX and do not require an open HTTPS endpoint to be called from LeanIX | :heavy-check-mark: | :heavy-check-mark: |
| Store configuration for mapping and processing safely in LeanIX and always process reliably following the same rules | :heavy-check-mark: | :heavy-check-mark: |
| Use the self-service UI connector configuration management provided by LeanIX instead of writing it myself or handling user requests to change configurations | :x: | :heavy-check-mark: |
| Rely on monitoring, logging, and auditing of the LeanIX platform instead of having to build and maintain them | :x: | :heavy-check-mark: |
| Write a full custom integration with one to a few pages of own code and still call it production-quality code | :x: | :heavy-check-mark: |
| Use the exact same code logic in my connector whether I need batch or live updates of my data | :heavy-check-mark: | :heavy-check-mark: |
| Use an API where I only have to provide a single REST endpoint and do not need to know any LeanIX endpoints, as all information is provided with full URLs and tokens when my connector is called (iHub follows HATEOAS; not even logins need to be performed on the connector side) | :x: | :heavy-check-mark: |
| Write a connector that calls LeanIX and does not have to provide a REST endpoint (using the iHub Remote Connector concept) | :heavy-check-mark: | :heavy-check-mark: |
| Use LeanIX scheduling logic and settings in a UI to configure when to run my connector | :x: | :heavy-check-mark: |

What makes connectors built for iHub so simple?

iHub focuses on taking over all potential complexity from the connector. It was built to support connectors that only provide a single HTTP endpoint and do not have to implement complex interfaces in order to be integrated into the LeanIX platform.
If remote access from LeanIX to the connector is not wanted or not allowed, a connector can still call iHub itself to trigger a start. In this setup, all features except scheduling are available.

If remote access is allowed, deploying some code as a simple “Azure Function” is typically enough to serve all needs. However, there is no requirement to have an Azure account or any cloud infrastructure.

📘

No need for any cloud account. Connectors will run on any host

Connectors may call iHub and not provide any endpoint
or
provide a single REST endpoint to be called by iHub.

One part of this paradigm: a connector only needs to provide a single HTTPS endpoint. iHub calls this endpoint and passes all information the connector needs to run in the payload:

  • Configuration settings the admin defined for this run (standard and secret configuration separated)
  • A callback URL to provide updates on progress and status
  • The URL of an Azure Blob store, with a write access token included, to write the data read from the external system in LDIF format (see the Integration API documentation)
  • The URL of an Azure Blob store, with a write access token included, to push all kinds of log messages, allowing administrators to analyze issues that occur on the connector side

The connector does not need to make any calls to any other LeanIX API. All endpoints the connector needs are passed with the start call. There is no need for authentication, as all access keys are generated by iHub upfront and passed in with the start call. If a connector does not provide an endpoint, the connector calls iHub instead and receives exactly the same payload as a result. This makes it possible to write connectors that support both deployment cases: with and without an endpoint to be called from iHub.

The connector code can fully focus on reading content from the external system (or writing to it, if the use case requires it).

General Workflow

Preliminary Requirements

To work with iHub, you need to have a user with ADMIN privileges on the workspace you are working in.

To access iHub, you first need an access token. This can be obtained from MTM and is documented in more detail in the general API documentation.
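As a sketch, the token exchange can look like the following in Python. It assumes the standard MTM token endpoint (also shown in the oauth2 example later in this document) with the literal client ID apitoken; the host is a placeholder for your LeanIX instance.

```python
import base64
import json
import urllib.request


def build_basic_auth_header(api_token: str) -> str:
    # MTM expects HTTP Basic auth with the literal user name "apitoken".
    credentials = base64.b64encode(f"apitoken:{api_token}".encode()).decode()
    return f"Basic {credentials}"


def fetch_access_token(host: str, api_token: str) -> str:
    """Exchange a LeanIX API token for a short-lived JWT via MTM."""
    request = urllib.request.Request(
        f"https://{host}/services/mtm/v1/oauth2/token",
        data=b"grant_type=client_credentials",
        headers={
            "Authorization": build_basic_auth_header(api_token),
            "Content-Type": "application/x-www-form-urlencoded",
        },
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())["access_token"]


# Usage (hypothetical host and token):
# jwt = fetch_access_token("demo.leanix.net", "YOUR_API_TOKEN")
```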

Overview

The necessary steps to get a connector running are:

  1. Register a connector: This makes iHub aware that there is a connector ready to be used in a specific workspace. Registration is done once, or repeated to update a connector definition, for example to change some defaults or offer a new connector parameter to administrators.
    1. The registration API call (POST workspaceConnectors) expects the HTTPS URL under which the connector can be reached from iHub, plus some default configuration that gives administrators an easy start so they only need to change custom parameters, such as access credentials for the external system, later on.
    2. As part of the registration, the caller needs to ensure the default Integration API configuration is available on the workspace as well. This reduces admin work and helps get a first data source running end-to-end very quickly. The Integration API provides an endpoint to deploy a configuration, as referenced in the Integration API documentation.
  2. Create a data source: Done by the administrator of the workspace. A data source is a standalone entity used to manage a concrete connector configuration and a concrete link to Integration API processing information, and to define whether, and on what schedule, an execution should happen automatically.
    1. Please note that connectors running remotely that cannot be reached from the cloud by iHub (security restrictions, firewalls) need to take care of starting on their own. They can still make use of all configuration and scheduling settings a user sets centrally, and use the central monitoring and logging capabilities of iHub. iHub provides a specific endpoint for this use case that sends over all configuration data on request. After a data source has been started, there is no difference between a cloud connector and an on-premises connector anymore.
  3. Start a data source: This triggers the execution of the data source, which invokes the connector application and afterwards the Integration API to process the collected data into the LeanIX platform.
    1. If a connector is capable of writing data back, the data is prepared to be polled by the connector and can then be used to write back to the external system. (Write-back planned for Q4)
  4. Monitor the status of a data source run: Optionally, the progress can be monitored.
  5. View audit results: Results can be obtained in the Sync Log if needed.

Please note that you, as the developer of a connector, provide the code that enables step 1. All other steps are executed by an administrator of the workspace when working with your connector to load data into the workspace.

Register a new Connector

📘

Information

It is very helpful to think about proper defaults for the settings and provide them with the connector registration. This reduces administrators' work and avoids errors during initial configuration. Even empty values can be a good idea to make the admin aware that there is a parameter to fill and how it is named.

Please consider uploading an icon for the connector as well. It improves the experience for administrators working with the Integration Hub.

Registering a workspace connector

The endpoint POST workspaceConnectors registers a connector on a single workspace.

### create
# @name workspaceConnectors
POST {{baseUrl}}/workspaceConnectors
Content-Type: application/json
Authorization: Bearer {{fetchJwt.response.body.$.access_token}}

{
    "name": "example-hub",
    "description": "",
    "connectorUrl": "{{connectorUrl}}",
    "documentationUrl": "{{optionalHttpLinkToConnectorDocumentation}}",
    "connectorConfiguration": {},
    "secretsConfiguration": {
    },
    "bindingKey": {
      "connectorType": "example-hub",
      "connectorId": "default",
      "processingDirection": "INBOUND",
      "processingMode": "FULL",
      "lxVersion": "1.0.0"
    }
}

You should add an icon using the ID you receive in the response. The icon needs to be in JPG format and 300x100 pixels, or any 3:1 format from 210x70 to 840x280, with a maximum size of 512 KB:
[PUT workspaceConnector/{connectorId}/icons]

How a connector has to act on a start call by iHub

When called by iHub at the URL provided during connector registration, the connector may start processing.

Before any processing starts, however, the connector needs to respond to the call (details below). iHub will wait no longer than 10 seconds for the connector to confirm the start call. Processing must not start before the confirmation has been sent back.

While processing, the connector sends progress updates to iHub using the URL provided in the payload when it was started. These update calls may be made every few seconds, but at most 10 minutes apart.

📘

Information

If the iHub does not receive any progress updates for more than 10 minutes, iHub considers the run to be failed and will no longer accept calls nor continue processing.

The connector has to implement the following sequence of calls:

  • Start Request: Issued by the Integration Hub to the connector. It is an HTTP POST with a payload containing all the information configured in the Integration Hub; examples of such a payload are shown below.
    • Connectors located behind a customer firewall that cannot be called from the cloud can call iHub and trigger a start on their own. The response will contain exactly the same content connectors receive when started by iHub. This allows implementing the same logic for connectors no matter where they run. Please note that this active start call by the connector requires an access token that can be obtained from the LeanIX MTM service.
  • Response to Start Request: If started by iHub, this is the expected response to the start request. It confirms that the connector has received the request and is working on it. The body of this response is a simple object with the ID of the run (repeating the value sent by the Integration Hub) and the status IN_PROGRESS. The response should be returned to iHub as fast as possible, and before any processing of the request starts, as users will potentially be waiting in the UI.
  • Callback Progress: Issued by the connector to signal that the job is being executed. These calls should happen at least once per 10 minutes to keep the data source run alive in the Integration Hub. The connector can send as many requests of this type as needed while the job is running. See below for examples of the payload for this request.
  • Finish Callback: The last progress update issued by the connector, flagging that all processing is completed. The payload must set the status field to FINISHED.

After the Finish Callback is sent by the connector, no more calls are necessary and processing continues under the control of the Integration Hub. Here are samples of the payloads for each of the requests between the connector and the Integration Hub:

Start Request (sent by the Integration Hub):

{
  "runId": "e01b640c-fa7f-4d55-815b-af2d322ab40a",
  "connectorConfiguration": {},
  "secretsConfiguration": {},
  "bindingKey": {
    "connectorType": "local-example-01",
    "connectorId": "default",
    "connectorVersion": "1.0.0",
    "processingDirection": "inbound",
    "processingMode": "full",
    "lxVersion": "1.0.0"
  },
  "ldifResultUrl": "https://...",
  "progressCallbackUrl": "https://...",
  "connectorLoggingUrl": "https://...",
  "runStatusUrl": "https://...",
  "integrationApiResultUrl": "https://..."
}

Response to Start Request (sent by the connector):

{
  "runId": "e01b640c-fa7f-4d55-815b-af2d322ab40a",
  "status": "IN_PROGRESS"
}

Callback Progress (sent by the connector):

{
  "status": "IN_PROGRESS",
  "message": "Processing 90%"
}

Finish Callback, success (sent by the connector):

{
  "status": "FINISHED",
  "message": "Done 100%"
}

Finish Callback, failure (sent by the connector):

{
  "status": "FAILED",
  "message": "Error on ... "
}
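A minimal Python sketch of this sequence (the function and helper names are illustrative, not part of the iHub API): the connector first builds the immediate confirmation, then reports progress and a final status via the callback URL from the start payload.

```python
import json
import urllib.request


def build_start_response(start_payload: dict) -> dict:
    """Immediate confirmation iHub expects before any processing starts."""
    return {"runId": start_payload["runId"], "status": "IN_PROGRESS"}


def post_json(url: str, body: dict) -> None:
    """POST a JSON body, used for progress and finish callbacks."""
    request = urllib.request.Request(
        url,
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(request).close()


def process_run(start_payload: dict, read_external_system) -> None:
    """Read from the external system and report progress until finished.

    `read_external_system` is a hypothetical callable producing LDIF content.
    """
    callback_url = start_payload["progressCallbackUrl"]
    try:
        post_json(callback_url, {"status": "IN_PROGRESS", "message": "Reading source"})
        ldif = read_external_system(start_payload["connectorConfiguration"])
        # Here the LDIF would be uploaded to start_payload["ldifResultUrl"].
        post_json(callback_url, {"status": "FINISHED", "message": "Done 100%"})
    except Exception as error:
        post_json(callback_url, {"status": "FAILED", "message": f"Error on {error}"})
```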
      

Advanced Options

Central management of connector logs

The Integration Hub sends a link to an Azure storage location where the connector writes the data that the Integration API will process into LeanIX.

In addition to this link, iHub provides a second storage link that connectors can use optionally.

The connectorLoggingUrl allows the connector to dump all important log lines that may help administrators analyze potential issues on the connector side.
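For example, a connector might collect its log lines in memory and upload them at the end of a run. The sketch below assumes the connectorLoggingUrl is a standard Azure Blob SAS URL, so a plain HTTPS PUT with the BlockBlob header is sufficient; helper names are illustrative.

```python
import urllib.request


def render_log(log_lines: list) -> bytes:
    """Join collected log lines into one uploadable text body."""
    return ("\n".join(log_lines) + "\n").encode("utf-8")


def upload_log(connector_logging_url: str, log_lines: list) -> None:
    """PUT the log to the SAS-protected blob URL from the start payload."""
    request = urllib.request.Request(
        connector_logging_url,
        data=render_log(log_lines),
        method="PUT",
        # Azure Blob storage requires the blob type header on upload.
        headers={"x-ms-blob-type": "BlockBlob", "Content-Type": "text/plain"},
    )
    urllib.request.urlopen(request).close()
```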

Authorization when the Integration Hub calls the connector

The data source configuration may contain an API access token as part of the secretsConfiguration. This can allow iHub to reach out to the connector without additional authentication.

If additional authorization is required when calling the connector, it can be defined as a template in the connector registration. This gives administrators a proper default and makes them aware of the required authentication mechanism. When creating a data source for the connector, defaults are then already available to be changed.

Authentication is configured by setting the optional authConfiguration field in the data source configuration to a value other than null.

The following authorization methods are supported:

header (adds arbitrary fields to the request header):

"authConfiguration": {
    "customHeaders": {
        "key-storage": "${secretsConfiguration.KEY_STORAGE}"
    }
}

basic auth (adds an Authorization header with Basic credentials):

"authConfiguration": {
    "basicConfiguration": {
        "username": "${secretsConfiguration.BASIC_USERNAME}",
        "password": "${secretsConfiguration.BASIC_PASSWORD}"
    }
}

oauth2 (adds an Authorization header with a Bearer token):

"authConfiguration": {
    "oauth2Configuration": {
        "grantType": "CLIENT_CREDENTIALS",
        "accessTokenUrl": "https://test-svc-flow-2.leanix.net/services/mtm/v1/oauth2/token/",
        "clientId": "apitoken",
        "clientSecret": "${secretsConfiguration.OAUTH_CLIENT_SECRET}"
    }
}

It is possible to specify placeholders inside the configuration that resolve to concrete content from the secretsConfiguration map, protecting sensitive data from the common configuration parts. A placeholder uses the pattern ${secretsConfiguration.<field>}. If the specified <field> is not part of secretsConfiguration, the configuration uses exactly the specified string.
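The resolution rule can be illustrated with a small Python sketch (this is not the actual iHub implementation, just the behavior described above):

```python
import re

# Matches ${secretsConfiguration.<field>} and captures <field>.
_PLACEHOLDER = re.compile(r"\$\{secretsConfiguration\.([^}]+)\}")


def resolve_placeholders(value: str, secrets_configuration: dict) -> str:
    """Replace ${secretsConfiguration.<field>} with the matching secret.

    If <field> is not present in secretsConfiguration, the literal
    placeholder string is kept, mirroring the rule described above.
    """
    def substitute(match):
        field = match.group(1)
        if field in secrets_configuration:
            return str(secrets_configuration[field])
        return match.group(0)  # keep the exact specified string

    return _PLACEHOLDER.sub(substitute, value)
```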

📘

Information

customHeaders can be defined for the basic auth and oauth2 types as well, just by adding the key with the required values to the configuration.

Support for “on premises” connectors

In some cases, it may not be possible for iHub to call a connector to trigger the start. Reasons may be network restrictions, such as firewalls not permitting access to the customer network from the cloud.

The Integration Hub supports such scenarios.

In such a setup, the connector is registered with an empty string as the connectorUrl.

The LeanIX admin UI will show the “run”, “test”, and “test connection” buttons in a disabled state for such data sources. A tooltip shows the user the reason.

Instead, the connector needs to be started externally, typically from inside the customer network where the connector was deployed.

The Integration Hub remains fully functional, and all features except starting are available.

Connectors may call GET /datasourceRuns/name/{name}/selfStart. The response will be the same payload that iHub sends when starting a connector. This allows creating connectors that support both setup scenarios and registering them under different names to make the user aware of the difference. The workflow is always the same: the connector receives the configuration the administrator set up, works with it, and relies on iHub to manage all Integration API loading, logging, monitoring, and auditing.
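For an on-premises connector, the self-start could look like the following Python sketch. Only the selfStart path comes from this document; the base URL and the JWT are placeholders you need to supply.

```python
import json
import urllib.request


def self_start_url(base_url: str, data_source_name: str) -> str:
    """Build the selfStart endpoint URL for a named data source."""
    return f"{base_url}/datasourceRuns/name/{data_source_name}/selfStart"


def self_start(base_url: str, data_source_name: str, jwt: str) -> dict:
    """Trigger a run and fetch the same start payload iHub would send."""
    request = urllib.request.Request(
        self_start_url(base_url, data_source_name),
        headers={"Authorization": f"Bearer {jwt}"},
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())


# Usage (hypothetical base URL and data source name):
# payload = self_start("https://demo.leanix.net/services/integration-hub/v1",
#                      "my-datasource", jwt)
```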

Using the “executionGroup” functionality of the Integration API

iAPI can merge multiple configurations into one and execute them as if the admin had configured all processors in a single configuration. This allows flexibly extending a default configuration with custom parts just by using the same tags in each part of the configuration (see the iAPI documentation for the “executionGroup” functionality).

The Integration Hub can use this functionality as well. Anywhere a “bindingKey” can be defined, the alternative definition can be a single tag “executionGroup”: “myGroupName”, where “myGroupName” matches the executionGroup definition in the Integration API configurations. iHub will not allow inconsistent configurations and will fail if both options are defined.

Please note that iHub will always send a “bindingKey” to the connector when starting. This binding key will be filled with dummy content and will not be used during processing. The binding key can be used to generate a valid LDIF JSON to be returned with the data collected by the connector.

Example call showing how to register a connector using an executionGroup (please note that the mode can always be changed for every data source; the registration only defines the default used when creating a data source):

### create
# @name workspaceConnectors
POST {{baseUrl}}/workspaceConnectors
Content-Type: application/json
Authorization: Bearer {{fetchJwt.response.body.$.access_token}}

{
    "name": "example-hub",
    "description": "",
    "connectorUrl": "{{connectorUrl}}",
    "connectorConfiguration": {},
    "secretsConfiguration": {
    },
    "executionGroup": "myGroup"
}

To learn more about the “executionGroup” concept of the Integration API, please look up the documentation here: Execution Groups in Integration API

Connection test

iHub supports executing a quick test to verify that the administrator provided correct parameters and the connector is ready for processing. This option is typically used by the connector to check access to the external system that data is collected from; it might also be used for other quick checks of the configured parameters. The expectation is that the result is sent back directly by the connector.

The following shows the connector test interaction.

The payload contains a test flag if the connector is called in test mode; the rest of the payload is unchanged:

{
  "connectorConfiguration": {},
  "secretsConfiguration": {},
  "bindingKey": {
    "connectorType": "local-example-01",
    "connectorId": "default",
    "connectorVersion": "1.0.0",
    "processingDirection": "inbound",
    "processingMode": "full",
    "lxVersion": "1.0.0"
  },
  "testConnector": true
}

Expected response (status=200, Content-Type: application/json):

{
  "message": "Test successful. Connector is ready to create a LDIF."
}

Any response with a status other than 200 will be considered a test failure, and the message will be shown to the user.
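One way to handle this in a connector is a single dispatch point that checks the testConnector flag. A hedged sketch (the check_connection callable is a placeholder for your own credential check, not part of the iHub API):

```python
def handle_call(payload: dict, check_connection) -> tuple:
    """Dispatch between a regular start call and a connection test.

    `check_connection` is a hypothetical callable returning (ok, message).
    Returns (http_status, response_body) for the connector's HTTP reply.
    """
    if payload.get("testConnector"):
        ok, message = check_connection(payload)
        # Any status other than 200 is treated as a test failure by iHub.
        return (200, {"message": message}) if ok else (400, {"message": message})
    # Regular start: confirm immediately, then begin processing.
    return 200, {"runId": payload["runId"], "status": "IN_PROGRESS"}
```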

Write data to external systems

Using the Integration Hub, you can not only load data into LeanIX but also write data from LeanIX to external systems. Both steps can be combined in a single execution: first data is read into LeanIX, then the connector takes a result LDIF created by the Integration API and writes the data to the external system.

When an iHub run is started, connectors are provided with two additional URLs: one to read the status of iHub processing, and another to read the URL of the result LDIF that will contain the data to be written to the external system.

While writing to the external system, the connector again has to push progress information to iHub and tell iHub whether processing is still running, has failed, or has succeeded. The same rules apply as when reading content from external systems.

How does the connector poll for the result LDIF?

Here is an overview of all phases if the connector supports reading and writing:

  • iHub starts the connector (or the connector calls /datasourceRuns/name/{name}/selfStart)
  • The connector does all the work to read from the external system and converts the data to LDIF
  • While doing this, the connector sends progress information to iHub
  • The connector tells iHub that processing has finished
  • iHub processes the data using the Integration API
  • While iHub processes, the connector polls for the status of the run
  • If iAPI provides a result LDIF with data from the workspace, the state of the run turns to “INTEGRATION_API_RESULT_URL_READY”
  • The connector detects this status and reads the LDIF from the blob storage
  • The connector processes the LDIF and sends the data to the external system
  • While sending, the connector regularly updates its progress to iHub
  • When the connector is done processing, it sends the “FINISHED” status to iHub

To detect whether there is data available to write back to the external system, the connector polls iHub for the status at an interval between one and nine minutes.
The connector calls the “status” endpoint and waits for one of the states “FINISHED”, “STOPPED”, “FAILED”, or “INTEGRATION_API_RESULT_URL_READY”.

In the case of the last status, the connector can retrieve the link to the LDIF on an Azure blob storage, including a SAS token allowing it to read from it, by calling the “/IntegrationApiResultUrl“ endpoint. Any other of the above states indicates that there is no data available to write back, and the connector process may simply terminate.

The connector does not have to authenticate or know where to call these endpoints, as both full URLs are part of the metadata the connector receives from iHub at start time:

"runStatusUrl": "https://..."
"integrationApiResultUrl": "https://..."
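The polling loop described above can be sketched as follows. The status values and the poll-interval bounds come from this document; the helper names are illustrative.

```python
import json
import time
import urllib.request

TERMINAL_STATES = {"FINISHED", "STOPPED", "FAILED"}


def next_action(status: str) -> str:
    """Decide what to do after reading the run status."""
    if status == "INTEGRATION_API_RESULT_URL_READY":
        return "fetch-result"   # result LDIF is ready to be read
    if status in TERMINAL_STATES:
        return "terminate"      # nothing to write back
    return "keep-polling"


def get_json(url: str) -> dict:
    with urllib.request.urlopen(url) as response:
        return json.loads(response.read())


def wait_for_result_ldif(run_status_url: str, result_url_endpoint: str,
                         poll_seconds: int = 60):
    """Poll at an interval between one and nine minutes until done."""
    while True:
        action = next_action(get_json(run_status_url)["status"])
        if action == "fetch-result":
            return get_json(result_url_endpoint)  # blob URL incl. SAS token
        if action == "terminate":
            return None
        time.sleep(poll_seconds)
```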
      

API Documentation

Currently, the API documentation is reachable in the Open API explorer in LeanIX, where you can try the API directly by following the sample requests. In fact, you may not need more than a single endpoint when working with iHub:
you will very likely only need to register your connector in the workspace (POST workspaceConnectors), plus push a default Integration API configuration to help administrators and give them an easy and successful start when working with the connector.
All other endpoints that affect a single workspace are available to workspace administrators but mostly serve the LeanIX admin UI.

Get a sample implementation of a connector

LeanIX built a connector capable of reading data from an Excel file and converting it into the expected LDIF format. The connector is written in Python and prepared to run as an Azure Function. Any partner or customer of LeanIX can request this example code as a starting point for their own connector development. The Excel connector is for educational purposes only and must not be used in any production environment. It was created to demonstrate the functionality of iHub and does not reflect state-of-the-art Python programming. Ask your Customer Success Manager to retrieve the sample code.

Creating a default Integration API configuration

Registering a connector in a LeanIX workspace is easy and can be done following the documentation above. An Integration API configuration is not mandatory for this to work. An administrator, however, always needs to link a data source to an existing Integration API configuration. The Integration Hub takes the data provided by the connector and passes it on to the Integration API to load it into the workspace. Administrators will appreciate being able to start from an already working default Integration API configuration for a connector. They can always change this configuration to match the LeanIX data model on the workspace or other specific requirements.

A default configuration is deployed using the PUT /Configurations endpoint of the Integration API (see the REST API documentation). The Integration API identifiers provided as part of the connector registration (bindingKey) need to match exactly the identifiers used when deploying the Integration API configuration to the workspace. This allows administrators to start the first data source without having to deal with any Integration API configuration settings.

Connectors may provide a simple configuration that already maps some standard metadata the connector delivers for each data object.
Please use the example below as a starting point.

The connector may deliver a JSON LDIF data file like the one below, with only two metadata fields: app and version.

      {
          "connectorType": "Kubernetes",
          "connectorId": "Kub Dev-001",
          "lxVersion": "1.0.0",
          "description": "Imports Kubernetes data into LeanIX",
          "processingDirection": "inbound",
          "processingMode": "full",
          "content": [
              {
                  "type": "Deployment",
                  "id": "634c16bf-198c-1129-9d08-92630b573fbf",
                  "data": {
                      "app": "HR Service",
                      "version": "1.8.4"
                  }
              },
              {
                  "type": "Deployment",
                  "id": "784616bf-198c-11f9-9da8-9263b0573fbe",
                  "data": {
                      "app": "Finance Service",
                      "version": "10.5"
                  }
              }
          ]
      }
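A connector could assemble such a file from the bindingKey of the start payload plus the records it read from the external system. A sketch (the field selection follows the sample above; the record shape and function name are assumptions):

```python
import json


def build_ldif(binding_key: dict, description: str, records: list) -> dict:
    """Assemble an inbound LDIF document from a binding key and records."""
    return {
        "connectorType": binding_key["connectorType"],
        "connectorId": binding_key["connectorId"],
        "lxVersion": binding_key["lxVersion"],
        "description": description,
        "processingDirection": binding_key["processingDirection"],
        "processingMode": binding_key["processingMode"],
        "content": [
            {
                "type": "Deployment",
                "id": record["id"],
                "data": {"app": record["app"], "version": record["version"]},
            }
            for record in records
        ],
    }


# The resulting dict would be serialized with json.dumps(...) and uploaded
# to the ldifResultUrl blob from the start payload.
```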
      

The default configuration added to the workspace together with the connector may look like the example below. It already allows an administrator to see some applications being generated with name and description set. The part most relevant for changes is the “updates” section, where all input fields are mapped to fields on a fact sheet.

      {
          "processors": [
              {
                  "processorType": "inboundFactSheet",
                  "processorName": "Apps from Deployments",
                  "processorDescription": "Creates LeanIX Applications from Kubernetes Deployments",
                  "type": "Application",
                  "filter": {
                      "exactType": "Deployment"
                  },
                  "identifier": {
                      "external": {
                          "id": {
                              "expr": "${content.id}"
                          },
                          "type": {
                              "expr": "externalId"
                          }
                      }
                  },
                  "updates": [
                      {
                          "key": {
                              "expr": "name"
                          },
                          "values": [
                              {
                                  "expr": "${data.app}"
                              }
                          ]
                      },
                      {
                          "key": {
                              "expr": "description"
                          },
                          "values": [
                              {
                                  "expr": "${data.app} - ${data.version}"
                              }
                          ]
                      }
                  ]
              }
          ]
      }