Inbound Processors

Overview

The Integration API reads the provided LDIF and, for each data object, walks over all configured Data Processors, checking whether each processor's filter matches. If a filter matches, the processor transforms the data and writes it to a LeanIX entity.

For each data object in the content section of the LDIF, inbound Data Processors create Fact Sheets, Relations, Subscriptions, Metrics, and Links (the Resources tab of Fact Sheets, formerly known as Documents), depending on the type of Data Processor. This includes setting and updating field values from specific keys and values in the data object.

To fully support various sources while keeping connector code simple, LeanIX provides a powerful mapping component that allows you to (partly) map, add, and combine information from multiple metadata elements and/or the type or ID. Creating a relation that depends on certain key-value pairs or keys (in the case of simple tags) is supported, as is combining input from different tags. The configuration even allows setting fixed values in certain fields to cover cases where not all data points in the source system have values.
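As a minimal sketch of such combining (assuming the data object carries app and version keys, as in the sample LDIF later on this page), a single update value can merge several data points with fixed text:

```json
{
 "key": {
  "expr": "description"
 },
 "values": [
  {
   "expr": "${data.app} (version ${data.version})"
  }
 ]
}
```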

Each data object will potentially be processed by multiple Data Processors (if the filters of multiple Data Processors match). To prevent data inconsistencies within a processing run, such as creating a relation to a not-yet-existing object, execution can be ordered by assigning each Data Processor a numeric run key, e.g. run: 0 when creating a fact sheet and run: 1 when creating a relation to the new fact sheet.
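An abbreviated sketch of such run ordering (identifier, filter, and updates sections omitted for brevity; see the full examples below):

```json
{
 "processors": [
  {
   "processorType": "inboundFactSheet",
   "processorName": "Create Fact Sheets first",
   "run": 0
  },
  {
   "processorType": "inboundRelation",
   "processorName": "Then create relations",
   "run": 1
  }
 ]
}
```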

Available Types

The following inbound Data Processor types are available, with details for each:

inboundFactSheet

Is used to manage Fact Sheets (create, update, delete)
Example configuration can be found in the Admin UI

The configuration contains an additional "type" key to define the target Fact Sheet Type to create/update

The configuration needs to provide the name of the fact sheet to create and the external ID in case of updating a Fact Sheet.

The following field types in Fact Sheet fields can be updated (using the update section): STRING, SINGLE_SELECT, MULTIPLE_SELECT, DOUBLE, INTEGER, LOCATION, LIFECYCLE, EXTERNALID, PROJECT

Changing the Data Processor mode to "delete" will mark the Fact Sheet as "archived", behaving the same way as if a user selected "Delete" in the UI.

inboundRelation

Is used to manage relations between FactSheets (create, update, delete)
Example configuration can be found in the Admin UI

The configuration needs to provide an external or internal ID of the two fact sheets that need to be connected

The type of the relation needs to be provided

Fields configured for the relation can be updated. The supported types are the same as for the inboundFactSheet data processor. The activeFrom and activeUntil fields can also be updated; the expected date format follows ISO 8601 (e.g. "2019-08-02T09:03:49+00:00").

inboundSubscription

InboundSubscription is used to create, update or delete subscriptions on FactSheets

The Processor will add any new subscription unless only one subscribed user is allowed. In such cases, the processor will implicitly remove the old subscribed user and set the new one.

Variables to be set in the output section: "user" (mandatory), "subscriptionType" (mandatory), "subscriptionRoles" (optional, depending on workspace configuration), "comment" (optional, will be ignored if no role given).

inboundDocument

Is used to create, update, or delete documents linked to fact sheets.
The structure is the same as for the inboundFactSheet data processor, and the same matching logic for fact sheets applies. The found fact sheet will not be modified; instead, a linked document is changed according to the mode (default is "createOrUpdate")

The updates section must contain a key "name" or the processor will fail. Other potential keys to be set are "description", "url", "origin" to complete the information for a document that is linked to a fact sheet.

inboundTag

Is used to manage Tag Groups (create, update), Tags (create, update, delete) and assignments from Tags to Fact Sheets (create)
Example configuration can be found in the Admin UI

Tags can currently only be removed completely. There is no way to remove tags from selected fact sheets only. If removal of tags is wanted, a processor in run 0 needs to be created to delete the tags; a processor in run 1 can then add them back in.

If no Tag Group is provided, the processor assumes the default Tag Group ("Other tags")

The processor automatically removes all Fact Sheet assignments if a Tag is deleted

inboundMetrics

InboundMetrics is used to write a single new point to a configured metrics endpoint

Metrics can store time series data in LeanIX and power interactive charts to display the data. In addition, the data can be linked to fact sheets, presented on the Dashboard, or displayed within the "Reports" area.

inboundImpact

The inboundImpact processor is used to write BPM impacts using the Integration API.

All standard Integration API functionality is available. A processor always writes a single impact; if a list of input data is given that "forEach" can iterate over, multiple impacts can be written by a single processor. Different types of impacts should be split into different processors to keep the configuration readable, as different impacts have different parameters.

Inbound FactSheet

Example

{
 "processors": [
  {
   "processorType": "inboundFactSheet",
   "processorName": "Create IT Components",
   "processorDescription": "One Processor for IT Components",
   "enabled": true,
   "type": "ITComponent",
   "identifier": {
    "external": {
     "id": {
      "expr": "${content.id.replaceAll('/','_')}"
     },
     "type": {
      "expr": "externalId"
     }
    }
   },
   "filter": {
    "exactType": "ITComponent"
   },
   "updates": [
    {
     "key": {
      "expr": "name"
     },
     "values": [
      {
       "expr": "${data.name}"
      }
     ]
    },
    {
     "key": {
      "expr": "cloudProvider"
     },
     "values": [
      {
       "expr": "${data.provider}"
      }
     ]
    },
    {
     "key": {
      "expr": "category"
     },
     "values": [
      {
       "expr": "${data.category}",
       "regexMatch": "(cloud_service)",
       "regexReplace": {
        "match": "^.*$",
        "replace": "cloudService"
       }
      },
      {
       "expr": "${data.category}",
       "regexMatch": "(sample_software)",
       "regexReplace": {
        "match": "^.*$",
        "replace": "software"
       }
      }
     ]
    }
   ],
   "vars": []
  }
 ]
}

General Structure

Filters

The filter section is where you define whether the Data Processor should work on a given data object.

πŸ“˜

Data Processors provide filter capabilities to configure which Data Objects the data processor will work on (matching the filter) and which Data Objects to skip (not matching the filter)

The following filter types can be configured:

exactType (type)

If "exactType" is configured, the string is compared with the "type" field of the data object. The filter passes if the strings are equal.
There is also a "type" filter available that is interpreted as a regular expression and matched against the "type" field of the Data Object.

id

If configured, the string is interpreted as a regular expression and matched against the "id" field of the Data Object

advanced

If configured, the field contains a JUEL expression that evaluates to "true" for a match or "false" otherwise. This filter even allows filtering on combinations of certain keys and values in the Data Object

onRead

Behaves like the advanced filter but uses the results of any configured read sections, allowing filtering based on the existence of a fact sheet or on specific values of an existing fact sheet.
The advanced section of the documentation contains an example of how to use this filter.

writeToLdif

Using this processor, administrators can configure an inbound Integration API run to write a new LDIF file. The resulting LDIF will be available via the /results and /resultsUrl endpoints, the same as with outbound Integration API runs

🚧

Filters

All configured filters need to match in order to start the Data Processor on the Data Object (AND logic).
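For instance, a filter section combining "exactType" with an "advanced" JUEL expression; both must match for the processor to run. The key data.category is borrowed from the inboundFactSheet example on this page:

```json
{
 "filter": {
  "exactType": "ITComponent",
  "advanced": "${data.category == 'cloud_service'}"
 }
}
```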

Identifier Section

The identifier section defines the Pathfinder entity in scope of the processor. Depending on the processor, it is called "identifier" (all processors with one fact sheet in scope) or "from" and "to" for the inboundRelation processor.

The target Fact Sheet is identified by the internal ID, the external ID, or a "search scope".

Exactly one of these fields must be filled as the value of the "identifier" key:

Example of identification by internal ID:

internalId: JUEL expression, replace RegEx

{
 "identifier": {
  "internal": "${content.id}"
 }
}

Example of Identification by External ID:

externalId: JUEL expression, replace RegEx (id/name of Fact Sheet or other entity)

{
 "identifier": {
  "external": {
   "id": {
    "expr": "${content.id}"
   },
   "type": {
    "expr": "externalId"
   }
  }
 }
}

Using the external key, it is possible to create an object if it is not found. This happens transparently, without any need to distinguish between create and update when configuring the processor.
When using the "search"-based identification of the Fact Sheets that are supposed to be updated by the incoming data object, the section may contain a scope to limit the set of searched Fact Sheets and an expression filtering the Fact Sheets that should be updated. Details can be found on the "Advanced" page of this documentation.
The processor below will update the descriptions of all Application Fact Sheets that have the tag "AsiaPacific" in the tag group "Region". The full example can be found on the "Advanced" page.

{
 "processors": [
  {
   "processorType": "inboundFactSheet",
   "processorName": "Update all Cloud Apps",
   "processorDescription": "Updates all Apps with tag 'Cloud'",
   "type": "Application",
   "filter": {
    "exactType": "AppUpdate"
   },
   "identifier": {
    "search": {
     "scope": {
      "facetFilters": [
       {
        "facetKey": "FactSheetTypes",
        "operator": "OR",
        "keys": [
         "Application"
        ]
       },
       {
        "facetKey": "${integration.tags.getTagGroupId('Region')}",
        "operator": "OR",
        "keys": [
         "${integration.tags.getTagId('Region','AsiaPacific')}"
        ]
       }
      ],
      "ids": []
     },
     "filter": "${true}",
     "multipleMatchesAllowed": true
    }
   },
   "logLevel": "debug",
   "updates": [
    {
     "key": {
      "expr": "description"
     },
     "values": [
      {
       "expr": "External sync executed ${data.dateTime}"
      }
     ]
    }
   ]
  }
 ]
}

Update Section

The Update Section provides the ability to write to fields or further metadata to the targeted entity (depending on the processor).

Multiple values can be written. Each update consists of a key (a JUEL expression plus an optional regex for building the name of the target key) and a list of potential values to be written.

Some keys might be mandatory depending on the processor (see the processor description for details).

The following field types in FactSheet fields can be updated (using the update section):

STRING

Is a basic text field with no functionality. This field has no configurable formatting like displaying clickable links or bold formatting

SINGLE_SELECT

Allows for the selection of one value from a dropdown list. This list of values can be changed at any point in time without data loss. This attribute can be filtered in the inventory and used as a view in the reports

MULTIPLE_SELECT

Allows for the selection of multiple values from a predefined list. Once defined this list cannot be changed again without incurring data loss

DOUBLE

There is no explicit currency field in LeanIX, but this type can display a currency icon

INTEGER

Represents a numeric value without decimal places

LOCATION

Will be sent to the location service to resolve a valid location from the given input string.
Setting the location will fail if the given data is not specific enough and results in multiple possible locations. If a "#" is found as the first character, the API will pick the first result returned by the location service and use it. This is helpful if comma-separated coordinates are being provided

LIFECYCLE

Content needs to follow the date format "yyyy-mm-dd". Each field in the lifecycle can be addressed with the "." syntax, e.g. lifecycle.active

EXTERNALID

External IDs can only be written if they are not marked read-only in the data model. The following fields can be written using the "." syntax: externalId.externalId, externalId.externalUrl, externalId.comment, externalId.status, externalId.externalVersion. The "externalId" left of the "." may be replaced with the name of the external ID field.

PROJECT

Project status values will always be written as a full set that replaces the currently set project status values. To add to existing values, you need to add the field to the read section. This returns an object you can then iterate over using an inner forEach (see the advanced section for usage). While iterating, a filter can be applied to write back only selected status values rather than all found ones. In addition, new values can be added in the same step by defining more values.
The structure of the required map can be copied from a read result (e.g. output to a description field for testing):
"updates": [
 {
  "key": {
   "expr": "projectStatus"
  },
  "values": [
   {
    "map": [
     {
      "key": "id",
      "value": "myId"
     },
     {
      "key": "date",
      "value": "2020-07-25"
     },
     {
      "key": "status",
      "value": "green"
     },
     {
      "key": "progress",
      "value": "20"
     }
    ]
   }
  ]
 }
],
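Analogous to the snippet above, the EXTERNALID subfields listed earlier can be written with the same "." syntax. A sketch, assuming the external ID field is named externalId and is not marked read-only in the data model:

```json
"updates": [
 {
  "key": {
   "expr": "externalId.externalId"
  },
  "values": [
   {
    "expr": "${content.id}"
   }
  ]
 },
 {
  "key": {
   "expr": "externalId.comment"
  },
  "values": [
   {
    "expr": "Synced via Integration API"
   }
  ]
 }
]
```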

Example of an Inbound Data Processor Update Section

{
 "updates": [
  {
   "key": {
    "expr": "name",
    "regexReplace": {
     "match": "",
     "replace": ""
    }
   },
   "values": [
    {
     "expr": "${data.app}"
    }
   ]
  },
  {
   "key": {
    "expr": "description"
   },
   "values": [
    {
     "expr": "${header.processingMode}",
     "regexMatch": "abc"
    },
    {
     "expr": "${header.processingMode}_2"
    }
   ],
   "optional": true
  }
 ]
}

Modes

Changing the Data Processor mode to "delete" will mark the Fact Sheet as "archived", behaving the same way as if a user selected "Delete" in the UI.

If a Fact Sheet that has been set to "archived" is updated in a standard mode, there are two potential behaviors:

If the fact sheet was matched using the external ID, a new Fact Sheet will be created.

If the reference was made using the internal ID, the old Fact Sheet will be reused and set back to "active".
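A sketch of a processor using the "delete" mode; the filter type "ITComponentRemoval" is an assumed example value, and identifier matching works as in the earlier examples:

```json
{
 "processorType": "inboundFactSheet",
 "processorName": "Archive IT Components",
 "mode": "delete",
 "type": "ITComponent",
 "filter": {
  "exactType": "ITComponentRemoval"
 },
 "identifier": {
  "external": {
   "id": {
    "expr": "${content.id}"
   },
   "type": {
    "expr": "externalId"
   }
  }
 }
}
```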

Manage Lifecycles and Locations

Lifecycle Management

Writing to fields of type lifecycle needs to be split into separate write operations (lines in the data processor). The value of the "key" field has to use the "." syntax, e.g. "lifecycle.plan", "lifecycle.phaseIn". Other default values are "phaseOut" and "endOfLife"

{
 "processorType": "inboundFactSheet",
 "processorName": "Lifecycle Example",
 "processorDescription": "Creates an Application with lifecycle information",
 "type": "Application",
 "filter": {
  "exactType": "Application"
 },
 "identifier": {
  "external": {
   "id": {
    "expr": "${content.id}"
   },
   "type": {
    "expr": "externalId"
   }
  }
 },
 "run": 0,
 "updates": [
  {
   "key": {
    "expr": "name"
   },
   "values": [
    {
     "expr": "${data.name}"
    }
   ]
  },
  {
   "key": {
    "expr": "description"
   },
   "values": [
    {
     "expr": "${data.name} is an application that carries lifecycle information"
    }
   ]
  },
  {
   "key": {
    "expr": "lifecycle.plan"
   },
   "values": [
    {
     "expr": "${data.plan == null ? '2014-01-01' : data.plan}"
    }
   ]
  },
  {
   "key": {
    "expr": "lifecycle.phaseIn"
   },
   "values": [
    {
     "expr": "${data.phaseIn}"
    }
   ]
  },
  {
   "key": {
    "expr": "lifecycle.active"
   },
   "values": [
    {
     "expr": "${data.active}"
    }
   ]
  },
  {
   "key": {
    "expr": "lifecycle.phaseOut"
   },
   "values": [
    {
     "expr": "${data.phaseOut}"
    }
   ]
  },
  {
   "key": {
    "expr": "lifecycle.endOfLife"
   },
   "values": [
    {
     "expr": "${data.endOfLife}"
    }
   ]
  }
 ]
}

Location Management

Writing to fields of type "location" requires a single string as input. The string will be sent to the location service (OpenStreetMap). If a single result is returned, the location will be written to the field with all metadata returned by OpenStreetMap. Providing latitude and longitude works by simply passing the coordinates in that order, separated by a comma: "50.11, 8.682".

If no location or multiple locations are returned, the field will not be populated and an error is shown in the log for this field. Other updates by the data processor may still be valid and pass.

πŸ“˜

Writing Locations to LeanIX

When writing locations, the OpenStreetMap service used may return multiple results. The default behavior is to not set any location. If the value provided for the location starts with a "#" character, the first result from OpenStreetMap will be used (the same logic as when providing coordinates)
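A hedged sketch of an update writing a location field: the "#" prefix makes the API accept the first result; the key data.city is an assumed example. Alternatively, the value could be a coordinate string such as "50.11, 8.682":

```json
{
 "key": {
  "expr": "location"
 },
 "values": [
  {
   "expr": "#${data.city}"
  }
 ]
}
```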

Inbound Subscription

Variables to be set in the output section:

"user" (required): The user's email address. Either "user" or "newUser" needs to be present.

"newUser" (required, alternative to "user"): Works like "user" but creates a new user if one does not exist.

"subscriptionType" (required)

"subscriptionRoles" (optional): May or may not be required, depending on the specific configuration of each workspace.

"addSubscriptionRoles" (optional): Same as "subscriptionRoles" but adds to existing roles instead of completely replacing all existing roles.

"comment" (optional)

Example

{
 "processorType": "inboundSubscription",
 "processorName": "Subscription creation",
 "processorDescription": "Creates subscriptions",
 "filter": {
  "exactType": "ITComponent"
 },
 "identifier": {
  "external": {
   "id": {
    "expr": "${content.id}"
   },
   "type": {
    "expr": "externalId"
   }
  }
 },
 "updates": [
  {
   "key": {
    "expr": "user"
   },
   "values": [
    {
     "expr": "[email protected]"
    }
   ]
  },
  {
   "key": {
    "expr": "subscriptionType"
   },
   "values": [
    {
     "expr": "RESPONSIBLE"
    }
   ]
  },
  {
   "key": {
    "expr": "subscriptionRoles"
   },
   "values": [
    {
     "map": [
      {
       "key": "roleName",
       "value": "Business Owner"
      },
      {
       "key": "comment",
       "value": "This person is the business owner"
      }
     ]
    }
   ]
  },
  {
   "key": {
    "expr": "newUser.userName"
   },
   "values": [
    {
     "expr": "[email protected]"
    }
   ]
  },
  {
   "key": {
    "expr": "newUser.email"
   },
   "values": [
    {
     "expr": "[email protected]"
    }
   ]
  },
  {
   "key": {
    "expr": "newUser.firstName"
   },
   "values": [
    {
     "expr": "Jane"
    }
   ]
  },
  {
   "key": {
    "expr": "newUser.lastName"
   },
   "values": [
    {
     "expr": "Doe"
    }
   ]
  }
 ]
}

Inbound Relation

The "inboundRelation" processor requires the identification of two fact sheets. In this processor, the "identifier" is replaced by two fields named "from" and "to". The potential values of the "from" and "to" fields are identical to the "identifier" values and can handle internal and external IDs as well.

Allowed values: a JUEL expression plus an optional replace-RegEx map (available for each expression, for internal and external as well as "from" and "to" in the case of the inboundRelation processor).

{
 "identifier": {
  "external": {
   "id": {
    "expr": "${content.id}",
    "regexReplace": {
     "match": "",
     "replace": ""
    }
   },
   "type": {
    "expr": "externalId"
   }
  }
 }
}

Example

Please replace the type "relApplicationToITComponent" with the name of the relation that needs to be created or updated (e.g. "relToParent").

{
 "processorType": "inboundRelation",
 "processorName": "Rel from Apps to ITComponent",
 "processorDescription": "Creates LeanIX Relations between the created or updated Applications and ITComponents",
 "type": "relApplicationToITComponent",
 "filter": {
  "exactType": "Deployment"
 },
 "from": {
  "external": {
   "id": {
    "expr": "${content.id}"
   },
   "type": {
    "expr": "externalId"
   }
  }
 },
 "to": {
  "external": {
   "id": {
    "expr": "${data.clusterName}"
   },
   "type": {
    "expr": "externalId"
   }
  }
 },
 "run": 1,
 "updates": [
  {
   "key": {
    "expr": "description"
   },
   "values": [
    {
     "expr": "Relationship Description"
    }
   ]
  },
  {
   "key": {
    "expr": "activeFrom"
   },
   "values": [
    {
     "expr": "2019-08-02T09:03:49+00:00"
    }
   ]
  },
  {
   "key": {
    "expr": "activeUntil"
   },
   "values": [
    {
     "expr": "2020-08-02T09:03:49+00:00"
    }
   ]
  }
 ],
 "logLevel": "debug"
}

πŸ“˜

Referencing "from" and "to" Fact Sheets for relations by internal IDs

The inboundRelation processor also supports referencing source and target fact sheets by their internal IDs. The syntax is the same as for the identifier of the inboundFactSheet processor: "internal": "${content.id}"

🚧

ExternalId

Please note: In order to create a Fact Sheet using the inboundFactsheet processor, providing an externalId is mandatory.

Inbound Relations Constraints

The relation processor also allows setting constraining relations. To do so, a target key "constrainingRelations" needs to be defined in the output section (similar to the target key "description" in the example above). All values of the resulting values list will be written as constraints; existing ones will be removed. Alternatively, the key "addConstrainingRelations" may be used to add constraints to existing ones.

Example

{
 "key": {
  "expr": "constrainingRelations"
 },
 "values": [
  {
   "forEach": {
    "elementOf": "${integration.valueOfForEach.rels.constrainingRelations.relations}"
   },
   "map": [
    {
     "key": "type",
     "value": "${integration.output.valueOfForEach.type}"
    },
    {
     "key": "targetExternalIdType",
     "value": "externalId"
    },
    {
     "key": "targetExternalIdValue",
     "value": "${integration.output.valueOfForEach.target.externalId}"
    }
   ]
  }
 ]
}
The following example shows an outboundFactSheet processor configuration exporting Fact Sheet data to an LDIF:

{
  "scope": {
    "ids": [
      "7750c7ba-5d24-4849-a1b4-564bc6c874a0"
    ],
    "facetFilters": [
      {
        "keys": [
          "Application"
        ],
        "facetKey": "FactSheetTypes",
        "operator": "OR"
      }
    ]
  },
  "processors": [
    {
      "processorType": "outboundFactSheet",
      "processorName": "Export to LDIF",
      "processorDescription": "This is an example how to use the outboundFactSheet processor",
      "enabled": true,
      "fields": [
        "lifecycle",
        "name",
        "location",
        "createdAt",
        "technicalSuitabilityDescription",
        "description"
      ],
      "relations": {
        "filter": [
          "relApplicationToProcess"
        ],
        "fields": [
          "description"
        ],
        "targetFields": [
          "displayName",
          "externalId"
        ],
        "constrainingRelations": true
      },
      "output": [
        {
          "key": {
            "expr": "content.id"
          },
          "mode": "selectFirst",
          "values": [
            {
              "expr": "${lx.factsheet.id}"
            }
          ]
        },
        {
          "key": {
            "expr": "content.type"
          },
          "mode": "selectFirst",
          "values": [
            {
              "expr": "${lx.factsheet.type}"
            }
          ]
        },
        {
          "key": {
            "expr": "Name"
          },
          "values": [
            {
              "expr": "${lx.factsheet.name}"
            }
          ],
          "optional": true
        },
        {
          "key": {
            "expr": "relations"
          },
          "mode": "list",
          "values": [
            {
              "forEach": {
                "elementOf": "${lx.relations}",
                "filter": "${true}"
              },
              "map": [
                {
                  "key": "relationName",
                  "value": "${integration.output.valueOfForEach.type}"
                },
                {
                  "key": "object",
                  "value": "${integration.output.valueOfForEach}"
                }
              ]
            }
          ]
        }
      ]
    }
  ]
}

Inbound Metrics

InboundMetrics is used to write a single new point to a configured metrics endpoint

The update section of the processor needs to contain the following keys with values:

measurement

Name of the configured metrics measurement a point needs to be added to

time

The date and time of the point in ISO formatting (e.g. 2019-09-09T08:00:00.000000Z)

fieldKey

Name of the field to store the point value in

fieldValueNumber

The value you want to store for that field and point of time

tagKey

Name of the tag

tagValue

Value of the tag. You may want to write the internal ID of a specific fact sheet here to allow assignment of the data to a specific fact sheet as a rule in the created chart for the measurement (go to admin/metrics to configure)

The output section of the inboundMetrics data processor should be configured the same as other inbound processors. The keys will be written to the corresponding variables.

Example

{
 "processorType": "inboundMetrics",
 "processorName": "Metrics data for measurement",
 "processorDescription": "Metrics processor configuration",
 "filter": {
  "exactType": "Metrics",
  "advanced": "${data.measurement.equals('measurement')}"
 },
 "run": 1,
 "updates": [
  {
   "key": {
    "expr": "measurement"
   },
   "values": [
    {
     "expr": "${data.measurement}"
    }
   ]
  },
  {
   "key": {
    "expr": "time"
   },
   "values": [
    {
     "expr": "${data.time}"
    }
   ]
  },
  {
   "key": {
    "expr": "fieldKey"
   },
   "values": [
    {
     "expr": "${data.fieldKey}"
    }
   ]
  },
  {
   "key": {
    "expr": "fieldValueNumber"
   },
   "values": [
    {
     "expr": "${data.fieldValueNumber}"
    }
   ]
  },
  {
   "key": {
    "expr": "tagKey"
   },
   "values": [
    {
     "expr": "${data.tagKey}"
    }
   ]
  },
  {
   "key": {
    "expr": "tagValue"
   },
   "values": [
    {
     "expr": "${data.tagValue}"
    }
   ]
  },
  {
   "key": {
    "expr": "tags"
   },
   "values": [
    {
     "map": [
      {
       "key": "key",
       "value": "${data.tagKey}_1"
      },
      {
       "key": "value",
       "value": "${data.tagValue}_1"
      }
     ]
    },
    {
     "map": [
      {
       "key": "key",
       "value": "${data.tagKey}_2"
      },
      {
       "key": "value",
       "value": "${data.tagValue}_2"
      }
     ]
    }
   ]
  }
 ],
 "logLevel": "debug"
}

Inbound Document

inboundDocument is used to create, update, or delete documents linked to fact sheets.

The structure is the same as for the inboundFactSheet data processor, and the same matching logic for fact sheets applies. The found fact sheet will not be modified; instead, a linked document is changed according to the mode (default is "createOrUpdate")

Keys specific to inboundDocument:

"description"

Description of the document

"origin"

From what department or person does this originate from

"url"

Link to the document

"documentType"

A string containing information how to display the link on the Resource tab. Values are dynamic. It is suggested to first read the links for an item, then copy the values for writing similar links

"metadata"

A string containing information how to display the link on the Resource tab. Values are dynamic. It is suggested to first read the links for an item, then copy the values for writing similar links

🚧

inboundDocument Data Processor

The updates section must contain a key "name" or the processor will fail.

Example

{
 "processorType": "inboundDocument",
 "processorName": "My link to Integration API docs",
 "processorDescription": "Contains the link that will point to the documentation for the LeanIX Integration API",
 "filter": {
  "exactType": "ITComponent"
 },
 "identifier": {
  "external": {
   "id": {
    "expr": "${content.id}"
   },
   "type": {
    "expr": "externalId"
   }
  }
 },
 "run": 1,
 "updates": [
  {
   "key": {
    "expr": "name"
   },
   "values": [
    {
     "expr": "Integration API Document"
    }
   ]
  },
  {
   "key": {
    "expr": "documentType"
   },
   "values": [
    {
     "expr": "website"
    }
   ]
  },
  {
   "key": {
    "expr": "url"
   },
   "values": [
    {
     "expr": "https://dev.leanix.net/docs/integration-api"
    }
   ]
  }
 ]
}

Inbound Tag

Tags sent as an array

In the example below, if you specify just the name of a tag without other attributes, a Tag with the specified name will be created under "Other Tags" and attached to the Fact Sheet.

{
 "processorType": "inboundTag",
 "processorName": "Tag creation",
 "processorDescription": "Creates tags and tag groups",
 "factSheets": {
  "external": {
   "ids": "${content.id}",
   "type": {
    "expr": "externalId"
   }
  }
 },
 "run": 1,
 "updates": [
  {
   "key": {
    "expr": "name"
   },
   "values": [
    {
     "expr": "${integration.valueOfForEach}"
    }
   ]
  },
  {
   "key": {
    "expr": "description"
   },
   "values": [
    {
     "expr": "${integration.valueOfForEach}"
    }
   ]
  },
  {
   "key": {
    "expr": "color"
   },
   "values": [
    {
     "expr": "#123456"
    }
   ]
  },
  {
   "key": {
    "expr": "group.name"
   },
   "values": [
    {
     "expr": "Kubernetes Tags"
    }
   ]
  },
  {
   "key": {
    "expr": "group.shortName"
   },
   "values": [
    {
     "expr": "k8s"
    }
   ]
  },
  {
   "key": {
    "expr": "group.description"
   },
   "values": [
    {
     "expr": "Tags relevant for Kubernetes"
    }
   ]
  },
  {
   "key": {
    "expr": "group.mode"
   },
   "values": [
    {
     "expr": "MULTIPLE"
    }
   ]
  },
  {
   "key": {
    "expr": "group.restrictToFactSheetTypes"
   },
   "values": [
    {
     "expr": "Application"
    },
    {
     "expr": "ITComponent"
    }
   ]
  }
 ],
 "forEach": "${data.tags}",
 "logLevel": "debug"
}
A sample LDIF for the processor above:

{
 "connectorType": "ee",
 "connectorId": "Kub Dev-001",
 "connectorVersion": "1.2.0",
 "lxVersion": "1.0.0",
 "description": "Imports kubernetes data into LeanIX",
 "processingDirection": "inbound",
 "processingMode": "partial",
 "customFields": {},
 "content": [
  {
   "type": "Deployment",
   "id": "784616bf-198c-11f9-9da8-9263b0573fbe",
   "data": {
    "app": "Finance Service",
    "version": "10.5",
    "maturity": "5",
    "clusterName": "westeurope",
    "tags": [
     "Important"
    ]
   }
  }
 ]
}
A sample LDIF containing tag groups and tags, used by the Tag Groups processor below:

{
 "connectorType": "Report Technology Radar",
 "connectorId": "Technology Radar Tags",
 "connectorVersion": "1.0.0",
 "lxVersion": "1.0.0",
 "processingDirection": "inbound",
 "processingMode": "partial",
 "customFields": {},
 "content": [
  {
   "type": "Deployment",
   "id": "1",
   "data": {
    "taggroups": [
     {
      "name": "Technology radar - Quadrant",
      "shortname": "TRQ",
      "description": "Beschreibung Quadrant",
      "mode": "SINGLE",
      "factsheettype": "ITComponent"
     },
     {
      "name": "Technology radar - Ring",
      "shortname": "TRR",
      "description": "Beschreibung Ring",
      "mode": "SINGLE",
      "factsheettype": "ITComponent"
     }
    ],
    "tags": [
     {
      "groupname": "Technology radar - Quadrant",
      "name": "Architecture Concepts",
      "description": "Beschreibung Architecture Concepts",
      "color": "#ff0000"
     },
     {
      "groupname": "Technology radar - Quadrant",
      "name": "Platforms",
      "description": "Beschreibung Platforms",
      "color": "#00ff00"
     },
     {
      "groupname": "Technology radar - Quadrant",
      "name": "Techniques",
      "description": "Beschreibung Techniques",
      "color": "#0000ff"
     },
     {
      "groupname": "Technology radar - Quadrant",
      "name": "Tools & Infrastructure",
      "description": "Beschreibung Tools & Infrastructure",
      "color": "#000000"
     },
     {
      "groupname": "Technology radar - Ring",
      "name": "Hold",
      "description": "Beschreibung Hold",
      "color": "#ff0000"
     },
     {
      "groupname": "Technology radar - Ring",
      "name": "Incubating",
      "description": "Beschreibung Incubating",
      "color": "#00ff00"
     },
     {
      "groupname": "Technology radar - Ring",
      "name": "Emerging",
      "description": "Beschreibung Emerging",
      "color": "#0000ff"
     },
     {
      "groupname": "Technology radar - Ring",
      "name": "Mature",
      "description": "Beschreibung Mature",
      "color": "#000000"
     }
    ]
   }
  }
 ]
}

Tag Groups

Processor and sample LDIF for Tag Groups and Tags.

{
 "processors": [
  {
   "processorType": "inboundTag",
   "processorName": "Tag group creation",
   "processorDescription": "Creates tag groups",
   "run": 0,
   "forEach": "${data.taggroups}",
   "updates": [
    {
     "key": {
      "expr": "group.name"
     },
     "values": [
      {
       "expr": "${integration.valueOfForEach.name}"
      }
     ]
    },
    {
     "key": {
      "expr": "group.shortName"
     },
     "values": [
      {
       "expr": "${integration.valueOfForEach.shortname}"
      }
     ]
    },
    {
     "key": {
      "expr": "group.description"
     },
     "values": [
      {
       "expr": "${integration.valueOfForEach.description}"
      }
     ]
    },
    {
     "key": {
      "expr": "group.mode"
     },
     "values": [
      {
       "expr": "${integration.valueOfForEach.mode}"
      }
     ]
    },
    {
     "key": {
      "expr": "group.restrictToFactSheetTypes"
     },
     "values": [
      {
       "expr": "${integration.valueOfForEach.factsheettype}"
      }
     ]
    }
   ],
   "logLevel": "warning",
   "enabled": true
  },
  {
   "processorType": "inboundTag",
   "processorName": "Tag creation",
   "processorDescription": "Creates tags",
   "run": 1,
   "forEach": "${data.tags}",
   "updates": [
    {
     "key": {
      "expr": "group.name"
     },
     "values": [
      {
       "expr": "${integration.valueOfForEach.groupname}"
      }
     ]
    },
    {
     "key": {
      "expr": "name"
     },
     "values": [
      {
       "expr": "${integration.valueOfForEach.name}"
      }
     ]
    },
    {
     "key": {
      "expr": "description"
     },
     "values": [
      {
       "expr": "${integration.valueOfForEach.description}"
      }
     ]
    },
    {
     "key": {
      "expr": "color"
     },
     "values": [
      {
       "expr": "${integration.valueOfForEach.color}"
      }
     ]
    }
   ],
   "logLevel": "warning",
   "enabled": true
  }
 ]
}
{
 "connectorType": "Report Technology Radar",
 "connectorId": "Technology Radar Tags",
 "connectorVersion": "1.0.0",
 "lxVersion": "1.0.0",
 "processingDirection": "inbound",
 "processingMode": "partial",
 "customFields": {},
 "content": [
  {
   "type": "Deployment",
   "id": "1",
   "data": {
    "taggroups": [
     {
      "name": "Technology radar - Quadrant",
      "shortname": "TRQ",
      "description": "Beschreibung Quadrant",
      "mode": "SINGLE",
      "factsheettype": "ITComponent"
     },
     {
      "name": "Technology radar - Ring",
      "shortname": "TRR",
      "description": "Beschreibung Ring",
      "mode": "SINGLE",
      "factsheettype": "ITComponent"
     }
    ],
    "tags": [
     {
      "groupname": "Technology radar - Quadrant",
      "name": "Architecture Concepts",
      "description": "Beschreibung Architecture Concepts",
      "color": "#ff0000"
     },
     {
      "groupname": "Technology radar - Quadrant",
      "name": "Platforms",
      "description": "Beschreibung Platforms",
      "color": "#00ff00"
     },
     {
      "groupname": "Technology radar - Quadrant",
      "name": "Techniques",
      "description": "Beschreibung Techniques",
      "color": "#0000ff"
     },
     {
      "groupname": "Technology radar - Quadrant",
      "name": "Tools & Infrastructure",
      "description": "Beschreibung Tools & Infrastructure",
      "color": "#000000"
     },
     {
      "groupname": "Technology radar - Ring",
      "name": "Hold",
      "description": "Beschreibung Hold",
      "color": "#ff0000"
     },
     {
      "groupname": "Technology radar - Ring",
      "name": "Incubating",
      "description": "Beschreibung Incubating",
      "color": "#00ff00"
     },
     {
      "groupname": "Technology radar - Ring",
      "name": "Emerging",
      "description": "Beschreibung Emerging",
      "color": "#0000ff"
     },
     {
      "groupname": "Technology radar - Ring",
      "name": "Mature",
      "description": "Beschreibung Mature",
      "color": "#000000"
     }
    ]
   }
  }
 ]
}

Tag sent as a comma-separated list

Data in the "tags" key looks like "Important,Mature". The helper function toList in the processor below converts the comma-separated string into an array. The output of the processor is the tags "Important" and "Mature" in the tag group "Other Tags", attached to the Deployment "Finance Service".

{
 "processorType": "inboundTag",
 "processorName": "Tag creation",
 "processorDescription": "Creates tags and tag groups",
 "factSheets": {
  "external": {
   "ids": "${content.id}",
   "type": {
    "expr": "externalId"
   }
  }
 },
 "run": 1,
 "updates": [
  {
   "key": {
    "expr": "name"
   },
   "values": [
    {
     "expr": "${integration.valueOfForEach.trim()}"
    }
   ]
  }
 ],
 "forEach": "${helper:toList(data.tags.split(','))}",
 "logLevel": "debug"
}
{
 "connectorType": "ee",
 "connectorId": "Kub Dev-001",
 "connectorVersion": "1.2.0",
 "lxVersion": "1.0.0",
 "description": "Imports kubernetes data into LeanIX",
 "processingDirection": "inbound",
 "processingMode": "partial",
 "customFields": {},
 "content": [
  {
   "type": "Deployment",
   "id": "784616bf-198c-11f9-9da8-9263b0573fbe",
   "data": {
    "app": "Finance Service",
    "version": "10.5",
    "maturity": "5",
    "clusterName": "westeurope",
    "tags": "Important,Mature"
   }
  }
 ]
}

The inboundTag processor automatically creates tags that do not yet exist in a tag group (the tag processor does not create new tag groups).
The inboundTag processor can also be configured to not create any tags or change the metadata of existing tags, but only to assign Fact Sheets to existing tags. To use this functionality, configure an additional key "tagsReadOnly" in the updates section, as shown in the example:

{
 "updates": [
  {
   "key": {
    "expr": "tagsReadOnly"
   },
   "values": [
    {
     "expr": "${true}"
    }
   ]
  },
  {
   "key": {
    "expr": "name"
   },
   "values": [
    {
     "expr": "${data.myTagName}"
    }
   ]
  }
 ]
}

πŸ“˜

Make output optional

The "optional" flag suppresses warning messages when the input data is expected not to contain values for all fields.
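
A minimal sketch of an update entry using the flag (the field name "version" and the expression are illustrative assumptions, not taken from the examples above):

{
 "key": {
  "expr": "version"
 },
 "values": [
  {
   "expr": "${data.version}",
   "optional": true
  }
 ]
}

With "optional" set to true, a data object without a "version" key is processed without emitting a warning.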

πŸ“˜

variableProcessor

The variableProcessor is used to write values to internal variables only. It supports aggregation use cases where the LDIF content needs to be used solely to collect values, without directly writing anything to LeanIX.
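
A minimal sketch of such a processor, assuming a "variables" section with "key"/"value" pairs; the variable name "maturityValues" and the filter are illustrative only:

{
 "processorType": "variableProcessor",
 "processorName": "Collect maturity values",
 "processorDescription": "Collects values into an internal variable without writing to LeanIX",
 "filter": {
  "exactType": "Deployment"
 },
 "run": 0,
 "variables": [
  {
   "key": "maturityValues",
   "value": "${data.maturity}"
  }
 ]
}

The collected values can then be consumed by processors in a later run, e.g. for aggregation.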

Write To LDIF

The processor allows administrators to configure an inbound Integration API run to write a new LDIF file. The resulting LDIF is available via the /results and /resultsUrl endpoints, the same as with outbound Integration API runs.
With this functionality, inbound runs can be used in any combination to read, process, and update Pathfinder entities, and even write a new LDIF, in one step. Integrations that write to LeanIX and read data from LeanIX can be written and managed in a single Integration API configuration, executed with a single call.
The processor can even be used to only export data or to transform one LDIF into another.

In a configuration, all defined processors write to a single, globally defined LDIF. This allows collecting all kinds of data objects into the target LDIF from multiple processors, including content from aggregations and other processing (e.g. variables).

The LDIF header definition needs to be set as a global key in the Integration API configuration. All fields can be freely configured and are evaluated to a string using JUEL. The exception is the "customFields" key: if defined, its value is interpreted as an object and passed to the target LDIF. Please ensure the expression always results in a map object, so as not to break the LDIF format.

Example

{
 "processors": [
  {
   "processorType": "writeToLdif",
   "updates": [
    {
     "key": {
      "expr": "content.id"
     },
     "values": [
      {
       "expr": "${content.id}"
      }
     ]
    },
    {
     "key": {
      "expr": "content.type"
     },
     "values": [
      {
       "expr": "${content.type}"
      }
     ]
    },
    {
     "key": {
      "expr": "description"
     },
     "values": [
      {
       "expr": "Just a test. Could be any read content or JUEL calculation"
      }
     ]
    }
   ]
  }
 ],
 "targetLdif": {
  "dataConsumer": {
   "type": "leanixStorage"
  },
  "ldifKeys": [
   {
    "key": "connectorType",
    "value": "myNewLdif"
   },
   {
    "key": "connectorId",
    "value": "mycreatedId"
   },
   {
    "key": "customFields",
    "value": "${integration.toObject('{\"anyKey\":\"anyValue\"}')}"
   }
  ]
 }
}

Advanced Example

A more advanced example shows how to read content from Pathfinder and write it to an LDIF using the processor. It requires an input LDIF with at least one data object, but its content is not relevant; the data object only triggers processing.
writeToLdif can write variables as well (run levels are supported, and variables are available one run after creation). You may even define multiple writeToLdif processors; all content is collected and written to one resulting LDIF.
Please make sure to adjust the search scope to your workspace and change the ID to an existing one.

{
 "processors": [
  {
   "processorType": "writeToLdif",
   "filter": {
    "advanced": "${integration.contentIndex==0}",
    "onRead": "${lx.factsheet.description!=''}"
   },
   "identifier": {
    "search": {
     "scope": {
      "ids": [
       "8de51ff7-6f13-47df-8af8-9132ada2e74d"
      ],
      "facetFilters": []
     },
     "filter": "${true}",
     "multipleMatchesAllowed": true
    }
   },
   "run": 0,
   "updates": [
    {
     "key": {
      "expr": "content.id"
     },
     "values": [
      {
       "expr": "${lx.factsheet.id}"
      }
     ]
    },
    {
     "key": {
      "expr": "content.type"
     },
     "values": [
      {
       "expr": "${lx.factsheet.name}"
      }
     ]
    },
    {
     "key": {
      "expr": "description"
     },
     "values": [
      {
       "expr": "${lx.factsheet.description}"
      }
     ]
    }
   ],
   "logLevel": "warning",
   "read": {
    "fields": [
     "description"
    ]
   }
  }
 ],
 "targetLdif": {
  "dataConsumer": {
   "type": "leanixStorage"
  },
  "ldifKeys": [
   {
    "key": "connectorType",
    "value": "${header.connectorType}_export"
   },
   {
    "key": "connectorId",
    "value": "${header.connectorId}_export"
   },
   {
    "key": "description",
     "value": "Enriched imported LDIF for Applications"
   }
  ]
 }
}
{
    "connectorType": "dummy",
    "connectorId": "dummy",
    "lxVersion": "1.0.0",
    "processingDirection": "inbound",
    "processingMode": "partial",
    "content": [
        {
            "type": "",
            "id": "",
            "data": {}
        }
    ]
}

Inbound Impact

The processor is used to write BTM impacts to the LeanIX Pathfinder backend.
A processor always writes a single impact. If a list of input data is given that "forEach" can iterate over, multiple impacts can be written by a single processor. Different types of impacts should be split into different processors to keep a configuration readable, as different impacts have different parameters.

Example

The example below shows how to define impacts. Please be aware that each type of impact may require a different set of keys to be configured.

πŸ“˜

Tip

To find out which keys are required for each impact type, create the needed types of impacts in the UI, then export them using an outbound processor, or use the "read" section in an inbound processor and either export to LDIF or write the result to a field such as "description".

{
 "processors": [
  {
   "processorType": "inboundImpact",
   "updates": [
    {
     "key": {
      "expr": "groupName"
     },
     "values": [
      {
       "expr": "G1"
      }
     ]
    },
    {
     "key": {
      "expr": "description"
     },
     "values": [
      {
       "expr": "Group Description 2"
      }
     ]
    },
    {
     "key": {
      "expr": "impacts"
     },
     "values": [
      {
       "map": [
        {
         "key": "type",
         "value": "FACTSHEET_SET"
        },
        {
         "key": "factSheetId",
         "value": "28fe4aa2-6e46-41a1-a131-72afb3acf256"
        },
        {
         "key": "fieldName",
         "value": "functionalSuitabilityDescription"
        },
        {
         "key": "fieldValue",
         "value": "${data.value}"
        }
       ]
      }
     ]
    }
   ],
   "identifier": {
    "internal": "6d8acf0c-fa4e-40ed-9986-97da860f3414"
   },
   "logLevel": "warning"
  }
 ]
}
