didmos2-openldap

This is the base OpenLDAP for didmos2. It provides two installation / usage methods:

  • Docker image with OpenLDAP already installed and all scripts needed to generate a configuration and start the server.
  • Local install script, which requires a manual installation of OpenLDAP but provides all scripts needed to generate the configuration.

Local install

The local installation is based on the Symas OpenLDAP package. OpenLDAP from another package might work as well, but this is not tested.

Preparations

Preparing the sources

The provided TAR should be unpacked into the /usr/src/didmos/ directory.

mkdir -p /usr/src/didmos/
cd /usr/src/didmos/
tar -xzf PATH_TO_FILE/ldap.tgz

or the Git repository should be cloned to the local directory /usr/src/didmos/:

mkdir /usr/src/didmos/
cd /usr/src/didmos/
git clone https://gitlab.daasi.de/didmos2/didmos2-openldap.git ldap
cd ldap
git checkout feature/scale

If there are customer-specific additions, these can be unpacked into the directory extensions/, or the path to the extensions can be given with the flag -x. There might be extensions in the following subdirectories:

  • schema/: Additional schema files in OpenLDAP .schema syntax
  • variables/: A file with special variable definitions

The preferred way is to clone the additional repository to /usr/src/didmos/:

cd /usr/src/didmos/
git clone https://gitlab.daasi.de/<customer_group>/<customer-repo> ldap_customer

Alternatively, the extensions can be copied to the folder /usr/src/didmos/ldap/extensions/, but this has to be handled with care during an update.

Enabling additional repositories on Debian 12

To install the additional contrib overlays and receive the latest bug fixes, the Symas repository is integrated:

wget https://repo.symas.com/repo/gpg/RPM-GPG-KEY-symas-com-signing-key -O /usr/share/keyrings/symas-key.asc
echo 'deb [arch=amd64 signed-by=/usr/share/keyrings/symas-key.asc] https://repo.symas.com/repo/deb/main/release26 jammy main' | tee -a /etc/apt/sources.list.d/soldap-release26.list
apt update
apt install symas-openldap-clients symas-openldap-server symas-openldap-dev
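
You can check which package version was installed with apt (a standard apt command, not specific to this setup):

apt policy symas-openldap-server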

Configure the run directory

To make sure the directory for the PID file is available in /run, create the file /etc/tmpfiles.d/openldap.conf. This ensures that the directory for the PID file is created correctly on every reboot.

# /etc/tmpfiles.d/openldap.conf
d /run/slapd/ 0755 openldap openldap -
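
The directory is created automatically at every boot. To create it immediately without rebooting, you can use the standard systemd command:

systemd-tmpfiles --create /etc/tmpfiles.d/openldap.conf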

Creating the environment file with passwords and options

The passwords for the manager accounts are kept in a separate file. In this file you define export statements which are sourced by the installation script. You could load the passwords into the environment permanently, but this is not recommended.

Create a file called /etc/ldap/.env

export MANAGER_PW=secret
export EVENTLOG_PW=secret
export ACCESSLOG_PW=secret
export DIDMOSCONFIG_PW=secret
export LOG_LEVEL=stats
export SUPER_ADMIN_LDAP_ACCOUNT_PW=secret
export BACKEND_PW=secret

and replace the passwords with your own values. If you do not specify a value for MANAGER_PW, EVENTLOG_PW, ACCESSLOG_PW or DIDMOSCONFIG_PW, a random value is created and displayed once when the installation script runs.
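
Since this file contains passwords in plain text, it is advisable to restrict its permissions, for example:

chmod 600 /etc/ldap/.env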

There are more options you can put into this file:

export DB_MAX_SIZE_ACCESSLOG=500000000
export DB_MAX_SIZE_DATA=500000000
export DB_MAX_SIZE_EVENTLOG=500000000
export ACCESSLOG_LOG_PURGE="07+00:00 01+00:00"
export TLS_CHAIN=/etc/ssl/certs/MYCHAIN.pem
export TLS_CERT=/etc/ssl/certs/MYCERT.pem
export TLS_KEY=/etc/ssl/private/MYKEY.pem

Calling the installation script

The installation script is the same script that runs in the Docker image. It analyzes the environment to determine whether it is running in a Docker container or on a local machine. You can use these parameters:

  • -s: Installation source directory
  • -t: Installation target directory
  • -m: Backup, recovery and migrations base directory
  • -e: Environment file to load
  • -d: MDB data directory
  • -i: Install only, do not run
  • -c: File with TLS chain in PEM format
  • -r: File with TLS server cert in PEM format
  • -k: File with TLS private key corresponding to -r
  • -p: Port to listen on, works only for local deployments
  • -x: Defines a directory from which extensions should be loaded. If not set, the directory extension/ in the installation source directory is used

In a standard (Debian 12) setup you should be able to use the following statement:

bash entrypoint-scale-p.sh -s /usr/src/didmos/ldap -t /etc/ldap/ -m /var/didmos/ldap/ -d /var/lib/ldap/ -x /usr/src/didmos/ldap_customer/extension/ -e /etc/ldap/.env
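
After the script has finished, you can check that the server answers, for example with an anonymous rootDSE search (port 389 is an assumption here; use the value given with -p if you changed it):

ldapsearch -x -H ldap://localhost:389 -b "" -s base namingContexts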

Moving to SCALE from an older image

Moving to the SCALE image from an older image is done by following these steps:

Change the volume definition

Change the volume definition in your docker-compose.yml file

ldap:
  container_name: ...
  ...
  volumes:
  - didmos2-demo-openldap-db:/var/lib/ldap:rw
  - didmos2-demo-openldap-config:/etc/ldap:rw
  - didmos2-demo-openldap-data:/var/didmos/ldap:rw
  - didmos2-demo-openldap-mig:/MIGRATIONS:rw
  - mux_socket:/var/run/saslauthd/
  - /var/didmos/core/ldap/backup:/BACKUP:rw

The volume mux_socket might not be present in your system. What is important here is that you do not remove the old volumes mounted to /MIGRATIONS and /BACKUP. They will be migrated to the new structure and can be removed later.

Change .env file

Change the value for LDAP_URL to:

LDAP_URL=ldap://ldap:1389

The port on which the system is listening inside the container has changed.
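
To verify that the server is reachable on the new port, you can, for example, run an anonymous rootDSE search from another container on the same Docker network (ldapsearch must be installed in that container):

ldapsearch -x -H ldap://ldap:1389 -b "" -s base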

Switch image

You can now switch to the new image. If you are using didmos2-openldap directly, this would be

docker.gitlab.daasi.de/didmos2/didmos2-openldap/didmos2-openldap-scale-p

If you are building your own image, it can now be based on this image.

After switching the image, you can restart, and you should see in the logs that the migrations and backups are moved from one volume to another.

Remove old volumes

If the system starts and all data is still present, you can remove the old volumes and restart your container again.

ldap:
  container_name: ...
  ...
  volumes:
  - didmos2-demo-openldap-db:/var/lib/ldap:rw
  - didmos2-demo-openldap-config:/etc/ldap:rw
  - didmos2-demo-openldap-data:/var/didmos/ldap:rw
  - mux_socket:/var/run/saslauthd/

After the volumes have been removed and the container has been recreated and restarted, the system should continue to work as before.

Migrations

A migration mechanism is available that performs static changes in LDAP by loading LDIF files, as well as dynamic migrations of already existing customer data in LDAP.

General Features

  • Loading of LDIF files with or without a changetype in the LDIF data (default is add)
  • Configuration of the dynamic migration in JSON format

LDIF-Migrations

You can use normal LDIF syntax, with the exception that two newlines are required between every block and also at the end of the file.
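
A minimal sketch with placeholder DNs: two change blocks, one with an explicit changetype and one without (which defaults to add), each separated by a blank line; the file must also end with a blank line:

dn: ou=example,dc=didmos,dc=de
changetype: add
objectClass: organizationalUnit
ou: example

dn: ou=groups,dc=didmos,dc=de
objectClass: organizationalUnit
ou: groups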

JSON-Migration-Features

A JSON block has one or more of the following items, which will be executed in the given order:

delete

"delete": [
  { "dn": "<DN>" }
]

rename

"rename": [
  { "dn": "<OLD_DN>",
    "newdn": "<NEW_DN>" }
]

add

"add": [
  { "dn": "ou=permissions,@parent.baseDN@",
    "attributes": [
      { "name": "<ATTRIBUTE_1>",
        "value": ["<VALUE_1>", "<VALUE_2>"]
      },
      { "name": "<ATTRIBUTE_2>",
        "value": ["<VALUE_3>", "<VALUE_4>"]
      }
    ]
  }
]

modify

Note: in case of op delete, the value is optional. If no value is provided, all values of the given attribute will be deleted.

"modify": [
  { "dn": "rbacName=admin-permission,@parent.baseDN@",
    "attributes": [
      { "name": "<ATTRIBUTE_1>",
        "value": ["<VALUE_1>", "<VALUE_2>"],
        "op": "add | delete | modify | replace"
      }
    ]
  }
],

search

baseDN and searchFilter are required. scope is optional; if no scope is provided, it defaults to sub. forEach and else are also optional. Inside of forEach and else you have access to the placeholder @parent.baseDN@. In forEach you additionally have access to the placeholder @forEach@.

"search": {
  "baseDN": "ou=tenants,ou=data,ou=root-tenant,dc=didmos,dc=de",
  "searchFilter": "(&(objectClass=didmosTenant)(objectClass=organizationalUnit))",
  "scope": "one | base | sub",
  "forEach": {

  },
  "else": {

  }
}

continue on errors

This works for add, delete, modify and rename.

You can set ignoreErrors per operation to continue with the execution. Imagine you want to add two new values to an attribute where one value might already exist. If you always want the missing values to be added, you have to split this into two separate operations like so:

{
	"version": "1",
  "modify": [
    {
      "dn": "rbacName=defaultuser-modify-permission,ou=permissions,ou=pdp,ou=root-tenant,dc=didmos,dc=de",
      "attributes": [
        { "name": "rbacConstraint",
          "value": ["EXISTINGVALUE"],
          "op": "add"
        }
      ],
      "ignoreErrors": true
    },
    {
      "dn": "rbacName=defaultuser-modify-permission,ou=permissions,ou=pdp,ou=root-tenant,dc=didmos,dc=de",
      "attributes": [
        { "name": "rbacConstraint",
          "value": ["NEWVALUE"],
          "op": "add"
        }
      ]
    }
  ]
}

Placeholders and extension methods

  • Use dynamic built-in variables in placeholders
  • Support for method calls with arguments, using the result to resolve placeholders
  • Method arguments can in turn be built by a method call
  • Already existing methods (a minimal example follows this list):
    • ext_generate_uuid() -> str: generates a UUID
    • ext_explode_dn(dn: str, count: int) -> str: cuts the DN by count RDNs
    • ext_search(search_base_dn: str, search_filter: str, ret_attr: str = "-") -> list: searches in search_base_dn for search_filter and returns the attribute ret_attr or the DN (default) as a list
    • ext_intersection(*lists) -> list: returns the intersection of lists of strings
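
For illustration, a minimal functions block in the syntax used by the complex examples below, storing a generated UUID and a shortened DN in variables (the variable names are placeholders):

"functions": [
  { "name": "ext_generate_uuid", "args": "", "result": "UUID1" },
  { "name": "ext_explode_dn", "args": ["@forEach@", 2], "result": "tenant.DN" }
]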

Future JSON-Migration-features

  • Additional methods can be easily implemented to be used in configuration files

Complex JSON configuration examples

{
  "version": "1",
  "search": {
    "baseDN": "ou=tenants,ou=data,ou=root-tenant,dc=didmos,dc=de",
    "searchFilter": "(&(objectClass=didmosTenant)(objectClass=organizationalUnit))",

    "forEach": {
      "search": {
        "baseDN": "ou=pdp,@forEach@",
        "searchFilter": "(&(ou=permissions)(objectClass=organizationalUnit))",
        "scope": "one",
  
        "forEach": {
          "functions": [
            { "name": "ext_explode_dn", "args": ["@forEach@", 2], "result": "tenant.DN" },
            { "name": "ext_generate_uuid", "args": "", "result": "UUID1" },
            { "name": "ext_intersection", "result": "rbacResourceDN",
              "args": [
              { "name": "ext_search",
                "args": ["ou=people,ou=data,@tenant.DN@", "(&(objectClass=didmosPerson)(sn=Admin))"] },
              { "name": "ext_search",
                "args": ["ou=roles,@parent.baseDN@", "(rbacName=admin)", "rbacPerformer"] }
              ]
            }
          ],
          "add": [
            {
              "dn": "rbacName=@UUID1@,@forEach@",
              "attributes": [
                { "name": "objectClass", "value": ["rbacPermission"] },
                { "name": "rbacName", "value": "@UUID1@" },
                { "name": "rbacRoleDN", "value": "rbacName=defaultuser,ou=roles,@parent.baseDN@" },
                { "name": "rbacConstraint", "value": "attribute:sn" },
                { "name": "rbacOperation", "value": [ "read" ] },
                { "name": "rbacResourceDN", "value": "@rbacResourceDN@"}
              ]
            }
          ]
        },
        "else": {
          "functions": [
            {
              "name": "ext_generate_uuid",
              "args": "",
              "result": "UUID1"
            },
            {
              "name": "ext_search",
              "args": ["ou=roles,@parent.baseDN@", "(&(objectClass=rbacRole)(rbacDisplayName=defaultuser))"],
              "result": "DEFUALTUSERROLEDN"
            }
          ],
          "add": [
            { "dn": "ou=permissions,@parent.baseDN@",
              "attributes": [
              { "name": "objecClass",
                "value": ["organizationalUnit"],
              },
              { "name": "ou",
                "value": ["permissions"],
              }
            },
            {
              "dn": "rbacName=@UUID1@,ou=permissions,@parent.baseDN@",
              "attributes": [
                { "name": "rbacName",
                  "value": ["@UUID1@"],
                },
                { "name": "objecClass",
                  "value": ["rbacPermission"],
                },
                { "name": "rbacPermissionString",
                  "value": ["self"],
                }
                { "name": "rbacOperations",
                  "value": ["read", "write", "modify-del", "modify-replace", "modify-add"],
                }
                { "name": "rbacRoleDN",
                  "value": ["@DEFUALTUSERROLEDN@"],
                }
              ]
            }
          ]
        }
      }
    }
  }
}

In the example above:

  1. search for existing tenants in ou=tenants

  2. in each tenant, search for ou=permissions under ou=pdp. Here the placeholder @forEach@ is used, which is the tenant DN

  3. in each ou=permissions (obviously only one!), a couple of methods are first called to set variables used later: tenant.DN, UUID1, rbacResourceDN. Note that the latter uses method calls as arguments. Note also that a result which has just been created can be used in the evaluation of the next one.

  4. then finally an entry is created after all placeholders have been resolved

If ou=permissions does not yet exist:

  1. add ou=permissions
  2. add one permission object for the defaultuser role

Modify, delete and rename are supported as well and shown in the next example:

{
  "version": "1",
  "search": {
    "baseDN": "ou=tenants,ou=data,ou=root-tenant,dc=didmos,dc=de",
    "searchFilter": "(&(objectClass=didmosTenant)(objectClass=organizationalUnit))",

    "forEach": {
      "search": {
        "baseDN": "ou=pdp,@forEach@",
        "searchFilter": "(&(ou=permissions)(objectClass=organizationalUnit))",
        "scope": "one",
  
        "forEach": {
          "add": [
            {
              "dn": "rbacName=defaultuser-permission,@forEach@",
              "attributes": [
                { "name": "objectClass", "value": ["rbacPermission"] },
                { "name": "rbacName", "value": "defaultuser-permission" },
                { "name": "rbacRoleDN", "value": "rbacName=defaultuser,ou=roles,@parent.baseDN@" },
                { "name": "rbacOperation", "value": [ "delete", "modify-add", "modify-del", "modify-replace",
                                    "read", "readHistory", "write"
                                  ] },
                { "name": "rbacPermissionString", "value": "self" }
              ]
            },
            {
              "dn": "rbacName=groupMember-permission,@parent.baseDN@",
              "attributes": [
                { "name": "objectClass", "value": ["rbacPermission"] },
                { "name": "rbacName", "value": "groupMember-permission" },
                { "name": "rbacRoleDN",
                  "value": {
                  "search": {
                    "baseDN": "",
                    "searchFilter": "(objectClass=person)",
                    "attributes": ["entryDN"]
                  }
                  }
                },
                { "name": "rbacOperation", "value": ["read"] },
                { "name": "rbacConstraint", "value": ["attribute:cn", "member:self"] },
                { "name": "rbacPermissionFilter", "value": "(&(objectClass=didmosGroup)(member=self))" }
              ]
            }
          ],
          "modify": [
            {
              "dn": "rbacName=admin-permission,@parent.baseDN@",
              "attributes": [
                { "name": "rbacOperation",
                  "value": ["delete", "modify-add", "modify-del", "modify-replace"],
                  "op": "add"
                },
                { "name": "rbacOperation",
                  "value": ["create"],
                  "op": "delete"
                },
                { "name": "description",
                  "op": "delete"
                },
                { "name": "rbacPermissionFilter",
                  "value": ["(objectClass=*)"],
                  "op": "replace"
                }
              ]
            }
          ],
          "delete": [
            { "dn": "rbacName=dummy,@forEach@" }
          ],
          "rename": [
            { "dn": "rbacName=prod,@forEach@",
              "newdn": "rbacName=test,@forEach@" }
          ]
        }
      }
    }
  }
}

Usage

The migration script's name is migration.py and it resides at the top level, in the same directory as this README file.

usage: migration.py [-h] (-l LDIF_FILE | -j CONFIG_FILE) [-H URI] [-D BIND]
                    [-w PASSWORD OR FILE] [-W] [-d] [-v] [-c]

optional arguments:
  -h, --help            show this help message and exit
  -l LDIF_FILE, --ldif LDIF_FILE
                        The ldif to import
  -j CONFIG_FILE, --json CONFIG_FILE
                        The json configuration file
  -H URI, --uri URI     The connection URI for the server (LDAP or HTTP)
  -D BIND, --bind BIND  The bind DN that is allowed to read the LDAP
                        configuration
  -w PASSWORD, --password PASSWORD
                        The password to authenticate the script for reading
                        the configuration
  -W, --ask-password    Ask for the password to authenticate the script for
                        reading the configuration
  -d, --dryrun          If set to true, planned changes are only shown but no
                        changes are made to the LDAP
  -v, --verbose         If set to true, changes are shown
  -c, --continue        Continue processing even if errors occur
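
For example, a verbose dry run of a JSON migration against a local server could look like this (the configuration file name and bind DN are placeholders):

python3 migration.py -j 001-permissions.json -H ldap://localhost:1389 -D "cn=manager,dc=didmos,dc=de" -W -d -v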

To orchestrate many migrations from different files, a shell script exists.

This script calls migration.py once per LDIF/JSON file and is able to replace variables within the files, either from environment variables or from a file containing key-value pairs. If a variable name is not set, the variable in the LDIF file is set to an empty string. The password is the LDAP password of the system.

There are three options available for the variable replacement.

  • Start the migrations.sh script with the variable names to be replaced.
    • Syntax: didmos2-openldap/migrations.sh PASSWORD '$VARIABLE1 $VARIABLE2'
    • All given variables are replaced.
  • Start the migrations.sh script without a variable set.
    • Syntax: didmos2-openldap/migrations.sh PASSWORD
    • All set environment variables are replaced. This could lead to unwanted behaviour if a variable has a name that collides with common variable names in the Linux environment, so be cautious.
  • Start the migrations.sh script with a path to a file in which the variables are set.
    • Syntax: didmos2-openldap/migrations.sh PASSWORD /PATH/to/file
    • All variables from the file are replaced in the LDIF files.

usage: migrations.sh PASSWORD [FILE|VARIABLES]

required arguments:
  PASSWORD              The LDAP password of the system.

optional arguments:
  VARIABLES             A list of variable names which are replaced during the migration.
  FILE                  Path to a file containing key-value pairs of variables.
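
For illustration, assuming the key-value file uses one VARIABLE=value pair per line (the file format, path and contents here are assumptions), a variables file and the corresponding call might look like this:

# /etc/ldap/migration-vars
VARIABLE1=dc=didmos,dc=de
VARIABLE2=demo

didmos2-openldap/migrations.sh secret /etc/ldap/migration-vars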