Time-Series Databases

Currently, time-series data can be stored in TimescaleDB/PostgreSQL, InfluxDB (version 1.7 or 2.0), or Apache Kafka.

More storage destinations will be added in upcoming versions (for details on planned features, please refer to the Roadmap section).

To edit Time-Series Database settings, use the menu Settings/Time-Series Databases.

Common settings

This section describes settings common to multiple types of time-series databases.

Settings for the intermediate in-memory queue.

When data values are received from the OPC UA Server by the collector engine, they are written to the back of an in-memory intermediate queue. This decouples the data collection and storing operations, so they do not block each other and happen asynchronously. A second purpose is to avoid data loss when the connection to the time-series database is temporarily lost. To avoid out-of-memory situations, the size of the queue (the number of data values stored in it) is limited by the option Max Queue Size. When the queue is full and more data is received, the oldest data is removed from the queue and becomes lost.

The storage engine retrieves data from the front of the queue and writes it into the TSDB in batches. If there is not enough data to compose a full batch, it waits up to the time period defined by the option Max Write Interval, and then writes whatever data is available. This puts a limit on the delay of delivery of data to the TSDB.
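The queue behaviour described above can be sketched as follows. This is a minimal illustration only; the class and method names are hypothetical, not ogamma Visual Logger's actual API, and the Max Write Interval timer logic is omitted for brevity.

```python
from collections import deque

class IntermediateQueue:
    """Bounded in-memory queue decoupling collection from storage (sketch)."""

    def __init__(self, max_queue_size, batch_size):
        self.buf = deque()
        self.max_queue_size = max_queue_size
        self.batch_size = batch_size

    def push(self, value):
        # Collector side: when the queue is full, the oldest value is
        # dropped and becomes lost, as described above.
        if len(self.buf) >= self.max_queue_size:
            self.buf.popleft()
        self.buf.append(value)

    def next_batch(self):
        # Storage side: take up to batch_size values from the front.
        # A real engine would also flush a partial batch after the
        # Max Write Interval elapses.
        batch = []
        while self.buf and len(batch) < self.batch_size:
            batch.append(self.buf.popleft())
        return batch

# With Max Queue Size = 3, pushing five values drops the two oldest:
q = IntermediateQueue(max_queue_size=3, batch_size=2)
for v in [1, 2, 3, 4, 5]:
    q.push(v)
print(q.next_batch())  # [3, 4]
```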

The option Read Batch Size is used when data is read back from the TSDB to serve requests coming from Grafana's SimpleJson data source plugin via the REST endpoint. It is used only when the TSDB type is Apache Kafka.

The option Connection Wait Time In Seconds defines the timeout for the initial connection.

The option Timeout applies to read/write operations.

Settings for OPC UA to TSDB mapping.

These settings define some parameters determining how time-series records are written into the underlying database. They are described in detail in the sections specific to each type of time-series database.

Briefly, there are a few settings that participate in composing TSDB record fields such as measurement and tags in the case of InfluxDB, or topic name, key and partition in the case of Apache Kafka.

  • Columns Topic Name and Key Name in the Logged Variables table allow fine-tuning settings individually at each tag level.
  • If Topic Name or Key Name in the Logged Variables table are not defined explicitly, then the options Topic Name Generation Mode and Key Name Generation Mode in the TSDB configuration settings define how to generate default values. The properties listed below can participate in the auto-generation process:
    • Properties tag, name or id of the OPC UA Server settings;
    • Properties tag or name of the Variable Group;
    • Property Display Name or id of the Logged Variable settings.

How all these options are used to generate default values for TSDB record fields is described in detail in the sections specific to each TSDB type.
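As a rough illustration of how such defaults might be composed from the properties listed above: the sketch below is hypothetical, since the actual algorithm depends on the selected generation mode and is described in the TSDB-specific sections; the function names and the combination rules are assumptions for illustration only.

```python
# Hypothetical default-name generation from server/group/variable properties.
# Not the product's actual algorithm; for illustration only.

def default_topic_name(server, group):
    # One plausible mode: combine the server name (falling back to tag or id)
    # with the variable group name. Spaces are replaced because Kafka topic
    # names cannot contain them.
    parts = [server.get("name") or server.get("tag") or str(server["id"])]
    if group.get("name") or group.get("tag"):
        parts.append(group.get("name") or group.get("tag"))
    return ".".join(parts).replace(" ", "_")

def default_key_name(variable):
    # Another plausible mode: use Display Name, falling back to the id.
    return variable.get("displayName") or str(variable["id"])

server = {"id": 1, "name": "UaServer", "tag": "srv"}
group = {"name": "Group1"}
variable = {"id": 42, "displayName": "Temperature"}
print(default_topic_name(server, group))  # UaServer.Group1
print(default_key_name(variable))         # Temperature
```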



TimescaleDB / PostgreSQL

PostgreSQL versions 10 and 11 were tested and should work.

The start page for downloading pre-built binaries for PostgreSQL has links to installation packages for various operating systems: https://www.postgresql.org/download/

Installers for Windows can be found here: https://www.postgresql.org/download/windows/.

For Ubuntu, instructions can be found here: https://www.postgresql.org/download/linux/ubuntu/.

For Debian, instructions can be found here: https://www.postgresql.org/download/linux/debian/.



To manage the PostgreSQL database using a web GUI frontend, you can install pgAdmin.

  • Using your favorite PostgreSQL management tool, create a user which will be used to access the database from ogamma Visual Logger. The default username/password in ogamma Visual Logger’s configuration are ogamma/ogamma.
  • Create a database (default name is ogammalogger) and assign the user ogamma as its owner. Alternatively, you can grant the right to create databases to the user ogamma, and the database will be created when ogamma Visual Logger starts for the very first time. It will also create the required tables in this database.
  • In case of automatic deployment, adjust the user credentials and database connection parameters in the ogamma Visual Logger configuration file data/config.json.

Example of configuration follows below:

{
  "timeSeriesDataBases": [
    {
      "type": "PostgreSQL",
      "displayName": "PostgreSQL database at host tsdb",
      "host": "localhost",
      "port": 5432,
      "dbName": "ogammalogger",
      "userName": "ogamma",
      "password": "ogamma",
      "connectionWaitTimeInSeconds": 10,
      "numberOfValuesInWriteBatch": 1000,
      "maxWriteInterval": 500,
      "timeout": 2000
    }
  ]
}
  • To configure settings using the Web GUI, open the dialog window via menu Settings/Time-Series Databases, add or edit the desired record, open it for editing and select the database type PostgreSQL.

Mapping from OPC UA to SQL database, table and columns.

For the PostgreSQL type, data is written into the database defined by the option path in the corresponding record of the Time-Series Databases dialog window (default name is ogammalogger). The table name and columns are not configurable.

The table name is values, and it has the following columns:

  • time for the timestamp, with millisecond precision.
  • value for the value. If an OPC UA variable value is of a numeric type, the value is converted to float before writing. For other data types, the value is converted to the String type and then the length of that string is written.
  • status for the OPC UA status;
  • sourceid, which identifies the corresponding OPC UA variable record in the configuration database (default name ogammaloggerconfig), table loggingNodes, related via its column id.
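The relationship between the values table and the loggingNodes table can be illustrated with the following sketch, which reproduces the schema in SQLite purely for demonstration; the real tables live in PostgreSQL and the column types here are simplified.

```python
import sqlite3

# Illustrative re-creation of the schema described above (simplified types).
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE loggingNodes (id INTEGER PRIMARY KEY, displayName TEXT);
CREATE TABLE "values" (
    time TEXT,        -- timestamp, millisecond precision
    value REAL,       -- numeric OPC UA values are converted to float
    status INTEGER,   -- OPC UA status code
    sourceid INTEGER REFERENCES loggingNodes(id)
);
""")
con.execute("INSERT INTO loggingNodes VALUES (1, 'Temperature')")
con.execute("INSERT INTO \"values\" VALUES ('2020-01-01 00:00:00.000', 21.5, 0, 1)")

# Joining on sourceid = id recovers which variable a stored value belongs to:
row = con.execute("""
    SELECT n.displayName, v.time, v.value, v.status
    FROM "values" v JOIN loggingNodes n ON v.sourceid = n.id
""").fetchone()
print(row)  # ('Temperature', '2020-01-01 00:00:00.000', 21.5, 0)
```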

Connection in secure mode

Low-level communication with the PostgreSQL database is implemented using the library libpq. Whether to use SSL in the TCP/IP connection, and how to use it, is controlled by the option sslmode, passed on the call to open a connection. Possible values for this option are described in the documentation for the libpq library at https://www.postgresql.org/docs/11/libpq-connect.html. An explicit value for this option can be set in the JSON formatted field Database specific settings. If it is not defined explicitly, then its value depends on the checkbox Use Secure Mode: if it is ON, the option sslmode is set to require, otherwise it is set to prefer.
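The resolution logic described above can be sketched as follows; the function name is illustrative, not part of the product's API.

```python
def resolve_sslmode(db_specific_settings, use_secure_mode):
    """Resolve the libpq sslmode option as described above (sketch)."""
    # An explicit "sslmode" in the Database specific settings field wins.
    if "sslmode" in db_specific_settings:
        return db_specific_settings["sslmode"]
    # Otherwise the Use Secure Mode checkbox decides.
    return "require" if use_secure_mode else "prefer"

print(resolve_sslmode({}, True))                          # require
print(resolve_sslmode({}, False))                         # prefer
print(resolve_sslmode({"sslmode": "verify-full"}, False)) # verify-full
```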

Using PostgreSQL instance hosted at Microsoft Azure

If the extension TimescaleDB is not enabled in the instance of the PostgreSQL server hosted at Microsoft Azure (which is the default configuration), initialization of the database will fail with the error Connection lost to server.

To resolve this issue, the extension TimescaleDB needs to be enabled, following the instructions at https://docs.microsoft.com/en-us/azure/postgresql/concepts-extensions

Using instance of TimescaleDB hosted by Timescale

If the instance of TimescaleDB is hosted at https://portal.timescale.cloud/, then after initialization it might not have the database postgres; instead, the database defaultdb may exist. This causes initialization of the TSDB database and tables by ogamma Visual Logger for OPC to fail (initially it opens a connection using the default database).

To fix this initialization issue, the value of the JSON formatted field Database specific settings needs to be modified: the sub-field defaultdb should be set to the existing default database name.

In the JSON code block below the default value is displayed:

{"defaultdb": "postgres", "sslmode": "prefer"}

In the screenshot below the modified value is displayed:

InfluxDB version 1.7.


To install InfluxDB version 1.7, refer to its documentation page at https://docs.influxdata.com/influxdb/v1.7/introduction/getting-started/.


The database (by default called ogammadb) will be created by ogamma Visual Logger automatically if it does not exist.


To manage InfluxDB version 1.7 using a web GUI frontend, you can install TimeSeries Admin.

In case of automatic deployment, settings can be configured in config.json file as in the example below:

{
  "timeSeriesDataBases": [
    {
      "type": "InfluxDb",
      "version": "1.7",
      "displayName": "InfluxDb 1.7 at localhost",
      "host": "localhost",
      "port": 8086,
      "useSecureMode": false,
      "dbName": "ogammadb",
      "userName": "ogamma",
      "password": "ogamma",
      "connectionWaitTimeInSeconds": 15,
      "maxQueueSize": 20000,
      "numberOfValuesInWriteBatch": 5000,
      "maxWriteInterval": 500
    }
  ]
}

To configure from the web GUI, use the menu Settings/Time-Series Databases.

Mapping OPC UA to InfluxDB.

In InfluxDB, each time-series record has a few parts: measurement, tags and fields.

What goes into the measurement part is defined by the option measurement in the section Database specific settings, field json:

To open the JSON editor, click on the Edit button located to the left of it.

  • If the field measurement is set to the value "[TN]" (default), the value defined in the column Topic Name of the Logged Variables table will be used as the InfluxDB measurement. Note that when not explicitly defined, values for the Topic Name column are auto-generated, in a way depending on the option Topic Name Generation Mode.
  • If any other value is set, then that value is used as the measurement for all records written into InfluxDB.

  • The InfluxDB tags part is assigned the value defined in the column Key Name of the Logged Variables table. If it is not defined, then the default tag name is set to n and its value is supplied according to the option Key Name Generation Mode: it can be either the value from the id column or from the Display Name column. If Key Name is defined, the tag name portion can be omitted; in this case the tag name is set to n. The tag name can also be defined explicitly, and moreover multiple tag name-value pairs separated by commas can be defined, for example: Country=Canada,City=Edmonton,Parameter=Temperature.

  • The InfluxDB fields part for numeric values is composed like v=12.5,q=0i, where v is the field name for the value of the OPC UA variable, and q means the OPC UA status code. All other data types are written as strings, with the field name sv, for example: sv="This is string type value",q=0i.

  • Timestamp precision. Timestamps can be written into InfluxDB with different precision, which can be set in Database specific settings, option json.

    Possible values are: s for seconds, ms for milliseconds, u for microseconds and ns for nanoseconds. The default value is ms (milliseconds).
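Putting the pieces above together, composing a line-protocol record might look like the sketch below. This is illustrative only: the real InfluxDB line protocol also requires escaping of special characters in measurements, tags and string fields, which is omitted here, and millisecond timestamps are assumed.

```python
def line_protocol(measurement, tags, value, status, ts_ms):
    """Compose an InfluxDB line-protocol record as described above (sketch)."""
    tag_str = ",".join(f"{k}={v}" for k, v in tags.items())
    if isinstance(value, (int, float)) and not isinstance(value, bool):
        # Numeric values go into field v, with the status code in q.
        fields = f"v={float(value)},q={status}i"
    else:
        # All other types are written as a string in field sv.
        fields = f'sv="{value}",q={status}i'
    return f"{measurement},{tag_str} {fields} {ts_ms}"

print(line_protocol("d", {"n": "Byte"}, 12.5, 0, 1577836800000))
# d,n=Byte v=12.5,q=0i 1577836800000
```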

InfluxDB version 2.0.


To install InfluxDB version 2.0, refer to its documentation page at https://v2.docs.influxdata.com/v2.0/get-started/


After installation, InfluxDB version 2.0 can be initialized via its web GUI or from the command line interface. Instructions can be found at the link above.


InfluxDB version 2.0 comes with a built-in web GUI frontend, which is used not only for configuration, but also provides rich functionality such as dashboards, monitoring, alerts, etc.

An example of the configuration section in the ogamma Visual Logger configuration file follows below:

{
  "timeSeriesDataBases": [
    {
      "id": 1,
      "type": "InfluxDb",
      "version": "2.0",
      "displayName": "InfluxDb 2.0 at localhost",
      "host": "localhost",
      "port": 9999,
      "useSecureMode": false,
      "dbName": "ogammadb",
      "userName": "ogamma",
      "password": "ogamma123",
      "organization": "ogamma",
      "bucket": "ogamma",
      "token": "secrettoken",
      "precision": "ms",
      "initScript": "./windows-debug-data/init-db-postgres.sql",
      "connectionWaitTimeInSeconds": 15,
      "maxQueueSize": 20000,
      "numberOfValuesInWriteBatch": 5000,
      "maxWriteInterval": 500
    }
  ]
}

Steps to configure InfluxDB 2.0 running in a Docker container.

Once the container is running, the web configuration frontend will be available on port 9999:

You will need to configure the organization, bucket and access token.

Click on the Get Started button. On the next page, enter user credentials, the organization and the bucket name:

On the next page click on the Quick Start button.

On the next page, in the left side panel, click on the Load Data icon and select the Tokens command.

On the next page click on the link to view the token for the user: ogamma's token.

Click on the Copy to clipboard link to copy the token.

Now you can navigate to the ogamma Visual Logger configuration page at port 4880, open the dialog window to configure database settings (via menu Settings/Time-Series Databases), and select an existing record or create a new record with the InfluxDb 2.0 type.

Then open the JSON editor by clicking on the Edit button, and enter the organization, bucket and token:

After saving the settings from the JSON editor, you can check whether the connection settings are correct by clicking on the Test connection button.

After configuration of the time-series database settings is complete, the application instance needs to be configured to use this time-series database to store data. For that, open the Application instances dialog window via menu Settings/Instances, select the instance record from the table, and open the Instance Settings dialog window by clicking on the Edit icon. Then, in the field TSDB Configuration, select the required record.

Now you can check whether values are written to the InfluxDB 2.0 database by opening the Data Explorer page in the InfluxDB web frontend and then selecting the bucket, measurement, field and tag:

Grafana data source plugin for InfluxDB 2.0

When the newest version of Grafana is used (6.7.1, for example), a beta version of the data source plugin for InfluxDB 2.0 is available at https://v2.docs.influxdata.com/v2.0/visualize-data/other-tools/grafana/. In the screenshot below you can see the dashboard editing page:

Here is sample query text:

from(bucket: "ogamma")
  |> range($range)
  |> filter(fn: (r) => r._measurement == "d")
  |> filter(fn: (r) => r._field == "v")
  |> filter(fn: (r) => r.n == "Byte")

Mapping OPC UA to InfluxDB.

Mapping is done in the same way as for InfluxDB 1.7.

Apache Kafka


In case of automatic deployment, configuration settings can be set in the file ./data/config.json before the first start of the application. A sample configuration for the Apache Kafka related node is provided below. This configuration should work to connect to a Kafka instance running in a Docker container created using the file docker-kafka/docker-compose.yml (based on the image https://hub.docker.com/r/bitnami/kafka).

{
  "timeSeriesDataBases": [
    {
      "type": "Kafka",
      "displayName": "Kafka database at localhost",
      "connectionWaitTimeInSeconds": 15,
      "maxQueueSize": 500000,
      "numberOfValuesInReadBatch": 1000,
      "numberOfValuesInWriteBatch": 1000,
      "topicNameGenerationMode": "6",
      "keyNameGenerationMode": "1",
      "maxWriteInterval": 500,
      "timeout": 10000,
      "Kafka": {
        "sasl.username": "user",
        "sasl.password": "bitnami",
        "metadata.broker.list": "localhost:9092",
        "broker.version.fallback": "",
        "sasl.mechanisms": "PLAIN",
        "security.protocol": "PLAINTEXT",
        "client.id": "VisualLogger",
        "batch.num.messages": 1000,
        "queue.buffering.max.ms": 500,
        "Producer": {
          "timeout": 2000,
          "auto.commit.interval.ms": 500,
          "client.id": "VisualLogger-Producer"
        },
        "Consumer": {
          "timeout": 500,
          "client.id": "VisualLogger-Consumer",
          "group.id": "VisualLogger",
          "enable.auto.commit": false,
          "auto.offset.reset": "latest",
          "fetch.wait.max.ms": 100
        }
      }
    }
  ]
}

*  ``type`` - must be set to ``Kafka``.
*  ``Kafka`` - sub-nodes of this node define Kafka-specific settings. They are passed to set options in the Configuration class of the underlying library ``librdkafka``. The full list of possible options and their descriptions can be found here: https://docs.confluent.io/2.0.0/clients/librdkafka/CONFIGURATION_8md.html. Note that options right under the node ``Kafka`` are applied for both Producer and Consumer. Options under the sub-node ``Producer`` are applied for the Producer only, and accordingly options from the sub-node ``Consumer`` are applied for the Consumer only. In most cases only a few options need to be modified:

   *  ``sasl.username``, ``sasl.password`` - user name and password used to connect to the broker;
   * ``bootstrap.servers`` or ``metadata.broker.list`` - comma separated list of brokers to connect to. If none of these fields is set, then values from the fields ``host`` and ``port`` are used to connect to the broker.
   * ``security.protocol`` - protocol used to communicate with brokers (``SASL_SSL`` for encrypted communication and ``PLAINTEXT`` for non-encrypted communication).
   * ``sasl.mechanisms`` - defines client authentication modes.

  .. tip:: Connection settings for cloud hosted instances of Kafka usually are provided on their configuration / management console. For example, for Kafka instances hosted by `Confluent <https://confluent.cloud/>`_ all settings can be found under ``CLI & client configuration`` page:
     .. thumbnail:: images/kafka-confluent-config.png

    .. note:: option ``ssl.ca.location`` can be omitted.
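The split of the ``Kafka`` node into producer and consumer option sets, as described above, can be sketched as follows. This is a minimal illustration; whether a sub-node value overrides a common value with the same key is an assumption here, not documented behaviour.

```python
def split_kafka_config(kafka_node):
    """Split the 'Kafka' config node into producer and consumer option dicts.

    Keys directly under the node apply to both sides; keys under the
    'Producer'/'Consumer' sub-nodes apply to one side only (sketch).
    """
    common = {k: v for k, v in kafka_node.items()
              if k not in ("Producer", "Consumer")}
    # Assumption: sub-node values override common values with the same key.
    producer = {**common, **kafka_node.get("Producer", {})}
    consumer = {**common, **kafka_node.get("Consumer", {})}
    return producer, consumer

# Using a fragment of the sample configuration above:
kafka = {
    "metadata.broker.list": "localhost:9092",
    "client.id": "VisualLogger",
    "Producer": {"client.id": "VisualLogger-Producer", "timeout": 2000},
    "Consumer": {"client.id": "VisualLogger-Consumer", "group.id": "VisualLogger"},
}
producer, consumer = split_kafka_config(kafka)
print(producer["client.id"])  # VisualLogger-Producer
print(consumer["client.id"])  # VisualLogger-Consumer
```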

In order to configure settings via the web GUI, open menu Settings/Time-Series Databases. Note that for Apache Kafka, various settings at the producer and consumer level can be set in the section Database specific settings, field json. Open the JSON editor by clicking on the Edit button for more convenient editing of JSON format settings.

Mapping from OPC UA to Kafka Topic, Key names and Partitions.

By default, the OPC UA Server node’s name is used as the Kafka topic name, the OPC UA variable’s Display Name is used as the key name, and the partition is set to 0.


Topic names cannot have spaces; therefore, the name field for the OPC UA Server should not have spaces.


Some brokers do not allow automatic creation of topics. In those cases you need to create topics using other tools before starting logging to Kafka.

Topic name, key name and partition can also be configured for each variable (the columns Topic Name, Partition and Key Name in the Logged Variables table can be edited). To reset to default values, enter empty values.

It is also possible to use other values as the topic name and key name by modifying the options Topic Name Generation Mode and Key Name Generation Mode.


The payload of Kafka messages depends on the data type of the OPC UA variable value. If the value is numeric, then it is converted to float and written in text format. If the value is not of a numeric type, then its string representation is written. The length of strings is limited to 256 bytes.
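The payload formatting described above can be sketched as follows. This is illustrative only: the function name is hypothetical, and the truncation here is by characters, while the product limits string length to 256 bytes.

```python
def kafka_payload(value):
    """Format an OPC UA value as a Kafka message payload (sketch)."""
    if isinstance(value, (int, float)) and not isinstance(value, bool):
        # Numeric values are converted to float and written in text format.
        return str(float(value))
    # Other values: string representation, truncated (by characters here;
    # the product limits string length to 256 bytes).
    return str(value)[:256]

print(kafka_payload(42))      # 42.0
print(kafka_payload(12.5))    # 12.5
print(kafka_payload("hello")) # hello
```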

TSDB database access optimization.

According to our preliminary tests, the throughput of ogamma Visual Logger (the number of values collected and written to the TSDB per second) is mostly limited by the database. The highest throughput can be achieved with InfluxDB. This version was tested at rates of up to 100,000 values per second (5,000 tags updating at a 50 ms interval).

With fine-tuning of the TSDB database (or splitting the load across multiple TSDB instances if required), a single instance of ogamma Visual Logger should be able to provide a throughput of 1,000,000 values per second.