A timestamp records the date and time of each data point or event in a data acquisition system. Correct timestamps in process control data acquisition are essential for ensuring the validity, reliability, and usability of the data, and for enabling effective process monitoring, control, and optimization. Precise timestamps enable accurate analysis and interpretation of the data, such as identifying trends, patterns, anomalies, and correlations. They also facilitate synchronization and integration of data from multiple sources, such as different sensors, instruments, devices, or systems.

According to the OPC UA standard, each value of an OPC UA variable is associated with two timestamps: the Source Timestamp and the Server Timestamp.

The Source Timestamp reflects the timestamp applied to a variable value by the data source; in other words, it is the time when the data value was measured at the lowest-level data source. Note that this data source can be located either on the same machine where the OPC UA Server runs, or in a separate device with its own system clock.

The Server Timestamp reflects the time when the Server received a variable value or knew it to be accurate.

If the server reads data values itself, the source and server timestamps will usually be equal. If the OPC UA Server gets data from another device which supports timestamps, the source timestamp can differ significantly from the server timestamp.

It is important to note that the system clocks of all devices participating in the data acquisition path should be in sync. Ideally, system clocks should be synchronized with time servers, for example using the NTP protocol.

To avoid confusion and conversion errors, all timestamps in OPC UA are UTC timestamps: no time zones, no daylight saving time. Timestamp values are usually converted to the user’s local time by the application displaying them.
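The UTC-only rule can be illustrated with standard-library Python (the variable names are illustrative, not part of any OPC UA or Idako API):

```python
from datetime import datetime, timezone

# A timestamp as OPC UA delivers it: always UTC, no time zone ambiguity.
source_ts = datetime(2024, 3, 15, 9, 30, 0, tzinfo=timezone.utc)

# A displaying application converts to the user's local time zone only
# at presentation time; the stored value stays UTC.
local_ts = source_ts.astimezone()  # uses the system's local time zone

print(source_ts.isoformat())  # 2024-03-15T09:30:00+00:00
print(local_ts.isoformat())   # same instant, rendered in local time
```

Note that the conversion changes only the rendering: `local_ts` and `source_ts` compare equal because they denote the same instant.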

When an OPC UA client creates a subscription and adds monitored items to it, it can specify which timestamps it needs to receive: only the source timestamp, only the server timestamp, or both.

Our Industrial Data Collector (hereafter Idako) creates subscriptions and monitored items requesting both timestamps. When data values are forwarded to the destination time-series database or MQTT broker, how the timestamps are written depends on the destination type.

When the destination is a SQL database, the source timestamp is written to the “time” column of the values table. If the source timestamp is not defined, the server timestamp is used instead. It is also possible to write a so-called client timestamp to the “client_time” column. The client timestamp is the time when the data value was received by Idako from the OPC UA Server. It can be useful when the server or source timestamps are unreliable or not accurate enough.
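The fallback rule for the “time” column can be sketched in a few lines of Python (a hypothetical helper, not Idako’s actual code):

```python
from datetime import datetime, timezone
from typing import Optional

def pick_time_column(source_ts: Optional[datetime],
                     server_ts: datetime) -> datetime:
    """Value for the SQL "time" column: the source timestamp if the
    server supplied one, otherwise the server timestamp as a fallback."""
    return source_ts if source_ts is not None else server_ts

src_ts = datetime(2024, 3, 15, 9, 30, 0, tzinfo=timezone.utc)
server_ts = datetime(2024, 3, 15, 9, 30, 5, tzinfo=timezone.utc)

print(pick_time_column(src_ts, server_ts))  # source timestamp wins when present
print(pick_time_column(None, server_ts))    # falls back to the server timestamp
```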

When the destination is InfluxDB or Confluent / Redpanda / Apache Kafka, the source timestamp is used as the record timestamp. If the payload is composed using a template, all three timestamps can also be included in the payload using the placeholders “[SourceTimestamp]”, “[ServerTimestamp]” and/or “[ClientTimestamp]”.

When the destination is an MQTT broker, timestamps cannot be attached to published messages as metadata, because the MQTT protocol does not specify how timestamps should be passed from the publisher to the broker; they can only be included in the payload. For that, the payload should be defined using a template with placeholders for the timestamps: “[SourceTimestamp]”, “[ServerTimestamp]” and/or “[ClientTimestamp]”.
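The placeholder names above come from the Idako documentation; how they are substituted into a payload template can be sketched in Python roughly like this (the rendering function itself is a hypothetical illustration):

```python
from datetime import datetime, timezone

def render_payload(template: str, source_ts: datetime,
                   server_ts: datetime, client_ts: datetime) -> str:
    """Replace the timestamp placeholders in a payload template
    with ISO-8601 renderings of the three timestamps."""
    return (template
            .replace("[SourceTimestamp]", source_ts.isoformat())
            .replace("[ServerTimestamp]", server_ts.isoformat())
            .replace("[ClientTimestamp]", client_ts.isoformat()))

t = datetime(2024, 3, 15, 9, 30, 0, tzinfo=timezone.utc)
payload = render_payload('{"value": 42, "ts": "[SourceTimestamp]"}', t, t, t)
print(payload)  # {"value": 42, "ts": "2024-03-15T09:30:00+00:00"}
```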

Note that in some cases duplicate records (with the same value and timestamp for the same variable) can be written to the database. This can happen when Idako disconnects from the server and reconnects, or when it restarts. In these cases it is possible that a variable’s data value is still the same, and the source timestamp is the same as before the reconnect. This can cause a duplicate-record error in SQL databases if the values table is configured with a unique index over the source_id and time columns. To resolve this issue, Idako has configuration settings that allow duplicate records (refer to the User Manual for details).
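The failure mode, and one common way a database can be told to tolerate it, can be reproduced with an in-memory SQLite table (the table and column names are illustrative; Idako’s actual schema and settings may differ):

```python
import sqlite3

# Values table with a unique index over (source_id, time),
# mimicking the configuration described above.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE vals (source_id INTEGER, time TEXT, value REAL)")
con.execute("CREATE UNIQUE INDEX ix_vals ON vals (source_id, time)")

row = (1, "2024-03-15T09:30:00Z", 42.0)
con.execute("INSERT INTO vals VALUES (?, ?, ?)", row)

# After a reconnect, the same (value, source timestamp) pair may arrive again:
try:
    con.execute("INSERT INTO vals VALUES (?, ?, ?)", row)
except sqlite3.IntegrityError as e:
    print("duplicate rejected:", e)

# One way to tolerate duplicates is to skip the conflicting insert:
con.execute("INSERT OR IGNORE INTO vals VALUES (?, ?, ?)", row)
print(con.execute("SELECT COUNT(*) FROM vals").fetchone()[0])  # still 1 row
```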

Another important point regarding OPC UA timestamps is that OPC UA allows timestamps to be defined with very high resolution: down to 10-picosecond precision. Target databases usually do not support such high resolution, so Idako allows configuring the precision with which timestamps are written: seconds, milliseconds, or microseconds.
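What writing with a reduced precision means in practice can be sketched with a small truncation helper (a hypothetical function; Idako’s actual option names and rounding behaviour may differ):

```python
from datetime import datetime, timezone

def truncate(ts: datetime, precision: str) -> datetime:
    """Drop sub-precision digits before writing the timestamp out."""
    if precision == "seconds":
        return ts.replace(microsecond=0)
    if precision == "milliseconds":
        return ts.replace(microsecond=ts.microsecond // 1000 * 1000)
    if precision == "microseconds":
        return ts  # Python's datetime already stops at microseconds
    raise ValueError(f"unknown precision: {precision}")

t = datetime(2024, 3, 15, 9, 30, 0, 123456, tzinfo=timezone.utc)
print(truncate(t, "seconds").microsecond)       # 0
print(truncate(t, "milliseconds").microsecond)  # 123000
```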

It is also worth mentioning that different storage destinations represent timestamps in different ways. Idako allows fine-tuning the timestamp format: it can be an integer representing so-called Unix epoch time (the number of seconds, milliseconds, or microseconds since 1 January 1970, depending on the precision), a string formatted according to the ISO-8601 standard, or an OPC UA DateTime value (an integer number of 100-nanosecond intervals since 1 January 1601).
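The three representations above can be derived from one UTC instant with standard-library Python; only the epoch offset constant between 1601 and 1970 needs to be computed:

```python
from datetime import datetime, timezone

# Offset between the OPC UA DateTime epoch (1601-01-01) and the
# Unix epoch (1970-01-01), in seconds.
EPOCH_DIFF_S = int((datetime(1970, 1, 1, tzinfo=timezone.utc)
                    - datetime(1601, 1, 1, tzinfo=timezone.utc)).total_seconds())

t = datetime(2024, 3, 15, 9, 30, 0, tzinfo=timezone.utc)

unix_ms = int(t.timestamp() * 1000)   # Unix epoch time, millisecond precision
iso8601 = t.isoformat()               # ISO-8601 string
# OPC UA DateTime: 100-nanosecond ticks since 1601-01-01
opcua_dt = (int(t.timestamp()) + EPOCH_DIFF_S) * 10_000_000

print(unix_ms, iso8601, opcua_dt)
```

The 1601-to-1970 offset works out to 11,644,473,600 seconds, the same constant used by the Windows FILETIME format, which shares the OPC UA DateTime epoch.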