Project Description
The Deformation Monitoring Package (DMPACK) is a free and open source software package for sensor control and automated time series processing in geodesy and geotechnics. The package consists of a library `libdmpack` and additional programs based on it, which serve as a reference implementation of solutions to various problems in deformation monitoring, such as:
- sensor control
- sensor data parsing and processing
- database access
- remote procedure calls
- data synchronisation and export
- spatial transformations
- time series analysis
- plotting and reporting
- web-based data access
- distributed logging
- MQTT connectivity
- scripting
- e-mail
DMPACK is a scientific monitoring system developed for automated control measurements of buildings, infrastructure, terrain, geodetic nets, and other objects. The software runs on sensor nodes, usually industrial embedded systems or single-board computers, and obtains observation data from arbitrary sensors, like total stations, digital levels, inclinometers, weather stations, or GNSS receivers. The raw sensor data is then processed, stored, and optionally transmitted to a server. The software package may be used to monitor objects like:
- bridges, tunnels, dams
- landslides, cliffs, glaciers
- construction sites, mining areas
- churches, monasteries, and other heritage buildings
DMPACK is built around the relational SQLite database for time series and log storage on client and server. The server component is optional. It is possible to run DMPACK on clients only, without data distribution. The client-side message passing is based on POSIX message queues and POSIX semaphores.
Currently, only 64-bit Linux and FreeBSD are supported as operating systems.
Software Architecture
Similar Software
There are similar open source projects that provide middleware for autonomous sensor networks:
- Argus: A non-geodetic sensor data monitoring and alerting solution built with MariaDB, Node.js, and React. (MIT)
- FROST: Fraunhofer Open Source SensorThings (FROST) is the reference implementation of the OGC SensorThings API in Java. The project provides an HTTP- and MQTT-based message bus for data transmission between client and server. Developed by Fraunhofer-Institut für Optronik, Systemtechnik und Bildauswertung (IOSB). (LGPLv3)
- Global Sensor Networks: A Java-based software middleware designed to facilitate the deployment and programming of sensor networks, by Distributed Information Systems Laboratory (EPFL), Switzerland. (GPLv3)
- istSOS: A server implementation of the OGC Sensor Observation Service in Python, for managing and dispatching observations from monitoring sensors. The project also provides a graphical user interface and a RESTful web API to automate administration procedures. Developed by Istituto Scienze della Terra, University of Applied Sciences and Arts of Southern Switzerland. (GPLv2)
- Kotori: A multi-channel, multi-protocol telemetry data acquisition and graphing toolkit for time-series data processing in Python. It supports scientific environmental monitoring projects, distributed sensor networks, and similar scenarios. (AGPLv3)
- OpenADMS: The Open Automatic Deformation Monitoring System is an IoT sensor network middleware in Python 3. The system was developed as a prototype of DMPACK and includes client and server programs. (BSD)
- Project Mjolnir: An open source client–server IoT architecture for scientific sensor networks written in Python, by University of Alabama in Huntsville and NASA. Includes a sensor client for data logging, uplink and control, as well as a server component to store, serve/display, and monitor data from remote sensors. (MIT)
- Ulyxes: An open source project in Python to control robotic total stations (RTS) and other sensors, and to publish observation results on web-based maps. Developed at the Department of Geodesy and Surveying of the Budapest University of Technology and Economics. (GPLv2)
Requirements
DMPACK has the following requirements:
- Linux (glibc) or FreeBSD operating system (x86-64, AArch64)
- Fortran 2018 and ANSI C compiler
Additional dependencies have to be present to build and run the software of this package:
- BLAS
- FastCGI
- Gnuplot
- LAPACK
- libcurl (≥ 8.3.0)
- Lua 5.4
- PCRE2
- SQLite 3 (≥ 3.39.0)
- zlib
To generate the man pages, the User’s Guide, and the source code documentation, you will also need:
- AsciiDoctor, Pygments, and pygments.rb
DMPACK depends on a number of interface libraries, which are included in the repository as Git submodules. If the DMPACK repository is cloned recursively, the submodules are downloaded automatically to directory `vendor/`. Otherwise, they have to be cloned or downloaded manually.
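For example, to clone the repository including all submodules, or to fetch the submodules afterwards in an existing working copy:
$ git clone --recursive https://github.com/dabamos/dmpack
$ cd dmpack/
$ git submodule update --init --recursive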
Installation
This section describes how to build the DMPACK library and programs from source.
FreeBSD
First, install the build and run-time dependencies:
$ doas pkg install databases/sqlite3 devel/git devel/pcre2 devel/pkgconf ftp/curl lang/gcc \
    lang/lua54 math/gnuplot math/lapack www/fcgi
Instead of `math/gnuplot`, you may want to install `math/gnuplot-lite`, which does not depend on X11 but lacks the raster graphic terminals.
Optionally, install Pygments and AsciiDoctor to generate the man pages and the User’s Guide:
$ doas pkg install devel/rubygem-pygments.rb textproc/rubygem-asciidoctor
Then, clone the repository recursively. Run the provided POSIX Makefile to build from source:
$ git clone --depth 1 --recursive https://github.com/dabamos/dmpack
$ cd dmpack/
$ make freebsd
Install the library and all programs system-wide to `/usr/local`:
$ doas make install_freebsd
You can change the installation prefix with argument `PREFIX`. To install to a custom directory, run:
$ doas make install PREFIX=/opt
The DMPACK programs require the shared library `libgfortran.so` if they have been compiled with GNU Fortran.
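Whether all required shared libraries are found can be checked with ldd(1), for example:
$ ldd /usr/local/bin/dmbackup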
Path | Description |
---|---|
`/usr/local/bin` | DMPACK programs. |
`/usr/local/etc/dmpack` | DMPACK configuration files. |
`/usr/local/include/dmpack` | DMPACK module files. |
`/usr/local/lib` | DMPACK libraries. |
`/usr/local/share/dmpack` | DMPACK examples, scripts, style sheets. |
`/var/dmpack` | DMPACK databases. |
`/var/www` | WWW root directory. |
Linux
On Debian, install GCC, GNU Fortran, and the build environment:
$ sudo apt install gcc gfortran git make pkg-config
The third-party dependencies have to be installed with development headers:
$ sudo apt install --no-install-recommends libblas-dev liblapack-dev \
    curl libcurl4 libcurl4-openssl-dev libfcgi-bin libfcgi-dev \
    gnuplot lua5.4 liblua5.4 liblua5.4-dev libpcre2-8-0 libpcre2-dev \
    sqlite3 libsqlite3-dev zlib1g zlib1g-dev
Instead of package `gnuplot`, you can install the no-X11 flavour `gnuplot-nox` if raster image formats are not needed (SVG output only). Depending on the package repository, the names of the Lua packages may differ.
Clone the DMPACK repository, and then execute the Makefile with build target `linux`:
$ git clone --depth 1 --recursive https://github.com/dabamos/dmpack
$ cd dmpack/
$ make linux
Install the library and all programs system-wide to `/usr`:
$ sudo make install_linux
To install to a custom directory, run:
$ sudo make install PREFIX=/opt
System Configuration
Additional changes to the system configuration should be considered to prevent issues during long-term monitoring.
Time Zone
The local time zone of the sensor client should be set to a zone without daylight saving time. For instance, time zone `Europe/Berlin` implies Central European Summer Time (CEST), which is usually not desired for long-term observations, as it leads to time jumps. Instead, use time zone `GMT+1` or `UTC` in this case.
FreeBSD
On FreeBSD, configure the time zone using:
# tzsetup
Linux
On Linux, list all time zones and set the preferred one with timedatectl(1):
# timedatectl list-timezones
# timedatectl set-timezone Etc/GMT+1
Time Synchronisation
The system time should be updated periodically by synchronising it with network time servers. A Network Time Protocol (NTP) client has to be installed and configured to enable the synchronisation.
FreeBSD
Set the current date and time initially by passing the IP address or FQDN of the NTP server to ntpdate(1):
# ntpdate -b ptbtime1.ptb.de
The NTP daemon ntpd(8) is configured through file `/etc/ntp.conf`. If favoured, we can replace the existing NTP server pool `0.freebsd.pool.ntp.org` with a single server, for example:
server ptbtime1.ptb.de iburst
Add the following entries to `/etc/rc.conf`:
ntpd_enable="YES"
ntpd_sync_on_start="YES"
ntpd_flags="-g"
Start the ntpd(8) service:
# service ntpd start
Linux
On Debian Linux, install the NTP package:
# apt install ntp
Query the NTP servers to synchronise with:
# ntpq -p
The system time should be updated now:
# date -R
On error, try to reconfigure the NTP service:
# dpkg-reconfigure ntp
Power Saving
On Linux, power saving for USB devices may be enabled by default. This can cause issues if sensors are attached through a USB adapter. USB power saving is enabled if the kernel boot parameter `usbcore.autosuspend` is not equal to `-1`:
# cat /sys/module/usbcore/parameters/autosuspend
2
We can update the boot loader to turn auto-suspend off. Edit `/etc/default/grub` and change `GRUB_CMDLINE_LINUX_DEFAULT` to:
GRUB_CMDLINE_LINUX_DEFAULT="quiet usbcore.autosuspend=-1"
Then, update the boot loader:
# update-grub
The system has to be rebooted for the changes to take effect.
Message Queues
The operating system must have POSIX message queues enabled to run DMPACK programs on sensor nodes.
FreeBSD
On FreeBSD, make sure the kernel module `mqueuefs` is loaded, and the message queue file system is mounted:
# kldstat -m mqueuefs
Id  Refs Name
522    1 mqueuefs
Otherwise, we can simply load and mount the file system:
# kldload mqueuefs
# mkdir -p /mnt/mqueue
# mount -t mqueuefs null /mnt/mqueue
To load message queues at system start, add the module `mqueuefs` to `/etc/rc.conf`, and the file system to `/etc/fstab`:
# sysrc kld_list+="mqueuefs"
# echo "null /mnt/mqueue mqueuefs rw 0 0" >> /etc/fstab
Additionally, we may increase the system limits of POSIX message queues with sysctl(8), or in `/etc/sysctl.conf`. The defaults are:
# sysctl kern.mqueue.maxmsg
kern.mqueue.maxmsg: 100
# sysctl kern.mqueue.maxmsgsize
kern.mqueue.maxmsgsize: 16384
The maximum message size has to be at least 16384 bytes.
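For example, to raise the limits permanently, the tunables may be added to `/etc/sysctl.conf` (the values are illustrative):
kern.mqueue.maxmsg=512
kern.mqueue.maxmsgsize=16384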
Linux
The POSIX message queue file system should be mounted by default on Linux. If not, run:
# mkdir -p /dev/mqueue
# mount -t mqueue none /dev/mqueue
Set the maximum number of messages and the maximum message size to some reasonable values:
# sysctl fs.mqueue.msg_max=100
# sysctl fs.mqueue.msgsize_max=16384
The maximum message size has to be at least 16384 bytes.
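To apply the settings at boot, they may be added to `/etc/sysctl.conf` (or a file under `/etc/sysctl.d/`):
fs.mqueue.msg_max=100
fs.mqueue.msgsize_max=16384
Run `sysctl -p` afterwards to load the file.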
Cron
On Unix-like operating systems, cron is usually used to run jobs periodically. For example, in order to update an XML feed or to generate HTML reports, add a schedule of the task to perform to the local crontab(5) file.
For example, we can edit the cron jobs of user `www` with crontab(1):
# crontab -u www -e
The following crontab(5) file adds a task to generate reports every hour:
SHELL=/bin/sh
MAILTO=/dev/null
# Create reports every hour, suppress logging.
@hourly -q /usr/local/share/dmpack/mkreport.sh
The shell script `mkreport.sh` must have the execution bits set. Update the script to your configuration.
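For example:
# chmod +x /usr/local/share/dmpack/mkreport.sh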
Deformation Monitoring Entities
The data structures used by DMPACK are based on the following entities.
Observation Entities
- Node: A unique sensor node within a sensor network. Contains id, name, and additional meta information.
- Sensor: A unique sensor attached to a node, with id, name, and additional meta information.
- Target: A unique measurement target (point of interest, location) with id, name, and additional meta information. Multiple nodes and sensors may share a single target.
- Observation: A single measurement, identified by name and unique UUID4, that contains requests to and responses from a sensor, referencing a node, a sensor, and a target. An observation can contain up to 8 requests which will be sent to the sensor in sequential order.
- Request: A command to send to the sensor, referencing an observation and ordered by index. A request can contain up to 16 responses.
- Response: Floating-point values in the raw response of a sensor can be matched by regular expression groups. Each matched group is stored as a response. Responses reference a request and are ordered by index. They contain name, value, unit, and an optional error code.
Log Entities
- Log: Log message of a sensor node, either of level debug, info, warning, error, or critical, and optionally related to a sensor, a target, or an observation.
Beat Entities
- Beat: Short status message (heartbeat, handshake) that contains node id, timestamp, system uptime, and last connection error, sent periodically from client to server.
RPC Entities
- API Status: Short key–value response of the HTTP-RPC API service in plain-text format.
Program Overview
DMPACK includes programs for sensor I/O, database management, observation processing, and other tasks related to automated control measurements. The programs may be classified into the following categories.
Databases
- dmbackup: Creates an online backup of a database by either using the SQLite backup API or `VACUUM INTO`.
- dmdb: Stores observations received from POSIX message queue in a SQLite database.
- dmdbctl: A command-line interface to the DMPACK observation database, to read, add, update, or delete nodes, sensors, and targets.
- dmexport: Exports beats, nodes, sensors, targets, observations, and logs from database to file, either in CSV, JSON, or JSON Lines format.
- dmimport: Imports nodes, sensors, targets, observations, and logs from CSV file into database.
- dminit: Creates and initialises SQLite observation, log, and beat databases.
- dmlogger: Stores logs received from POSIX message queue in a SQLite database.
Message Passing
- dmlog: A utility program to send log messages from command-line or shell script to the POSIX message queue of a dmlogger process, to be stored in the log database.
- dmrecv: Receives logs or observations from POSIX message queue and writes them to stdout, file, or named pipe.
- dmsend: Sends observations or logs from file to a DMPACK application via POSIX message queue.
Observation Processing
- dmlua: Runs a custom Lua script to process an observation and forward it to the next specified receiver.
Plots & Reports
- dmgraph: Creates plots of observations read from database, using gnuplot(1).
- dmreport: Creates reports in HTML5 format, containing plots of observations and log messages.
Remote Procedure Calls
- dmapi: A FastCGI-based HTTP-RPC service that provides an API for node, sensor, target, observation, and log synchronisation, as well as heartbeat transmission. Clients may either send records to be stored in the server database, or request data of a given time range. Depending on the HTTP Accept header, the server returns data in CSV, JSON, JSON Lines, or Namelist format. Requires a FastCGI-compatible web server, such as lighttpd(1).
- dmbeat: Sends short status messages (heartbeats) periodically to a remote dmapi instance.
- dmsync: Synchronises nodes, sensors, targets, observations, and log messages between client and dmapi server. Only uni-directional synchronisation from client to server is supported.
Sensor Control
- dmfs: Reads sensor data from virtual file system, file, or named pipe. The program can be used to read values from sensors connected via 1-Wire (OWFS). Observations are forwarded via POSIX message queue and/or written to file.
- dmpipe: Executes a program as a sub-process connected through an anonymous pipe and forwards the output via POSIX message queue. Optionally, observations are written to file or stdout.
- dmserial: Connects to a TTY/PTY serial port for sensor communication. The program sends requests to a connected sensor to receive responses. The program pre-processes the response data using regular expressions and forwards observations via POSIX message queue.
Utilities
Web
- dmfeed: Creates an Atom syndication feed in XML format (RFC 4287) from logs of given sensor node and log level. If the feed is served by a web server, clients can subscribe to it by using a feed reader or news aggregator. The program may be executed periodically as a cron job.
- dmweb: A CGI-based web user interface for DMPACK database access on client and server. Requires a web server and gnuplot(1).
Programs
Some programs read settings from an optional or mandatory configuration file. Examples of configuration files are provided in directory `/usr/local/etc/dmpack/`. The configuration file format is based on Lua tables and is scriptable. Comments in the configuration file start with `--`.
You may want to enable Lua syntax highlighting in your editor (for instance, `set syntax=lua` in Vim), or use the file ending `.lua` instead of `.conf`.
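As the configuration files are plain Lua, they can be checked for syntax errors with the Lua byte-code compiler, if installed (the binary may be named `luac`, `luac5.4`, or `luac54`, depending on the platform):
$ luac5.4 -p /usr/local/etc/dmpack/dmfs.conf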
dmapi
dmapi is an HTTP-RPC API service for remote DMPACK database access. The web application has to be executed through a FastCGI-compatible web server or a FastCGI spawner. It is recommended to run lighttpd(1).
The dmapi service offers endpoints for clients to insert beats, logs, and observations into the local SQLite database, and to request data in CSV or JSON format. Authentication and encryption are independent from dmapi and have to be provided by the web server.
All POST data has to be serialised in Fortran 95 Namelist format, with optional deflate compression.
If HTTP Basic Auth is enabled, the node id of each beat, log, node, sensor, and observation sent to the RPC service must match the name of the authenticated user. For example, to store an observation of a node with the id `node-1`, the HTTP Basic Auth user name must equal the node id. If the observation is sent by any other user, it will be rejected (HTTP 401).
Environment Variable | Description |
---|---|
`DM_DB_BEAT` | Path to heartbeat database (required). |
`DM_DB_LOG` | Path to log database (required). |
`DM_DB_OBSERV` | Path to observation database (required). |
`DM_READ_ONLY` | Set to `1` to enable read-only database access. |
The web application is configured through environment variables. The web server or FastCGI spawner must be able to pass environment variables to dmapi. See RPC Server for an example configuration.
The service accepts HTTP GET and POST requests. Section RPC API gives an overview of the available endpoints. The response format depends on the MIME type set in the HTTP Accept header of the request, either:
- `application/json` (JSON)
- `application/jsonl` (JSON Lines)
- `application/namelist` (Fortran 95 Namelist)
- `text/comma-separated-values` (CSV)
By default, responses are in CSV format. The Namelist format is available only for single records. Status messages are returned as key–value pairs, signalled by MIME type `text/plain`.
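For instance, a client may query an endpoint with curl(1) and select the response format through the Accept header; host, credentials, and the endpoint placeholder below are examples only:
$ curl -s -u node-1:secret -H "Accept: application/json" \
    "http://localhost/api/v1/<endpoint>"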
dmbackup
The dmbackup utility creates an online backup of a running SQLite database. By default, the SQLite backup API is used. The program is functionally equivalent to running the sqlite3(1) command-line interface:
$ sqlite3 <database> ".backup '<output>'"
dmbackup does not replace existing backup databases.
Command-Line Options
Option | Default | Description |
---|---|---|
`--backup` | – | Path of the backup database. |
`--database` | – | Path of the SQLite database to back up. |
`--help` | – | Output available command-line arguments and quit. |
`--vacuum` | off | Use `VACUUM INTO` instead of the SQLite backup API. |
`--verbose` | off | Print backup progress (not in vacuum mode). |
`--version` | – | Output version information and quit. |
`--wal` | off | Enable WAL journal for backup database. |
Examples
Create an online backup of an observation database:
$ dmbackup --database /var/dmpack/observ.sqlite --backup /tmp/observ.sqlite
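Since existing backup databases are not replaced, a date-stamped file name may be used when scheduling backups with cron. The following crontab(5) entry is an example; per cent signs have to be escaped in crontabs:
@daily /usr/local/bin/dmbackup --database /var/dmpack/observ.sqlite --backup /var/dmpack/observ_$(date +\%Y-\%m-\%d).sqlite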
dmbeat
The dmbeat program is a heartbeat emitter that sends status messages via HTTP POST to a remote dmapi service. The status messages include timestamp, system uptime, and last connection error. The server may inspect this data to check if a client is still running and has network access. The RPC endpoint is expected at `[http|https]://<host>:<port>/api/v1/beat`.
Passing the server credentials via the command-line arguments `--username` and `--password` is insecure on multi-user operating systems and only recommended for testing.
Command-Line Options
Option | Default | Description |
---|---|---|
`--config` | – | Path to configuration file. |
`--count` | 0 | Maximum number of heartbeats to send (unlimited if `0`). |
`--debug` | off | Forward log messages of level DEBUG via IPC (if logger is set). |
`--help` | – | Output available command-line arguments and quit. |
`--host` | – | IP or FQDN of HTTP-RPC host (for example, `192.168.1.100`). |
`--interval` | 60 | Emit interval in seconds. |
`--logger` | – | Optional name of logger. If set, sends logs to dmlogger process of given name. |
`--name` | `dmbeat` | Optional name of instance and table in given configuration file. |
`--node` | – | Node id. |
`--password` | – | HTTP-RPC API password. |
`--port` | 0 | Port of HTTP-RPC API server. The default `0` selects the port automatically, depending on the TLS option. |
`--tls` | off | Use TLS encryption. |
`--username` | – | HTTP-RPC API user name. If set, implies HTTP Basic Auth. |
`--verbose` | off | Print log messages to stderr. |
`--version` | – | Output version information and quit. |
dmdb
The dmdb program collects observations from a POSIX message queue and stores them in a SQLite database. The name of the message queue equals the given dmdb name, by default `dmdb`. The IPC option enables process synchronisation via POSIX semaphores. The value of the semaphore is changed from 0 to 1 if a new observation has been received. The name of the semaphore equals the dmdb name. Only a single process may wait for the semaphore.
Command-Line Options
Option | Default | Description |
---|---|---|
`--config` | – | Path to configuration file. |
`--database` | – | Path to SQLite observation database. |
`--debug` | off | Forward log messages of level DEBUG via IPC (if logger is set). |
`--help` | – | Output available command-line arguments and quit. |
`--ipc` | off | Uses a POSIX semaphore for process synchronisation. The name of the semaphore matches the instance name (with leading `/`). Only a single process may wait for this semaphore. |
`--logger` | – | Optional name of logger. If set, sends logs to dmlogger process of given name. |
`--name` | `dmdb` | Optional name of program instance, configuration, POSIX message queue, and POSIX semaphore. |
`--node` | – | Node id. |
`--verbose` | off | Print log messages to stderr. |
`--version` | – | Output version information and quit. |
Examples
Create a message queue `/dmdb`, wait for incoming observations, and store them in the given database:
$ dmdb --name dmdb --node dummy-node --database /var/dmpack/observ.sqlite --verbose
Log messages and observation ids are printed to stdout.
dmdbctl
The dmdbctl utility program performs create, read, update, or delete operations (CRUD) on the observation database. Only nodes, sensors, and targets are supported. Data attributes are passed through command-line arguments.
Command-Line Options
Option | Default | Description |
---|---|---|
`--create` | – | Create record of given type (`node`, `sensor`, or `target`). |
`--delete` | – | Delete record of given type (`node`, `sensor`, or `target`). |
`--database` | – | Path to SQLite observation database (required). |
`--help` | – | Output available command-line arguments and quit. |
`--id` | – | Node, sensor, or target id (required). |
`--meta` | – | Node, sensor, or target meta description (optional). |
`--name` | – | Node, sensor, or target name. |
`--node` | – | Id of node the sensor is associated with. |
`--read` | – | Read record of given type (`node`, `sensor`, or `target`). |
`--sn` | – | Serial number of sensor (optional). |
`--type` | `virtual` | Sensor type (for example, `virtual`). |
`--update` | – | Update record of given type (`node`, `sensor`, or `target`). |
`--verbose` | off | Print additional log messages to stderr. |
`--version` | – | Output version information and quit. |
Examples
Add node, sensor, and target to observation database:
$ dmdbctl --database observ.sqlite --create node --id node-1 --name "Node 1"
$ dmdbctl --database observ.sqlite --create sensor --id sensor-1 --name "Sensor 1" --node node-1
$ dmdbctl --database observ.sqlite --create target --id target-1 --name "Target 1"
Delete a target from the database:
$ dmdbctl --database observ.sqlite --delete target --id target-1
Read attributes of sensor `sensor-1`:
$ dmdbctl --database observ.sqlite --read sensor --id sensor-1
sensor.id: sensor-1
sensor.node_id: node-1
sensor.type: virtual
sensor.name: Sensor 1
sensor.sn: 12345
sensor.meta: dummy sensor
dmexport
The dmexport program writes beats, logs, nodes, sensors, targets, observations, and data points from database to file, in ASCII block, CSV, JSON, or JSON Lines format. The ASCII block format is only available for X/Y data points. The types data point, log, and observation require a sensor id, a target id, and a time range in ISO 8601 format.
If no output file is given, the data is printed to standard output. The output file will be overwritten if it already exists. If no records are found, an empty file will be created.
Format | Supported Types | Description |
---|---|---|
`block` | `dp` | ASCII block format. |
`csv` | `beat`, `dp`, `log`, `node`, `observ`, `sensor`, `target` | CSV format. |
`json` | `beat`, `dp`, `log`, `node`, `observ`, `sensor`, `target` | JSON format. |
`jsonl` | `beat`, `dp`, `log`, `node`, `observ`, `sensor`, `target` | JSON Lines format. |
Command-Line Options
Option | Default | Description |
---|---|---|
`--database` | – | Path to SQLite database (required). |
`--format` | – | Output file format: `block`, `csv`, `json`, or `jsonl`. |
`--from` | – | Start of time range in ISO 8601 (required for types `dp`, `log`, and `observ`). |
`--header` | off | Add CSV header. |
`--help` | – | Output available command-line arguments and quit. |
`--node` | – | Node id (required). |
`--output` | – | Path of output file. |
`--response` | – | Response name for type `dp`. |
`--sensor` | – | Sensor id (required for types `dp`, `log`, and `observ`). |
`--separator` | `,` | CSV separator character. |
`--target` | – | Target id (required for types `dp`, `log`, and `observ`). |
`--to` | – | End of time range in ISO 8601 (required for types `dp`, `log`, and `observ`). |
`--type` | – | Type of record to export: `beat`, `dp`, `log`, `node`, `observ`, `sensor`, or `target`. |
`--version` | – | Output version information and quit. |
Examples
Export log messages from database to JSON file:
$ dmexport --database /var/dmpack/log.sqlite --type log --format json --node dummy-node \
    --from 2020-01-01 --to 2023-01-01 --output /tmp/log.json
Export observations from database to CSV file:
$ dmexport --database observ.sqlite --type observ --format csv --node dummy-node \
    --sensor dummy-sensor --target dummy-target --from 2020-01-01 --to 2025-01-01 \
    --output /tmp/observ.csv
dmfeed
This program creates a web feed from log messages in Atom Syndication Format. The log messages are read from database and written as XML to standard output or file.
The feed id has to be a 36-character UUID with hyphens. News aggregators use the id to identify the feed. Therefore, the id should not be reused among different feeds. Run dmuuid to generate a valid UUID4.
If an XSLT style sheet is given, web browsers may be able to display the Atom feed in HTML format. Set the option to the (relative) path of the public XSL on the web server. An example style sheet `feed.xsl` is located in `/usr/local/share/dmpack/`.
Command-Line Options
Option | Default | Description |
---|---|---|
`--author` | – | Name of feed author or organisation. |
`--config` | – | Path to configuration file. |
`--database` | – | Path to SQLite log database. |
`--email` | – | E-mail address of feed author. |
`--help` | – | Output available command-line arguments and quit. |
`--id` | – | UUID of the feed, 36 characters long with hyphens. |
`--maxlevel` | 5 | Select log messages of the given maximum log level (between 1 and 5). Must be greater than or equal to the minimum level. |
`--minlevel` | 1 | Select log messages of the given minimum log level (between 1 and 5). |
`--name` | `dmfeed` | Name of instance and table in given configuration file. |
`--entries` | 50 | Maximum number of entries in feed (max. 500). |
`--node` | – | Select log messages of the given node id. |
`--output` | stdout | Path of the output file. If empty or `-`, the feed is printed to standard output. |
`--subtitle` | – | Sub-title of feed. |
`--title` | – | Title of feed. |
`--url` | – | Public URL of the feed. |
`--version` | – | Output version information and quit. |
`--xsl` | – | Path to XSLT style sheet. |
Examples
First, generate a unique feed id:
$ dmuuid --hyphens
19c12109-3e1c-422c-ae36-3ba19281f2e
Then, write the last 50 log messages in Atom format to file `feed.xml`, and include a link to the XSLT style sheet `feed.xsl`:
$ dmfeed --database /var/dmpack/log.sqlite --output /var/www/feed.xml \
    --id 19c12109-3e1c-422c-ae36-3ba19281f2e --xsl feed.xsl
Copy the XSLT style sheet to the directory of the Atom feed:
$ cp /usr/local/share/dmpack/feed.xsl /var/www/
If `/var/www/` is served by a web server, feed readers can subscribe to the feed. Furthermore, we may translate feed and style sheet into a single HTML document `feed.html`, using an arbitrary XSLT processor, for instance:
$ xsltproc --output feed.html /var/www/feed.xsl /var/www/feed.xml
dmfs
The dmfs program reads observations from file system, virtual file, or named pipe. The program can be used to read sensor data from the 1-Wire File System (OWFS).
If any receivers are specified, observations are forwarded to the next receiver via POSIX message queue. dmfs can act as a sole data logger if output and format are set. If the output path is set to `-`, observations are written to stdout instead of file.
The requests of each observation have to contain the path of the (virtual) file in attribute `request`. Response values are extracted by named group from the raw response using the given regular expression pattern. Afterwards, the observation is forwarded to the next receiver via POSIX message queue.
A configuration file is mandatory to describe the jobs to perform. Each observation must have a valid target id. Node, sensor, and target have to be present in the database.
Command-Line Options
Option | Default | Description |
---|---|---|
`--config` | – | Path to configuration file (required). |
`--debug` | off | Forward log messages of level DEBUG via IPC (if logger is set). |
`--format` | – | Output format, either `csv` or `jsonl`. |
`--help` | – | Output available command-line arguments and quit. |
`--logger` | – | Optional name of logger. If set, sends logs to dmlogger process of given name. |
`--name` | `dmfs` | Name of instance and table in given configuration file. |
`--node` | – | Node id. |
`--output` | – | Output file to append observations to (`-` for stdout). |
`--sensor` | – | Sensor id. |
`--verbose` | off | Print log messages to stderr. |
`--version` | – | Output version information and quit. |
Examples
First, install the 1-Wire file system package. On FreeBSD, run:
# pkg install comms/owfs
On Linux, install the package instead with:
# apt install owfs
Connect a 1-Wire temperature sensor through USB (device `/dev/ttyU0`), and mount the 1-Wire file system with owfs(1) under `/mnt/1wire/`:
# mkdir -p /mnt/1wire
# owfs -C -d /dev/ttyU0 --allow_other -m /mnt/1wire/
On Linux, the path to the USB adapter slightly differs:
# owfs -C -d /dev/ttyUSB0 --allow_other -m /mnt/1wire/
The command-line argument `-C` selects output in °C. The settings can be added to the owfs(1) configuration file, usually at `/usr/local/etc/owfs.conf` or `/etc/owfs.conf`:
device = /dev/ttyU0
mountpoint = /mnt/1wire
allow_other
Celsius
The file system is mounted automatically at system start-up if owfs(1) is configured to run as a service.
Reading a temperature value from the connected sensor:
$ cat /mnt/1wire/10.DCA98C020800/temperature
19.12
Then, initialise the observation and log databases:
$ dminit --type observ --database /var/dmpack/observ.sqlite --wal
$ dminit --type log --database /var/dmpack/log.sqlite --wal
Create node `node-1`, sensor `sensor-1`, and target `target-1` in database `/var/dmpack/observ.sqlite` through dmweb or dmdbctl:
$ dmdbctl -d /var/dmpack/observ.sqlite -C node --id node-1 --name "Node 1"
$ dmdbctl -d /var/dmpack/observ.sqlite -C sensor --id sensor-1 --name "Sensor 1" --node node-1
$ dmdbctl -d /var/dmpack/observ.sqlite -C target --id target-1 --name "Target 1"
Set the program settings in configuration file `/usr/local/etc/dmpack/dmfs.conf`:
-- dmfs.conf
dmfs = {
logger = "dmlogger", -- Logger to send logs to (optional).
node = "node-1", -- Node id (required).
sensor = "sensor-1", -- Sensor id (required).
output = "", -- Path to output file, empty or `-` for stdout (optional).
format = "none", -- Output format (`csv` or `jsonl`).
jobs = { -- List of jobs to perform.
{
delay = 10 * 1000, -- Delay in mseconds to wait afterwards (optional).
disabled = false, -- Disable to ignore job (optional).
onetime = false, -- Run job only once (optional).
observation = { -- Observation to execute (required).
name = "observ-1", -- Observation name (required).
target_id = "target-1", -- Target id (required).
receivers = { "dmdb" }, -- Optional list of receivers (up to 16).
requests = { -- List of files to read.
{
request = "/mnt/1wire/10.DCA98C020800/temperature", -- File path.
pattern = "(?<temp>[-+0-9\\.]+)", -- RegEx pattern.
delay = 500, -- Delay in mseconds (optional).
responses = {
{
name = "temp", -- RegEx group.
unit = "degC" -- Unit.
}
}
}
}
}
}
},
debug = false, -- Forward log messages of level DEBUG via IPC.
verbose = true -- Print messages to standard output (optional).
}
Log messages will be sent to logger `dmlogger`, observations to receiver `dmdb`.
Start the logger process:
$ dmlogger --name dmlogger --database /var/dmpack/log.sqlite
Start the database process:
$ dmdb --name dmdb --database /var/dmpack/observ.sqlite --node node-1 --logger dmlogger
Start dmfs to execute the configured job:
$ dmfs --name dmfs --config /usr/local/etc/dmpack/dmfs.conf
dmgraph
The dmgraph program is a front-end to gnuplot(1) that creates plots of observations read from database. Plots are either written to file or displayed in terminal or X11 window.
Depending on the selected terminal backend, you may have to set the environment variable `GDFONTPATH` to the local font directory first:
$ export GDFONTPATH="/usr/local/share/fonts/webfonts/"
The output file is ignored when using the terminals `sixelgd` and `x11`.
Plotting parameters passed via command-line have priority over those from
configuration file.
Terminal | Description |
---|---|
`ansi` | ASCII format, in ANSI colours. |
`ascii` | ASCII format. |
`gif` | GIF format (libgd). |
`png` | PNG format (libgd). |
`pngcairo` | PNG format (libcairo), created from vector graphics. |
`sixelgd` | Sixel format (libgd), originally for DEC terminals. |
`svg` | W3C Scalable Vector Graphics (SVG) format. |
`x11` | Persistent X11 window (libX11). |
Descriptor | Description (Format) |
---|---|
`%Y` | year (YYYY) |
`%M` | month (MM) |
`%D` | day (DD) |
`%h` | hour (hh) |
`%m` | minute (mm) |
`%s` | second (ss) |
Command-Line Options
Option | Default | Description |
---|---|---|
`--background` | – | Background colour. |
`--config` | – | Path to configuration file. |
`--database` | – | Path to SQLite observation database. |
`--font` | – | Font name or file path. |
`--foreground` | | Foreground colour. |
`--from` | – | Start of time range in ISO 8601. |
`--height` | 400 | Plot height. |
`--help` | – | Output available command-line arguments and quit. |
`--name` | `dmgraph` | Name of table in configuration file. |
`--node` | – | Node id. |
`--output` | – | File path of plot image. May include format descriptors. |
`--response` | – | Response name. |
`--sensor` | – | Sensor id. |
`--target` | – | Target id. |
`--terminal` | – | Terminal backend, for example `sixelgd` or `pngcairo` (see the terminal table above). |
`--title` | – | Plot title. |
`--to` | – | End of time range in ISO 8601. |
`--version` | – | Output version information and quit. |
`--width` | 1000 | Plot width. |
Examples
Create a plot of observations selected from database `observ.sqlite` in PNG format, and write the file to `/tmp/plot.png`:
$ dmgraph --node dummy-node --sensor dummy-sensor --target dummy-target --response dummy \
    --from 2020 --to 2024 --database observ.sqlite --terminal pngcairo --output /tmp/plot.png
Output the plot directly to terminal, with the configuration loaded from file:
$ dmgraph --name dmgraph --config dmgraph.conf --terminal sixelgd
The `sixelgd` format requires a terminal emulator with Sixel support (such as xterm(1) or mlterm(1)).
dminfo
The dminfo utility program prints build, database, and system information to standard output. The path to the beat, log, or observation database is passed through command-line argument `--database`.
The output contains compiler version and options; database PRAGMAs, tables, and number of rows; as well as system name, version, and host name.
Command-Line Options
Option | Default | Description |
---|---|---|
`--database` | – | Path to SQLite database. |
`--help` | – | Output available command-line arguments and quit. |
`--version` | – | Output version information and quit. |
Examples
Print build, database, and system information:
$ dminfo --database /var/dmpack/observ.sqlite
build.compiler: GCC version 13.1.0
build.options: -mtune=generic -march=x86-64 -std=f2018
db.application_id: 444D31
db.foreign_keys: T
db.journal_mode: wal
db.path: /var/dmpack/observ.sqlite
db.size: 286720
db.table.beats: F
db.table.beats.rows: 0
...
dmimport
The dmimport program reads logs, nodes, sensors, targets, and observations in CSV format from file and imports them into the database. The database inserts are transaction-based. If an error occurs, the transaction is rolled back, and no records are written into the database at all.
The database has to be a valid DMPACK database and must contain the tables required for the input records. The nodes, sensors, and targets referenced by input observations must exist in the database. The nodes referenced by input sensors must exist as well.
Command-Line Options
Option | Default | Description |
---|---|---|
`--database` | – | Path to SQLite database (required, unless in dry mode). |
`--dry` | off | Dry mode. Reads and validates records from file but skips database import. |
`--help` | – | Output available command-line arguments and quit. |
`--input` | – | Path to input file in CSV format (required). |
`--quote` | – | CSV quote character. |
`--separator` | `,` | CSV separator character. |
`--type` | – | Type of record to import: `log`, `node`, `observ`, `sensor`, or `target`. |
`--verbose` | off | Print progress to stdout. |
`--version` | – | Output version information and quit. |
Examples
Import observations from CSV file `observ.csv` into database `observ.sqlite`:
$ dmimport --type observ --input observ.csv --database observ.sqlite --verbose
dminit
The dminit utility program creates beat, log, and observation databases. No action is performed if the specified database already exists.
A synchronisation table is required for observation and log synchronisation with a dmapi server. The argument `--sync` can be omitted if this functionality is not used.
The journal mode Write-Ahead Logging (WAL) should be enabled for databases with multiple readers.
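Whether WAL is enabled can be verified afterwards with the sqlite3(1) command-line interface:
$ sqlite3 /var/dmpack/observ.sqlite "PRAGMA journal_mode;"
wal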
Command-Line Options
Option | Default | Description |
---|---|---|
`--database` | – | Path of the new SQLite database. |
`--help` | – | Output available command-line arguments and quit. |
`--sync` | off | Add synchronisation tables. Enable for data synchronisation between client and server. |
`--type` | – | Type of database, either `beat`, `log`, or `observ`. |
`--version` | – | Output version information and quit. |
`--wal` | off | Enable Write-Ahead Logging (WAL). |
Examples
Create an observation database with remote synchronisation tables (WAL):
$ dminit --database /var/dmpack/observ.sqlite --type observ --sync --wal
Create a log database with remote synchronisation tables (WAL):
$ dminit --database /var/dmpack/log.sqlite --type log --sync --wal
Create a heartbeat database (WAL):
$ dminit --database /var/dmpack/beat.sqlite --type beat --wal
dmlog
The dmlog utility forwards a log message to the message queue of a dmlogger instance. The argument `--message` is mandatory. The default log level is INFO. Pass the name of the dmlogger instance through argument `--logger`. The program terminates after log transmission.
The following log levels are accepted:
Level | Name |
---|---|
1 | DEBUG |
2 | INFO |
3 | WARNING |
4 | ERROR |
5 | CRITICAL |
Command-Line Options
Option | Default | Description |
---|---|---|
`--error` | 0 | DMPACK error code (optional). |
`--help` | – | Output available command-line arguments and quit. |
`--level` | 2 | Log level, from 1 to 5. |
`--logger` | `dmlogger` | Name of logger instance and POSIX message queue. |
`--message` | – | Log message (max. 512 characters). |
`--node` | – | Node id (optional). |
`--observ` | – | Observation id (optional). |
`--sensor` | – | Sensor id (optional). |
`--source` | – | Source of the log message (optional). |
`--target` | – | Target id (optional). |
`--verbose` | off | Print log to stderr. |
`--version` | – | Output version information and quit. |
Examples
Send a log message to the message queue of logger `dmlogger`:
$ dmlog --level 3 --message "low battery" --source test --verbose
2022-12-09T22:50:44.161+01:00 [WARNING ] test - low battery
The `dmlogger` process will receive the log message and store it in the log database (if the log level is ≥ the minimum log level):
$ dmlogger --node dummy-node --database /var/dmpack/log.sqlite --verbose
2022-12-09T22:50:44.161+01:00 [WARNING ] test - low battery
dmlogger
The dmlogger program collects log messages from a POSIX message queue and stores them in a SQLite database. The name of the message queue equals the given dmlogger name with leading `/`, by default `/dmlogger`.
If a minimum log level is selected, only logs of a level greater than or equal to the minimum are stored in the database. Log messages with a lower level are printed to standard output before being discarded (if verbose mode is enabled).
The IPC option allows process synchronisation via POSIX semaphores. The value of the semaphore is changed from 0 to 1 whenever a new log was received. The name of the semaphore equals the dmlogger name with leading `/`. Only a single process should wait for the semaphore unless round-robin passing is desired.
This feature may be used to automatically synchronise incoming log messages with a remote HTTP-RPC API server. dmsync will wait for new logs before starting synchronisation if the dmlogger instance name has been passed through command-line argument `--wait`.
The following log levels are accepted:
Level | Name |
---|---|
1 | DEBUG |
2 | INFO |
3 | WARNING |
4 | ERROR |
5 | CRITICAL |
Command-Line Options
Option | Default | Description |
---|---|---|
`--config` | – | Path to configuration file. |
`--database` | – | Path to SQLite log database. |
`--help` | – | Output available command-line arguments and quit. |
`--ipc` | off | Use POSIX semaphore for process synchronisation. The name of the semaphore matches the instance name (with leading slash). The semaphore is set to 1 whenever a new log message was received. Only a single process may wait for this semaphore. |
`--minlevel` | 3 | Minimum level for a log to be stored in the database, from 1 to 5. |
`--name` | `dmlogger` | Name of logger instance, configuration, POSIX message queue, and POSIX semaphore. |
`--node` | – | Node id. |
`--verbose` | off | Print received logs to stderr. |
`--version` | – | Output version information and quit. |
Examples
Create a message queue `/dmlogger`, wait for incoming logs, and store them in the given database if logs are of level 4 (ERROR) or higher:
$ dmlogger --node dummy-node --database /var/dmpack/log.sqlite --minlevel 4
Push semaphore `/dmlogger` each time a log has been received:
$ dmlogger --node dummy-node --database /var/dmpack/log.sqlite --ipc
Let dmsync wait for semaphore `/dmlogger` before synchronising the log database with host `192.168.1.100`, then repeat:
$ dmsync --type log --database /var/dmpack/log.sqlite --host 192.168.1.100 --wait dmlogger
dmlua
The dmlua program runs a custom Lua script to process observations received from message queue. Each observation is passed as a Lua table to the function of the name given in option `procedure`. If the option is not set, function name `process` is assumed by default. The Lua function must return the (modified) observation table on exit.
The observation returned from the Lua function is forwarded to the next receiver specified in the receivers list of the observation. If no receivers are left, the observation will be discarded.
Command-Line Options
Option | Default | Description |
---|---|---|
`--config` | – | Path to configuration file (optional). |
`--debug` | off | Forward log messages of level DEBUG via IPC (if logger is set). |
`--help` | – | Output available command-line arguments and quit. |
`--logger` | – | Optional name of logger. If set, sends logs to dmlogger process of given name. |
`--name` | `dmlua` | Name of instance and table in given configuration file. |
`--node` | – | Node id. |
`--procedure` | `process` | Name of Lua function to call. |
`--script` | – | Path to Lua script to run. |
`--verbose` | off | Print log messages to stderr. |
`--version` | – | Output version information and quit. |
Examples
The following Lua script `script.lua` just prints observation tables to standard output:
-- script.lua
function process(observ)
print(dump(observ))
return observ
end
function dump(o)
if type(o) == 'table' then
local s = '{ '
for k, v in pairs(o) do
if type(k) ~= 'number' then k = '"' .. k .. '"' end
s = s .. '[' .. k .. '] = ' .. dump(v) .. ','
end
return s .. '} '
else
return tostring(o)
end
end
Observations sent to message queue `/dmlua` will be passed to the Lua function `process()` in `script.lua`, then forwarded to the next receiver:
$ dmlua --name dmlua --node dummy-node --script script.lua --verbose
dmpipe
The dmpipe program reads responses from processes connected via pipe. All requests of an observation have to contain the command to execute in attribute `request`. Response values are extracted by group from the raw response using the given regular expression pattern.
If any receivers are specified, observations are forwarded to the next receiver via POSIX message queue. The program can act as a sole data logger if output and format are set. If the output path is set to `-`, observations are printed to stdout.
A configuration file is mandatory to configure the jobs to perform. Each observation must have a valid target id. Node id, sensor id, and observation id are added by dmpipe. Node, sensor, and target have to be present in the database for the observation to be stored.
Command-Line Options
Option | Default | Description |
---|---|---|
`--config` | – | Path to configuration file (required). |
`--debug` | off | Forward log messages of level DEBUG via IPC (if logger is set). |
`--format` | – | Output format, either `csv` or `jsonl`. |
`--help` | – | Output available command-line arguments and quit. |
`--logger` | – | Optional name of logger. If set, sends logs to dmlogger process of given name. |
`--name` | `dmpipe` | Name of instance and table in given configuration file. |
`--node` | – | Node id. |
`--output` | – | Output file to append observations to (`-` for stdout). |
`--sensor` | – | Sensor id. |
`--verbose` | off | Print log messages to stderr. |
`--version` | – | Output version information and quit. |
Examples
The example reads the remaining battery life returned by the sysctl(8) tool (available on FreeBSD):
$ sysctl hw.acpi.battery.life
hw.acpi.battery.life: 100
On Linux, the battery life can be read with dmfs from `/sys/class/power_supply/BAT0/capacity` instead.
The regular expression pattern describes the response and defines the group `battery` for extraction. The name of one of the responses in the responses table must equal the group name. The observation will be forwarded to the message queue of a dmdb process.
Backslash characters in the string values have to be escaped with `\`.
-- dmpipe.conf
dmpipe = {
logger = "dmlogger", -- Logger to send logs to (optional).
node = "dummy-node", -- Node id (required).
sensor = "dummy-sensor", -- Sensor id (required).
output = "", -- Path to output file, empty or `-` for stdout (optional).
format = "none", -- Output format (`csv` or `jsonl`).
jobs = { -- Jobs to perform.
{
delay = 60 * 1000, -- Delay to wait afterwards in mseconds (optional).
disabled = false, -- Disable to ignore job (optional).
onetime = false, -- Run job only once (optional).
observation = { -- Observation to execute (optional).
name = "dummy-observ", -- Observation name (required).
target_id = "dummy-target", -- Target id (required).
receivers = { "dmdb" }, -- Optional list of receivers (up to 16).
requests = { -- Pipes to open.
{
request = "sysctl hw.acpi.battery.life", -- Command to run.
pattern = "hw\\.acpi\\.battery\\.life: (?<battery>[0-9]+)", -- RegEx pattern.
delay = 0, -- Delay in mseconds (optional).
responses = {
{
name = "battery", -- RegEx group.
unit = "%" -- Unit.
}
}
}
}
}
}
},
debug = false, -- Forward log messages of level DEBUG via IPC.
verbose = true -- Print messages to standard output (optional).
}
Pass the path of the configuration file to dmpipe:
$ dmpipe --name dmpipe --config /usr/local/etc/dmpipe.conf
The result returned by sysctl(8) will be formatted according to the current locale (decimal separator). You may have to change the locale first to match the regular expression pattern:
$ export LANG=C
$ dmpipe --name dmpipe --config /usr/local/etc/dmpipe.conf
dmrecv
The dmrecv program listens to the POSIX message queue of its name and writes received logs or observations to stdout, file, or named pipe, in CSV, JSON Lines, or Namelist format. By default, the serialised data is appended to the end of the output file. If argument `--replace` is passed, the file will be replaced consecutively.
Received observations are not forwarded to the next specified receiver unless argument `--forward` is set. If no receivers are defined or left, the observation will be discarded after output.
The output format `block` is only available for observation data and requires a response name to be set. Observations will be searched for this response name and converted to data point type if found. The data point is printed in ASCII block format.
If the JSON Lines output format is selected, logs and observations are written as JSON objects to file or stdout, separated by new line (`\n`). Use jq(1) to convert records in JSON Lines file `input.jsonl` into a valid JSON array in `output.json`:
$ jq -s '.' input.jsonl > output.json
The program settings are passed through command-line arguments or an optional configuration file. The arguments overwrite settings from file.
Format | Type | Description |
---|---|---|
`block` | `observ` | ASCII block format (timestamp and response value). |
`csv` | `log`, `observ` | CSV format. |
`jsonl` | `log`, `observ` | JSON Lines format. |
`nml` | `log`, `observ` | Fortran 95 Namelist format. |
Command-Line Options
Option | Default | Description |
---|---|---|
`--config` | – | Path to configuration file. |
`--debug` | off | Forward log messages of level DEBUG via IPC (if logger is set). |
`--format` | – | Output format: `block`, `csv`, `jsonl`, or `nml`. |
`--forward` | off | Forward observations to the next specified receiver. |
`--help` | – | Output available command-line arguments and quit. |
`--logger` | – | Optional name of logger. If set, sends logs to dmlogger process of given name. |
`--name` | `dmrecv` | Name of table in configuration file and POSIX message queue to subscribe to. |
`--node` | – | Optional node id. |
`--output` | stdout | Output file to append observations to (empty or `-` for stdout). |
`--replace` | off | Replace output file instead of appending data. |
`--response` | – | Name of observation response to output (required for format `block`). |
`--type` | – | Data type to receive: `log` or `observ`. |
`--verbose` | off | Print log messages to stderr. |
`--version` | – | Output version information and quit. |
Examples
Write log messages received from POSIX message queue `/dmrecv` to file `/tmp/logs.csv` in CSV format:
$ dmrecv --name dmrecv --type log --format csv --output /tmp/logs.csv
Output observations in JSON Lines format to stdout:
$ dmrecv --name dmrecv --type observ --format jsonl
Write the observations serialised in JSON Lines format to named pipe `/tmp/dmrecv_pipe`:
$ mkfifo /tmp/dmrecv_pipe
$ dmrecv --name dmrecv --type observ --format jsonl --output /tmp/dmrecv_pipe
Another process can now read the observations from `/tmp/dmrecv_pipe`:
$ tail -f /tmp/dmrecv_pipe
dmreport
The dmreport program creates reports in HTML5 format, containing plots of observations and/or log messages selected from database. Plots are created by calling gnuplot(1) and inlining the returned image (GIF, PNG, SVG) as a base64-encoded data URI. Any style sheet file with classless CSS can be included to alter the presentation of the report. The output of dmreport is a single HTML file.
A configuration file is mandatory to create reports. Only a few parameters can be set through command-line arguments. Passed command-line arguments have priority over settings in the configuration file.
Descriptor | Description (Format) |
---|---|
`%Y` | year (YYYY) |
`%M` | month (MM) |
`%D` | day (DD) |
`%h` | hour (hh) |
`%m` | minute (mm) |
`%s` | second (ss) |
Command-Line Options
Option | Default | Description |
---|---|---|
`--config` | – | Path to configuration file (required). |
`--from` | – | Start of time range in ISO 8601. |
`--help` | – | Output available command-line arguments and quit. |
`--name` | `dmreport` | Name of program instance and configuration. |
`--node` | – | Sensor node id. |
`--output` | – | Path of the HTML output file. May include format descriptors. |
`--style` | – | Path to the CSS file to inline. |
`--to` | – | End of time range in ISO 8601. |
`--version` | – | Output version information and quit. |
Examples
The settings are stored in Lua table `dmreport` in the configuration file. The observations are read from database `observ.sqlite`, the log messages from `log.sqlite`.
-- dmreport.conf
dmreport = {
node = "dummy-node",
from = "1970-01-01T00:00:00.000+00:00",
to = "2070-01-01T00:00:00.000+00:00",
output = "%Y-%M-%D_dummy-report.html",
style = "/usr/local/share/dmpack/dmreport.min.css",
title = "Monitoring Report",
subtitle = "Project",
meta = "",
plots = {
disabled = false, -- Disable plots.
database = "observ.sqlite", -- Path to observation database.
title = "Plots", -- Overwrite default heading.
meta = "", -- Optional description.
observations = { -- List of plots to generate.
{
sensor = "dummy-sensor", -- Sensor id (required).
target = "dummy-target", -- Target id (required).
response = "tz0", -- Response name (required).
unit = "deg C", -- Response unit.
format = "svg", -- Plot format (gif, png, pngcairo, svg).
title = "Temperature", -- Plot title.
subtitle = "tz0", -- Plot sub-title.
meta = "", -- Optional description.
color = "#ff0000", -- Graph colour.
width = 1000, -- Plot width.
height = 300, -- Plot height.
}
}
},
logs = {
disabled = false, -- Disable logs.
database = "log.sqlite", -- Path to log database.
minlevel = LOG_WARNING, -- Minimum log level (default: LOG_WARNING).
maxlevel = LOG_CRITICAL, -- Maximum log level (default: LOG_CRITICAL).
title = "Logs", -- Overwrite default heading.
meta = "", -- Optional description.
}
}
Write a report to file `report.html` based on settings in `dmreport.conf`:
$ dmreport --name dmreport --config dmreport.conf --output report.html
The command-line arguments overwrite the settings of the configuration file. In order to create monthly reports, we may customise the shell script `/usr/local/share/dmpack/mkreport.sh` to determine the timestamps of the last and the current month, which will then be passed to dmreport. Modify the script `mkreport.sh` to your set-up:
dmreport="/usr/local/bin/dmreport"
name="dmreport"
config="/usr/local/etc/dmpack/dmreport.conf"
output="/var/www/reports/"
Executing the shell script creates two reports, one for time series of the previous month (in case some observations have arrived late), and one for those of the current month, for example:
$ sh /usr/local/share/dmpack/mkreport.sh
--- Writing report of 2023-08 to file /var/www/reports/2023-08_report.html ...
--- Writing report of 2023-09 to file /var/www/reports/2023-09_report.html ...
To run the report generation periodically, simply add the script to your crontab.
dmsend
The dmsend program reads observations or logs in CSV or Fortran 95 Namelist format, and sends them sequentially to the POSIX message queue of the given receiver. The data is either read from file or from standard input. If the input data is of type `observ` and the argument `--forward` is passed, each observation will be sent to its next specified receiver in the receivers list. If no receivers are declared, or if the end of the receivers list is reached, the observation will not be forwarded.
The program settings are passed through command-line arguments or an optional configuration file. The arguments overwrite settings from file.
Command-Line Options
Option | Default | Description |
---|---|---|
`--config` | – | Path to configuration file. |
`--debug` | off | Forward log messages of level DEBUG via IPC (if logger is set). |
`--format` | – | Input format: `csv` or `nml`. |
`--input` | stdin | Path to input file (empty or `-` for stdin). |
`--forward` | off | Forward observations to the next specified receiver. |
`--help` | – | Output available command-line arguments and quit. |
`--logger` | – | Optional name of logger. If set, sends logs to dmlogger process of given name. |
`--name` | `dmsend` | Name of instance and table in configuration file. |
`--node` | – | Optional node id. |
`--receiver` | – | Name of receiver/message queue. |
`--type` | – | Input data type: `log` or `observ`. |
`--verbose` | off | Print log messages to stderr. |
`--version` | – | Output version information and quit. |
Examples
Read observation from Namelist file `observ.nml` and send it to the next specified receiver:
$ dmsend --type observ --format nml --input observ.nml --forward
Send logs in CSV file `logs.csv` sequentially to process `dmrecv`:
$ dmsend --receiver dmrecv --type log --format csv --input logs.csv
dmserial
The dmserial program sends requests to a sensor or actuator connected via USB/RS-232/RS-422/RS-485. Sensor commands and responses are sent/received through a teletype (TTY) device provided by the operating system. A pseudo-terminal (PTY) may be used to connect a virtual sensor.
Each request of an observation must contain the raw request intended for the sensor in attribute `request`. Response values are extracted by group from the raw response using the given regular expression pattern. Each group name must match a response name. Response names are limited to eight characters.
Observations will be forwarded to the next receiver via POSIX message queue if any receiver is specified. The program can act as a sole data logger if output and format are set. If the output path is set to `-`, observations are printed to stdout, else to file.
A configuration file is required to configure the jobs to perform. Each observation must have a valid target id. The database must contain the specified node, sensor, and targets.
The following baud rates are supported: 50, 75, 110, 134, 150, 200, 300, 600, 1200, 1800, 2400, 4800, 9600, 19200, 38400, 57600, 115200, 230400, 460800, 921600.
Command-Line Options
Option | Default | Description |
---|---|---|
`--baudrate` | 9600 | Number of symbols transmitted per second (4800, 9600, 115200, …). |
`--bytesize` | 8 | Byte size (5, 6, 7, 8). |
`--config` | – | Path to configuration file (required). |
`--debug` | off | Forward log messages of level DEBUG via IPC (if logger is set). |
`--dtr` | off | Enable Data Terminal Ready (DTR). |
`--format` | – | Output format, either `csv` or `jsonl`. |
`--help` | – | Output available command-line arguments and quit. |
`--logger` | – | Optional name of logger. If set, sends logs to dmlogger process of given name. |
`--name` | `dmserial` | Name of instance and table in given configuration file. |
`--node` | – | Node id. |
`--output` | – | Output file to append observations to (`-` for stdout). |
`--parity` | `none` | Parity bits (`none`, `even`, or `odd`). |
`--rts` | off | Enable Request To Send (RTS). |
`--sensor` | – | Sensor id. |
`--stopbits` | 1 | Number of stop bits (1, 2). |
`--timeout` | 0 | Connection timeout in seconds (max. 25). |
`--tty` | – | Path to TTY/PTY device (for example, `/dev/ttyU0`). |
`--verbose` | off | Print log messages to stderr. |
`--version` | – | Output version information and quit. |
Examples
Read the jobs to perform from configuration file and execute them sequentially:
$ dmserial --name dmserial --config /usr/local/etc/dmpack/dmserial.conf --verbose
dmsync
The dmsync program synchronises logs, nodes, observations, sensors, and targets from local database concurrently with a remote dmapi server. The synchronisation may be started only once if no interval is set (to transfer nodes, sensors, and targets from client to server), periodically as a cron job, or by waiting for a POSIX semaphore.
The nodes, sensors, and targets referenced by observations in the local database must also exist in the remote server database. They can be created either with dmdbctl or dmweb, but also synchronised with dmsync. Logs and targets do not require any additional database entries on server-side.
The client databases must contain synchronisation tables. The tables are created automatically by dminit if command-line argument --sync is passed. Alternatively, start dmsync with argument --create once.
If the RPC server uses HTTP Basic Auth for authentication, the RPC user name must match the node id of the transmitted node, sensor, observation, log, or beat record. Otherwise, it will be rejected by the RPC server (HTTP 401).
The database records are sent in compressed Fortran 95 Namelist format via HTTP to the server. The program uses libcurl for data transfer. The accessed RPC API endpoints are expected under URL [http|https]://<host>:<port>/api/v1/<endpoint>.
The result of each synchronisation attempt is stored in the local database. Records are marked as synchronised only if the server returns HTTP 201 (Created).
Passing the server credentials via the command-line arguments --username and --password is insecure on multi-user operating systems and only recommended for testing.
Command-Line Options
Option | Default | Description |
---|---|---|
--config | – | Path to configuration file. |
--create | off | Create database synchronisation tables if they do not exist. |
--database | – | Path to SQLite log or observation database. |
--debug | off | Forward log messages of level DEBUG via IPC (if logger is set). |
--help | – | Output available command-line arguments and quit. |
--host | – | IP address or FQDN of HTTP-RPC host (for example, 192.168.1.100). |
--interval | 60 | Synchronisation interval in seconds. If set to 0, the databases are synchronised only once. |
--logger | – | Name of logger. If set, sends logs to dmlogger process of given name. |
--name | dmsync | Name of program instance and configuration. |
--node | – | Node id, required for type sensor. |
--password | – | HTTP-RPC API password. |
--port | 0 | Port of HTTP-RPC API server (set to 0 for default). |
--tls | off | Use TLS-encrypted connection. |
--type | – | Type of data to synchronise, either log, node, observ, sensor, or target. |
--username | – | HTTP-RPC API user name. If set, implies HTTP Basic Auth. |
--verbose | off | Print log messages to stderr. |
--version | – | Output version information and quit. |
--wait | – | Name of POSIX semaphore to wait for. Synchronises databases if semaphore is > 0. |
Examples
Synchronise nodes, sensors, and targets in the local observation database with an RPC server:
$ dmsync --database observ.sqlite --type node --host 192.168.1.100
$ dmsync --database observ.sqlite --type sensor --node dummy-node --host 192.168.1.100
$ dmsync --database observ.sqlite --type target --host 192.168.1.100
Synchronise observations:
$ dmsync --database observ.sqlite --type observ --host 192.168.1.100
Synchronise log messages:
$ dmsync --database log.sqlite --type log --host 192.168.1.100
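To synchronise periodically without keeping dmsync running in interval mode, a crontab(5) entry along these lines may be used instead (schedule and paths are placeholders):
*/10 * * * * /usr/local/bin/dmsync --database /var/dmpack/observ.sqlite --type observ --host 192.168.1.100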
dmuuid
The dmuuid program is a command-line tool to generate pseudo-random UUID4s. By default, DMPACK uses 32 characters long UUID4s in hexadecimal format (without hyphens). Hyphens can be added by a command-line flag. The option --convert expects UUID4s to be passed via standard input. Invalid UUID4s will be replaced with the default UUID4.
Command-Line Options
Option | Default | Description |
---|---|---|
--convert | off | Add hyphens to 32 characters long hexadecimal UUIDs passed via stdin. |
--count | 1 | Number of UUIDs to generate. |
--help | – | Output available command-line arguments and quit. |
--hyphens | off | Return 36 characters long UUIDs with hyphens. |
--version | – | Output version information and quit. |
Examples
Create three identifiers:
$ dmuuid --count 3
6827049760c545ad80d4082cc50203e8
ad488d0b8edd4c6c94582e702a810ada
3d3eee7ae1fb4259b5df72f854aaa369
Create a UUID4 with hyphens:
$ dmuuid --hyphens
d498f067-d14a-4f98-a9d8-777a3a131d12
Add hyphens to a hexadecimal UUID4:
$ echo '3d3eee7ae1fb4259b5df72f854aaa369' | dmuuid --convert
3d3eee7a-e1fb-4259-b5df-72f854aaa369
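As the identifiers are written to standard output, dmuuid integrates easily into shell scripts, for instance, to store a fresh id in a variable (a trivial sketch):
$ id=$(dmuuid)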
dmweb
dmweb is a CGI-based web user interface for DMPACK database access on client and server. The web application has to be executed through a CGI-compatible web server. It is recommended to run lighttpd(1). Any other server must be able to pass environment variables to the CGI application. gnuplot(1) is required for the plotting backend (no-X11 flavour is sufficient).
The web application provides the following pages:
- Dashboard
-
Lists the observations, logs, and heartbeats that have been added to the databases most recently.
- Nodes
-
Lists all sensor nodes, and allows new ones to be added.
- Sensors
-
Lists all sensors, and allows new ones to be added.
- Targets
-
Lists all targets, and allows new ones to be added.
- Observations
-
Lists observations in database, selected by filter.
- Plots
-
Creates plots in SVG format from observation responses in database.
- Logs
-
Lists log messages stored in database, with optional filter.
- Beats
-
Lists received heartbeat messages, sorted by node id. The beat view shows the time the heartbeat was sent and received, as well as the time passed since then, additionally in Swatch Internet Time.
The style sheet of dmweb is based on missing.css. It may be replaced with any other classless CSS theme. For best experience, the IBM Plex font family should be installed locally.
Environment variables are used to configure dmweb. Transport security and authentication have to be provided by the web server.
Environment Variable | Description |
---|---|
DM_DB_BEAT | Path to heartbeat database (server). |
DM_DB_LOG | Path to log database (client, server). |
DM_DB_OBSERV | Path to observation database (client, server). |
DM_READ_ONLY | Set to 1 to enable read-only database access. |
See section Web UI for an example configuration.
Web Applications
The following web applications are part of DMPACK (comparison):
 | dmapi | dmweb |
---|---|---|
Description | HTTP-RPC API | Web UI |
Base Path | /api/v1 | /dmpack |
Protocol | FastCGI | CGI |
Location | server | client, server |
Configuration | environment variables | environment variables |
Authentication | HTTP Basic Auth | HTTP Basic Auth |
Content-Types | CSV, JSON, JSON Lines, Namelist, Text | HTML5 |
HTTP Methods | GET, POST | GET, POST |
Database | SQLite 3 | SQLite 3 |
Read-Only Mode | Yes | Yes |
Both applications may be served by the same web server. It is recommended to run them in lighttpd(1). On FreeBSD, install the package with:
# pkg install www/lighttpd
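On Linux, install lighttpd(1) with the package manager of the distribution instead, for example, on Debian:
# apt-get install lighttpd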
The web server is configured through /usr/local/etc/lighttpd/lighttpd.conf. In the listed examples, the DMPACK executables are assumed to be in /usr/local/bin/, but you may copy the programs to /var/www/cgi-bin/ or any other directory. Set appropriate owner and access rights.
Authentication
Set auth.backend.htpasswd.userfile to the path of the file that contains the HTTP Basic Auth credentials, or remove the related lines from the configuration if authentication is not desired. You can run openssl(1) to add credentials to the htpasswd user file:
# printf "<user>:`openssl passwd -crypt '<password>'`\n" >> /usr/local/etc/lighttpd/htpasswd
Replace <user> and <password> with real values. Instead of an htpasswd file, a different authentication backend may be selected, for example, LDAP, MySQL/MariaDB, PostgreSQL, or SQLite 3. See the lighttpd(1) auth module documentation for further instructions.
Cross-Origin Resource Sharing
If the HTTP-RPC API will be accessed by a client-side application running in the browser, the web server has to be configured to send the appropriate Cross-Origin Resource Sharing (CORS) headers. By default, asynchronous JavaScript requests are forbidden by the same-origin security policy. Refer to the documentation of the web server on how to set the Access-Control-* headers. For lighttpd(1), load the module mod_setenv and add response headers for OPTIONS requests:
$HTTP["request-method"] =~ "^(OPTIONS)$" {
setenv.add-response-header = (
"Access-Control-Allow-Origin" => "*",
"Access-Control-Allow-Headers" => "accept, origin, x-requested-with, content-type, x-transmission-session-id",
"Access-Control-Expose-Headers" => "X-Transmission-Session-Id",
"Access-Control-Allow-Methods" => "GET, POST, OPTIONS"
)
}
If the web server is behind a reverse proxy, CORS headers should be set by the proxy instead.
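To verify the configuration, a preflight request can be sent with curl(1) and the returned headers inspected; the origin value is an arbitrary placeholder:
$ curl -s -o /dev/null -D - -X OPTIONS --header "Origin: http://example.com" \
  "http://localhost/api/v1/"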
Databases
The databases are expected to be in directory /var/dmpack/. Change the environment variables in the web server configuration to the actual paths. The observation, log, and beat databases the web applications will access must be created and initialised beforehand:
# dminit --type observ --database /var/dmpack/observ.sqlite --wal
# dminit --type log --database /var/dmpack/log.sqlite --wal
# dminit --type beat --database /var/dmpack/beat.sqlite --wal
Make sure the web server has read and write access to the directory:
# chown -R www:www /var/dmpack
Change www:www to the user and the group the web server is running as.
RPC Server
The snippet in this section may be added to the lighttpd(1) configuration to run the dmapi service. The lighttpd(1) web server does not require an additional FastCGI spawner. The following server modules have to be imported:
- mod_authn_file (HTTP Basic Auth)
- mod_extforward (real IP, only if the server is behind a reverse proxy)
- mod_fastcgi (FastCGI)
Add the IP address of the proxy server to the list of trusted forwarders to have access to the real IP of a client.
$SERVER["socket"] == "0.0.0.0:80" { }
# Load lighttpd modules.
server.modules += (
"mod_authn_file",
"mod_extforward",
"mod_fastcgi"
)
# Set authentication backend and path of password file.
auth.backend = "htpasswd"
auth.backend.htpasswd.userfile = "/usr/local/etc/lighttpd/htpasswd"
# Real IP of client in case the server is behind a reverse proxy. Set one or
# more trusted proxies.
# extforward.headers = ( "X-Real-IP" )
# extforward.forwarder = ( "<PROXY IP>" => "trust" )
# FastCGI configuration. Run two worker processes, and pass the database paths
# through environment variables.
fastcgi.server = (
"/api/v1" => ((
"socket" => "/var/lighttpd/sockets/dmapi.sock",
"bin-path" => "/usr/local/bin/dmapi",
"max-procs" => 2,
"check-local" => "disable",
"bin-environment" => (
"DM_DB_BEAT" => "/var/dmpack/beat.sqlite",
"DM_DB_LOG" => "/var/dmpack/log.sqlite",
"DM_DB_OBSERV" => "/var/dmpack/observ.sqlite",
"DM_READ_ONLY" => "0"
)
))
)
# URL routing.
$HTTP["url"] =^ "/api/v1" {
# Enable HTTP Basic Auth.
auth.require = ( "" => (
"method" => "basic",
"realm" => "dmpack",
"require" => "valid-user"
))
}
The FastCGI socket will be written to /var/run/lighttpd/sockets/dmapi.sock. Change max-procs to the desired number of FastCGI processes. Set the environment variables to the locations of the databases. The databases must exist prior to start.
On FreeBSD, add the service to the system rc file /etc/rc.conf and start the server manually:
# sysrc lighttpd_enable="YES"
# service lighttpd start
If served locally, access the RPC API at http://127.0.0.1/api/v1/.
Web UI
The lighttpd(1) web server has to be configured to run the CGI application under base path /dmpack/. The following server modules are required:
- mod_alias (URL rewrites)
- mod_authn_file (HTTP Basic Auth)
- mod_cgi (Common Gateway Interface)
- mod_setenv (CGI environment variables)
The example configuration may be appended to your lighttpd.conf:
$SERVER["socket"] == "0.0.0.0:80" { }
# Load lighttpd modules.
server.modules += (
"mod_alias",
"mod_authn_file",
"mod_cgi",
"mod_setenv"
)
# Set maximum number of concurrent connections and maximum
# HTTP request size of 8192 KiB (optional).
server.max-connections = 16
server.max-request-size = 8192
# Pass the database paths through environment variables.
setenv.add-environment = (
"DM_DB_BEAT" => "/var/dmpack/beat.sqlite",
"DM_DB_LOG" => "/var/dmpack/log.sqlite",
"DM_DB_OBSERV" => "/var/dmpack/observ.sqlite",
"DM_READ_ONLY" => "0"
)
# Set authentication backend and path of password file.
auth.backend = "htpasswd"
auth.backend.htpasswd.userfile = "/usr/local/etc/lighttpd/htpasswd"
# URL routing.
$HTTP["url"] =^ "/dmpack/" {
# Map URL to CGI executable.
alias.url += ( "/dmpack" => "/usr/local/bin/dmweb" )
# Enable HTTP Basic Auth.
auth.require = ( "" => (
"method" => "basic",
"realm" => "dmpack",
"require" => "valid-user"
))
# CGI settings. Do not assign file endings to script interpreters,
# execute only applications with execute bit set, enable write and
# read timeouts of 30 seconds.
cgi.assign = ( "" => "" )
cgi.execute-x-only = "enable"
cgi.limits = (
"write-timeout" => 30,
"read-timeout" => 30,
"tcp-fin-propagate" => "SIGTERM"
)
}
Copy the CSS file from /usr/local/share/dmpack/dmpack.min.css to the WWW root directory, in this case /var/www/, or just create a symlink. If the style sheet has to be served from a path different from the root path, add a rewrite rule or alias to the web server configuration.
On FreeBSD, add the service to the system rc file /etc/rc.conf and start the server manually:
# sysrc lighttpd_enable="YES"
# service lighttpd start
If served locally, access the web application at http://127.0.0.1/dmpack/.
RPC API
All database records are returned in CSV format by default, with content type text/comma-separated-values. Status and error messages are returned as key–value pairs, with content type text/plain.
The following HTTP endpoints are provided by the RPC API:
Endpoint | Method | Description |
---|---|---|
/api/v1/ | GET | Read service status. |
/api/v1/beats | GET | Read beats. |
/api/v1/logs | GET | Read logs. |
/api/v1/nodes | GET | Read nodes. |
/api/v1/observs | GET | Read observations. |
/api/v1/sensors | GET | Read sensors. |
/api/v1/targets | GET | Read targets. |
/api/v1/timeseries | GET | Read time series. |
/api/v1/beat | GET, POST | Read or update beat. |
/api/v1/log | GET, POST | Read or create log. |
/api/v1/node | GET, POST | Read or create node. |
/api/v1/observ | GET, POST | Read or create observation. |
/api/v1/sensor | GET, POST | Read or create sensor. |
/api/v1/target | GET, POST | Read or create target. |
Read Service Status
Returns service status in API status format as text/plain.
Paths
- /api/v1/
Methods
- GET
Responses
Status | Description |
---|---|
200 | Always. |
Example
Return the RPC service status:
$ curl -s -u <username>:<password> --header "Accept: text/plain" \
  "http://localhost/api/v1/"
Read Beats
Paths
- /api/v1/beats
- /api/v1/beats?header=<0|1>
Methods
- GET
Request Parameters
GET Parameter | Type | Description |
---|---|---|
header | integer | Add CSV header (0 or 1). |
Request Headers
Name | Values |
---|---|
Accept | application/json, application/jsonl, text/comma-separated-values |
Responses
Status | Description |
---|---|
200 | Beats are returned. |
404 | No beats found. |
500 | Server error. |
503 | Database error. |
Example
Return beats of all nodes in JSON format, pretty-print the result with jq(1):
$ curl -s -u <username>:<password> --header "Accept: application/json" \
  "http://localhost/api/v1/beats" | jq
Read Logs
Returns logs of a given node and time range in CSV, JSON, or JSON Lines format from database. Node id and time range are mandatory.
Paths
- /api/v1/logs?node_id=<id>&from=<timestamp>&to=<timestamp>
Methods
- GET
Request Parameters
GET Parameter | Type | Description |
---|---|---|
node_id | string | Node id. |
from | string | Start of time range (ISO 8601). |
to | string | End of time range (ISO 8601). |
header | integer | Add CSV header (0 or 1). |
Request Headers
Name | Values |
---|---|
Accept | application/json, application/jsonl, text/comma-separated-values |
Responses
Status | Description |
---|---|
200 | Logs are returned. |
400 | Invalid request. |
404 | No logs found. |
500 | Server error. |
503 | Database error. |
Example
Return all logs of node dummy-node and year 2023 in CSV format:
$ curl -s -u <username>:<password> --header "Accept: text/comma-separated-values" \
  "http://localhost/api/v1/logs?node_id=dummy-node&from=2023&to=2024"
Read Nodes
Paths
- /api/v1/nodes
- /api/v1/nodes?header=<0|1>
Methods
- GET
Request Parameters
GET Parameter | Type | Description |
---|---|---|
header | integer | Add CSV header (0 or 1). |
Request Headers
Name | Values |
---|---|
Accept | application/json, application/jsonl, text/comma-separated-values |
Responses
Status | Description |
---|---|
200 | Nodes are returned. |
404 | No nodes found. |
500 | Server error. |
503 | Database error. |
Example
Return all nodes in database as JSON array:
$ curl -s -u <username>:<password> --header "Accept: application/json" \
  "http://localhost/api/v1/nodes"
Read Observations
Returns observations of given node, sensor, target, and time range from database, in CSV, JSON, or JSON Lines format.
Paths
- /api/v1/observs?<parameters>
Methods
- GET
Request Parameters
GET Parameter | Type | Description |
---|---|---|
node_id | string | Node id. |
sensor_id | string | Sensor id. |
target_id | string | Target id. |
response | string | Response name. |
from | string | Start of time range (ISO 8601). |
to | string | End of time range (ISO 8601). |
limit | integer | Max. number of results (optional). |
header | integer | Add CSV header (0 or 1). |
Request Headers
Name | Values |
---|---|
Accept | application/json, application/jsonl, text/comma-separated-values |
Responses
Status | Description |
---|---|
200 | Observations are returned. |
400 | Invalid request. |
404 | No observations found. |
500 | Server error. |
503 | Database error. |
Example
Return all observations related to node dummy-node, sensor dummy-sensor, and target dummy-target of a single month in JSON format, pretty-print the result with jq(1):
$ curl -s -u <username>:<password> --header "Accept: application/json" \
  "http://localhost/api/v1/observs?node_id=dummy-node&sensor_id=dummy-sensor\
&target_id=dummy-target&from=2023-01&to=2023-01" | jq
Read Sensors
Paths
- /api/v1/sensors
- /api/v1/sensors?header=<0|1>
Methods
- GET
Request Parameters
GET Parameter | Type | Description |
---|---|---|
header | integer | Add CSV header (0 or 1). |
Request Headers
Name | Values |
---|---|
Accept | application/json, application/jsonl, text/comma-separated-values |
Responses
Status | Description |
---|---|
200 | Sensors are returned. |
404 | No sensors found. |
500 | Server error. |
503 | Database error. |
Example
Return all sensors of node dummy-node in JSON format:
$ curl -s -u <username>:<password> --header "Accept: application/json" \
  "http://localhost/api/v1/sensors?node_id=dummy-node"
Read Targets
Paths
- /api/v1/targets
- /api/v1/targets?header=<0|1>
Methods
- GET
Request Parameters
GET Parameter | Type | Description |
---|---|---|
header | integer | Add CSV header (0 or 1). |
Request Headers
Name | Values |
---|---|
Accept | application/json, application/jsonl, text/comma-separated-values |
Responses
Status | Description |
---|---|
200 | Targets are returned. |
404 | No targets found. |
500 | Server error. |
503 | Database error. |
Example
Return all targets in CSV format:
$ curl -s -u <username>:<password> --header "Accept: text/comma-separated-values" \
  "http://localhost/api/v1/targets"
Read Time Series
Returns time series as observation views or data points (X/Y records) in CSV format from database. In comparison to the observation endpoint, the time series include only a single response, selected by name.
Paths
- /api/v1/timeseries?<parameters>
Methods
- GET
Request Parameters
GET Parameter | Type | Description |
---|---|---|
node_id | string | Node id. |
sensor_id | string | Sensor id. |
target_id | string | Target id. |
response | string | Response name. |
from | string | Start of time range (ISO 8601). |
to | string | End of time range (ISO 8601). |
limit | integer | Max. number of results (optional). |
header | integer | Add CSV header (0 or 1). |
view | integer | Return observation views instead of data points (0 or 1). |
Request Headers
Name | Values |
---|---|
Accept | text/comma-separated-values |
Responses
Status | Description |
---|---|
200 | Observations are returned. |
400 | Invalid request. |
404 | No observations found. |
500 | Server error. |
503 | Database error. |
Example
Return time series of response dummy related to node dummy-node, sensor dummy-sensor, and target dummy-target, from 2023 to 2024, as X/Y data in CSV format:
$ curl -s -u <username>:<password> --header "Accept: text/comma-separated-values" \
  "http://localhost/api/v1/timeseries?node_id=dummy-node&sensor_id=dummy-sensor\
&target_id=dummy-target&response=dummy&from=2023&to=2024"
For additional meta information, add the parameter view=1.
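For example, the previous request returning observation views instead of data points:
$ curl -s -u <username>:<password> --header "Accept: text/comma-separated-values" \
  "http://localhost/api/v1/timeseries?node_id=dummy-node&sensor_id=dummy-sensor\
&target_id=dummy-target&response=dummy&from=2023&to=2024&view=1"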
Read or Update Beat
On POST, adds or updates heartbeat given in Namelist format. Optionally, the payload may be deflate compressed. The API returns HTTP 201 Created if the beat was accepted.
If HTTP Basic Auth is used, the user name must match the node_id attribute of the beat, otherwise, the request will be rejected as unauthorised (HTTP 401).
Paths
- /api/v1/beat
- /api/v1/beat?node_id=<id>
Methods
- GET
- POST
Request Parameters
GET Parameter | Type | Description |
---|---|---|
node_id | string | Node id. |
Request Headers (GET)
Name | Values |
---|---|
Accept | application/json, application/namelist, text/comma-separated-values |
Request Headers (POST)
Name | Values |
---|---|
Content-Encoding | deflate (optional) |
Content-Type | application/namelist |
Responses (GET)
Status | Description |
---|---|
200 | Heartbeat is returned. |
400 | Invalid request. |
404 | Heartbeat not found. |
500 | Server error. |
503 | Database error. |
Responses (POST)
Status | Description |
---|---|
201 | Heartbeat was accepted. |
400 | Invalid request or payload. |
401 | Unauthorised. |
413 | Payload too large. |
415 | Invalid payload format. |
500 | Server error. |
503 | Database error. |
Example
Return the heartbeat of node dummy-node in JSON format:
$ curl -s -u <username>:<password> --header "Accept: application/json" \
  "http://localhost/api/v1/beat?node_id=dummy-node"
Read or Create Log
On POST, adds log in Namelist format to database. Optionally, the payload may be deflate compressed. The API returns HTTP 201 Created if the log was accepted.
If HTTP Basic Auth is used, the user name must match the node_id attribute of the log, otherwise, the request will be rejected as unauthorised (HTTP 401).
Paths
- /api/v1/log
- /api/v1/log?id=<id>
Methods
- GET
- POST
Request Parameters
GET Parameter | Type | Description |
---|---|---|
id | string | Log id (UUID4). |
Request Headers (GET)
Name | Values |
---|---|
Accept | application/json, application/namelist, text/comma-separated-values |
Request Headers (POST)
Name | Values |
---|---|
Content-Encoding | deflate (optional) |
Content-Type | application/namelist |
Responses (GET)
Status | Description |
---|---|
200 | Log is returned. |
400 | Invalid request. |
404 | Log not found. |
500 | Server error. |
503 | Database error. |
Responses (POST)
Status | Description |
---|---|
201 | Log was accepted. |
400 | Invalid request or payload. |
401 | Unauthorised. |
409 | Log exists in database. |
413 | Payload too large. |
415 | Invalid payload format. |
500 | Server error. |
503 | Database error. |
Example
Return a specific log in JSON format:
$ curl -s -u <username>:<password> --header "Accept: application/json" \
  "http://localhost/api/v1/log?id=51adca2f1d4e42a5829fd1a378c8b6f1"
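A log may be created by posting it in Namelist format, for instance, from a file log.nml that contains a &DMLOG record like the one shown in section Data Serialisation (the content type application/namelist is assumed here):
$ curl -s -u <username>:<password> --header "Content-Type: application/namelist" \
  --data-binary @log.nml "http://localhost/api/v1/log"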
Read or Create Node
On POST, adds node in Namelist format to database. Optionally, the payload may be deflate compressed. The API returns HTTP 201 Created if the node was accepted.
If HTTP Basic Auth is used, the user name must match the node_id attribute of the node, otherwise, the request will be rejected as unauthorised (HTTP 401).
Paths
- /api/v1/node
- /api/v1/node?id=<id>
Methods
- GET
- POST
Request Parameters
GET Parameter | Type | Description |
---|---|---|
id | string | Node id. |
Request Headers (GET)
Name | Values |
---|---|
Accept | application/json, application/namelist, text/comma-separated-values |
Request Headers (POST)
Name | Values |
---|---|
Content-Encoding | deflate (optional) |
Content-Type | application/namelist |
Responses (GET)
Status | Description |
---|---|
200 | Node is returned. |
400 | Invalid request. |
404 | Node not found. |
500 | Server error. |
503 | Database error. |
Responses (POST)
Status | Description |
---|---|
201 | Node was accepted. |
400 | Invalid request or payload. |
401 | Unauthorised. |
409 | Node exists in database. |
413 | Payload too large. |
415 | Invalid payload format. |
500 | Server error. |
503 | Database error. |
Example
Return node dummy-node in JSON format:
$ curl -s -u <username>:<password> --header "Accept: application/json" \
  "http://localhost/api/v1/node?id=dummy-node"
Read or Create Observation
On POST, adds observation in Namelist format to database. Optionally, the payload may be deflate compressed. The API returns HTTP 201 Created if the observation was accepted.
If HTTP Basic Auth is used, the user name must match the node_id attribute of the observation, otherwise, the request will be rejected as unauthorised (HTTP 401).
Paths
- /api/v1/observ
- /api/v1/observ?id=<id>
Methods
- GET
- POST
Request Parameters
GET Parameter | Type | Description |
---|---|---|
id | string | Observation id (UUID4). |
Request Headers (GET)
Name | Values |
---|---|
Accept | application/json, application/namelist, text/comma-separated-values |
Request Headers (POST)
Name | Values |
---|---|
Content-Encoding | deflate (optional) |
Content-Type | application/namelist |
Responses (GET)
Status | Description |
---|---|
200 | Observation is returned. |
400 | Invalid request. |
404 | Observation not found. |
500 | Server error. |
503 | Database error. |
Responses (POST)
Status | Description |
---|---|
201 | Observation was accepted. |
400 | Invalid request or payload. |
401 | Unauthorised. |
409 | Observation exists in database. |
413 | Payload too large. |
415 | Invalid payload format. |
500 | Server error. |
503 | Database error. |
Example
Return a specific observation in JSON format:
$ curl -s -u <username>:<password> --header "Accept: application/json" \
  "http://localhost/api/v1/observ?id=7b98ae11d80b4ee392fe1a74d2c05809"
Read or Create Sensor
On POST, adds sensor in Namelist format to database. Optionally, the payload may be deflate compressed. The API returns HTTP 201 Created if the sensor was accepted.
If HTTP Basic Auth is used, the user name must match the node_id attribute of the sensor, otherwise, the request will be rejected as unauthorised (HTTP 401).
Paths
- /api/v1/sensor
- /api/v1/sensor?id=<id>
Methods
- GET
- POST
Request Parameters
GET Parameter | Type | Description |
---|---|---|
id | string | Sensor id. |
Request Headers (GET)
Name | Values |
---|---|
Accept | application/json, application/namelist, text/comma-separated-values |
Request Headers (POST)
Name | Values |
---|---|
Content-Encoding | deflate (optional) |
Content-Type | application/namelist |
Responses (GET)
Status | Description |
---|---|
200 | Sensor is returned. |
400 | Invalid request. |
404 | Sensor not found. |
500 | Server error. |
503 | Database error. |
Responses (POST)
Status | Description |
---|---|
201 | Sensor was accepted. |
400 | Invalid request or payload. |
401 | Unauthorised. |
409 | Sensor exists in database. |
413 | Payload too large. |
415 | Invalid payload format. |
500 | Server error. |
503 | Database error. |
Example
Return sensor dummy-sensor in JSON format:
$ curl -s -u <username>:<password> --header "Accept: application/json" \
  "http://localhost/api/v1/sensor?id=dummy-sensor"
Read or Create Target
On POST, adds target in Namelist format to database. Optionally, the payload may be deflate compressed. The API returns HTTP 201 Created if the target was accepted.
Paths
- /api/v1/target
- /api/v1/target?id=<id>
Methods
- GET
- POST
Request Parameters
GET Parameter | Type | Description |
---|---|---|
id | string | Target id. |
Request Headers (GET)
Name | Values |
---|---|
Accept | application/json, application/namelist, text/comma-separated-values |
Request Headers (POST)
Name | Values |
---|---|
Content-Encoding | deflate (optional) |
Content-Type | application/namelist |
Responses (GET)
Status | Description |
---|---|
200 | Target is returned. |
400 | Invalid request. |
404 | Target not found. |
500 | Server error. |
503 | Database error. |
Responses (POST)
Status | Description |
---|---|
201 | Target was accepted. |
400 | Invalid request or payload. |
409 | Target exists in database. |
413 | Payload too large. |
415 | Invalid payload format. |
500 | Server error. |
503 | Database error. |
Example
Return target dummy-target in JSON format:
$ curl -s -u <username>:<password> --header "Accept: application/json" \
  "http://localhost/api/v1/target?id=dummy-target"
Data Serialisation
DMPACK supports the following data serialisation formats:
- Atom (XML)
-
Export of log messages in Atom Syndication Format (RFC 4287), with optional XSLT style sheet.
- Block
-
Export of observation responses as X/Y data points in ASCII block format, consisting of timestamp (ISO 8601) and real value.
- CSV
-
Export and import of beat, log, node, observation, sensor, and target data, with custom field separator and quote character. A CSV header is added optionally.
- JSON
-
Export of beat, log, node, observation, sensor, and target data as JSON objects or JSON arrays.
- JSON Lines
-
Export of beat, log, node, observation, sensor, and target data in JSON Lines / Newline Delimited JSON format.
- Lua
-
Converting observations from and to Lua tables. Import of observations from Lua file or stack-based data exchange between Fortran and Lua.
- Namelist
-
Import from and export to Fortran 95 Namelist format of single beat, log, node, observation, sensor, and target data. The syntax is case-insensitive, line-breaks are optional. Default values are assumed for omitted attributes of data in Namelist format.
- Text
-
Status messages of the HTTP-RPC API are returned as key–value pairs in plain text format.
API Status
Attribute | Type | Size | Description |
---|---|---|---|
version | string | 32 | DMPACK application version. |
dmpack | string | 32 | DMPACK library version. |
host | string | 32 | Server host name. |
server | string | 32 | Server software (web server). |
timestamp | string | 29 | Server date and time in ISO 8601. |
message | string | 32 | Server status message (optional). |
error | integer | 4 | Error code (optional). |
Beat
Attribute | Type | Size | Description |
---|---|---|---|
node_id | string | 32 | Node id. |
address | string | 45 | IPv4/IPv6 address of client. |
time_sent | string | 29 | Date and time heartbeat was sent (ISO 8601). |
time_recv | string | 29 | Date and time heartbeat was received (ISO 8601). |
error | integer | 4 | Last client error code. |
interval | integer | 4 | Emit interval in seconds. |
uptime | integer | 4 | Client uptime in seconds. |
{
"node_id": "dummy-node",
"address": "127.0.0.1",
"time_sent": "1970-01-01T00:00:00.000+00:00",
"time_recv": "1970-01-01T00:00:00.000+00:00",
"error": 0,
"interval": 0,
"uptime": 0
}
Data Point
Attribute | Type | Size | Description |
---|---|---|---|
x | string | 29 | X value (ISO 8601). |
y | double | 8 | Y value. |
Column | Attribute | Description |
---|---|---|
1 | x | X value. |
2 | y | Y value. |
Log
Attribute | Type | Size | Description |
---|---|---|---|
id | string | 32 | Log id (UUID4). |
level | integer | 4 | Log level (see below). |
error | integer | 4 | Error code (optional). |
timestamp | string | 29 | Date and time (ISO 8601). |
node_id | string | 32 | Node id (optional). |
sensor_id | string | 32 | Sensor id (optional). |
target_id | string | 32 | Target id (optional). |
observ_id | string | 32 | Observation id (optional). |
source | string | 32 | Log source (optional). |
message | string | 512 | Log message. |
Level | Name |
---|---|
1 |
DEBUG |
2 |
INFO |
3 |
WARNING |
4 |
ERROR |
5 |
CRITICAL |
<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
<generator version="1.0">DMPACK</generator>
<title>DMPACK Logs</title>
<subtitle>Log Messages Feed</subtitle>
<id>urn:uuid:a6baaf1a-43b7-4e59-a18c-653e6ee61dfa</id>
<updated>1970-01-01T00:00:00.000+00:00</updated>
<entry>
<title>DEBUG: dummy log message</title>
<id>urn:uuid:26462d27-d7ff-4ef1-b10e-0a2e921e638b</id>
<published>1970-01-01T00:00:00.000+00:00</published>
<updated>1970-01-01T00:00:00.000+00:00</updated>
<summary>DEBUG: dummy log message</summary>
<content type="xhtml">
<div xmlns="http://www.w3.org/1999/xhtml">
<table>
<tbody>
<tr><th>ID</th><td><code>26462d27d7ff4ef1b10e0a2e921e638b</code></td></tr>
<tr><th>Timestamp</th><td>1970-01-01T00:00:00.000+00:00</td></tr>
<tr><th>Level</th><td>DEBUG (1)</td></tr>
<tr><th>Error</th><td>dummy error (2)</td></tr>
<tr><th>Node ID</th><td>dummy-node</td></tr>
<tr><th>Sensor ID</th><td>dummy-sensor</td></tr>
<tr><th>Target ID</th><td>dummy-target</td></tr>
<tr><th>Observation ID</th><td><code>9bb894c779e544dab1bd7e7a07ae507d</code></td></tr>
<tr><th>Source</th><td>dummy</td></tr>
<tr><th>Message</th><td>dummy log message</td></tr>
</tbody>
</table>
</div>
</content>
<author>
<name>dummy</name>
</author>
</entry>
</feed>
{
"id": "26462d27d7ff4ef1b10e0a2e921e638b",
"level": 1,
"error": 2,
"timestamp": "1970-01-01T00:00:00.000+00:00",
"node_id": "dummy-node",
"sensor_id": "dummy-sensor",
"target_id": "dummy-target",
"observ_id": "9bb894c779e544dab1bd7e7a07ae507d",
"message": "dummy log message"
}
&DMLOG
LOG%ID="26462d27d7ff4ef1b10e0a2e921e638b",
LOG%LEVEL=1,
LOG%ERROR=2,
LOG%TIMESTAMP="1970-01-01T00:00:00.000+00:00",
LOG%NODE_ID="dummy-node",
LOG%SENSOR_ID="dummy-sensor",
LOG%TARGET_ID="dummy-target",
LOG%OBSERV_ID="9bb894c779e544dab1bd7e7a07ae507d",
LOG%SOURCE="dummy",
LOG%MESSAGE="dummy log message",
/
Node
Attribute | Type | Size | Description |
---|---|---|---|
id | string | 32 | Node id. |
name | string | 32 | Node name. |
meta | string | 32 | Node description (optional). |
Column | Attribute | Description |
---|---|---|
1 | id | Node id. |
2 | name | Node name. |
3 | meta | Node description. |
Observation
Attribute | Type | Size | Description |
---|---|---|---|
id | string | 32 | Observation id (UUID4). |
node_id | string | 32 | Node id. |
sensor_id | string | 32 | Sensor id. |
target_id | string | 32 | Target id. |
name | string | 32 | Observation name. |
timestamp | string | 29 | Date and time of observation (ISO 8601). |
path | string | 32 | Path of TTY/PTY device. |
priority | integer | 4 | Message queue priority (>= 0). |
error | integer | 4 | Observation error code. |
next | integer | 4 | Cursor of receiver list (0 to 16). |
nreceivers | integer | 4 | Number of receivers (0 to 16). |
nrequests | integer | 4 | Number of sensor requests (0 to 8). |
receivers | array | 16 × 32 | Array of receiver names (16). |
requests | array | 8 × 1277 | Array of requests (8). |
Attribute | Type | Size | Description |
---|---|---|---|
timestamp | string | 29 | Date and time of request (ISO 8601). |
request | string | 256 | Raw request to sensor. Non-printable characters have to be escaped. |
response | string | 256 | Raw response of sensor. Non-printable characters will be escaped. |
delimiter | string | 8 | Request delimiter. Non-printable characters have to be escaped. |
pattern | string | 256 | Regular expression pattern that describes the raw response using named groups. |
delay | integer | 4 | Delay in milliseconds to wait after the request. |
error | integer | 4 | Request error code. |
retries | integer | 4 | Number of performed retries. |
state | integer | 4 | Request state (unused, for future additions). |
timeout | integer | 4 | Request timeout in milliseconds. |
nresponses | integer | 4 | Number of responses (0 to 16). |
responses | array | 16 × 28 | Extracted values from the raw response (16). |
Attribute | Type | Size | Description |
---|---|---|---|
name | string | 8 | Response name. |
unit | string | 8 | Response unit. |
error | integer | 4 | Response error code. |
value | double | 8 | Response value. |
{
"id": "9273ab62f9a349b6a4da6dd274ee83e7",
"node_id": "dummy-node",
"sensor_id": "dummy-sensor",
"target_id": "dummy-target",
"name": "dummy-observ",
"timestamp": "1970-01-01T00:00:00.000+00:00",
"path": "/dev/null",
"priority": 0,
"error": 0,
"next": 0,
"nreceivers": 2,
"nrequests": 1,
"receivers": [
"dummy-receiver1",
"dummy-receiver2"
],
"requests": [
{
"timestamp": "1970-01-01T00:00:00.000+00:00",
"request": "?\\n",
"response": "10.0\\n",
"delimiter": "\\n",
"pattern": "(?<sample>[-+0-9\\.]+)",
"delay": 0,
"error": 0,
"retries": 0,
"state": 0,
"timeout": 0,
"nresponses": 1,
"responses": [
{
"name": "sample",
"unit": "none",
"error": 0,
"value": 10.0
}
]
}
]
}
{
id = "9273ab62f9a349b6a4da6dd274ee83e7",
node_id = "dummy-node",
sensor_id = "dummy-sensor",
target_id = "dummy-target",
timestamp = "1970-01-01T00:00:00.000+00:00",
path = "/dev/null",
name = "dummy-observ",
error = 0,
next = 1,
priority = 0,
nreceivers = 2,
nrequests = 1,
receivers = { "dummy-receiver1", "dummy-receiver2" },
requests = {
{
timestamp = "1970-01-01T00:00:00.000+00:00",
request = "?\\n",
response = "10.0\\n",
pattern = "(?<sample>[-+0-9\\.]+)",
delimiter = "\\n",
delay = 0,
error = 0,
retries = 0,
state = 0,
timeout = 0,
nresponses = 1,
responses = {
{
name = "sample",
unit = "none",
error = 0,
value = 10.0
}
}
}
}
}
&DMOBSERV
OBSERV%ID="9273ab62f9a349b6a4da6dd274ee83e7",
OBSERV%NODE_ID="dummy-node",
OBSERV%SENSOR_ID="dummy-sensor",
OBSERV%TARGET_ID="dummy-target",
OBSERV%NAME="dummy-observ",
OBSERV%TIMESTAMP="1970-01-01T00:00:00.000+00:00",
OBSERV%PATH="/dev/null",
OBSERV%PRIORITY=0,
OBSERV%ERROR=0,
OBSERV%NEXT=0,
OBSERV%NRECEIVERS=2,
OBSERV%NREQUESTS=1,
OBSERV%RECEIVERS="dummy-receiver1","dummy-receiver2",
OBSERV%REQUESTS(1)%TIMESTAMP="1970-01-01T00:00:00.000+00:00",
OBSERV%REQUESTS(1)%REQUEST="?\n",
OBSERV%REQUESTS(1)%RESPONSE="10.0\n",
OBSERV%REQUESTS(1)%DELIMITER="\n",
OBSERV%REQUESTS(1)%PATTERN="(?<sample>[-+0-9\.]+)",
OBSERV%REQUESTS(1)%DELAY=0,
OBSERV%REQUESTS(1)%ERROR=0,
OBSERV%REQUESTS(1)%RETRIES=0,
OBSERV%REQUESTS(1)%STATE=0,
OBSERV%REQUESTS(1)%TIMEOUT=0,
OBSERV%REQUESTS(1)%NRESPONSES=1,
OBSERV%REQUESTS(1)%RESPONSES(1)%NAME="sample",
OBSERV%REQUESTS(1)%RESPONSES(1)%UNIT="none",
OBSERV%REQUESTS(1)%RESPONSES(1)%ERROR=0,
OBSERV%REQUESTS(1)%RESPONSES(1)%VALUE=10.00000000000000,
/
Sensor
Attribute | Type | Size | Description |
---|---|---|---|
id | string | 32 | Sensor id. |
node_id | string | 32 | Node id. |
type | integer | 4 | Sensor type (see below). |
name | string | 32 | Sensor name. |
sn | string | 32 | Sensor serial number (optional). |
meta | string | 32 | Sensor description (optional). |
# | Name | Description |
---|---|---|
0 | none | Unknown sensor type. |
1 | virtual | Virtual sensor. |
2 | fs | File system. |
3 | process | Process or service. |
4 | meteo | Meteorological sensor. |
5 | rts | Robotic total station. |
6 | gnss | GNSS receiver. |
7 | level | Level sensor. |
8 | mems | MEMS sensor. |
Column | Attribute | Description |
---|---|---|
1 | id | Sensor id. |
2 | node_id | Node id. |
3 | type | Sensor type. |
4 | name | Sensor name. |
5 | sn | Sensor serial number. |
6 | meta | Sensor description. |
{
"id": "dummy-sensor",
"node_id": "dummy-node",
"type": 3,
"name": "Dummy Sensor",
"sn": "00000",
"meta": "Description."
}
Databases
The DMPACK programs use three distinct databases to store deformation monitoring entity records:
- Observation Database
-
Stores nodes, sensors, targets, observations, observation receivers, observation requests, and observation responses, with optional synchronisation tables for all record types.
- Log Database
-
Stores all log messages in a single table.
- Heartbeat Database
-
Stores heartbeat messages by unique node id. Records are added via the SQL query REPLACE INTO.
The databases are usually located in directory /var/dmpack/.
Administration
The sqlite3(1) program is a stand-alone command-line shell for SQLite database access that allows the user to execute arbitrary SQL statements. Third-party programs provide an additional graphical user interface:
- DB Browser for SQLite (DB4S)
-
A spreadsheet-like graphical interface for Linux, Unix, macOS, and Windows. (MPLv2, GPLv3)
- HeidiSQL
-
A free database administration tool for MariaDB, MySQL, MS SQL Server, PostgreSQL, and SQLite. For Windows only. (GPLv2)
- phpLiteAdmin
-
A web front-end for SQLite database administration, written in PHP. (GPLv3)
Examples
Write all schemas of an observation database to file schema.sql, using the sqlite3(1) command-line tool:
$ sqlite3 /var/dmpack/observ.sqlite ".schema" > schema.sql
To dump an observation database as raw SQL to observ.sql:
$ sqlite3 /var/dmpack/observ.sqlite ".dump" > observ.sql
Dump only table logs of a log database:
$ sqlite3 /var/dmpack/log.sqlite ".dump 'logs'" > log.sql
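An online backup of a database that is currently in use can be created with the .backup command of sqlite3(1); the destination path is just an example:
$ sqlite3 /var/dmpack/observ.sqlite ".backup '/var/backups/observ.sqlite'"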