THE INTERNET OF INSURED THINGS
Danish insurance company develops IoT platform for preventive monitoring
By Peter Reetz, Topdanmark, and Kjetil Kræmer, TechPeople
Topdanmark, Denmark’s second largest insurance company, is developing an IoT platform to process and enrich sensor data coming from its customers. Based on the processed data, the platform generates reports and triggers warnings, adding preventive monitoring to Topdanmark’s service portfolio.
Traditionally, the business of insurance is based on statistical risk models: The price of your car insurance depends on your risk profile – age, job, where you live, etc. But with the emergence of the Internet of Things comes the possibility of adding a new layer of proactive prevention to the traditional insurance business model of reactive damage compensation.
Together with machine learning algorithms and powerful cloud services, the Internet of Things creates new opportunities: It can unearth valuable information that enables insurance companies to focus much more on prevention. A continuous feed of live information allows them to prevent incidents rather than compensate for them after they have occurred. This trend of combining known risk parameters with new data streams has the potential to change the insurance business significantly.
Scalability and flexibility
Topdanmark has responded to that challenge by going all the way, developing its own IoT platform.
It is a generic end-to-end visualization platform based on Amazon Web Services and designed with scalability, robustness and security in mind. The platform integrates different types of sensors – temperature, humidity etc. – processes and enriches the sensor data, and delivers insights that are meaningful for each sensor type.
The three main components of the framework are taken from the Amazon Web Services portfolio.
The edge component of the platform handles the two dominant protocols used when devices push data into a system, MQTT and CoAP. Both protocols are implemented by Amazon in its IoT infrastructure, which means that if a device speaks one of these languages, the platform can receive data from it. Traditional M2M communication and REST APIs are supported as well.
This enables the infrastructure to handle a very large number of incoming data streams from IoT devices, while receiving data from many different sources and radio technologies, like NB-IoT, LoRa, Sigfox etc.
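As a sketch of what the device side of this might look like, the snippet below formats a temperature reading and publishes it over MQTT with the paho-mqtt client, the way AWS IoT Core expects TLS client certificates. The topic layout, endpoint and file paths are illustrative assumptions, not Topdanmark's actual setup.

```python
import json
import time

def build_reading(device_id: str, temperature_c: float) -> tuple:
    """Return (topic, JSON payload) for one sensor reading.
    The topic layout and field names are hypothetical examples."""
    topic = "sensors/{}/temperature".format(device_id)
    payload = json.dumps({
        "deviceId": device_id,
        "temperatureC": temperature_c,
        "timestamp": int(time.time()),
    })
    return topic, payload

def publish_reading(device_id: str, temperature_c: float) -> None:
    """Publish one reading over MQTT with mutual TLS. Requires the
    paho-mqtt package and a provisioned device certificate; the
    endpoint and certificate paths below are placeholders."""
    import paho.mqtt.client as mqtt
    topic, payload = build_reading(device_id, temperature_c)
    client = mqtt.Client()
    client.tls_set(ca_certs="root-ca.pem",
                   certfile="device.crt", keyfile="device.key")
    client.connect("example-ats.iot.eu-west-1.amazonaws.com", 8883)
    client.publish(topic, payload, qos=1)
    client.disconnect()
```

A device speaking CoAP, NB-IoT or LoRa would reach the same backend through its own gateway, but end up as the same kind of JSON payload.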
Real-time processing and storage
The second component ensures the ability to perform real-time analysis of the data received, regardless of the amount of data coming in. Amazon Kinesis, a technology similar to Apache Kafka, is the component that handles real-time processing of large streams of data. This enables the platform to generate incidents in real time based on data transmitted from the connected devices.
As the Amazon cloud infrastructure is extremely scalable, it can perform real-time analysis on a very large scale, e.g. with 10 million devices streaming data to the system.
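A minimal sketch of how a received reading could be pushed into a Kinesis stream with boto3. The stream name is a placeholder; partitioning by device id is a common choice because it keeps each device's readings ordered within a single shard.

```python
import json

STREAM_NAME = "iot-sensor-readings"  # placeholder stream name

def make_kinesis_record(reading: dict) -> dict:
    """Shape one reading as the keyword arguments for a Kinesis
    PutRecord call. Partitioning by device id keeps a device's
    readings in order within one shard."""
    return {
        "StreamName": STREAM_NAME,
        "Data": json.dumps(reading).encode("utf-8"),
        "PartitionKey": reading["deviceId"],
    }

def forward_to_kinesis(reading: dict) -> None:
    """Send the record to the stream (requires AWS credentials)."""
    import boto3
    boto3.client("kinesis").put_record(**make_kinesis_record(reading))
```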
The third component is storage. After data has been analysed in real time, it has to be stored. Again, this can mean very large amounts of data, and the system uses the Amazon S3 object storage service to provide the scalability and performance needed. Once stored, the data can be used for machine learning or other forms of analysis.
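One common S3 pattern for this kind of archive is date-partitioned object keys, so that later batch jobs can read one device or one day at a time. The bucket name and key layout below are hypothetical examples, not Topdanmark's actual scheme.

```python
from datetime import datetime, timezone

def s3_key_for(device_id: str, when: datetime) -> str:
    """Date-partitioned object key, e.g.
    readings/cooler-0042/2021/05/01/1619870400.json
    The layout is a hypothetical example."""
    return "readings/{}/{:%Y/%m/%d}/{}.json".format(
        device_id, when, int(when.timestamp()))

def archive_reading(reading: dict) -> None:
    """Store one processed reading in S3 (requires AWS credentials;
    the bucket name is a placeholder)."""
    import json
    import boto3
    when = datetime.now(timezone.utc)
    boto3.client("s3").put_object(
        Bucket="iot-sensor-archive",
        Key=s3_key_for(reading["deviceId"], when),
        Body=json.dumps(reading).encode("utf-8"),
    )
```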
Furthermore, the platform handles provisioning: It can register a device, tie it to a specific customer and supply it with a certificate so the system can identify it correctly.
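Sketched against the AWS IoT registry API via boto3, provisioning could look roughly like this: issue a certificate, register the device as a "thing" carrying the customer binding as an attribute, and attach the certificate to it. The attribute name is an assumption, not Topdanmark's actual schema.

```python
def attribute_payload(customer_id: str) -> dict:
    """Thing attributes tying the device to a customer. The attribute
    name is a hypothetical choice."""
    return {"attributes": {"customerId": customer_id}}

def provision_device(device_id: str, customer_id: str) -> dict:
    """Register a device and issue its certificate (requires AWS
    credentials). Returns the certificate and key pair that must be
    installed on the device itself."""
    import boto3
    iot = boto3.client("iot")
    cert = iot.create_keys_and_certificate(setAsActive=True)
    iot.create_thing(thingName=device_id,
                     attributePayload=attribute_payload(customer_id))
    iot.attach_thing_principal(thingName=device_id,
                               principal=cert["certificateArn"])
    return cert
```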
One advantage of the AWS setup is that it is cloud-only, so Topdanmark avoids having to build a new data centre. The server room is replaced by a configuration file – everything is virtual. The entire IoT platform is defined in code, and with the CloudFormation tool you can describe and provision all the infrastructure resources in your cloud environment.
As an example, if Topdanmark decides to run its IoT infrastructure out of an Amazon datacentre in Ireland, that can be done via CloudFormation. If the development team decides to move the platform to a datacentre in Sweden, it can be done within 20 minutes.
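Sketched with boto3, redeploying to another region is essentially a one-argument change. The stack name and template are placeholders, while the region codes are AWS's real identifiers for the Ireland and Stockholm datacentres.

```python
# AWS region codes for the two datacentres mentioned above.
REGIONS = {"Ireland": "eu-west-1", "Sweden": "eu-north-1"}

def deploy_stack(datacentre: str, template_body: str,
                 stack_name: str = "iot-platform") -> None:
    """Provision the whole platform in the given region from its
    CloudFormation template (requires AWS credentials; the stack
    name is a placeholder)."""
    import boto3
    cfn = boto3.client("cloudformation",
                       region_name=REGIONS[datacentre])
    cfn.create_stack(StackName=stack_name,
                     TemplateBody=template_body,
                     Capabilities=["CAPABILITY_IAM"])

# Moving from Ireland to Sweden is then just:
# deploy_stack("Sweden", template_body)
```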
This cloud-only setup gives extreme flexibility and scalability, allowing the platform to grow very large if needed.
Security and GDPR
On the security side, data is protected from the moment it enters the gateway and onwards through the system. In addition to AWS’ strong layer of security, Topdanmark has its own authentication and authorization protection, ensuring that only known and well-defined data sources are allowed into the system. Everything is encrypted, in transit as well as at rest. On top of that, Topdanmark is GDPR compliant, which among other things means guaranteeing the ability to erase data if a customer requests it.
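One way to honour such an erasure request, assuming a hypothetical bucket layout with one key prefix per customer, is to delete everything stored under that prefix:

```python
def customer_prefix(customer_id: str) -> str:
    """Key prefix holding one customer's data. The layout is a
    hypothetical assumption, not Topdanmark's actual scheme."""
    return "customers/{}/".format(customer_id)

def erase_customer_data(bucket_name: str, customer_id: str) -> int:
    """Delete every object belonging to the customer (requires AWS
    credentials). Returns the number of objects deleted."""
    import boto3
    bucket = boto3.resource("s3").Bucket(bucket_name)
    deleted = 0
    for obj in bucket.objects.filter(Prefix=customer_prefix(customer_id)):
        obj.delete()
        deleted += 1
    return deleted
```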
The Topdanmark development team works in Java and Python with a Continuous Integration / Continuous Delivery architecture. All code is stored in Git, and the team has a Jenkins infrastructure and a Docker platform that produces their APIs and deploys them to the Amazon cloud.
The development team is now working on a number of use cases. One of them is deploying monitoring devices in industrial cooling environments to verify correct temperatures and avoid the expensive damage to goods that can occur if the cooling fails.
This use case was selected because the owner of the facility as well as the insurance company want to ensure that the stored goods are uncompromised, so it is a win-win situation for both parties.
The team started out by experimenting with temperature sensors, hooking them up to the infrastructure to make sure the data could fit into the system. The data comes in two categories. The first is master data: static metadata about the customer, devices, addresses, sensor types, their location, the certificates they are equipped with etc. With that data in place, the system is ready to receive actual measurement data from the sensors.
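The two categories can be sketched as simple data types. The field names below are illustrative, not Topdanmark's actual schema.

```python
from dataclasses import dataclass

@dataclass
class DeviceMasterData:
    """Static metadata registered before any measurements arrive."""
    device_id: str
    customer_id: str
    address: str
    sensor_type: str      # e.g. "temperature"
    location: str         # e.g. "cold store, aisle 3"
    certificate_arn: str  # issued during provisioning

@dataclass
class Measurement:
    """A single reading pushed by the device itself."""
    device_id: str
    timestamp: int        # epoch seconds
    value: float          # unit depends on sensor_type
```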
When the sensor data hits the gateway it goes into the Kinesis infrastructure and from there to a database, ready to be processed and analysed. The analysis shows if it is necessary to raise an event, e.g. when the temperature rises to a critical level.
These events can then be handled in various ways, e.g. by sending a text message directly to the customer or alerting customer service.
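The analysis step can be as simple as a threshold rule, with the text-message path handled through Amazon SNS. The 8 °C limit below is an illustrative value for chilled goods, not a figure from the article.

```python
from typing import Optional

def check_temperature(value_c: float,
                      critical_c: float = 8.0) -> Optional[dict]:
    """Return an incident event if the reading is critical, else None.
    The 8 degree threshold is an illustrative assumption."""
    if value_c >= critical_c:
        return {"type": "TEMPERATURE_CRITICAL", "valueC": value_c}
    return None

def dispatch_event(event: dict, phone_number: str) -> None:
    """Text the customer directly via Amazon SNS (requires AWS
    credentials); one of several possible handling paths."""
    import boto3
    boto3.client("sns").publish(
        PhoneNumber=phone_number,
        Message="Alert: {} ({} C)".format(event["type"], event["valueC"]),
    )
```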
On top of that, measurement data can be fed into other systems and be used for dynamic pricing, statistics etc.
Detecting slurry levels and water leaks
As Topdanmark is a big insurance player in agriculture, Smart Farming concepts are being developed as well.
As an example, the development team has integrated slurry sensors, which enable farmers to monitor the slurry level and additional metrics of their slurry tanks through the landmand.dk portal.
Each integrated slurry tank is equipped with a sensor reporting the slurry level and identifying which farmer, farm and unit the sensor is associated with. The platform enables farmers to detect slurry level anomalies by triggering actionable alerts in real time.
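As a sketch of what such an anomaly rule might look like: a suspiciously fast drop between two consecutive level readings could indicate a leak. The threshold is an illustrative assumption, not Topdanmark's actual rule.

```python
from typing import Optional

def slurry_anomaly(previous_pct: float, current_pct: float,
                   max_drop_pct: float = 5.0) -> Optional[str]:
    """Flag a suspiciously fast fall in tank level between two
    consecutive readings, which could indicate a leak. The
    5-percentage-point threshold is an illustrative assumption."""
    drop = previous_pct - current_pct
    if drop > max_drop_pct:
        return "Level dropped {:.1f} points since last reading".format(drop)
    return None
```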
Also, the Topdanmark development team is collaborating with the people behind the LeakBot device, a sensor designed to detect water leaks in private homes, to integrate LeakBot data into the platform.
With the framework in place, the Topdanmark developers are now focusing on adapting it to the various use cases presented by the company’s business developers. Every new use case demands a number of customizations, such as analysis of which events need to be detected, integration of new device types, new protocols etc.
Also, the data collected is passed on to the Topdanmark data scientists for them to design machine learning algorithms to extract new knowledge from the data.
Looking into the future there are a number of challenges ahead, some of them technical, others not so much.
The platform is able to receive data, process it and generate trigger warnings. But it is not yet able to control different kinds of machinery, like closing a valve to shut down a leaking heating system. Actions like that require a whole other level of validation and verification and will be a future challenge to consider.
Also, when insurance companies go into the business of sensor networks and IoT platforms, they need to rethink their customer service and the skill sets required for call centre personnel, and they need to adjust their logistics to handle physical products. All this is far from the core task of developing an IoT platform, but yet essential to transforming the promise of IoT into good business.