The Device SDK integrates with your device software, giving your device access to all platform features via simple callbacks in your code. We fully maintain the SDK, so you can focus on your own code.
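As an illustration only, callback-based integration might look like the sketch below. The class and method names are hypothetical stand-ins, not the real Device SDK API:

```python
from typing import Callable, Dict

class DeviceClient:
    """Minimal stand-in for an SDK client that dispatches cloud events.

    Illustrative only: the real SDK's interface may differ.
    """

    def __init__(self) -> None:
        self._handlers: Dict[str, Callable[[dict], None]] = {}

    def on(self, event: str, handler: Callable[[dict], None]) -> None:
        # Register a callback for a named platform event.
        self._handlers[event] = handler

    def _dispatch(self, event: str, payload: dict) -> None:
        # Called by the SDK's internal loop when the cloud sends an event.
        if event in self._handlers:
            self._handlers[event](payload)

# Device code only registers a callback; the SDK handles the rest.
received = []
client = DeviceClient()
client.on("config-update", lambda payload: received.append(payload))

# Simulate the SDK delivering a cloud-side event.
client._dispatch("config-update", {"interval_s": 30})
```

The point of the pattern is that device code stays declarative: it states which events it cares about, and the SDK owns connectivity and dispatch.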
Devices connect to the platform over a stateful MQTT connection managed by the Device SDK. This minimizes the delay in receiving updates from the cloud and avoids the performance cost of repeatedly establishing new connections.
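Part of managing a stateful connection is a disciplined reconnect policy. A minimal sketch of capped exponential backoff, as an SDK might apply between MQTT reconnect attempts (the function name and default values are illustrative, not the SDK's actual behavior):

```python
def backoff_delay(attempt: int, base: float = 1.0, cap: float = 60.0) -> float:
    """Delay (in seconds) before reconnect attempt number `attempt`.

    Doubles on each attempt, capped so a long outage does not grow
    the wait unboundedly. A production SDK would typically add jitter.
    """
    return min(cap, base * (2 ** attempt))
```

Capping the delay keeps devices responsive once connectivity returns, while the exponential growth prevents a fleet from hammering the broker during an outage.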
The IoT Gateway shields your devices from Internet threats, manages connectivity issues, and optimizes traffic to ensure that all data reaches the cloud even when the Internet connection is unreliable.
Secrets can be generated automatically via platform API and embedded into the device software at the factory. Once deployed to the field, devices are fully available on the platform without additional configuration.
Interacting with devices from the cloud via synchronous, RPC-like methods or asynchronous mechanisms such as messaging and desired-state configuration.
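Desired-state configuration can be pictured as a reconciliation step: the device compares the desired state from the cloud against its reported state and applies only the difference. A minimal sketch, not the platform's actual implementation:

```python
def compute_patch(reported: dict, desired: dict) -> dict:
    """Return the keys where desired differs from reported.

    This is the patch a device would apply to converge on the
    desired state; keys already matching are left untouched.
    """
    return {k: v for k, v in desired.items() if reported.get(k) != v}
```

Because only the diff is applied, the operation is idempotent: re-delivering the same desired state yields an empty patch.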
Keeping your devices' configuration always up to date: configuration stored in the cloud is automatically evaluated and applied, and you can target device groups via tags configured on your devices.
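Tag-based targeting can be thought of as a subset check: a device belongs to a target group when it carries all of the group's tags. A simplified sketch, assuming tags are plain strings (the tag values below are made up for illustration):

```python
def matches_group(device_tags: set, group_tags: set) -> bool:
    """A device matches a target group if it carries every group tag."""
    return group_tags <= device_tags
```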
Understanding your devices' status and their cloud activity. Setting up alerting rules that match your IoT solution requirements.
Installed on your device, the Device SDK provides a single interface for ingesting data of any type and size into the platform, whether the device connects directly to the cloud or via the IoT Gateway.
The platform stores all incoming data in the provided object storage, but you can also send it to other destinations such as your own object storage, databases, and message queues, or even to applications within your existing system.
Data can be automatically normalized into a structure that matches your configuration and reflects your solution's needs. The platform supports time windows and sessions, as well as automated data deduplication.
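To illustrate time windows and deduplication (a simplified model, not the platform's actual pipeline), the following sketch groups timestamped events into fixed tumbling windows and drops duplicate records within each window:

```python
from collections import defaultdict

def tumbling_windows(events, window_s):
    """Group (timestamp, value) events into fixed, non-overlapping windows.

    Duplicate (timestamp, value) pairs within a window are dropped by
    collecting into a set. Returns {window_start: sorted events}.
    """
    windows = defaultdict(set)
    for ts, value in events:
        start = (ts // window_s) * window_s  # align to window boundary
        windows[start].add((ts, value))
    return {start: sorted(vals) for start, vals in windows.items()}
```

Session windows differ in that their boundaries are defined by gaps in activity rather than by a fixed clock grid, but the grouping idea is the same.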
The Device SDK provides device-side data buffering to ensure that no data is lost during a power failure or over an unstable connection. Based on your solution's needs and the platform configuration, priority data is always sent first, ensuring it is available for your business-critical applications.
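Priority-aware buffering can be modeled as a priority queue that drains lower-numbered priorities first while preserving insertion order within a priority. A minimal in-memory sketch (a real device-side buffer would also persist entries to durable storage to survive power loss):

```python
import heapq
import itertools

class PriorityBuffer:
    """Device-side send buffer: lower priority number drains first,
    FIFO order within the same priority."""

    def __init__(self) -> None:
        self._heap = []
        self._seq = itertools.count()  # tie-breaker preserving insertion order

    def put(self, message: str, priority: int = 1) -> None:
        heapq.heappush(self._heap, (priority, next(self._seq), message))

    def drain(self):
        # Yield messages in send order once connectivity is available.
        while self._heap:
            yield heapq.heappop(self._heap)[2]
```

The sequence counter matters: without it, two messages with equal priority would be compared by payload, breaking FIFO order.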
REST APIs documented with the OpenAPI specification, plus SDKs, to access all platform features programmatically and address the specific needs of your next IoT solution.
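For illustration, here is how an authenticated request to such a REST API might be constructed in Python. The base URL, endpoint path, and query parameter below are hypothetical placeholders, not the platform's actual API; the request is built but never sent:

```python
import urllib.request

API_BASE = "https://api.example.com/v1"  # hypothetical base URL, not the real API

def build_list_devices_request(token: str, page_size: int = 50) -> urllib.request.Request:
    """Construct (but do not send) an authenticated GET request
    for a hypothetical device-listing endpoint."""
    url = f"{API_BASE}/devices?pageSize={page_size}"
    return urllib.request.Request(url, headers={
        "Authorization": f"Bearer {token}",
        "Accept": "application/json",
    })

req = build_list_devices_request("example-token")
```

With an OpenAPI document available, client code like this is typically generated rather than written by hand.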
A command-line interface and the Platform UI to work with the platform without coding skills. Streamline CI/CD operations via the CLI and address the most common use cases via the GUI.
The platform's storage offers role-based access control and is built on cloud-native technologies such as Azure Blob Storage, allowing easy integration with your existing data ecosystem.
Customize the platform to your company branding and manage access via your corporate Active Directory services.
Route your data to our fully managed or custom Grafana instance to get insights into your machines immediately.
Route machine data via the platform to your OpenTelemetry endpoint and supercharge monitoring of your fleet.
Connect platform storage to third-party visualization and exploration tools such as Tableau, PowerBI, Looker, JupyterLab, and more.
Develop your code on a local machine, pack it into a Docker container, and upload it to the cloud registry. Assign the container to any IoT Gateway instance running anywhere in the world.
All your edge installations, workloads, and targeting rules are managed from a unified interface within the platform. You can configure your edge environment with a few clicks in the UI; the edge module of the IoT Gateway running on the local network takes care of the rest.
Containers are deployed automatically based on the configuration in the cloud. Once deployed, they are kept up to date and running. To update a container, upload a new version to the repository and configure that version for the target edge devices (IoT gateways).
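A hypothetical deployment manifest might pin a container version to a group of edge devices like this. The schema, registry URL, and tag values below are illustrative only, not the platform's actual configuration format:

```yaml
# Illustrative only: hypothetical edge deployment manifest.
workload: anomaly-detector
image: registry.example.com/acme/anomaly-detector:1.4.2  # new version uploaded to the repository
target:
  tags:                   # deploy to all IoT gateways carrying these tags
    - plant:berlin
    - role:edge-gateway
restartPolicy: always     # keep the container running and up to date
```

Rolling out an update then reduces to changing the image tag in the cloud configuration; the edge module reconciles each gateway against it.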
Define which data should be processed locally and which should be routed to the cloud platform. Create workflows that route data across multiple workloads with as many inputs and outputs as you need.
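Conceptually, such routing reduces to first-match-wins rules with a default destination. A simplified sketch (the rule shapes and destination names are illustrative, not the platform's routing model):

```python
def route(record: dict, rules) -> str:
    """Return the destination for a record: the first matching rule
    wins; records matching no rule fall through to the cloud."""
    for predicate, destination in rules:
        if predicate(record):
            return destination
    return "cloud"

# Example rule set: keep diagnostic data on the local network.
rules = [
    (lambda r: r.get("type") == "diagnostic", "local-analytics"),
]
```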
Strong identity verification is ensured using a zero-trust security model for communication between devices and the platform. TLS 1.2+ secures data in transit, and 256-bit AES protects data at rest. Users can authenticate through industry-standard identity providers, including MFA, or use existing accounts from third-party identity providers. Administrators are provided with tools for configuring granular authorization rules, encouraging the principle of least privilege.
The platform's design allows it to gracefully handle transient network issues, partial failures of the underlying infrastructure, and unexpected traffic spikes. All services providing management capabilities for users are deployed in multiple replicas for high availability. Updates and fixes are deployed with zero downtime. The platform architecture allows new features to be introduced with minimal risk of breaking existing functionality. All data is stored in multiple replicas, either within one data center or geographically distributed, for maximum durability.
Horizontal scalability is embedded into the platform's architecture. It can support a few devices as well as hundreds of thousands while keeping the same performance. Capacity is adjusted dynamically with regard to the current load. Components of the platform are scaled independently, which allows scaling the platform precisely for the customer's workload in a cost-efficient manner.
The platform is designed to fit seamlessly into your existing ecosystem thanks to open standards and best engineering practices, from industry-standard transport protocols, data formats, and storage to common application interfaces. This approach applies not only to externally facing APIs but also to the platform internals, which allows dedicated platform instances to be extended with your custom functionality.
All management APIs provide strong consistency guarantees. Sending data from devices into the platform is an asynchronous process, so there is a short delay between the platform's frontend accepting data from devices and the moment when the data is available in the cloud. However, we guarantee per-device in-order delivery, and the storage used for device data is strongly consistent. Cloud-to-device operations provide strong or eventual consistency, depending on the method chosen by the user.
"For us at Lely, the satisfaction of our customers is key. Thanks to the implementation of the Spotflow IIoT platform, we gained the opportunity to offer our customers additional products and services. As an example, our customers can now individualize the care of their animals and monitor their health condition 24/7 from anywhere in the world.
We were able to introduce our paid-subscription Horizon Farm management solution based on this data platform in more than 60 countries, absorbing huge amounts of real-time IoT data from the machines and animals of over 90% of our customers worldwide in a scalable and reliable way. We continue to develop and scale all our data applications built on top of this foundation."
"Our goal is to provide the best coffee solutions to our customers. Thanks to the implementation of the Spotflow IIoT platform, we were able to start collecting data from our machines around the world continuously. Now we have a perfect overview of the state and condition of all devices and the behavior of our customers.
We understand the workload of all appliances, customer preferences, and consumption of individual types of coffee, so we can constantly improve our services, respond to trends and predict consumption. Also, we can optimize the amount of roasted coffee delivered and eliminate losses caused by coffee spoilage."
"Spotflow IIoT platform ensures that data is gathered from more than 30,000 machines in a near real-time manner. The data is securely sent to and stored on centralized cloud-based storage, which makes it immediately accessible to our teams working on various innovative projects. Our key requirement was to make this process fully autonomous and robust because we couldn’t keep supporting it manually, given the size of our fleet.
On top of data ingestion, we can remotely configure and monitor all machines using simple tools with the ultimate goal of making the process fully automated in the future, such that the need to visit them physically is dramatically reduced. For example, we can detect disruptions to our data processing pipelines and resolve issues when they occur, if not before."
"I have been working with the platform for over three years on its applications in the field. I witnessed the creators unable to stop discussing the topics of distributed systems, clouds, and technologies regardless of place, occasion, and weather. :). Yet they consistently succeeded in breaking out of their bubble and, with curiosity, listened and discussed with me the "real world" problems of our customers. They were always looking for a solution that might not be the easiest but the right one.
Over the past years, the platform went through a period of dynamic development, which wasn't always easy, but now I can confidently say that it has matured enough. The team has many ideas for what to build next, yet the foundations are solid."