Our tutorial, titled “The Role of Event-Time Analysis Order in Data Streaming”, will be presented next week at the 14th ACM International Conference on Distributed and Event-Based Systems (DEBS). We have recorded the tutorial; the videos are available at the following links:
Part 1: https://youtu.be/SW_WS6ULsdY
Part 2: https://youtu.be/bq3ECNvPwOU
You can find the slides, as well as the code examples, here. The slides are also available on SlideShare (here).
The data streaming paradigm was introduced around the year 2000 to overcome the limitations of the traditional store-then-process paradigm found in relational databases (DBs). In contrast to DBs’ “first-the-data-then-the-query” approach, data streaming applications build on the “first-the-query-then-the-data” alternative. More concretely, data streaming applications do not rely on storage to first persist data and later query it; instead, they build on continuous single-pass analysis, in which incoming streams of data are processed on the fly and result in continuous streams of outputs.
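The “first-the-query-then-the-data” idea can be sketched in a few lines of Python (a made-up example, not from the tutorial): a standing query is applied to each record as it arrives, with no intermediate storage, and produces a continuous stream of outputs.

```python
def temperature_stream():
    """Stands in for an unbounded source; here, a finite sample."""
    for reading in [21.5, 22.0, 35.2, 21.8, 36.1]:
        yield reading

def standing_query(stream, threshold=30.0):
    """Single-pass, on-the-fly analysis: the query is defined first,
    then applied to each record as it arrives, never persisting the data."""
    for reading in stream:
        if reading > threshold:
            yield f"ALERT: {reading}"

alerts = list(standing_query(temperature_stream()))
# alerts == ["ALERT: 35.2", "ALERT: 36.1"]
```

In a store-then-process system, the same question would instead be answered by first inserting all readings into a table and later running a query over it.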
In contrast with traditional batch processing, data streaming applications require the user to reason about an additional dimension in the data: event-time. Numerous models have been proposed in the literature to reason about event-time, each with different guarantees and trade-offs. Since it is not always clear which of these models is appropriate for a particular application, this tutorial studies the relevant concepts and compares the available options. This study can be highly relevant for people working with data streaming applications, both researchers and industrial practitioners.
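As a minimal, hypothetical Python illustration of the event-time dimension (a general sketch, not one of the specific models compared in the tutorial): tuples carry their own timestamps and may arrive out of order, yet windows are formed by event time rather than arrival order.

```python
from collections import defaultdict

# Hypothetical readings as (event_time_seconds, value); the tuple (4, 30)
# arrives "late" with respect to event time.
arrivals = [(1, 10), (2, 20), (12, 5), (4, 30), (13, 7)]

def tumbling_event_time_windows(events, size):
    """Assign each event to the window of its *event* time, not arrival time."""
    windows = defaultdict(list)
    for t, v in events:
        windows[(t // size) * size].append(v)
    return dict(windows)

sums = {start: sum(vs)
        for start, vs in tumbling_event_time_windows(arrivals, 10).items()}
# sums == {0: 60, 10: 12}: the late event (4, 30) still lands in window [0, 10)
```

The models discussed in the tutorial differ, among other things, in how long such late events are waited for and with which guarantees results are emitted.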
Our paper DRIVEN: a framework for efficient Data Retrieval and clusterIng in VEhicular Networks has been accepted for publication in Elsevier’s Future Generation Computer Systems journal. This work is an extension of the conference publication:
Havers, Bastian, et al. “DRIVEN: a framework for efficient Data Retrieval and clusterIng in VEhicular Networks.” 2019 IEEE 35th International Conference on Data Engineering (ICDE). IEEE, 2019.
In this extended version, we build on and extend our framework, which leverages streaming-based Piecewise Linear Approximation (PLA) and clustering for edge-to-core analysis. We show that real-world raw data such as GPS, LiDAR and other vehicular signals can be compressed (within each vehicle, in a streaming fashion) to 5-35% of its original size, significantly reducing communication costs and overheads, and clustered (at the cloud, in a streaming fashion) with an accuracy loss below 10%.
The abstract follows:
The growing interest in data analysis applications for Cyber-Physical Systems stems from the large amounts of data such large distributed systems sense in a continuous fashion. A key research question in this context is how to jointly address the efficiency and effectiveness challenges of such data analysis applications.
DRIVEN proposes a way to jointly address these challenges for a data gathering and distance-based clustering tool in the context of vehicular networks. To cope with the limited communication bandwidth (compared to the sensed data volume) of vehicular networks and data transmission’s monetary costs, DRIVEN avoids gathering raw data from vehicles, but rather relies on a streaming-based and error-bounded approximation, through Piecewise Linear Approximation (PLA), to compress the volumes of gathered data. Moreover, a streaming-based approach is also used to cluster the collected data (once the latter is reconstructed from its PLA-approximated form). DRIVEN’s clustering algorithm leverages the inherent ordering of the spatial and temporal data being collected to perform clustering in an online fashion, while data is being retrieved. As we show, based on our prototype implementation using Apache Flink and thorough evaluation with real-world data such as GPS, LiDAR and other vehicular signals, the accuracy loss for the clustering performed on the gathered approximated data can be small (below 10 %), even when the raw data is compressed to 5-35 % of its original size, and the transferring of historical data itself can be completed in up to one-tenth of the duration observed when gathering raw data.
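To give a feel for the kind of error-bounded compression PLA provides, here is a simple greedy PLA sketch in Python. It shares the key guarantee (each reconstructed point deviates from the original by at most eps), but it is an illustrative scheme, not DRIVEN’s actual algorithm or implementation.

```python
def pla_compress(values, eps):
    """Greedy, error-bounded PLA sketch: each segment is the line through its
    first and last point, extended only while every intermediate point
    deviates from that line by at most eps."""
    segments, start = [], 0
    for end in range(1, len(values)):
        slope = (values[end] - values[start]) / (end - start)
        ok = all(abs(values[start] + slope * (i - start) - values[i]) <= eps
                 for i in range(start + 1, end))
        if not ok:
            segments.append((start, end - 1, values[start], values[end - 1]))
            start = end - 1
    segments.append((start, len(values) - 1, values[start], values[-1]))
    return segments

def pla_decompress(segments):
    """Reconstruct one value per original index by linear interpolation."""
    out = {}
    for s, e, vs, ve in segments:
        for i in range(s, e + 1):
            out[i] = vs if e == s else vs + (ve - vs) * (i - s) / (e - s)
    return [out[i] for i in sorted(out)]

signal = [0.0, 1.0, 2.1, 2.9, 4.0, 4.1, 4.0, 3.9]
segs = pla_compress(signal, eps=0.2)          # 2 segments instead of 8 points
approx = pla_decompress(segs)
assert all(abs(a - b) <= 0.2 for a, b in zip(signal, approx))
```

Only the segment endpoints need to be transmitted from the vehicle; the cloud reconstructs an approximation of the signal before clustering it, which is what makes the 5-35% compression ratios with bounded accuracy loss possible.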
The slides I used to present the paper “Automatic Translation of Spatio-Temporal Logics to Streaming-Based Monitoring Applications for IoT-Equipped Autonomous Agents” at the 2019 ACM/IFIP Middleware conference – 6th International Workshop on Middleware and Applications for the Internet of Things (M4IoT) are now available at SlideShare: https://www.slideshare.net/VincenzoGulisano/strel-streaming.
The abstract of the paper follows:
Environments in which IoT-equipped autonomous agents and humans tightly interact require safety rules that monitor the agents’ behaviors. In this context, expressive and human-comprehensible rules based on Spatio-Temporal Logics (STLs) are desirable because they are informative and easy to maintain. Unfortunately, STLs usually build on ad-hoc platforms implementing the logic semantics.
We tackle this limitation with a mechanism to transparently compile STL rules to monitoring applications composed of standard data streaming operators, thus opening up the use of high-throughput and low-latency Stream Processing Engines for monitoring rule compliance in realistic, data-rich IoT scenarios. Our contribution can favor a broader and faster adoption of STLs for IoT-equipped agent monitoring by separating the concerns of designing a rule from those of implementing its semantics. Together with our formal description of how to translate STLs to the streaming domain, we evaluate our prototype implementation based on Apache Flink, studying the effects of parameters such as time and space resolution on the monitoring performance.
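As a rough illustration of the compilation idea (not the paper’s actual translation), the following Python sketch expresses an STL-style rule of the hypothetical form “always, over the last W time units, speed <= LIMIT” as two standard streaming operators: a stateless map that evaluates the atomic predicate per tuple, and a stateful sliding-window aggregate that implements the temporal “always” as a conjunction over the window.

```python
from collections import deque

W, LIMIT = 3, 50.0  # made-up window width and speed limit

def map_predicate(events):
    """Stateless operator: evaluate the atomic proposition per tuple."""
    for t, speed in events:
        yield (t, speed <= LIMIT)

def window_always(tuples, width):
    """Stateful operator: 'always' over a sliding event-time window,
    i.e. the conjunction of the predicate over the last `width` time units."""
    window = deque()
    for t, ok in tuples:
        window.append((t, ok))
        while window and window[0][0] <= t - width:
            window.popleft()
        yield (t, all(v for _, v in window))

events = [(1, 40.0), (2, 45.0), (3, 60.0), (4, 42.0), (6, 41.0)]
verdicts = list(window_always(map_predicate(events), W))
# The violation at t=3 makes 'always' false until it slides out of the window:
# verdicts == [(1, True), (2, True), (3, False), (4, False), (6, True)]
```

Because both operators are standard (a map and a windowed aggregate), the resulting pipeline can run unchanged on an off-the-shelf Stream Processing Engine such as Flink, which is the separation of concerns the paper argues for.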