Journal paper about efficient data localization in fleets of connected vehicles accepted at IEEE Access

Our paper titled “Time- and Computation-Efficient Data Localization at Vehicular Networks’ Edge” (Romaric Duvignau, Bastian Havers, Vincenzo Gulisano and Marina Papatriantafilou) has been accepted for publication in the IEEE Access journal.

This work, conducted within the scope of the Vinnova project AutoSPADA (in collaboration with Volvo) and the VR project Haren, introduces a novel approach for localizing data in an evolving distributed system of connected vehicles. It extends our previous work titled “Querying Large Vehicular Networks: How to Balance On-Board Workload and Queries Response Time?” (Romaric Duvignau, Bastian Havers, Vincenzo Gulisano, Marina Papatriantafilou), published at the 22nd Intelligent Transportation Systems Conference (ITSC) in 2019.

The paper (open access) is available here: https://ieeexplore.ieee.org/abstract/document/9562541. The abstract follows:

As Vehicular Networks rely increasingly on sensed data to enhance functionality and safety, efficient and distributed data analysis is needed to effectively leverage new technologies in real-world applications. Considering the tens of GBs per hour sensed by modern connected vehicles, traditional analysis, based on global data accumulation, can rapidly exhaust the capacity of the underlying network, becoming increasingly costly, slow, or even infeasible.
Employing the edge processing paradigm, which aims at alleviating this drawback by leveraging vehicles’ computational power, we are the first to study how to localize, efficiently and distributively, relevant data in a vehicular fleet for analysis applications. This is achieved by appropriate methods to spread requests across the fleet, while efficiently balancing the time needed to identify relevant vehicles, and the computational overhead induced on the Vehicular Network.
We evaluate our techniques using two large sets of real-world data in a realistic environment where vehicles join or leave the fleet during the distributed data localization process. As we show, our algorithms are both efficient and configurable, outperforming the baseline algorithms with up to a 40× speedup and up to 3× lower computational overhead, while providing good estimates for the fraction of vehicles with relevant data and fairly spreading the workload over the fleet. All code as well as detailed instructions are available at https://github.com/dcs-chalmers/dataloc_vn.
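To give a flavor of the time/computation trade-off the abstract describes, here is a deliberately simplified sketch (not the paper's actual algorithms): relevant vehicles are localized by probing the fleet in waves, where the wave size trades the number of query rounds (time) against the number of on-board evaluations (computational overhead). The function names and parameters are illustrative assumptions, not taken from the paper.

```python
import random

def localize(fleet, is_relevant, batch=100, target=10):
    """Probe the fleet in waves of `batch` vehicles until `target`
    vehicles with relevant data are found (hypothetical sketch).

    fleet       -- iterable of vehicle ids
    is_relevant -- predicate evaluated on-board each probed vehicle
    Returns (relevant_ids, rounds, probes)."""
    remaining = list(fleet)
    random.shuffle(remaining)  # spread requests fairly over the fleet
    relevant, rounds, probes = [], 0, 0
    while remaining and len(relevant) < target:
        rounds += 1
        wave, remaining = remaining[:batch], remaining[batch:]
        probes += len(wave)  # each probe costs on-board computation
        relevant += [v for v in wave if is_relevant(v)]
    return relevant, rounds, probes
```

A larger `batch` finds the target with fewer rounds (lower response time) but probes more vehicles per round (higher fleet-wide workload), which is the balance the paper's algorithms tune.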


Paper accepted at the 47th International Conference on Very Large Data Bases (VLDB)!

Our paper titled “Ananke: A Streaming Framework for Live Forward Provenance” (Dimitris Palyvos-Giannas, Bastian Havers, Marina Papatriantafilou, Vincenzo Gulisano) has been accepted at the 47th International Conference on Very Large Data Bases (VLDB)!

This work, conducted within the scope of the VR project Haren and the Vinnova project AutoSPADA (in collaboration with Volvo), introduces the first streaming framework providing fine-grained forward provenance. As we explain in the paper, such a tool is valuable for distributed and parallel analysis in edge to cloud infrastructures, since it eases the retrieval of source data connected to analysis outcomes, while being able to discriminate whether each piece of source data could still contribute to future analysis outcomes or not.

Ananke is available on GitHub at the following link: https://github.com/dmpalyvos/ananke and is implemented on top of the Apache Flink stream processing engine (a framework used by Alibaba and Amazon Kinesis Data Analytics, among others). All the experiments we present in the paper can be reproduced with the scripts made available in our repository.

The abstract follows:

Data streaming enables online monitoring of large and continuous event streams in Cyber-Physical Systems (CPSs). In such scenarios, fine-grained backward provenance tools can connect streaming query results to the source data producing them, allowing analysts to study the dependency/causality of CPS events. While CPS monitoring commonly produces many events, backward provenance does not help prioritize event inspection since it does not specify if an event’s provenance could still contribute to future results. To cover this gap, we introduce Ananke, a framework to extend any fine-grained backward provenance tool and deliver a live bi-partite graph of fine-grained forward provenance. With Ananke, analysts can prioritize the analysis of provenance data based on whether such data is still potentially being processed by the monitoring queries. We prove our solution is correct, discuss multiple implementations, including one leveraging streaming APIs for parallel analysis, and show Ananke results in small overheads, close to those of existing tools for fine-grained backward provenance.
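The core idea of the abstract can be illustrated with a toy sketch (this is not Ananke's implementation, and the window model is an assumption made here for illustration): backward provenance maps each result to its contributing source tuples; inverting it yields the forward view, and a source tuple can be flagged as expired once the event-time watermark guarantees no future window can include it.

```python
from collections import defaultdict

WINDOW = 10  # assumed window size in event-time units

def forward_provenance(backward, watermark):
    """Invert backward provenance into a forward view and flag expiry.

    backward  -- list of (result_id, [source_ts, ...]) pairs
    watermark -- event-time up to which the input stream is complete
    Returns {source_ts: (result_ids, expired?)} where expired means the
    source can no longer contribute to any future result."""
    fwd = defaultdict(list)
    for result_id, sources in backward:
        for ts in sources:
            fwd[ts].append(result_id)
    # a source at time ts is expired once every window covering it has closed
    return {ts: (results, ts + WINDOW <= watermark)
            for ts, results in fwd.items()}
```

The expiry flag is what lets an analyst prioritize inspection: provenance marked expired is final, while unexpired sources may still feed results of the standing queries.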

Posted in Data Streaming, Research

YouTube tutorial: The Role of Event-Time Analysis Order in Data Streaming

Our tutorial, titled “The Role of Event-Time Analysis Order in Data Streaming”, will be presented next week at the 14th ACM International Conference on Distributed and Event-Based Systems (DEBS). We have recorded the tutorial, and you can find the videos at the following links:

Part 1: https://youtu.be/SW_WS6ULsdY

Part 2: https://youtu.be/bq3ECNvPwOU

You can find the slides, as well as the code examples, here. The slides are also available on SlideShare (here).

Abstract:

The data streaming paradigm was introduced around the year 2000 to overcome the limitations of the traditional store-then-process paradigm found in relational databases (DBs). In contrast to DBs’ “first-the-data-then-the-query” approach, data streaming applications build on the “first-the-query-then-the-data” alternative. More concretely, data streaming applications do not rely on storage to first persist data and later query it; rather, they build on continuous single-pass analysis in which incoming streams of data are processed on the fly and result in continuous streams of outputs.
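The “first-the-query-then-the-data” idea can be sketched in a few lines (a generic illustration, not tied to any particular streaming engine): the query is a standing, single-pass computation, and outputs stream out as tuples stream in, with nothing persisted first.

```python
def running_average(stream):
    """A continuous query: emit the average of all values seen so far
    each time a new tuple arrives."""
    total, count = 0.0, 0
    for value in stream:        # single pass -- no data is stored first
        total += value
        count += 1
        yield total / count     # a continuous stream of outputs

# e.g. list(running_average([4, 8, 6])) -> [4.0, 6.0, 6.0]
```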

In contrast with traditional batch processing, data streaming applications require the user to reason about an additional dimension in the data: event-time. Numerous models have been proposed in the literature to reason about event-time, each with different guarantees and trade-offs. Since it is not always clear which of these models is appropriate for a particular application, this tutorial studies the relevant concepts and compares the available options. This study can be highly relevant for people working with data streaming applications, both researchers and industrial practitioners.
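A minimal example of the event-time dimension discussed above (window size and tuple layout are assumptions made here for illustration): aggregating by event-time windows gives the same answer regardless of the order in which tuples arrive, whereas any arrival-order computation is sensitive to out-of-order (late) tuples.

```python
from collections import defaultdict

# Tuples are (event_time, value); the tuple (3, 2) arrives late.
stream = [(1, 5), (12, 7), (3, 2), (15, 1)]

def by_event_time(tuples, size=10):
    """Sum values per event-time window of width `size`."""
    windows = defaultdict(int)
    for t, v in tuples:
        windows[t // size] += v  # assign by when the event happened
    return dict(windows)

def by_arrival_prefix(tuples, n):
    """Sum of the first n tuples by arrival order (order-sensitive)."""
    return sum(v for _, v in tuples[:n])
```

Here `by_event_time(stream)` places event-times 1 and 3 in window 0 and event-times 12 and 15 in window 1, no matter when each tuple arrived; that stability under reordering is what the various event-time models in the tutorial provide, under different guarantees and trade-offs.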

Posted in Data Streaming, Presentation, Research, Teaching