Paper accepted at the 47th International Conference on Very Large Data Bases (VLDB)!

Our paper titled “Ananke: A Streaming Framework for Live Forward Provenance” (Dimitris Palyvos-Giannas, Bastian Havers, Marina Papatriantafilou, Vincenzo Gulisano) has been accepted at the 47th International Conference on Very Large Data Bases (VLDB)!

This work, conducted within the scope of the VR project Haren and the Vinnova project AutoSPADA (in collaboration with Volvo), introduces the first streaming framework providing fine-grained forward provenance. As we explain in the paper, such a tool is valuable for distributed and parallel analysis in edge-to-cloud infrastructures: it eases the retrieval of the source data connected to analysis outcomes, while also indicating whether each piece of source data could still contribute to future analysis outcomes.

Ananke is available on GitHub at the following link: https://github.com/dmpalyvos/ananke and is implemented on top of the Apache Flink stream processing engine (a framework used by Alibaba and Amazon Kinesis Data Analytics, among others). All the experiments we present in the paper can be reproduced with the scripts made available in the repository.

The abstract follows:

Data streaming enables online monitoring of large and continuous event streams in Cyber-Physical Systems (CPSs). In such scenarios, fine-grained backward provenance tools can connect streaming query results to the source data producing them, allowing analysts to study the dependency/causality of CPS events. While CPS monitoring commonly produces many events, backward provenance does not help prioritize event inspection, since it does not specify whether an event’s provenance could still contribute to future results. To cover this gap, we introduce Ananke, a framework to extend any fine-grained backward provenance tool and deliver a live bipartite graph of fine-grained forward provenance. With Ananke, analysts can prioritize the analysis of provenance data based on whether such data is still potentially being processed by the monitoring queries. We prove our solution is correct, discuss multiple implementations, including one leveraging streaming APIs for parallel analysis, and show that Ananke results in small overheads, close to those of existing tools for fine-grained backward provenance.
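To give a flavor of the core idea, here is a minimal, hypothetical Python sketch (not Ananke's actual implementation, which runs on Apache Flink) of a live bipartite provenance graph: one side holds source tuples, the other holds query results, and a source tuple is flagged as expired once the stream time (watermark) guarantees it can no longer contribute to any future result, e.g., because its window has closed. The class and field names are illustrative assumptions, not taken from the paper or the repository.

```python
from dataclasses import dataclass, field

@dataclass
class SourceTuple:
    ts: float                # event timestamp
    payload: str
    expired: bool = False    # False while the tuple may still feed future results

@dataclass
class Result:
    ts: float
    value: str
    provenance: list         # the SourceTuple objects this result was derived from

class ProvenanceGraph:
    """Hypothetical live bipartite graph: source tuples <-> query results."""

    def __init__(self, window_size: float):
        self.window_size = window_size
        self.sources: list[SourceTuple] = []
        self.results: list[Result] = []

    def add_result(self, result: Result) -> None:
        # Record the result and link it to its source tuples (backward provenance).
        self.results.append(result)
        for src in result.provenance:
            if src not in self.sources:
                self.sources.append(src)

    def advance_time(self, watermark: float) -> None:
        # Forward provenance: once the watermark has passed a source tuple's
        # window, it can no longer contribute to future results, so an analyst
        # may deprioritize inspecting it.
        for src in self.sources:
            if src.ts + self.window_size <= watermark:
                src.expired = True
```

For instance, with a 10-second window, a tuple at `ts=0.0` is flagged as expired once the watermark reaches 10.0, while a tuple at `ts=8.0` remains live:

```python
g = ProvenanceGraph(window_size=10.0)
old = SourceTuple(ts=0.0, payload="sensor-a")
new = SourceTuple(ts=8.0, payload="sensor-b")
g.add_result(Result(ts=5.0, value="avg=3", provenance=[old, new]))
g.advance_time(12.0)
# old.expired is True, new.expired is False
```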

Posted in Data Streaming, Research
