Our paper about Lachesis, a framework for customizing OS-level thread scheduling, has been accepted at the ACM/IFIP Middleware Conference!

Our paper titled “Lachesis: A Middleware for Customizing OS Scheduling of Stream Processing Queries” (Dimitris Palyvos-Giannas, Gabriele Mencagli, Marina Papatriantafilou, Vincenzo Gulisano) has been accepted at the 22nd ACM/IFIP International Middleware Conference.

This work, in collaboration with Gabriele Mencagli from the University of Pisa, studies how the scheduling of kernel threads can be orchestrated, through mechanisms such as nice and cgroups, to customize the execution of stream processing applications. Lachesis makes a key novel contribution: the ability to customize the scheduling goals of streaming applications without altering the architecture of the Stream Processing Engine running them!
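
For readers curious about the underlying mechanisms, here is a minimal Python sketch of the kind of OS-level knobs such a middleware can turn. It is our own illustration under stated assumptions (a Linux host with cgroup v2 mounted at /sys/fs/cgroup and sufficient permissions), not Lachesis’ actual implementation, and the cgroup name and PID mentioned below are placeholders.

```python
import os

# Illustrative sketch only (not Lachesis itself): adjusting the OS scheduler
# from user space via nice and cgroup v2, as a standalone middleware could do.

def set_niceness(pid: int, niceness: int) -> None:
    """Demote or (with privileges) promote a process/thread via its nice value."""
    os.setpriority(os.PRIO_PROCESS, pid, niceness)

def set_cpu_weight(cgroup: str, weight: int) -> None:
    """Change the relative CPU share of a cgroup (cgroup v2 'cpu.weight', 1-10000)."""
    with open(f"/sys/fs/cgroup/{cgroup}/cpu.weight", "w") as f:
        f.write(str(weight))

if __name__ == "__main__":
    # Demote the current process by 5 nice levels (allowed without privileges).
    set_niceness(os.getpid(), 5)
    # With root privileges, a scheduler could instead boost a latency-critical
    # query's cgroup, e.g.: set_cpu_weight("spe_query_1", 500)
```

The key point, as in Lachesis, is that nothing in the SPE’s code needs to change: the priorities of its worker threads are adjusted from the outside.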

The abstract follows:
Data streaming applications in Cyber-Physical Systems enable high-throughput, low-latency transformations of raw data into value. The performance of such applications, run by Stream Processing Engines (SPEs), can be boosted through custom CPU scheduling. Previous schedulers in the literature require alterations to SPEs to control the scheduling through user-level threads. While such alterations allow for fine-grained control, they hinder the adoption of such schedulers due to the high implementation cost and potential limitations in application semantics (e.g., blocking I/O).
Motivated by the above, we explore the feasibility and benefits of custom scheduling without alterations to SPEs but, instead, by orchestrating the OS scheduler (e.g., using nice and cgroup) to enforce the scheduling goals. We propose Lachesis, a standalone scheduling middleware, decoupled from any specific SPE, that can schedule multiple streaming applications, run in one or many nodes, and possibly multiple SPEs. Our evaluation with real-world and synthetic workloads, several SPEs and hardware setups, shows its benefits over default OS scheduling and other state-of-the-art schedulers: up to 75% higher throughput, and 1130x lower average latency once such SPEs reach their peak processing capacity.

Posted in Data Streaming, Research

Journal paper about efficient data localization in fleets of connected vehicles accepted at IEEE Access

Our paper titled “Time- and Computation-Efficient Data Localization at Vehicular Networks’ Edge” (Romaric Duvignau, Bastian Havers, Vincenzo Gulisano and Marina Papatriantafilou) has been accepted in the IEEE Access journal.

This work, conducted within the scope of the Vinnova project AutoSPADA (in collaboration with Volvo) and the VR project Haren, introduces a novel approach for localizing data in an evolving distributed system of connected vehicles, and extends our previous work titled “Querying Large Vehicular Networks: How to Balance On-Board Workload and Queries Response Time?” (Romaric Duvignau, Bastian Havers, Vincenzo Gulisano, Marina Papatriantafilou) which was published at the 22nd Intelligent Transportation Systems Conference (ITSC) in 2019.

The paper (open access) is available here: https://ieeexplore.ieee.org/abstract/document/9562541. The abstract follows:

As Vehicular Networks rely increasingly on sensed data to enhance functionality and safety, efficient and distributed data analysis is needed to effectively leverage new technologies in real-world applications. Considering the tens of GBs per hour sensed by modern connected vehicles, traditional analysis, based on global data accumulation, can rapidly exhaust the capacity of the underlying network, becoming increasingly costly, slow, or even infeasible.
Employing the edge processing paradigm, which aims at alleviating this drawback by leveraging vehicles’ computational power, we are the first to study how to localize, efficiently and distributively, relevant data in a vehicular fleet for analysis applications. This is achieved by appropriate methods to spread requests across the fleet, while efficiently balancing the time needed to identify relevant vehicles, and the computational overhead induced on the Vehicular Network.
We evaluate our techniques using two large sets of real-world data in a realistic environment where vehicles join or leave the fleet during the distributed data localization process. As we show, our algorithms are both efficient and configurable, outperforming the baseline algorithms by up to a 40× speedup while reducing computational overhead by up to 3×, while providing good estimates for the fraction of vehicles with relevant data and fairly spreading the workload over the fleet. All code as well as detailed instructions are available at https://github.com/dcs-chalmers/dataloc_vn.
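
To give a flavour of the request-spreading idea in simplified form (this is our own illustrative sketch with placeholder names, not the algorithms evaluated in the paper), the following Python snippet probes the fleet in random batches until a target number of vehicles holding relevant data has been located, making explicit the trade-off between localization time and the on-board workload induced on the fleet:

```python
import random

# Illustrative only: a round-based request-spreading loop, not the paper's algorithm.
# 'fleet', 'has_relevant_data', 'target_hits' and 'batch_size' are placeholder names.

def localize(fleet, has_relevant_data, target_hits, batch_size=50):
    """Probe random vehicles in batches; return located vehicles and #requests sent."""
    remaining = list(fleet)
    random.shuffle(remaining)
    located, requests = [], 0
    while remaining and len(located) < target_hits:
        batch, remaining = remaining[:batch_size], remaining[batch_size:]
        requests += len(batch)                                   # on-board workload induced
        located += [v for v in batch if has_relevant_data(v)]    # vehicle-side relevance check
    return located, requests

# Toy usage: a fleet of 10,000 vehicles where roughly 10% hold relevant data.
fleet = range(10_000)
hits, sent = localize(fleet, lambda v: v % 10 == 0, target_hits=100)
print(f"located {len(hits)} vehicles with {sent} requests")
```

Larger batches locate relevant vehicles in fewer rounds but issue more on-board queries; the paper studies how to balance this trade-off while also estimating the fraction of relevant vehicles and spreading the workload fairly.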

Posted in Uncategorized

Paper accepted at the 47th International Conference on Very Large Data Bases (VLDB)!

Our paper titled “Ananke: A Streaming Framework for Live Forward Provenance” (Dimitris Palyvos-Giannas, Bastian Havers, Marina Papatriantafilou, Vincenzo Gulisano) has been accepted at the 47th International Conference on Very Large Data Bases (VLDB)!

This work, conducted within the scope of the VR project Haren and the Vinnova project AutoSPADA (in collaboration with Volvo), introduces the first streaming framework providing fine-grained forward provenance. As we explain in the paper, such a tool is valuable for distributed and parallel analysis in edge-to-cloud infrastructures, since it eases the retrieval of the source data connected to analysis outcomes, while discriminating whether each piece of source data could still contribute to future analysis outcomes or not.

Ananke is available on GitHub at https://github.com/dmpalyvos/ananke and is implemented on top of the Apache Flink stream processing engine (a framework used by Alibaba and Amazon Kinesis Data Analytics, among others). All the experiments presented in the paper can be reproduced with the scripts available in our repository.
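
To illustrate the idea of live forward provenance in a framework-agnostic way (this is our own minimal sketch, not Ananke’s Flink implementation; the class, names, and the watermark-based expiration are simplifying assumptions), the snippet below maintains a bipartite graph from source tuples to the results they contributed to, and flags each source tuple once it can no longer contribute to future results:

```python
from collections import defaultdict

# Minimal sketch of live forward provenance (not Ananke's implementation):
# a bipartite graph linking source tuples to derived results, with an
# 'expired' flag approximated here by an event-time watermark.

class ForwardProvenance:
    def __init__(self):
        self.edges = defaultdict(set)   # source tuple id -> ids of derived results
        self.source_ts = {}             # source tuple id -> event timestamp

    def record_source(self, src_id, timestamp):
        self.source_ts[src_id] = timestamp

    def record_result(self, result_id, contributing_src_ids):
        for src_id in contributing_src_ids:
            self.edges[src_id].add(result_id)

    def live_view(self, watermark):
        """Return (source, results, expired) triples; 'expired' means the source
        tuple is older than the watermark and cannot feed any future result."""
        return [(s, sorted(self.edges[s]), ts <= watermark)
                for s, ts in self.source_ts.items()]

# Toy usage: one source tuple contributing to two results, then expiring.
fp = ForwardProvenance()
fp.record_source("s1", timestamp=10)
fp.record_result("r1", ["s1"])
fp.record_result("r2", ["s1"])
print(fp.live_view(watermark=20))   # [('s1', ['r1', 'r2'], True)]
```

In Ananke itself this bookkeeping is performed inside the streaming framework, so the forward-provenance graph is produced live alongside the monitored queries.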

The abstract follows:

Data streaming enables online monitoring of large and continuous event streams in Cyber-Physical Systems (CPSs). In such scenarios, fine-grained backward provenance tools can connect streaming query results to the source data producing them, allowing analysts to study the dependency/causality of CPS events. While CPS monitoring commonly produces many events, backward provenance does not help prioritize event inspection since it does not specify if an event’s provenance could still contribute to future results. To cover this gap, we introduce Ananke, a framework to extend any fine-grained backward provenance tool and deliver a live bipartite graph of fine-grained forward provenance. With Ananke, analysts can prioritize the analysis of provenance data based on whether such data is still potentially being processed by the monitoring queries. We prove our solution is correct, discuss multiple implementations, including one leveraging streaming APIs for parallel analysis, and show Ananke results in small overheads, close to those of existing tools for fine-grained backward provenance.

Posted in Data Streaming, Research