STRETCH accepted @ IEEE TPDS!

Our paper titled “STRETCH: Virtual Shared-Nothing Parallelism for Scalable and Elastic Stream Processing” has been accepted at the IEEE Transactions on Parallel and Distributed Systems (TPDS) journal!

STRETCH is the result of many years of collaboration with several researchers. As we discuss in the paper, STRETCH defines a general stateful streaming operator that encapsulates and extends the semantics of the stateful operators commonly found in Stream Processing Engines (Aggregates and Joins). Furthermore, it takes advantage of shared memory to boost the scale-up of such an operator and to provide ultra-fast, state-transfer-free elastic reconfigurations.

The full abstract follows:

Stream processing applications extract value from raw data through Directed Acyclic Graphs of data analysis tasks. Shared-nothing (SN) parallelism is the de facto standard to scale stream processing applications. Given an application, SN parallelism instantiates several copies of each analysis task, making each instance responsible for a dedicated portion of the overall analysis, and relies on dedicated queues to exchange data among connected instances. On the one hand, SN parallelism can scale the execution of applications both up and out since threads can run task instances within and across processes/nodes. On the other hand, its lack of sharing can cause unnecessary overheads and hinder the scaling up when threads operate on data that could be jointly accessed in shared memory. This trade-off motivated us to study a way for stream processing applications to leverage shared memory and boost the scale up (before the scale out) while adhering to the widely adopted, SN-based APIs for stream processing applications.
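To make the SN model concrete, here is a minimal sketch (our own illustration, not STRETCH's code) of how SN parallelism routes tuples: each task instance owns a dedicated queue, and tuples are hash-partitioned by key so that every instance handles a disjoint portion of the state:

```python
from queue import Queue

NUM_INSTANCES = 4
# One dedicated queue per task instance: no state is shared across them.
queues = [Queue() for _ in range(NUM_INSTANCES)]

def route(key, payload):
    """Hash-partition by key: each instance exclusively owns its key range."""
    i = hash(key) % NUM_INSTANCES
    queues[i].put((key, payload))

route("sensor-42", 17.3)
route("sensor-42", 18.1)  # same key -> same instance, so no shared state is needed
```

Because all tuples with the same key reach the same instance, per-key state (e.g., a window aggregate) never needs cross-thread synchronization; the cost, as the abstract notes, is that data which could be jointly accessed in shared memory is instead copied through queues.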

We introduce STRETCH, a framework that maximizes the scale up and offers instantaneous elastic reconfigurations (without state transfer) for stream processing applications. We propose the concept of Virtual Shared-Nothing (VSN) parallelism and elasticity and provide formal definitions and correctness proofs for the semantics of the analysis tasks supported by STRETCH, showing they extend the ones found in common Stream Processing Engines. We also provide a fully implemented prototype and show that STRETCH’s performance exceeds that of state-of-the-art frameworks such as Apache Flink and offers, to the best of our knowledge, unprecedented ultra-fast reconfigurations, taking less than 40 ms even when provisioning tens of new task instances.

Posted in Concurrent Data Structures, Data Streaming, Research, ScaleGate

Our paper “IP-LSH-DBSCAN: Integrated Parallel Density-Based Clustering through Locality-Sensitive Hashing” has been accepted at Euro-Par 2022

Our paper titled “IP-LSH-DBSCAN: Integrated Parallel Density-Based Clustering through Locality-Sensitive Hashing” (Amir Keramatian, Vincenzo Gulisano, Marina Papatriantafilou, and Philippas Tsigas) has been accepted at Euro-Par 2022.

The abstract follows:

Locality-sensitive hashing (LSH) is an established method for fast data indexing and approximate similarity search, with useful parallelism properties. Although indexes and similarity measures are key for data clustering, little has been investigated regarding the benefits of LSH for this problem. Our proposition is that LSH can be extremely beneficial for parallelizing high-dimensional density-based clustering, e.g., DBSCAN, a versatile method able to detect clusters of different shapes and sizes.

We contribute to filling the gap between the advancements in LSH and density-based clustering. In particular, we show how approximate DBSCAN clustering can be fused into the process of creating an LSH index structure and, through data parallelization and fine-grained synchronization, can efficiently utilize the available computing capacity, as needed for massive datasets. The resulting method, IP.LSH.DBSCAN, can effectively support a wide range of applications with diverse distance functions, as well as data distributions and dimensionality. Furthermore, IP.LSH.DBSCAN facilitates adjustable accuracy through its LSH parameters. We analyse its properties and also evaluate our prototype implementation on a 36-core machine with 2-way hyper-threading on massive datasets with various numbers of dimensions. Our results show that IP.LSH.DBSCAN effectively complements established state-of-the-art methods with up to several orders of magnitude of speed-up on higher-dimensional datasets, with tunable high clustering accuracy.

Posted in Uncategorized

Our paper about Lachesis, a framework to customize the OS scheduling of threads, has been accepted at the ACM/IFIP Middleware Conference!

Our paper titled “Lachesis: A Middleware for Customizing OS Scheduling of Stream Processing Queries” (Dimitris Palyvos-Giannas, Gabriele Mencagli, Marina Papatriantafilou, Vincenzo Gulisano) has been accepted at the 22nd ACM/IFIP International Middleware Conference.

This work, in collaboration with Gabriele Mencagli, from the University of Pisa, studies how the scheduling of kernel threads can be orchestrated, through mechanisms such as nice and cgroups, to customize the execution of stream processing applications. Lachesis makes a novel key contribution: the ability to customize the scheduling goals of streaming applications without altering the architecture of the Stream Processing Engine running them!

The abstract follows:

Data streaming applications in Cyber-Physical Systems enable high-throughput, low-latency transformations of raw data into value. The performance of such applications, run by Stream Processing Engines (SPEs), can be boosted through custom CPU scheduling. Previous schedulers in the literature require alterations to SPEs to control the scheduling through user-level threads. While such alterations allow for fine-grained control, they hinder the adoption of such schedulers due to the high implementation cost and potential limitations in application semantics (e.g., blocking I/O).

Motivated by the above, we explore the feasibility and benefits of custom scheduling without alterations to SPEs but, instead, by orchestrating the OS scheduler (e.g., using nice and cgroup) to enforce the scheduling goals. We propose Lachesis, a standalone scheduling middleware, decoupled from any specific SPE, that can schedule multiple streaming applications, run in one or many nodes, and possibly multiple SPEs. Our evaluation with real-world and synthetic workloads, several SPEs and hardware setups, shows its benefits over default OS scheduling and other state-of-the-art schedulers: up to 75% higher throughput, and 1130x lower average latency once such SPEs reach their peak processing capacity.

Posted in Data Streaming, Research