Erebus accepted at VLDB!

Our paper, titled “Erebus: Explaining the Outputs of Data Streaming Queries” and written by Dimitris Palyvos-Giannas (Chalmers), Katerina Tzompanaki (CY Cergy Paris University), Marina Papatriantafilou (Chalmers), and Vincenzo Gulisano (Chalmers), has been accepted at the 49th International Conference on Very Large Data Bases (VLDB)!

The paper introduces the novel concept of, and the theoretical foundations for, why-not provenance in the context of stream processing, supporting analysts in understanding why some expected results are not observed in the output of their streaming applications.

The abstract follows:

In data streaming, why-provenance can explain why a given outcome is observed but offers no help in understanding why an expected outcome is missing. Explaining missing answers has been addressed in DBMSs, but these solutions are not directly applicable to the streaming setting, because of the extra challenges posed by limited storage and by the unbounded nature of data streams.
With our framework, Erebus, we tackle the unaddressed challenges behind explaining missing answers in streaming applications. Erebus allows users to define expectations about the results of a query, verifies at runtime whether such expectations hold, and provides explanations when expected and observed outcomes diverge (missing answers). To the best of our knowledge, Erebus is the first such solution in data streaming. Our thorough evaluation on real data shows that Erebus can explain the (missing) answers with small overheads, on both low- and high-end devices, even when large portions of the processed data are part of such explanations.
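To make the idea concrete, here is a minimal sketch of an expectation checked at runtime against a query’s output, with dropped tuples retained as a simplistic why-not explanation. All names below are hypothetical illustrations of the concept; Erebus’s actual model and API are defined in the paper.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of a runtime-checked expectation over a stream.
// Hypothetical names throughout; not Erebus's actual API.
public class WhyNotSketch {

    record Reading(String sensor, double value) {}

    public static void main(String[] args) {
        List<Reading> input = List.of(
                new Reading("s1", 7.5),
                new Reading("s2", 2.1),   // will be filtered out below
                new Reading("s1", 9.0));

        double threshold = 5.0;           // the query: keep readings above threshold
        List<Reading> output = new ArrayList<>();
        List<Reading> dropped = new ArrayList<>(); // provenance of eliminated tuples

        for (Reading r : input) {
            if (r.value() > threshold) output.add(r);
            else dropped.add(r);
        }

        // Expectation: "sensor s2 appears in the output".
        String expectedSensor = "s2";
        boolean holds = output.stream().anyMatch(r -> r.sensor().equals(expectedSensor));

        if (!holds) {
            // Why-not explanation: which input tuples could have produced the
            // expected answer, and which operator eliminated them.
            dropped.stream()
                   .filter(r -> r.sensor().equals(expectedSensor))
                   .forEach(r -> System.out.println("Missing answer for " + expectedSensor
                           + ": " + r + " was dropped by filter value > " + threshold));
        }
    }
}
```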

STRETCH accepted @ IEEE TPDS!

Our paper titled “STRETCH: Virtual Shared-Nothing Parallelism for Scalable and Elastic Stream Processing” has been accepted at the IEEE Transactions on Parallel and Distributed Systems (TPDS) journal!

STRETCH is the result of many years of collaboration with several researchers. As we discuss in the paper, STRETCH defines a general stateful streaming operator that encapsulates and extends the semantics of the stateful operators commonly found in Stream Processing Engines (Aggregates and Joins). Furthermore, it takes advantage of shared memory to boost the scaling up of such an operator and to provide ultra-fast, state-transfer-free elastic reconfigurations.

The full abstract follows:

Stream processing applications extract value from raw data through Directed Acyclic Graphs of data analysis tasks. Shared-nothing (SN) parallelism is the de facto standard for scaling stream processing applications. Given an application, SN parallelism instantiates several copies of each analysis task, makes each instance responsible for a dedicated portion of the overall analysis, and relies on dedicated queues to exchange data among connected instances. On the one hand, SN parallelism can scale the execution of applications both up and out, since threads can run task instances within and across processes/nodes. On the other hand, its lack of sharing can cause unnecessary overheads and hinder scaling up when threads operate on data that could be jointly accessed in shared memory. This trade-off motivated us to study a way for stream processing applications to leverage shared memory and boost the scale-up (before the scale-out) while adhering to the widely adopted SN-based APIs for stream processing applications.

We introduce STRETCH, a framework that maximizes the scale-up and offers instantaneous elastic reconfigurations (without state transfer) for stream processing applications. We propose the concept of Virtual Shared-Nothing (VSN) parallelism and elasticity and provide formal definitions and correctness proofs for the semantics of the analysis tasks supported by STRETCH, showing that they extend the ones found in common Stream Processing Engines. We also provide a fully implemented prototype and show that STRETCH’s performance exceeds that of state-of-the-art frameworks such as Apache Flink and offers, to the best of our knowledge, unprecedented ultra-fast reconfigurations, taking less than 40 ms even when provisioning tens of new task instances.
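A rough illustration of the virtual shared-nothing idea (our reading of the abstract, not STRETCH’s actual design): when the operator’s state lives in memory shared by all worker threads, provisioning more workers only changes who does the work; no state has to be moved. A minimal Java sketch:

```java
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.LongAdder;

// Rough illustration of shared-state (VSN-style) parallelism: per-key counts
// live in shared memory, so any worker may update any key safely and
// re-provisioning workers never transfers state. Not STRETCH's actual design.
public class VsnSketch {

    // Shared aggregate state, safely updated by all workers.
    static final ConcurrentHashMap<String, LongAdder> state = new ConcurrentHashMap<>();

    static void process(List<String> keys) {
        for (String k : keys) {
            state.computeIfAbsent(k, x -> new LongAdder()).increment();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        List<String> batch = List.of("a", "b", "a", "c", "b", "a");

        // Phase 1: two workers share the batch.
        runWorkers(2, batch);
        // "Elastic reconfiguration": four workers re-process the batch; they
        // join the same shared state, so nothing is transferred beforehand.
        runWorkers(4, batch);

        state.forEach((k, v) -> System.out.println(k + " -> " + v.sum()));
    }

    static void runWorkers(int n, List<String> batch) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(n);
        int chunk = (batch.size() + n - 1) / n;
        for (int i = 0; i < batch.size(); i += chunk) {
            List<String> slice = batch.subList(i, Math.min(i + chunk, batch.size()));
            pool.execute(() -> process(slice));
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
    }
}
```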

Our paper “IP-LSH-DBSCAN: Integrated Parallel Density-Based Clustering through Locality-Sensitive Hashing” has been accepted at Euro-Par 2022

Our paper titled “IP-LSH-DBSCAN: Integrated Parallel Density-Based Clustering through Locality-Sensitive Hashing” (Amir Keramatian, Vincenzo Gulisano, Marina Papatriantafilou, and Philippas Tsigas) has been accepted at Euro-Par 2022.

The abstract follows:

Locality-sensitive hashing (LSH) is an established method for fast data indexing and approximate similarity search, with useful parallelism properties. Although indexes and similarity measures are key for data clustering, little work has investigated the benefits of LSH for this problem. Our proposition is that LSH can be extremely beneficial for parallelizing high-dimensional density-based clustering, e.g., DBSCAN, a versatile method able to detect clusters of different shapes and sizes.

We contribute to filling the gap between the advancements in LSH and density-based clustering. In particular, we show how approximate DBSCAN clustering can be fused into the process of creating an LSH index structure and, through data parallelization and fine-grained synchronization, how it can efficiently utilize the available computing capacity, as needed for massive datasets. The resulting method, IP.LSH.DBSCAN, can effectively support a wide range of applications with diverse distance functions, data distributions, and dimensionality. Furthermore, IP.LSH.DBSCAN facilitates adjustable accuracy through its LSH parameters. We analyse its properties and evaluate our prototype implementation on a 36-core machine with 2-way hyper-threading, on massive datasets with various numbers of dimensions. Our results show that IP.LSH.DBSCAN effectively complements established state-of-the-art methods, with speed-ups of up to several orders of magnitude on higher-dimensional datasets and tunable, high clustering accuracy.
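A toy sketch of the fusion idea as we understand it from the abstract (not the paper’s actual algorithm): points that hash to the same LSH bucket become each other’s neighbour candidates, so density checks and cluster merging can piggyback on index construction. The random-projection hash and union-find below are standard building blocks, chosen here purely for illustration:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Random;

// Toy fusion of LSH bucketing with density-based merging; an illustration of
// the idea, not IP.LSH.DBSCAN's actual algorithm. Dense buckets (>= minPts
// members) seed clusters, and a union-find merges their members on the fly.
public class LshClusterSketch {

    static int find(int[] parent, int x) {
        while (parent[x] != x) { parent[x] = parent[parent[x]]; x = parent[x]; }
        return x;
    }

    static void union(int[] parent, int a, int b) {
        parent[find(parent, a)] = find(parent, b);
    }

    public static void main(String[] args) {
        double[][] pts = { {0.10, 0.20}, {0.15, 0.22}, {0.12, 0.18},  // dense region
                           {5.00, 5.10}, {5.05, 5.00}, {5.10, 5.20},  // dense region
                           {9.90, 0.00} };                            // isolated point
        int minPts = 3, tables = 4;
        double w = 0.5;                  // bucket width, playing the role of eps
        Random rnd = new Random(42);
        int[] parent = new int[pts.length];
        Arrays.setAll(parent, i -> i);

        for (int t = 0; t < tables; t++) {
            // One random-projection hash: h(p) = floor((p . a + b) / w).
            double[] a = { rnd.nextGaussian(), rnd.nextGaussian() };
            double b = rnd.nextDouble() * w;
            Map<Long, List<Integer>> buckets = new HashMap<>();
            for (int i = 0; i < pts.length; i++) {
                long h = (long) Math.floor((pts[i][0] * a[0] + pts[i][1] * a[1] + b) / w);
                buckets.computeIfAbsent(h, k -> new ArrayList<>()).add(i);
            }
            // Clustering fuses with index construction: dense buckets merge
            // their members as soon as the bucket is filled.
            for (List<Integer> members : buckets.values()) {
                if (members.size() >= minPts) {
                    for (int i = 1; i < members.size(); i++) {
                        union(parent, members.get(0), members.get(i));
                    }
                }
            }
        }
        for (int i = 0; i < pts.length; i++) {
            System.out.println(Arrays.toString(pts[i]) + " -> cluster " + find(parent, i));
        }
    }
}
```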
