2 papers and 1 poster accepted at the ACM International Conference on Distributed Event-Based Systems (DEBS) 2019!

We got two papers and one poster accepted at the ACM International Conference on Distributed Event-Based Systems (DEBS)!

Our two papers are:
STRETCH: Scalable and Elastic Deterministic Streaming Analysis with Virtual Shared-Nothing Parallelism (Hannaneh Najdataei, Yiannis Nikolakopoulos, Marina Papatriantafilou, Philippas Tsigas, Vincenzo Gulisano)
Haren: A Framework for Ad-Hoc Thread Scheduling Policies for Data Streaming Applications (Dimitris Palyvos-Giannas, Vincenzo Gulisano, Marina Papatriantafilou)

while the poster is:
Mimir – Streaming Operators Classification with Artificial Neural Networks (Victor Gustafsson, Hampus Nilsson, Karl Bäckström, Marina Papatriantafilou, Vincenzo Gulisano)

The first paper presents a generic framework for parallel and elastic streaming analysis that supports what we introduced as virtual shared-nothing parallelism. In a nutshell, virtual shared-nothing parallelism lets programmers write parallel stateful analysis using the shared-nothing parallelism model (which is convenient because, among other reasons, it does not require programmers to worry about concurrent accesses to the local data managed by each parallel thread). Under the hood, its “virtual” nature comes from the fact that the overall state managed by the threads is actually shared. As a result, this allows for ultra-fast elastic reconfigurations (we can move from 30 to 60 threads, for instance, in approximately 10 milliseconds!) and does not require programming any state transfer protocols!
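To give an intuition of the idea, here is a minimal sketch (not STRETCH's actual implementation; all names are illustrative): each worker programs against "its" keys only, as in shared-nothing parallelism, but all state partitions live in one physically shared store, so a reconfiguration only changes the key-to-worker mapping and no state has to be moved.

```python
# Toy sketch of virtual shared-nothing parallelism (illustrative only).
# Workers behave as if they owned a private state partition, but the
# state is physically shared, so elastic reconfigurations need no
# state transfer protocol.

SHARED_STATE = {}  # one store shared by all workers (hypothetical)

def owner(key, num_workers):
    """Deterministic key-to-worker mapping."""
    return hash(key) % num_workers

def process(worker_id, key, value, num_workers):
    # Each worker only touches keys mapped to it, so per-key entries are
    # accessed by a single thread: the shared-nothing programming model.
    assert owner(key, num_workers) == worker_id
    SHARED_STATE[key] = SHARED_STATE.get(key, 0) + value

# Elastic reconfiguration: simply change num_workers. Because the state
# is physically shared, no partition must be shipped between workers.
```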

The second paper also introduces a novel framework, which in this case allows for easy “plug and play”-like use of custom thread-scheduling policies for streaming applications. More concretely, our framework (named Haren) provides a middleware-like abstraction that decouples thread scheduling from the other components of a Stream Processing Engine (SPE) and allows users to define (1) how to map operators to threads and (2) how to sort the operators assigned to the same thread based on a user-defined priority. As we show in the paper, Haren can be used to define rich and complex policies in which distinct queries deployed to the same SPE instance have different priority levels, and queries of different priority levels are scheduled with different performance goals (e.g., minimize maximum latency vs. average latency). We implemented Haren on top of Liebre, the SPE developed at my research group (you can find the updated documentation here and the code here).
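The two user-defined ingredients can be sketched as follows (a hypothetical illustration, not Haren's real API): a mapping function that assigns operators to threads, and a priority function that orders the operators within each thread.

```python
# Toy sketch of a Haren-style scheduling policy (names are illustrative).
# The user supplies (1) an operator-to-thread mapping and (2) a priority
# used to sort the operators assigned to the same thread.

def assign(operators, num_threads):
    """(1) Map each operator to a thread; round-robin in this sketch."""
    threads = {t: [] for t in range(num_threads)}
    for i, op in enumerate(operators):
        threads[i % num_threads].append(op)
    return threads

def schedule(thread_ops, priority):
    """(2) Order one thread's operators by a user-defined priority,
    highest first; the thread then executes them in this order."""
    return sorted(thread_ops, key=priority, reverse=True)

# Example policy: prioritize operators with the most pending input tuples.
ops = [("map", 3), ("filter", 10), ("join", 7), ("sink", 1)]
threads = assign(ops, 2)
order = schedule(threads[0], priority=lambda op: op[1])
```

Swapping in a different `priority` (e.g., one derived from per-query latency goals) changes the policy without touching the rest of the engine, which is the "plug and play" aspect described above.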

Finally, our accepted poster is the result of Victor’s and Hampus’ master’s thesis (which I supervised together with Karl and Marina). In this work, we study how Neural Networks (NNs) can be used to classify the operators of streaming applications based on features such as input rates, output rates, selectivity, and so on. The rationale is that, by being able to classify operators, a third-party observer does not need to depend on the specific SPE a user chooses in order to find out which operators are actually deployed in their application and to trigger or suggest performance-improving actions. As we show, NNs can achieve a classification accuracy above 95% in this case.
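The core idea is that an operator can be summarized by SPE-agnostic metrics and classified from that feature vector alone. The following toy sketch uses a nearest-centroid rule as a stand-in for the neural network of the actual work (the features and centroid values are made up for illustration):

```python
# Illustrative sketch of the idea behind Mimir (not its actual model):
# classify an operator from observable, SPE-agnostic metrics only.

def features(in_tuples, out_tuples, state_bytes):
    """Feature vector: selectivity (output/input ratio) and state size."""
    selectivity = out_tuples / in_tuples
    return (selectivity, state_bytes)

# Hypothetical per-class centroids: (selectivity, state size in bytes).
CENTROIDS = {
    "filter": (0.4, 0.0),        # drops tuples, keeps no state
    "map": (1.0, 0.0),           # one-in/one-out, stateless
    "aggregate": (0.1, 4096.0),  # few outputs, sizable state
}

def classify(observation):
    """Assign the class whose centroid is closest to the feature vector."""
    f = features(*observation)
    dist = lambda c: sum((a - b) ** 2 for a, b in zip(f, c))
    return min(CENTROIDS, key=lambda name: dist(CENTROIDS[name]))
```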

You can find the abstracts of the three works below.

STRETCH: Scalable and Elastic Deterministic Streaming Analysis with Virtual Shared-Nothing Parallelism
Despite the established scientific knowledge on efficient parallel and elastic data stream processing, it is challenging to combine generality and a high level of abstraction (targeting ease of use) with fine-grained processing aspects (targeting efficiency) in stream processing frameworks. Towards this goal, we propose STRETCH, a framework that aims at guaranteeing (i) high efficiency in throughput and latency of stateful analysis and (ii) fast elastic reconfigurations (without requiring state transfer) for intra-node streaming applications. To achieve these, we introduce virtual shared-nothing parallelization and propose a scheme to implement it in STRETCH, enabling users to leverage parallelization techniques while also taking advantage of shared-memory synchronization, which has been proven to boost the scaling-up of streaming applications while supporting determinism. We provide a fully-implemented prototype and, together with a thorough evaluation, correctness proofs for its underlying claims supporting determinism and a model (also validated empirically) of virtual shared-nothing and pure shared-nothing scalability behavior. As we show, STRETCH can match the throughput and latency figures of the fastest state-of-the-art solutions, while also achieving fast elastic reconfigurations (taking only a few milliseconds).

Haren: A Framework for Ad-Hoc Thread Scheduling Policies for Data Streaming Applications 
In modern Stream Processing Engines (SPEs), numerous diverse applications, which can differ in aspects such as cost, criticality or latency sensitivity, can co-exist in the same computing node. When these differences need to be considered to control the performance of each application, custom scheduling of operators to threads is of key importance (e.g., when a smart vehicle needs to ensure that safety-critical applications always have access to computational power, while other applications are given lower, variable priorities).
Many solutions have been proposed regarding schedulers that allocate threads to operators to optimize specific metrics (e.g., latency), but there is still a lack of a tool that allows arbitrarily complex scheduling strategies to be seamlessly plugged on top of an SPE. We propose Haren to fill this gap. More specifically, we (1) formalize the thread scheduling problem in stream processing in a general way, allowing ad-hoc scheduling policies to be defined, (2) identify the bottlenecks and the opportunities of scheduling in stream processing, (3) distill a compact interface to connect Haren with SPEs, enabling rapid testing of various scheduling policies, (4) illustrate the usability of the framework by integrating it into an actual SPE and (5) provide a thorough evaluation. As we show, Haren makes it possible to adapt the use of computational resources over time to meet the goals of a variety of scheduling policies.

Mimir – Streaming Operators Classification with Artificial Neural Networks
Streaming applications are used for analysing large volumes of continuous data. Achieving efficiency and effectiveness in data streaming implies challenges that are all the more important when different parties (i) define applications’ semantics, (ii) choose the Stream Processing Engine (SPE) to use, and (iii) provide the processing infrastructure (e.g., cloud or fog), and when one party’s decisions (e.g., how to deploy applications or when to trigger adaptive reconfigurations) depend on information held by a distinct one (and possibly hard to retrieve). In this context, machine learning can bridge the involved parties (e.g., SPEs and cloud providers) by offering tools that learn from the behavior of streaming applications and help make decisions.
Such a tool, the focus of our ongoing work, can be used to learn which operators are run by a streaming application running in a certain SPE, without relying on the SPE itself to provide such information. More concretely, it can classify the type of each operator at a desired level of granularity (from a coarse-grained characterization into stateless/stateful to a fine-grained operator classification) using general application-related metrics. As an example application, this tool could help a cloud provider decide which infrastructure to assign to a certain streaming application (run by a certain SPE), based on the type (and thus cost) of its operators.


Posted in Data Streaming, Research

Opportunities for post-doctoral positions at Chalmers!

Two calls for post-doctoral research are currently open at my group.

The call for the first one, which will focus on distributed adaptive methods for digital energy systems, is available here.
The call for the second one, focusing on online intrusion and anomaly detection for smart grids, is available here.

Posted in Uncategorized

Accepted paper at the 2018 ACM/IFIP/USENIX International Middleware Conference!

Great news!

Our paper titled “GeneaLog: Fine-Grained Data Streaming Provenance at the Edge” has been accepted at the 2018 ACM/IFIP/USENIX International Middleware Conference.

The abstract follows:
Fine-grained data provenance in stream processing allows linking each result tuple back to the source data that contributed to its generation, something beneficial for many big data applications; e.g., in security and safety-related applications, it can help debug analytical queries, thus facilitating the inspection of the conditions triggering an alert. Furthermore, when data transmission or storage has to be minimized, such as in edge computing and cyber-physical systems, it can help to identify which fraction of the source data should be prioritized.
The memory and processing time costs of fine-grained data provenance, which can be afforded by high-end servers, can nonetheless be prohibitive for the resource-constrained devices deployed in edge computing and cyber-physical systems. Motivated by this challenge, we present GeneaLog, a novel fine-grained data provenance technique for data streaming applications. Leveraging the logical dependencies of the data, GeneaLog takes advantage of cross-layer properties of the software stack and incurs a minimal, constant-size per-tuple overhead. Furthermore, it allows for a modular and efficient algorithmic implementation using only standard data streaming operators. This is particularly useful for streaming applications distributed at different physical nodes, since the provenance processing can be executed at third nodes, orthogonally to the data processing. We evaluate a full-fledged implementation of GeneaLog using vehicular and smart grid applications, confirming that it efficiently captures fine-grained provenance data with minimal overhead.
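To illustrate what fine-grained provenance means here, consider this minimal sketch (not GeneaLog's actual mechanism; all names are made up): each tuple carries a small, fixed set of references to the tuples it was derived from, so any result can be traced back to the source tuples that contributed to it by walking those links.

```python
# Toy sketch of fine-grained streaming provenance (illustrative only):
# every tuple records constant-size links to its immediate contributors,
# and a result is traced back to its sources by following the links.

class Tuple:
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = tuple(parents)  # links to contributing tuples

def trace_sources(t):
    """Walk provenance links back to the original source tuples."""
    if not t.parents:
        return [t]
    sources = []
    for p in t.parents:
        sources.extend(trace_sources(p))
    return sources

# Example: two source readings joined into a single alert tuple; the
# alert can be linked back to exactly the readings that generated it.
s1, s2 = Tuple(10), Tuple(32)
alert = Tuple(("alert", 42), parents=(s1, s2))
```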

Posted in Uncategorized