Portfolio item number 1
Short description of portfolio item number 1
Published in Systems for Post-Moore Architectures (SPMA), co-located with EuroSys 2022, Rennes, 2022
GiantVM is the state-of-the-art distributed hypervisor (VEE 2020), built on top of KVM. It makes it possible to start a guest OS whose CPU and memory are provided by several physical machines. In this paper, we study the origin of GiantVM's performance overhead, namely its distributed shared memory (DSM), quantified in terms of the number of page faults. We then propose several optimizations that improve performance by about 39%. Finally, based on our study and the proposed optimizations, we lay out guidelines for building a distributed hypervisor that is independent of the underlying DSM system and flexible, i.e., easy to evolve.
Recommended citation: Mohamed Karaoui, Brice Teguia, Bernabe Batchakui, Alain Tchana. Analysis of a modern distributed hypervisor: what we learn from our experiments. SPMA 2022. https://drive.google.com/file/d/1tVcswD44EEXhFoa0gMwYY87t5PHzHZxY/view?usp=sharing
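The study above attributes GiantVM's overhead to DSM-induced page faults. As a purely illustrative sketch (not code from the paper), the C program below shows one generic way to count page faults around a region of interest on Linux with perf_event_open; the self-monitoring setup and the software page-fault event are assumptions chosen only to keep the example minimal and runnable.

```c
/* Illustrative sketch: count page faults incurred by this process
 * around a workload section, using the Linux perf_event interface. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

static long perf_event_open(struct perf_event_attr *attr, pid_t pid,
                            int cpu, int group_fd, unsigned long flags)
{
    return syscall(SYS_perf_event_open, attr, pid, cpu, group_fd, flags);
}

int main(void)
{
    struct perf_event_attr attr;
    memset(&attr, 0, sizeof(attr));
    attr.type = PERF_TYPE_SOFTWARE;
    attr.size = sizeof(attr);
    attr.config = PERF_COUNT_SW_PAGE_FAULTS;  /* software page-fault counter */
    attr.disabled = 1;                        /* start disabled, enable explicitly */

    int fd = perf_event_open(&attr, 0 /* this process */, -1, -1, 0);
    if (fd < 0) { perror("perf_event_open"); return 1; }

    ioctl(fd, PERF_EVENT_IOC_RESET, 0);
    ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);

    /* ... run the workload under study here ... */

    ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);
    long long faults = 0;
    read(fd, &faults, sizeof(faults));        /* read the accumulated count */
    printf("page faults: %lld\n", faults);
    close(fd);
    return 0;
}
```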
Published in Compas 2022, 2022
This abstract presents the vPIM project, which targets the virtualization of Processing-in-Memory (PIM).
Recommended citation: Dufy Teguia, Alain Tchana. vPIM – Virtualization of Processing-in-Memory. Compas 2022. https://brisco007.github.io/files/vPIM_Virtualization_of_Processing_in_Memory.pdf
Published in Middleware 2024, 2024
Data movement is the leading cause of performance degradation and energy consumption in modern data centers. Processing in-memory (PIM) is an architecture that addresses data movement by bringing computation inside the memory chips. This paper is the first to study the virtualization of PIM devices by designing and implementing vPIM, an open-source UPMEM-based virtualization system for the cloud. Our vPIM design considers four requirements: compatibility, such that no hardware and no hypervisor changes are needed; multiplexing and isolation, for a higher utilization ratio; usability and transparency, such that applications written for PIM can be efficiently run out-of-the-box, leading to rapid adoption; and minimization of virtualization performance overhead. We prototype vPIM in Firecracker, extending the virtio standard. Our experimental evaluation uses 16 applications provided by PrIM, a recent PIM benchmark suite. The virtualization overhead is between 1.01× and 2.07× for untouched PrIM applications. To keep overhead low, vPIM introduces several optimizations: zero-copy from guest OS to Firecracker, efficient virtio queue management, efficient Guest Physical Address to Host Virtual Address translation, parallel processing on multiple ranks, automatic data batching and prefetching, and the reimplementation of some specific functionalities in C instead of Rust. We hope this work will lay the foundation for future research on PIM for cloud computing.
Recommended citation: Dufy Teguia, Jiaxuan Chen, Stella Bitchebe, Oana Balmau, Alain Tchana. vPIM – Virtualization of Processing-in-Memory. Middleware 2024.
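Among the optimizations listed above is an efficient Guest Physical Address (GPA) to Host Virtual Address (HVA) translation. The sketch below is a generic, hypothetical illustration of what such a lookup involves, assuming a VMM-maintained table of guest memory regions; the struct names and the linear scan are assumptions made for illustration and do not reflect vPIM's or Firecracker's actual implementation.

```c
/* Illustrative sketch: translate a guest physical address (GPA) to a
 * host virtual address (HVA) through a table of guest memory regions,
 * as a VMM conceptually maintains for its guest RAM mappings. */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

struct mem_region {
    uint64_t gpa_start;   /* guest-physical start of the region */
    uint64_t size;        /* region size in bytes */
    uint8_t *hva_start;   /* host-virtual mapping of the region */
};

static void *gpa_to_hva(const struct mem_region *regions, size_t n, uint64_t gpa)
{
    for (size_t i = 0; i < n; i++) {
        const struct mem_region *r = &regions[i];
        if (gpa >= r->gpa_start && gpa < r->gpa_start + r->size)
            return r->hva_start + (gpa - r->gpa_start);
    }
    return NULL; /* GPA not backed by any guest memory region */
}

int main(void)
{
    static uint8_t backing[4096];  /* stand-in for guest RAM backing memory */
    struct mem_region regions[] = {
        { .gpa_start = 0x1000, .size = sizeof(backing), .hva_start = backing },
    };

    void *hva = gpa_to_hva(regions, 1, 0x1010);
    printf("hva = %p\n", hva);     /* points at backing + 0x10 */
    return 0;
}
```

In practice a VMM would keep such regions sorted or cache recent translations so that this lookup stays off the request critical path.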
Published:
Link to the conference website. Link to the related slides. This presentation covered our study of GiantVM and the various optimizations that improve the hypervisor's performance.
Published:
Link to the conference website. Link to the related slides. This presentation was about the virtualization of Processing-in-Memory: the different approaches envisioned for virtualizing such hardware.
Published:
Link to the mini-symposium website. This presentation covered our work on a framework that lets virtual machine owners build observers capable of monitoring their virtual machine from within an isolated virtual machine running on the same VMM.
Published:
Link to the mini-symposium website. Link to the related slides. This presentation covered our first work on the virtualization of Processing-in-Memory hardware.