Co-located Workshops
Authors are invited to submit papers electronically through EasyChair. Papers should be submitted in PDF, following the Springer LNCS format. Paper length must not exceed 12 pages (including references). All submitted manuscripts will be checked for originality with Springer iThenticate; papers showing insufficient originality may be rejected without review.
Camera-ready versions will be collected after the conference.
Heterogeneity is emerging as one of the most profound and challenging characteristics of today's parallel environments. From the macro level, where networks of distributed computers composed of different node architectures are interconnected with potentially heterogeneous networks, to the micro level, where deeper memory hierarchies and different accelerator architectures are increasingly common, the impact of heterogeneity on all computing tasks is rapidly increasing. Traditional parallel algorithms, programming environments, and tools designed for older homogeneous multiprocessors will at best achieve a small fraction of the efficiency and potential performance that we should expect from parallel computing in tomorrow's highly diverse and mixed environments. New ideas, innovative algorithms, and specialised programming environments and tools are needed to efficiently exploit these new and diverse parallel architectures. The workshop provides a forum for researchers working on algorithms, programming languages, tools, and theoretical models for efficiently solving problems on heterogeneous platforms.
Supercomputers are now beginning to break the exascale barrier, and tremendous amounts of work have been invested in identifying and overcoming the challenges leading up to this moment. These challenges include load balancing, fast data transfers, and efficient resource utilization. Task-based models and runtime systems have shown that it is possible to address these challenges by providing additional mechanisms such as oversubscription, task/data locality, shared memory, and data dependence-driven execution. This workshop explores the advantages of task-based programming on modern and future HPC systems. It will gather developers, users, and proponents of these models and systems to share experiences, discuss how they meet the challenges posed by exascale system architectures, and explore opportunities for increased performance, robustness, and full-system utilization.
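As a minimal sketch of what data dependence-driven execution looks like in practice (using OpenMP tasks purely for illustration; the workshop is not tied to any particular model), the example below lets the runtime derive the execution order from declared data dependences rather than from explicit synchronization:

    #include <stdio.h>

    int main(void) {
        int a = 0, b = 0;
    #pragma omp parallel
    #pragma omp single
        {
    #pragma omp task depend(out: a)    /* producer of a */
            a = 1;
    #pragma omp task depend(out: b)    /* producer of b; may run concurrently with the task above */
            b = 2;
    #pragma omp task depend(in: a, b)  /* consumer: scheduled only after both producers complete */
            printf("a + b = %d\n", a + b);
        }
        return 0;
    }

Built with any OpenMP-capable compiler (e.g. gcc -fopenmp), the consumer task never starts before its inputs are ready; task-based runtimes generalize this idea to locality-aware scheduling, oversubscription, and full-system resource utilization.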
Quantum Computing (QC) is set to disrupt the High-Performance Computing (HPC) landscape. How to integrate the two, including how to handle hybrid classical-quantum applications, is still an open research topic.
The first experiments with parallel quantum computing have been carried out over the last few years, and a modular architecture has been proposed by some researchers and vendors. At the application level, parallel quantum computing is not new (the first algorithms were proposed more than 20 years ago), but it is a research topic that is gaining momentum. In the short term it may involve only QPUs connected by classical communication, but in the future it could integrate quantum communication as well. A modification to MPI has been proposed to include new primitives for quantum operations. This workshop intends to showcase research related to this classical-quantum integration at all levels of the hardware/software stack, from applications to hardware.
Research in natural sciences like physics and chemistry is increasingly moving towards massive use of high-performance computing systems to numerically solve problems for which no analytical solutions are known. Europe is currently at an unprecedentedly favourable juncture thanks to the Horizon Europe Programme, which has invested substantial resources in building high-performance computing infrastructure with exascale capabilities. In this scenario, it is crucial to design codes that exploit the overall set of available resources optimally, permitting them to scale to wider and finer domains in both space and time and thus enabling more insightful applications to natural sciences. Scientific codes sometimes lack state-of-the-art solutions from computer science, such as optimized data structures that reduce memory footprint, fast stencil algorithms, conflict-free write procedures developed for many-core accelerators such as GPUs, and load-balancing routines that decide when and how to redistribute work on high-performance computing clusters. Nevertheless, such codes often embed original algorithmic solutions and data structures that could inspire the development of new algorithms and tools, or improve existing ones, in computer science. The encounter between natural science and computer science could represent an enriching opportunity for all disciplines involved. Aiming to feed this collaboration, the workshop is intended to be an interdisciplinary forum for comparison and discussion between natural science and computer science, to establish a synergy between these disciplines and favour the advancement of knowledge in each of them. Authors are encouraged to submit original, unpublished research or critical reviews on algorithms, models, and tools for parallel computing applied to scientific problems, with emphasis on original solutions and open issues.
RISC-V is an open standard Instruction Set Architecture (ISA) which enables the open development of CPUs and a shared common software ecosystem. There are already over 15 billion RISC-V cores, and that number is growing rapidly. Nonetheless, for all the success that RISC-V has enjoyed, it has yet to become popular in HPC. Recent advances, however, such as the vectorisation standard and high-performance RISC-V-based CPUs and accelerators, mean that this technology is becoming a more realistic proposition for our workloads. This workshop aims to connect those currently involved in RISC-V with the wider HPC community. We look to bring together RISC-V experts with scientific software developers, vendors, and supercomputing center operators to explore the advantages, challenges, and opportunities that RISC-V can bring to HPC. The workshop is organised by the RISC-V HPC SIG, which we aim to expand with a range of HPC expertise and viewpoints, enabling interested attendees to stay involved in this field beyond the workshop and help drive one of the most exciting open-source technology efforts of our time.
The current static usage model of HPC systems is becoming increasingly inefficient. This is driven by the continuously growing complexity and heterogeneity of system architectures, combined with the increased use of coupled applications, the need for strong scaling with extreme-scale parallelism, and the increasing reliance on complex and dynamic workflows. Consequently, we see a rise in research on systems supporting dynamically varying resources: middleware and applications that can adjust resource usage dynamically to extract maximum efficiency. By providing intelligent global coordination of resource usage through runtime scheduling of computation, network usage, and I/O across all components of the system architecture, dynamic-resource HPC systems can maximize the exploitation of the available resources while at the same time minimizing the overall makespan of applications in many, if not most, cases. Dynamic resource management on large-scale HPC platforms is part of several EuroHPC projects. It poses highly interdisciplinary challenges concerning, among others, applications that support dynamic resources, extending dynamic resource provisioning to the full system stack, malleable I/O systems, new applications, and I/O scheduling. This workshop is intended to amplify collaborations around these ideas.
This workshop sets out to present visionary papers and talks from research and practice on the evolution of the computing continuum encompassing IoT, edge, and cloud. From the challenges of system performance and management to the ambitious vision of a single MetaOS for the continuum, there is a broad set of topics to be addressed. This call invites technical presentations and interactive workshop sessions involving industrial leaders, researchers, developers, and academics, fostering the exchange of research findings and ideas and enabling both research cooperation and industrial leadership. This workshop plans to push the boundaries of what is currently possible with existing Cloud-Edge-IoT orchestration solutions, paving the way for the future “MetaOS” of the compute continuum.
The LLVM framework is a vast ecosystem that stretches far beyond a “simple” C/C++ compiler. Its variety of programming-language and toolchain components helps support most programming models for GPUs, including CUDA, HIP, OpenACC, OpenCL, OpenMP, SYCL, and offloading of C++ and Fortran native parallelism. In addition, LLVM serves as a vehicle for various languages in which parallelism is a first-class citizen, such as Julia or Chapel. In short, LLVM plays a central role in the GPU offloading landscape, and with the creation of the LLVM/Offload subproject, we expect features and collaborations in this space to grow even further and faster. In this workshop, held in conjunction with the EURO-PAR 2024 conference, researchers are invited to speak about experiences, extensions, and ideas for GPU usage, especially those related to LLVM. Through this forum, we believe industry and academia can come together and exchange thoughts on the future of (GPU) offloading.
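As a minimal, illustrative sketch of the directive-based offloading that LLVM supports (OpenMP target is just one of the models listed above, and the compiler invocation shown is an assumption about a typical setup, not a requirement), the loop below is mapped to the default accelerator when one is available:

    #include <stdio.h>

    int main(void) {
        const int n = 1024;
        double x[1024];
    /* Offload the loop to the default device (e.g. a GPU); it falls back to the host otherwise. */
    #pragma omp target teams distribute parallel for map(from: x[0:n])
        for (int i = 0; i < n; ++i)
            x[i] = 2.0 * i;
        printf("x[42] = %f\n", x[42]);
        return 0;
    }

With LLVM, such a program can typically be built with something like clang -fopenmp -fopenmp-targets=nvptx64-nvidia-cuda; the CUDA, HIP, OpenACC, OpenCL, and SYCL paths mentioned above are served by related parts of the same toolchain.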
The “Compute Continuum” paradigm promises to manage the heterogeneity and dynamism of widespread computing resources, aiming to simplify the execution of distributed applications while improving data locality, performance, availability, adaptability, and energy management, as well as other non-functional features. This is made possible by overcoming resource fragmentation and segregation in tiers, enabling applications to be seamlessly executed and relocated along a continuum of resources spanning from the edge to the cloud. Besides consolidated vertical and horizontal scaling patterns, this paradigm also offers more fine-grained adaptation actions that strictly depend on the specific infrastructure components (e.g., to reduce energy consumption, or to exploit specific hardware such as GPUs and FPGAs). This enables the enhancement of latency-sensitive applications, the reduction of network bandwidth consumption, the improvement of privacy protection, and the development of novel services aimed at improving living, health, safety, and mobility. All of this should be achievable by application developers without having to worry about how and where the developed components will be executed. Therefore, to unleash the true potential offered by the Compute Continuum, proactive, autonomous, and infrastructure-aware management is desirable, if not mandatory, calling for novel interdisciplinary approaches that exploit optimization theory, control theory, machine learning, and artificial intelligence methods.
In recent years, the convergence of high-performance computing and eScience has opened new avenues for scientific discovery and innovation. Researchers across various domains leverage advanced computational techniques to analyze vast amounts of data, simulate complex systems, and accelerate scientific workflows. The Workshop on High-Performance eScience Tools and Applications aims to bring together experts from academia, industry, and government organizations to discuss the latest developments in this rapidly evolving field and explore emerging trends, challenges, and opportunities.
The Workshop on Virtualization in High-Performance Cloud Computing (VHPC) brings together researchers and industrial practitioners to discuss the challenges posed by virtualization in HPC and Cloud scenarios. It covers various aspects of virtualization across the software stack, with a focus on the intersection of HPC, containers, virtualization, and cloud computing. The workshop aims to foster collaboration, knowledge exchange, and the development of novel solutions for virtualized computing systems. This year we are calling for contributions on the timely topic of GPU hypervisor memory virtualization in support of high-memory LLM training workloads, in addition to regular topics including, but not limited to, container platforms and unikernel frameworks, heterogeneous virtualized HPC environments, and service orchestration in virtualized cloud & HPC infrastructures. The workshop features paper presentations, discussion sections, and lightning talks. Accepted papers will be published in a Springer LNCS volume.
The goal of HCQS is to bring quantum computing to the HPC research community, helping scientists working in this area to find common ground in this interdisciplinary field. In this workshop, we want to attract submissions from computer scientists, mathematicians, and physicists working on the integration of quantum computing into HPC infrastructures. Work on benchmarking hybrid classical-quantum systems, hybrid classical-quantum applications, and the integration of classical HPC and quantum hardware is especially welcome.