TY - GEN
AU - Löcke, Thomas
ID - 5416
TI - Instance-Specific Computing in Hard- and Software for Faster Solving of Complex Problems
ER -

TY - GEN
AU - Wallaschek, Felix
ID - 5419
TI - Accelerating Programmable Logic Controllers with the use of FPGAs
ER -

TY - THES
AB - The use of heterogeneous computing resources, such as graphics processing units or other specialized co-processors, has become widespread in recent years because of their performance and energy efficiency advantages. Operating system approaches that are limited to optimizing CPU usage are no longer sufficient for the efficient utilization of systems that comprise diverse resource types. Enabling task preemption on these architectures and migration of tasks between different resource types at run-time is key not only to improving performance and energy consumption but also to enabling automatic scheduling methods for heterogeneous compute nodes. This thesis proposes novel techniques for run-time management of heterogeneous resources and for enabling tasks to migrate between diverse hardware. It provides fundamental work towards future operating systems by discussing the implications, limitations, and opportunities of this heterogeneity and by introducing solutions for energy- and performance-efficient run-time systems. Scheduling methods that utilize heterogeneous systems through a centralized scheduler are presented and shown to outperform existing approaches in varying case studies.
AU - Beisel, Tobias
ID - 10624
SN - 978-3-8325-4155-2
TI - Management and Scheduling of Accelerators for Heterogeneous High-Performance Computing
ER -

TY - JOUR
AB - FPGAs are known to permit huge gains in performance and efficiency for suitable applications, but wider adoption still requires reduced design effort and shorter development cycles. In this work, we compare the resulting performance of two design concepts that in different ways promise such increased productivity. As a common starting point, we employ a kernel-centric design approach, where computational hotspots in an application are identified and individually accelerated on FPGA. By means of a complex stereo matching application, we evaluate two fundamentally different design philosophies and approaches for implementing the required kernels on FPGAs. In the first implementation approach, we designed individually specialized dataflow kernels in a spatial programming language for a Maxeler FPGA platform; in the alternative design approach, we target a vector coprocessor with large vector lengths, which is implemented as a form of programmable overlay on the application FPGAs of a Convey HC-1. We assess both approaches in terms of overall system performance, raw kernel performance, and performance relative to invested resources. After compensating for the effects of the underlying hardware platforms, the specialized dataflow kernels on the Maxeler platform are around 3x faster than kernels executing on the Convey vector coprocessor. In our concrete scenario, due to trade-offs between reconfiguration overheads and exposed parallelism, the advantage of specialized dataflow kernels is reduced to around 2.5x.
AU - Kenter, Tobias
AU - Schmitz, Henning
AU - Plessl, Christian
ID - 296
JF - International Journal of Reconfigurable Computing (IJRC)
TI - Exploring Tradeoffs between Specialized Kernels and a Reusable Overlay in a Stereo-Matching Case Study
VL - 2015
ER -

TY - CONF
AB - This paper introduces Binary Acceleration At Runtime (BAAR), an easy-to-use on-the-fly binary acceleration mechanism which aims to tackle the problem of enabling existing software to automatically utilize accelerators at runtime. BAAR is based on the LLVM Compiler Infrastructure and has a client-server architecture. The client runs the program to be accelerated in an environment which allows program analysis and profiling. Program parts which are identified as suitable for the available accelerator are exported and sent to the server. The server optimizes these program parts for the accelerator and provides RPC execution for the client. The client transforms its program to utilize accelerated execution on the server for offloaded program parts. We evaluate our work with a proof-of-concept implementation of BAAR that uses an Intel Xeon Phi 5110P as the acceleration target and performs automatic offloading, parallelization, and vectorization of suitable program parts. The practicality of BAAR for real-world examples is shown based on a study of stencil codes. Our results show a speedup of up to 4x without any developer-provided hints and 5.77x with hints over the same code compiled with the Intel Compiler at optimization level O2 and running on an Intel Xeon E5-2670 machine. Based on the insights gained during implementation and evaluation, we outline future directions of research, e.g., offloading more fine-granular program parts than functions, a more sophisticated communication mechanism, or introducing on-stack replacement.
AU - Damschen, Marvin
AU - Plessl, Christian
ID - 303
T2 - Proceedings of the 5th International Workshop on Adaptive Self-tuning Computing Systems (ADAPT)
TI - Easy-to-Use On-The-Fly Binary Program Acceleration on Many-Cores
ER -

TY - CONF
AU - Schumacher, Jörn
AU - Anderson, J. T.
AU - Borga, A.
AU - Boterenbrood, H.
AU - Chen, H.
AU - Chen, K.
AU - Drake, G.
AU - Francis, D.
AU - Gorini, B.
AU - Lanni, F.
AU - Lehmann-Miotto, Giovanna
AU - Levinson, L.
AU - Narevicius, J.
AU - Plessl, Christian
AU - Roich, A.
AU - Ryu, S.
AU - Schreuder, F. P.
AU - Vandelli, Wainer
AU - Vermeulen, J.
AU - Zhang, J.
ID - 1773
T2 - Proc. Int. Conf. on Distributed Event-Based Systems (DEBS)
TI - Improving Packet Processing Performance in the ATLAS FELIX Project – Analysis and Optimization of a Memory-Bounded Algorithm
ER -

TY - JOUR
AU - Plessl, Christian
AU - Platzner, Marco
AU - Schreier, Peter J.
ID - 1768
IS - 5
JF - Informatik Spektrum
KW - approximate computing
KW - survey
TI - Aktuelles Schlagwort: Approximate Computing
ER -
TY - CONF
AB - In this paper, we study how binary applications can be transparently accelerated with novel heterogeneous computing resources without requiring any manual porting or developer-provided hints. Our work is based on Binary Acceleration At Runtime (BAAR), our previously introduced binary acceleration mechanism that uses the LLVM Compiler Infrastructure. BAAR is designed as a client-server architecture. The client runs the program to be accelerated in an environment that allows program analysis and profiling and identifies and extracts suitable program parts to be offloaded. The server compiles and optimizes these offloaded program parts for the accelerator and offers access to these functions to the client via a remote procedure call (RPC) interface. Our previous work proved the feasibility of our approach but also showed that communication time and overheads limit the granularity of functions that can be meaningfully offloaded. In this work, we motivate the importance of lightweight, high-performance communication between server and client and present a communication mechanism based on the Message Passing Interface (MPI). We evaluate our approach using an Intel Xeon Phi 5110P as the acceleration target and show that the communication overhead can be reduced from 40% to 10%, thus enabling even small hotspots to benefit from offloading to an accelerator.
AU - Damschen, Marvin
AU - Riebler, Heinrich
AU - Vaz, Gavin Francis
AU - Plessl, Christian
ID - 238
T2 - Proceedings of the 2015 Conference on Design, Automation and Test in Europe (DATE)
TI - Transparent offloading of computational hotspots from binary code to Xeon Phi
ER -

TY - JOUR
AB - The ATLAS experiment at CERN is planning full deployment of a new unified optical link technology for connecting detector front end electronics on the timescale of the LHC Run 4 (2025). It is estimated that roughly 8000 GBT (GigaBit Transceiver) links, with transfer rates up to 10.24 Gbps, will replace existing links used for readout, detector control, and distribution of timing and trigger information. A new class of devices will be needed to interface many GBT links to the rest of the trigger, data-acquisition, and detector control systems. In this paper FELIX (Front End LInk eXchange) is presented, a PC-based device to route data from and to multiple GBT links via a high-performance general purpose network capable of a total throughput up to O(20 Tbps). FELIX implies architectural changes to the ATLAS data acquisition system, such as the use of industry standard COTS components early in the DAQ chain. Additionally, the design and implementation of a FELIX demonstration platform are presented, and hardware and software aspects are discussed.
AU - Anderson, J
AU - Borga, A
AU - Boterenbrood, H
AU - Chen, H
AU - Chen, K
AU - Drake, G
AU - Francis, D
AU - Gorini, B
AU - Lanni, F
AU - Lehmann Miotto, G
AU - Levinson, L
AU - Narevicius, J
AU - Plessl, Christian
AU - Roich, A
AU - Ryu, S
AU - Schreuder, F
AU - Schumacher, Jörn
AU - Vandelli, Wainer
AU - Vermeulen, J
AU - Zhang, J
ID - 1775
JF - Journal of Physics: Conference Series
TI - FELIX: a High-Throughput Network Approach for Interfacing to Front End Electronics for ATLAS Upgrades
VL - 664
ER -
TY - CHAP
AB - In the field of computer systems, drawing the boundary between hardware and software is a central problem. Over the last decades, this boundary has not only determined the development of computer systems but has also shaped the structure of education in the computing sciences and even led to the emergence of new research directions. In this contribution, we examine shifts of the boundary between hardware and software and discuss three qualitatively different forms of such shifts. We begin with the development of computer systems in the last century and the emergence of this boundary, which first differentiated hardware and software as independent products. We then turn to the question of which functions in a computer system are better realized in hardware and which in software, a question that led to the formation of a dedicated research field, so-called hardware/software co-design, at the beginning of the 1990s. In hardware/software co-design, functions are shifted across the boundary between hardware and software during the development of a product in order to optimize product properties. In the finished and deployed product, by contrast, we can observe a fixed boundary between hardware and software. In the third part of this contribution, we present self-adaptive systems, a highly topical research direction. In our context, self-adaptation means that a system autonomously shifts functions across the boundary between hardware and software during operation. Such systems build on reconfigurable hardware, a relatively new technology with which the hardware of a computer can be changed at run-time. This technology leads to a permeable boundary between hardware and software and thereby dissolves the conventional notion of fixed hardware and flexible software.
AU - Platzner, Marco
AU - Plessl, Christian
ED - Künsemöller, Jörn
ED - Eke, Norbert Otto
ED - Foit, Lioba
ED - Kaerlein, Timo
ID - 335
SN - 978-3-7705-5730-1
T2 - Logiken strukturbildender Prozesse: Automatismen
TI - Verschiebungen an der Grenze zwischen Hardware und Software
ER -