TY - CONF AU - Lass, Michael AU - Kühne, Thomas AU - Plessl, Christian ID - 25 T2 - Workshop on Approximate Computing (AC) TI - Using Approximate Computing in Scientific Codes ER - TY - CONF AB - Hardware accelerators are becoming popular in academia and industry. To move one step further from the state-of-the-art multicore plus accelerator approaches, we present in this paper our innovative SAVEHSA architecture. It comprises a heterogeneous hardware platform with three different high-end accelerators attached over PCIe (GPGPU, FPGA and Intel MIC). Such systems can process parallel workloads very efficiently whilst being more energy efficient than regular CPU systems. To leverage the heterogeneity, the workload has to be distributed among the computing units in a way that each unit is well-suited for the assigned task and executable code must be available. To tackle this problem, we present two software components: the first can perform resource allocation at runtime while respecting system and application goals (in terms of throughput, energy, latency, etc.) and the second is able to analyze an application and generate executable code for an accelerator at runtime. We demonstrate the first proof-of-concept implementation of our framework on the heterogeneous platform, discuss different runtime policies and measure the introduced overheads. AU - Riebler, Heinrich AU - Vaz, Gavin Francis AU - Plessl, Christian AU - Trainiti, Ettore M. G. AU - Durelli, Gianluca C. AU - Del Sozzo, Emanuele AU - Santambrogio, Marco D. AU - Bolchini, Cristiana ID - 138 T2 - Proceedings of International Forum on Research and Technologies for Society and Industry (RTSI) TI - Using Just-in-Time Code Generation for Transparent Resource Management in Heterogeneous Systems ER - TY - CHAP AB - Many modern compute nodes are heterogeneous multi-cores that integrate several CPU cores with fixed function or reconfigurable hardware cores. Such systems need to adapt task scheduling and mapping to optimise for performance and energy under varying workloads and, increasingly important, for thermal and fault management and are thus relevant targets for self-aware computing. In this chapter, we take up the generic reference architecture for designing self-aware and self-expressive computing systems and refine it for heterogeneous multi-cores. We present ReconOS, an architecture, programming model and execution environment for heterogeneous multi-cores, and show how the components of the reference architecture can be implemented on top of ReconOS. In particular, the unique feature of dynamic partial reconfiguration supports self-expression through starting and terminating reconfigurable hardware cores. We detail a case study that runs two applications on an architecture with one CPU and 12 reconfigurable hardware cores and present self-expression strategies for adapting under performance, temperature and even conflicting constraints. The case study demonstrates that the reference architecture as a model for self-aware computing is highly useful as it allows us to structure and simplify the design process, which will be essential for designing complex future compute nodes. Furthermore, ReconOS is used as a base technology for flexible protocol stacks in Chapter 10, an approach for self-aware computing at the networking level.
AU - Agne, Andreas AU - Happe, Markus AU - Lösch, Achim AU - Plessl, Christian AU - Platzner, Marco ID - 156 T2 - Self-aware Computing Systems TI - Self-aware Compute Nodes ER - TY - JOUR AB - A broad spectrum of applications can be accelerated by offloading computation-intensive parts to reconfigurable hardware. However, to achieve speedups, the number of loop iterations (trip count) needs to be sufficiently large to amortize offloading overheads. Trip counts are frequently not known at compile time, but only at runtime just before entering a loop. Therefore, we propose to generate code for both the CPU and the coprocessor, and defer the offloading decision to the application runtime. We demonstrate how a toolflow, based on the LLVM compiler framework, can automatically embed dynamic offloading decisions into the application code. We perform in-depth static and dynamic analysis of popular benchmarks, which confirm the general potential of such an approach. We also propose to optimize the offloading process by decoupling the runtime decision from the loop execution (decision slack). The feasibility of our approach is demonstrated by a toolflow that automatically identifies suitable data-parallel loops and generates code for the FPGA coprocessor of a Convey HC-1. We evaluate the integrated toolflow with representative loops executed for different input data sizes. AU - Vaz, Gavin Francis AU - Riebler, Heinrich AU - Kenter, Tobias AU - Plessl, Christian ID - 165 JF - Computers and Electrical Engineering SN - 0045-7906 TI - Potential and Methods for Embedding Dynamic Offloading Decisions into Application Code VL - 55 ER - TY - CONF AB - The use of heterogeneous computing resources, such as Graphics Processing Units or other specialized coprocessors, has become widespread in recent years because of their performance and energy efficiency advantages. Approaches for managing and scheduling tasks to heterogeneous resources are still subject to research. Although queuing systems have recently been extended to support accelerator resources, a general solution that manages heterogeneous resources at the operating system level to exploit a global view of the system state is still missing. In this paper, we present a user space scheduler that enables task scheduling and migration on heterogeneous processing resources in Linux. Using run queues for available resources, we perform scheduling decisions based on the system state and on task characterization from earlier measurements. With a programming pattern that supports the integration of checkpoints into applications, we preempt tasks and migrate them between three very different compute resources. Considering static and dynamic workload scenarios, we show that this approach can gain up to 17% in performance (7% on average) by effectively avoiding idle resources. We demonstrate that a work-conserving strategy without migration is not a suitable alternative.
AU - Lösch, Achim AU - Beisel, Tobias AU - Kenter, Tobias AU - Plessl, Christian AU - Platzner, Marco ID - 168 T2 - Proceedings of the 2016 Design, Automation & Test in Europe Conference & Exhibition (DATE) TI - Performance-centric scheduling with task migration for a heterogeneous compute node in the data center ER - TY - CONF AU - Kenter, Tobias AU - Vaz, Gavin Francis AU - Riebler, Heinrich AU - Plessl, Christian ID - 171 T2 - Workshop on Reconfigurable Computing (WRC) TI - Opportunities for deferring application partitioning and accelerator synthesis to runtime (extended abstract) ER - TY - JOUR AU - Torresen, Jim AU - Plessl, Christian AU - Yao, Xin ID - 1772 IS - 7 JF - IEEE Computer KW - self-awareness KW - self-expression TI - Self-Aware and Self-Expressive Systems – Guest Editors' Introduction VL - 48 ER - TY - GEN AB - Demands for computational power and energy efficiency of computing devices are steadily increasing. At the same time, following classic methods to increase speed and reduce energy consumption of these devices becomes increasingly difficult, bringing alternative methods into focus. One of these methods is approximate computing, which utilizes the fact that small errors in computations are acceptable in many applications in order to allow acceleration of these computations or to increase energy efficiency. This thesis develops elements of a workflow that can be followed to apply approximate computing to existing applications. It proposes a novel heuristic approach to the localization of code paths that are suitable for approximate computing based on findings in recent research. Additionally, an approach to the identification of approximable instructions within these code paths is proposed and used to implement simulation of approximation. The parts of the workflow are implemented with the goal of laying the foundation for a partly automated toolflow. Evaluation of the developed techniques shows that the proposed methods can help provide a convenient workflow, facilitating the first steps into the application of approximate computing. AU - Lass, Michael ID - 1794 TI - Localization and Analysis of Code Paths Suitable for Acceleration using Approximate Computing ER - TY - CONF AB - The first year of studying has been extensively researched, applying different theoretical lenses to better understand the transition into Higher Education (HE). It is of particular interest to investigate how students deal with frictions between themselves as individuals and what they perceive to be dominant features of the first-year culture of their studies. To tackle this question, a qualitative longitudinal study was conducted. Based on a sociocultural understanding of attitudes and motivations, its aim was to closely follow a relatively small but highly diverse sample of students throughout their first year at a business school in order to develop an in-depth understanding of each individual’s motivational and attitudinal development. AU - Jenert, Tobias AU - Brahm, Taiga ID - 4465 KW - Enculturation KW - first-year students KW - beginning students KW - retention KW - drop-out TI - How Do They Find Their Place?
A Longitudinal Study of Management Students' Attitudes and Motivations During Their First Year at Business School ER - TY - GEN AU - Funke, Lukas ID - 5413 TI - An LLVM Based Toolchain for Transparent Acceleration of Digital Image Processing Applications using FPGA Overlay Architectures ER - TY - GEN AU - Löcke, Thomas ID - 5416 TI - Instance-Specific Computing in Hard- and Software for Faster Solving of Complex Problems ER - TY - GEN AU - Wallaschek, Felix ID - 5419 TI - Accelerating Programmable Logic Controllers with the use of FPGAs ER - TY - THES AB - The use of heterogeneous computing resources, such as graphics processing units or other specialized co-processors, has become widespread in recent years because of their performance and energy efficiency advantages. Operating system approaches that are limited to optimizing CPU usage are no longer sufficient for the efficient utilization of systems that comprise diverse resource types. Enabling task preemption on these architectures and migration of tasks between different resource types at run-time is not only key to improving the performance and energy consumption but also to enabling automatic scheduling methods for heterogeneous compute nodes. This thesis proposes novel techniques for run-time management of heterogeneous resources and enabling tasks to migrate between diverse hardware. It provides fundamental work towards future operating systems by discussing implications, limitations, and chances of the heterogeneity and introducing solutions for energy- and performance-efficient run-time systems. Scheduling methods to utilize heterogeneous systems by the use of a centralized scheduler are presented that show benefits over existing approaches in varying case studies. AU - Beisel, Tobias ID - 10624 SN - 978-3-8325-4155-2 TI - Management and Scheduling of Accelerators for Heterogeneous High-Performance Computing ER - TY - JOUR AB - FPGAs are known to permit huge gains in performance and efficiency for suitable applications but still require reduced design efforts and shorter development cycles for wider adoption. In this work, we compare the resulting performance of two design concepts that in different ways promise such increased productivity. As common starting point, we employ a kernel-centric design approach, where computational hotspots in an application are identified and individually accelerated on FPGA. By means of a complex stereo matching application, we evaluate two fundamentally different design philosophies and approaches for implementing the required kernels on FPGAs. In the first implementation approach, we designed individually specialized data flow kernels in a spatial programming language for a Maxeler FPGA platform; in the alternative design approach, we target a vector coprocessor with large vector lengths, which is implemented as a form of programmable overlay on the application FPGAs of a Convey HC-1. We assess both approaches in terms of overall system performance, raw kernel performance, and performance relative to invested resources. After compensating for the effects of the underlying hardware platforms, the specialized dataflow kernels on the Maxeler platform are around 3x faster than kernels executing on the Convey vector coprocessor. In our concrete scenario, due to trade-offs between reconfiguration overheads and exposed parallelism, the advantage of specialized dataflow kernels is reduced to around 2.5x. 
AU - Kenter, Tobias AU - Schmitz, Henning AU - Plessl, Christian ID - 296 JF - International Journal of Reconfigurable Computing (IJRC) TI - Exploring Tradeoffs between Specialized Kernels and a Reusable Overlay in a Stereo-Matching Case Study VL - 2015 ER - TY - CONF AB - This paper introduces Binary Acceleration At Runtime (BAAR), an easy-to-use on-the-fly binary acceleration mechanism which aims to tackle the problem of enabling existent software to automatically utilize accelerators at runtime. BAAR is based on the LLVM Compiler Infrastructure and has a client-server architecture. The client runs the program to be accelerated in an environment which allows program analysis and profiling. Program parts which are identified as suitable for the available accelerator are exported and sent to the server. The server optimizes these program parts for the accelerator and provides RPC execution for the client. The client transforms its program to utilize accelerated execution on the server for offloaded program parts. We evaluate our work with a proof-of-concept implementation of BAAR that uses an Intel Xeon Phi 5110P as the acceleration target and performs automatic offloading, parallelization and vectorization of suitable program parts. The practicality of BAAR for real-world examples is shown based on a study of stencil codes. Our results show a speedup of up to 4 without any developer-provided hints and 5.77 with hints over the same code compiled with the Intel Compiler at optimization level O2 and running on an Intel Xeon E5-2670 machine. Based on our insights gained during implementation and evaluation we outline future directions of research, e.g., offloading more fine-granular program parts than functions, a more sophisticated communication mechanism or introducing on-stack replacement. AU - Damschen, Marvin AU - Plessl, Christian ID - 303 T2 - Proceedings of the 5th International Workshop on Adaptive Self-tuning Computing Systems (ADAPT) TI - Easy-to-Use On-The-Fly Binary Program Acceleration on Many-Cores ER - TY - CONF AU - Schumacher, Jörn AU - T. Anderson, J. AU - Borga, A. AU - Boterenbrood, H. AU - Chen, H. AU - Chen, K. AU - Drake, G. AU - Francis, D. AU - Gorini, B. AU - Lanni, F. AU - Lehmann-Miotto, Giovanna AU - Levinson, L. AU - Narevicius, J. AU - Plessl, Christian AU - Roich, A. AU - Ryu, S. AU - P. Schreuder, F. AU - Vandelli, Wainer AU - Vermeulen, J. AU - Zhang, J. ID - 1773 T2 - Proc. Int. Conf. on Distributed Event-Based Systems (DEBS) TI - Improving Packet Processing Performance in the ATLAS FELIX Project – Analysis and Optimization of a Memory-Bounded Algorithm ER - TY - JOUR AU - Plessl, Christian AU - Platzner, Marco AU - Schreier, Peter J. ID - 1768 IS - 5 JF - Informatik Spektrum KW - approximate computing KW - survey TI - Aktuelles Schlagwort: Approximate Computing ER - TY - CONF AB - In this paper, we study how binary applications can be transparently accelerated with novel heterogeneous computing resources without requiring any manual porting or developer-provided hints. Our work is based on Binary Acceleration At Runtime (BAAR), our previously introduced binary acceleration mechanism that uses the LLVM Compiler Infrastructure. BAAR is designed as a client-server architecture. The client runs the program to be accelerated in an environment, which allows program analysis and profiling and identifies and extracts suitable program parts to be offloaded.
The server compiles and optimizes these offloaded program parts for the accelerator and offers access to these functions to the client with a remote procedure call (RPC) interface. Our previous work proved the feasibility of our approach, but also showed that communication time and overheads limit the granularity of functions that can be meaningfully offloaded. In this work, we motivate the importance of a lightweight, high-performance communication between server and client and present a communication mechanism based on the Message Passing Interface (MPI). We evaluate our approach by using an Intel Xeon Phi 5110P as the acceleration target and show that the communication overhead can be reduced from 40% to 10%, thus enabling even small hotspots to benefit from offloading to an accelerator. AU - Damschen, Marvin AU - Riebler, Heinrich AU - Vaz, Gavin Francis AU - Plessl, Christian ID - 238 T2 - Proceedings of the 2015 Conference on Design, Automation and Test in Europe (DATE) TI - Transparent offloading of computational hotspots from binary code to Xeon Phi ER - TY - JOUR AB - The ATLAS experiment at CERN is planning full deployment of a new unified optical link technology for connecting detector front end electronics on the timescale of the LHC Run 4 (2025). It is estimated that roughly 8000 GBT (GigaBit Transceiver) links, with transfer rates up to 10.24 Gbps, will replace existing links used for readout, detector control and distribution of timing and trigger information. A new class of devices will be needed to interface many GBT links to the rest of the trigger, data-acquisition and detector control systems. In this paper FELIX (Front End LInk eXchange) is presented, a PC-based device to route data from and to multiple GBT links via a high-performance general purpose network capable of a total throughput up to O(20 Tbps). FELIX implies architectural changes to the ATLAS data acquisition system, such as the use of industry standard COTS components early in the DAQ chain. Additionally the design and implementation of a FELIX demonstration platform is presented and hardware and software aspects will be discussed. AU - Anderson, J AU - Borga, A AU - Boterenbrood, H AU - Chen, H AU - Chen, K AU - Drake, G AU - Francis, D AU - Gorini, B AU - Lanni, F AU - Lehmann Miotto, G AU - Levinson, L AU - Narevicius, J AU - Plessl, Christian AU - Roich, A AU - Ryu, S AU - Schreuder, F AU - Schumacher, Jörn AU - Vandelli, Wainer AU - Vermeulen, J AU - Zhang, J ID - 1775 JF - Journal of Physics: Conference Series TI - FELIX: a High-Throughput Network Approach for Interfacing to Front End Electronics for ATLAS Upgrades VL - 664 ER - TY - CHAP AB - Im Bereich der Computersysteme ist die Festlegung der Grenze zwischen Hardware und Software eine zentrale Problemstellung. Diese Grenze hat in den letzten Jahrzehnten nicht nur die Entwicklung von Computersystemen bestimmt, sondern auch die Strukturierung der Ausbildung in den Computerwissenschaften beeinflusst und sogar zur Entstehung von neuen Forschungsrichtungen geführt. In diesem Beitrag beschäftigen wir uns mit Verschiebungen an der Grenze zwischen Hardware und Software und diskutieren insgesamt drei qualitativ unterschiedliche Formen solcher Verschiebungen. Wir beginnen mit der Entwicklung von Computersystemen im letzten Jahrhundert und der Entstehung dieser Grenze, die Hardware und Software erst als eigenständige Produkte differenziert.
Dann widmen wir uns der Frage, welche Funktionen in einem Computersystem besser in Hardware und welche besser in Software realisiert werden sollten, eine Fragestellung, die zu Beginn der 90er-Jahre zur Bildung einer eigenen Forschungsrichtung, dem sogenannten Hardware/Software Co-design, geführt hat. Im Hardware/Software Co-design findet eine Verschiebung von Funktionen an der Grenze zwischen Hardware und Software während der Entwicklung eines Produktes statt, um Produkteigenschaften zu optimieren. Im fertig entwickelten und eingesetzten Produkt hingegen können wir dann eine feste Grenze zwischen Hardware und Software beobachten. Im dritten Teil dieses Beitrags stellen wir mit selbst-adaptiven Systemen eine hochaktuelle Forschungsrichtung vor. In unserem Kontext bedeutet Selbstadaption, dass ein System Verschiebungen von Funktionen an der Grenze zwischen Hardware und Software autonom während der Betriebszeit vornimmt. Solche Systeme beruhen auf rekonfigurierbarer Hardware, einer relativ neuen Technologie, mit der die Hardware eines Computers während der Laufzeit verändert werden kann. Diese Technologie führt zu einer durchlässigen Grenze zwischen Hardware und Software bzw. löst sie die herkömmliche Vorstellung einer festen Hardware und einer flexiblen Software damit auf. AU - Platzner, Marco AU - Plessl, Christian ED - Künsemöller, Jörn ED - Eke, Norbert Otto ED - Foit, Lioba ED - Kaerlein, Timo ID - 335 SN - 978-3-7705-5730-1 T2 - Logiken strukturbildender Prozesse: Automatismen TI - Verschiebungen an der Grenze zwischen Hardware und Software ER - TY - CONF AB - In order to leverage the use of reconfigurable architectures in general-purpose computing, quick and automated methods to find suitable accelerator designs are required. We tackle this challenge in both regards. In order to avoid long synthesis times, we target a vector coprocessor, implemented on the FPGAs of a Convey HC-1. Previous studies showed that existing tools were not able to accelerate a real-world application with low effort. We present a toolflow to automatically identify suitable loops for vectorization, generate a corresponding hardware/software bipartition, and generate coprocessor code. Where applicable, we leverage outer-loop vectorization. We evaluate our tools with a set of characteristic loops, systematically analyzing different dependency and data layout properties. AU - Kenter, Tobias AU - Vaz, Gavin Francis AU - Plessl, Christian ID - 388 T2 - Proceedings of the International Symposium on Reconfigurable Computing: Architectures, Tools, and Applications (ARC) TI - Partitioning and Vectorizing Binary Applications for a Reconfigurable Vector Computer VL - 8405 ER - TY - JOUR AB - Due to the continuously shrinking device structures and increasing densities of FPGAs, thermal aspects have become the new focus for many research projects over the last years. Most researchers rely on temperature simulations to evaluate their novel thermal management techniques. However, these temperature simulations require a high computational effort if a detailed thermal model is used and their accuracies are often unclear. In contrast to simulations, the use of synthetic heat sources allows for experimental evaluation of temperature management methods. In this paper, we investigate the creation of significant rises in temperature on modern FPGAs to enable future evaluation of thermal management techniques based on experiments.
To that end, we have developed seven different heat-generating cores that use different subsets of FPGA resources. Our experimental results show that, according to external temperature probes connected to the FPGA’s heat sink, we can increase the temperature by an average of 81 °C. This corresponds to an average increase of 156.3 °C as measured by the built-in thermal diodes of our Virtex-5 FPGAs in less than 30 min by only utilizing about 21 percent of the slices. AU - Agne, Andreas AU - Hangmann, Hendrik AU - Happe, Markus AU - Platzner, Marco AU - Plessl, Christian ID - 363 IS - 8, Part B JF - Microprocessors and Microsystems TI - Seven Recipes for Setting Your FPGA on Fire – A Cookbook on Heat Generators VL - 38 ER - TY - CONF AB - In this paper, we study how AES key schedules can be reconstructed from decayed memory. This operation is a crucial and time-consuming operation when trying to break encryption systems with cold-boot attacks. In software, the reconstruction of the AES master key can be performed using a recursive, branch-and-bound tree-search algorithm that exploits redundancies in the key schedule for constraining the search space. In this work, we investigate how this branch-and-bound algorithm can be accelerated with FPGAs. We translated the recursive search procedure to a state machine with an explicit stack for each recursion level and created optimized datapaths to accelerate in particular the processing of the most frequently accessed tree levels. We support two different decay models, of which especially the more realistic non-idealized asymmetric decay model causes very high runtimes in software. Our implementation on a Maxeler dataflow computing system outperforms a software implementation for this model by up to 27x, which makes cold-boot attacks against AES practical even for high error rates. AU - Riebler, Heinrich AU - Kenter, Tobias AU - Plessl, Christian AU - Sorge, Christoph ID - 377 KW - coldboot T2 - Proceedings of Field-Programmable Custom Computing Machines (FCCM) TI - Reconstructing AES Key Schedules from Decayed Memory with FPGAs ER - TY - JOUR AB - Self-aware computing is a paradigm for structuring and simplifying the design and operation of computing systems that face unprecedented levels of system dynamics and thus require novel forms of adaptivity. The generality of the paradigm makes it applicable to many types of computing systems and, previously, researchers started to introduce concepts of self-awareness to multicore architectures. In our work we build on a recent reference architectural framework as a model for self-aware computing and instantiate it for an FPGA-based heterogeneous multicore running the ReconOS reconfigurable architecture and operating system. After presenting the model for self-aware computing and ReconOS, we demonstrate with a case study how a multicore application built on the principle of self-awareness autonomously adapts to changes in the workload and system state. Our work shows that the reference architectural framework as a model for self-aware computing can be practically applied and allows us to structure and simplify the design process, which is essential for designing complex future computing systems.
AU - Agne, Andreas AU - Happe, Markus AU - Lösch, Achim AU - Plessl, Christian AU - Platzner, Marco ID - 365 IS - 2 JF - ACM Transactions on Reconfigurable Technology and Systems (TRETS) TI - Self-awareness as a Model for Designing and Operating Heterogeneous Multicores VL - 7 ER - TY - JOUR AB - The ReconOS operating system for reconfigurable computing offers a unified multi-threaded programming model and operating system services for threads executing in software and threads mapped to reconfigurable hardware. The operating system interface allows hardware threads to interact with software threads using well-known mechanisms such as semaphores, mutexes, condition variables, and message queues. By semantically integrating hardware accelerators into a standard operating system environment, ReconOS allows for rapid design space exploration, supports a structured application development process and improves the portability of applications. AU - Agne, Andreas AU - Happe, Markus AU - Keller, Ariane AU - Lübbers, Enno AU - Plattner, Bernhard AU - Platzner, Marco AU - Plessl, Christian ID - 328 IS - 1 JF - IEEE Micro TI - ReconOS - An Operating System Approach for Reconfigurable Computing VL - 34 ER - TY - CONF AU - C. Durelli, Gianluca AU - Pogliani, Marcello AU - Miele, Antonio AU - Plessl, Christian AU - Riebler, Heinrich AU - Vaz, Gavin Francis AU - D. Santambrogio, Marco AU - Bolchini, Cristiana ID - 1778 T2 - Proc. Int. Symp. on Parallel and Distributed Processing with Applications (ISPA) TI - Runtime Resource Management in Heterogeneous System Architectures: The SAVE Approach ER - TY - CONF AB - Reconfigurable architectures provide an opportunity to accelerate a wide range of applications, frequently by exploiting data-parallelism, where the same operations are homogeneously executed on a (large) set of data. However, when the sequential code is executed on a host CPU and only data-parallel loops are executed on an FPGA coprocessor, a sufficiently large number of loop iterations (trip counts) is required, such that the control- and data-transfer overheads to the coprocessor can be amortized. However, the trip count of large data-parallel loops is frequently not known at compile time, but only at runtime just before entering a loop. Therefore, we propose to generate code both for the CPU and the coprocessor, and to defer the decision where to execute the appropriate code to the runtime of the application when the trip count of the loop can be determined just at runtime. We demonstrate how an LLVM compiler-based toolflow can automatically insert appropriate decision blocks into the application code. Analyzing popular benchmark suites, we show that this kind of runtime decisions is often applicable. The practical feasibility of our approach is demonstrated by a toolflow that automatically identifies loops suitable for vectorization and generates code for the FPGA coprocessor of a Convey HC-1. The toolflow adds decisions based on a comparison of the runtime-computed trip counts to thresholds for specific loops and also includes support to move just the required data to the coprocessor. We evaluate the integrated toolflow with characteristic loops executed on different input data sizes.
AU - Vaz, Gavin Francis AU - Riebler, Heinrich AU - Kenter, Tobias AU - Plessl, Christian ID - 439 T2 - Proceedings of the International Conference on ReConFigurable Computing and FPGAs (ReConFig) TI - Deferring Accelerator Offloading Decisions to Application Runtime ER - TY - CONF AB - Stereo-matching algorithms recently received a lot of attention from the FPGA acceleration community. Presented solutions range from simple, very resource-efficient systems with modest matching quality for small embedded systems to sophisticated algorithms with several processing steps, implemented on big FPGAs. In order to achieve high throughput, most implementations strongly focus on pipelining and data reuse between different computation steps. This approach leads to high efficiency, but limits the supported computation patterns and, due to the high integration of the implementation, adaptations to the algorithm are difficult. In this work, we present a stereo-matching implementation that starts by offloading individual kernels from the CPU to the FPGA. Between subsequent compute steps on the FPGA, data is stored off-chip in on-board memory of the FPGA accelerator card. This enables us to accelerate the AD-census algorithm with cross-based aggregation and scanline optimization for the first time without algorithmic changes and for up to full HD image dimensions. Analyzing throughput and bandwidth requirements, we outline some trade-offs that are involved with this approach, compared to tighter integration of more kernel loops into one design. AU - Kenter, Tobias AU - Schmitz, Henning AU - Plessl, Christian ID - 406 T2 - Proceedings of the International Conference on ReConFigurable Computing and FPGAs (ReConFig) TI - Kernel-Centric Acceleration of High Accuracy Stereo-Matching ER - TY - CONF AU - C. Durelli, Gianluca AU - Copolla, Marcello AU - Djafarian, Karim AU - Koranaros, George AU - Miele, Antonio AU - Paolino, Michele AU - Pell, Oliver AU - Plessl, Christian AU - D. Santambrogio, Marco AU - Bolchini, Cristiana ID - 1780 T2 - Proc. Int. Conf. on Reconfigurable Computing: Architectures, Tools and Applications (ARC) TI - SAVE: Towards efficient resource management in heterogeneous system architectures ER - TY - JOUR AU - Giefers, Heiner AU - Plessl, Christian AU - Förstner, Jens ID - 1779 IS - 5 JF - ACM SIGARCH Computer Architecture News KW - funding-maxup KW - tet_topic_hpc SN - 0163-5964 TI - Accelerating Finite Difference Time Domain Simulations with Reconfigurable Dataflow Computers VL - 41 ER - TY - GEN AU - Riebler, Heinrich ID - 521 KW - coldboot TI - Identifikation und Wiederherstellung von kryptographischen Schlüsseln mit FPGAs ER - TY - CONF AB - Cold-boot attacks exploit the fact that DRAM contents are not immediately lost when a PC is powered off. Instead, the contents decay rather slowly, in particular if the DRAM chips are cooled to low temperatures. This effect opens an attack vector on cryptographic applications that keep decrypted keys in DRAM. An attacker with access to the target computer can reboot it or remove the RAM modules and quickly copy the RAM contents to non-volatile memory. By exploiting the known cryptographic structure of the cipher and layout of the key data in memory, in our application an AES key schedule with redundancy, the resulting memory image can be searched for sections that could correspond to decayed cryptographic keys; then, the attacker can attempt to reconstruct the original key.
However, the runtime of these algorithms grows rapidly with increasing memory image size, error rate and complexity of the bit error model, which limits the practicability of the approach. In this work, we study how the algorithm for key search can be accelerated with custom computing machines. We present an FPGA-based architecture on a Maxeler dataflow computing system that outperforms a software implementation by up to 205x, which significantly improves the practicability of cold-boot attacks against AES. AU - Riebler, Heinrich AU - Kenter, Tobias AU - Sorge, Christoph AU - Plessl, Christian ID - 528 KW - coldboot T2 - Proceedings of the International Conference on Field-Programmable Technology (FPT) TI - FPGA-accelerated Key Search for Cold-Boot Attacks against AES ER - TY - CONF AB - In this paper, we introduce “On-The-Fly Computing”, our vision of future IT services that will be provided by assembling modular software components available on world-wide markets. After suitable components have been found, they are automatically integrated, configured and brought to execution in an On-The-Fly Compute Center. We envision that these future compute centers will continue to leverage three current trends in large scale computing which are an increasing amount of parallel processing, a trend to use heterogeneous computing resources, and—in the light of rising energy cost—energy-efficiency as a primary goal in the design and operation of computing systems. In this paper, we point out three research challenges and our current work in these areas. AU - Happe, Markus AU - Kling, Peter AU - Plessl, Christian AU - Platzner, Marco AU - Meyer auf der Heide, Friedhelm ID - 505 T2 - Proceedings of the 9th IEEE Workshop on Software Technology for Future embedded and Ubiquitous Systems (SEUS) TI - On-The-Fly Computing: A Novel Paradigm for Individualized IT Services ER - TY - CONF AU - Suess, Tim AU - Schoenrock, Andrew AU - Meisner, Sebastian AU - Plessl, Christian ID - 1787 SN - 978-0-7695-4979-8 T2 - Proc. Int. Symp. on Parallel and Distributed Processing Workshops (IPDPSW) TI - Parallel Macro Pipelining on the Intel SCC Many-Core Computer ER - TY - CONF AU - Grunzke, Richard AU - Birkenheuer, Georg AU - Blunk, Dirk AU - Breuers, Sebastian AU - Brinkmann, André AU - Gesing, Sandra AU - Herres-Pawlis, Sonja AU - Kohlbacher, Oliver AU - Krüger, Jens AU - Kruse, Martin AU - Müller-Pfefferkorn, Ralph AU - Schäfer, Patrick AU - Schuller, Bernd AU - Steinke, Thomas AU - Zink, Andreas ID - 2107 T2 - Proc. UNICORE Summit TI - A Data Driven Science Gateway for Computational Workflows ER - TY - GEN AU - Plessl, Christian AU - Platzner, Marco AU - Agne, Andreas AU - Happe, Markus AU - Lübbers, Enno ID - 587 TI - Programming models for reconfigurable heterogeneous multi-cores ER - TY - CONF AB - Although the benefits of FPGAs for accelerating scientific codes are widely acknowledged, the use of FPGA accelerators in scientific computing is not widespread because reaping these benefits requires knowledge of hardware design methods and tools that is typically not available with domain scientists. A promising but hardly investigated approach is to develop tool flows that keep the common languages for scientific code (C, C++, and Fortran) and allow the developer to augment the source code with OpenMP-like directives for instructing the compiler which parts of the application shall be offloaded to the FPGA accelerator.
In this work, we study whether the promise of effective FPGA acceleration with an OpenMP-like programming effort can actually be held. Our target system is the Convey HC-1 reconfigurable computer for which an OpenMP-like programming environment exists. As a case study, we use an application from computational nanophotonics. Our results show that a developer without previous FPGA experience could create an FPGA-accelerated application that is competitive with an optimized OpenMP-parallelized CPU version running on a two-socket quad-core server. Finally, we discuss our experiences with this tool flow and the Convey HC-1 from a productivity and economic point of view. AU - Meyer, Björn AU - Schumacher, Jörn AU - Plessl, Christian AU - Förstner, Jens ID - 2106 KW - funding-upb-forschungspreis KW - funding-maxup KW - tet_topic_hpc T2 - Proc. Int. Conf. on Field Programmable Logic and Applications (FPL) TI - Convey Vector Personalities – FPGA Acceleration with an OpenMP-like Effort? ER - TY - JOUR AU - Schumacher, Tobias AU - Plessl, Christian AU - Platzner, Marco ID - 2108 IS - 2 JF - Microprocessors and Microsystems KW - funding-altera SN - 0141-9331 TI - IMORC: An Infrastructure and Architecture Template for Implementing High-Performance Reconfigurable FPGA Accelerators VL - 36 ER - TY - CONF AB - Due to the continuously shrinking device structures and increasing densities of FPGAs, thermal aspects have become the new focus for many research projects over the last years. Most researchers rely on temperature simulations to evaluate their novel thermal management techniques. However, the accuracy of the simulations is to some extent questionable, and they require a high computational effort if a detailed thermal model is used. For experimental evaluation of real-world temperature management methods, often synthetic heat sources are employed. Therefore, in this paper we investigated whether we can create significant rises in temperature on modern FPGAs to enable future evaluation of thermal management techniques based on experiments in contrast to simulations. To this end, we have developed eight different heat-generating cores that use different subsets of the FPGA resources. Our experimental results show that, according to the built-in thermal diode of our Xilinx Virtex-5 FPGA, we can increase the chip temperature by 134 °C in less than 12 minutes by only utilizing about 21% of the slices. AU - Happe, Markus AU - Hangmann, Hendrik AU - Agne, Andreas AU - Plessl, Christian ID - 615 T2 - Proceedings of the International Conference on Reconfigurable Computing and FPGAs (ReConFig) TI - Eight Ways to put your FPGA on Fire – A Systematic Study of Heat Generators ER - TY - CONF AB - One major obstacle for widespread FPGA usage in general-purpose computing is the development tool flow that requires much higher effort than for pure software solutions. Convey Computer promises a solution to this problem for their HC-1 platform, where the FPGAs are configured to run as a vector processor and the software source code can be annotated with pragmas that guide an automated vectorization process. We investigate this approach for a stereo matching algorithm that has abundant parallelism and a number of different computational patterns. We note that for this case study the automated vectorization in its current state doesn’t hold its productivity promise.
However, we also show that using the Vector Personality can yield significant speedups compared to CPU implementations in two of three investigated phases of the algorithm. Those speedups don't match custom FPGA implementations, but can come with much reduced development effort. AU - Kenter, Tobias AU - Plessl, Christian AU - Schmitz, Henning ID - 591 T2 - Proceedings of the International Conference on ReConFigurable Computing and FPGAs (ReConFig) TI - Pragma based parallelization - Trading hardware efficiency for ease of use? ER - TY - CONF AB - Today's design and operation principles and methods do not scale well with future reconfigurable computing systems due to an increased complexity in system architectures and applications, run-time dynamics and corresponding requirements. Hence, novel design and operation principles and methods are needed that possibly break drastically with the static ones we have built into our systems and the fixed abstraction layers we have cherished over the last decades. Thus, we propose a HW/SW platform that collects and maintains information about its state and progress, which enables the system to reason about its behavior (self-awareness) and utilizes its knowledge to effectively and autonomously adapt its behavior to changing requirements (self-expression). To enable self-awareness, our compute nodes collect information using a variety of sensors, i.e., performance counters and thermal diodes, and use internal self-awareness models that process this information. For self-awareness, on-line learning is crucial such that the node learns and continuously updates its models at run-time to react to changing conditions. To enable self-expression, we break with the classic design-time abstraction layers of hardware, operating system and software. In contrast, our system is able to vertically migrate functionalities between the layers at run-time to exploit trade-offs between abstraction and optimization. This paper presents a heterogeneous multi-core architecture that enables self-awareness and self-expression, an operating system for our proposed hardware/software platform and a novel self-expression method. AU - Happe, Markus AU - Agne, Andreas AU - Plessl, Christian AU - Platzner, Marco ID - 609 T2 - Proceedings of the Workshop on Self-Awareness in Reconfigurable Computing Systems (SRCS) TI - Hardware/Software Platform for Self-aware Compute Nodes ER - TY - CONF AB - Heterogeneous machines are gaining momentum in the High Performance Computing field, due to their theoretical speedups and power consumption advantages. In practice, while some applications meet the performance expectations, heterogeneous architectures still require a tremendous effort from the application developers. This work presents a code generation method to port codes to heterogeneous platforms, based on transformations of the control flow into function calls. The results show that the cost of the function-call mechanism is affordable for the tested HPC kernels. The complete toolchain, based on the LLVM compiler infrastructure, is fully automated once the sequential specification is provided.
AU - Barrio, Pablo AU - Carreras, Carlos AU - Sierra, Roberto AU - Kenter, Tobias AU - Plessl, Christian ID - 567 T2 - Proceedings of the International Conference on High Performance Computing and Simulation (HPCS) TI - Turning control flow graphs into function calls: Code generation for heterogeneous architectures ER - TY - CONF AB - While numerous publications have presented ring oscillator designs for temperature measurements, a detailed study of the ring oscillator's design space is still missing. In this work, we introduce metrics for comparing the performance and area efficiency of ring oscillators and a methodology for determining these metrics. As a result, we present a systematic study of the design space for ring oscillators for a Xilinx Virtex-5 platform FPGA. AU - Rüthing, Christoph AU - Happe, Markus AU - Agne, Andreas AU - Plessl, Christian ID - 612 T2 - Proceedings of the International Conference on Field Programmable Logic and Applications (FPL) TI - Exploration of Ring Oscillator Design Space for Temperature Measurements on FPGAs ER - TY - CONF AU - Beisel, Tobias AU - Wiersema, Tobias AU - Plessl, Christian AU - Brinkmann, André ID - 2180 KW - funding-enhance T2 - Proc. Workshop on Computer Architecture and Operating System Co-design (CAOS) TI - Programming and Scheduling Model for Supporting Heterogeneous Accelerators in Linux ER - TY - JOUR AU - Grad, Mariusz AU - Plessl, Christian ID - 2177 JF - Int. Journal of Reconfigurable Computing (IJRC) TI - On the Feasibility and Limitations of Just-In-Time Instruction Set Extension for FPGA-based Reconfigurable Processors ER - TY - CONF AU - Kenter, Tobias AU - Plessl, Christian AU - Platzner, Marco AU - Kauschke, Michael ID - 2191 KW - funding-intel T2 - Intel European Research and Innovation Conference TI - Estimation and Partitioning for CPU-Accelerator Architectures ER - TY - CHAP AU - Plessl, Christian AU - Platzner, Marco ED - Khalgui, Mohamed ED - Hanisch, Hans-Michael ID - 2202 SN - 978-1-60960-086-0 T2 - Reconfigurable Embedded Control Systems: Applications for Flexibility and Agility TI - Hardware Virtualization on Dynamically Reconfigurable Embedded Processors ER - TY - CHAP AU - Sekanina, Lukas AU - Walker, James Alfred AU - Kaufmann, Paul AU - Plessl, Christian AU - Platzner, Marco ID - 10737 T2 - Cartesian Genetic Programming TI - Evolution of Electronic Circuits ER - TY - CONF AU - Meyer, Björn AU - Plessl, Christian AU - Förstner, Jens ID - 2194 KW - tet_topic_hpc T2 - Symp. on Application Accelerators in High Performance Computing (SAAHPC) TI - Transformation of scientific algorithms to parallel computing code: subdomain support in a MPI-multi-GPU backend ER - TY - CONF AU - Beisel, Tobias AU - Wiersema, Tobias AU - Plessl, Christian AU - Brinkmann, André ID - 2193 T2 - Proc. Int. Conf. on Application-Specific Systems, Architectures, and Processors (ASAP) TI - Cooperative multitasking for heterogeneous accelerators in the Linux Completely Fair Scheduler ER - TY - CONF AB - In the next decades, hybrid multi-cores will be the predominant architecture for reconfigurable FPGA-based systems. Temperature-aware thread mapping strategies are key for providing dependability in such systems. These strategies rely on measuring the temperature distribution and predicting the thermal behavior of the system when there are changes to the hardware and software running on the FPGA.
While there are a number of tools that use thermal models to predict temperature distributions at design time, these tools lack the flexibility to autonomously adjust to changing FPGA configurations. To address this problem, we propose a temperature-aware system that empowers FPGA-based reconfigurable multi-cores to autonomously predict the on-chip temperature distribution for proactive thread remapping. Our system obtains temperature measurements through a self-calibrating grid of sensors and uses area-constrained heat-generating circuits in order to generate spatial and temporal temperature gradients. The generated temperature variations are then used to learn the free parameters of the system's thermal model. The system thus acquires an understanding of its own thermal characteristics. We implemented an FPGA system containing a net of 144 temperature sensors on a Xilinx Virtex-6 LX240T FPGA that is aware of its thermal model. Finally, we show that the temperature predictions vary less than 0.72 °C on average compared to the measured temperature distributions at run-time. AU - Happe, Markus AU - Agne, Andreas AU - Plessl, Christian ID - 656 T2 - Proceedings of the 2011 International Conference on Reconfigurable Computing and FPGAs (ReConFig) TI - Measuring and Predicting Temperature Distributions on FPGAs at Run-Time ER - TY - CONF AU - Kenter, Tobias AU - Platzner, Marco AU - Plessl, Christian AU - Kauschke, Michael ID - 2200 KW - design space exploration KW - LLVM KW - partitioning KW - performance KW - estimation KW - funding-intel SN - 978-1-4503-0554-9 T2 - Proc. Int. Symp. on Field-Programmable Gate Arrays (FPGA) TI - Performance Estimation Framework for Automated Exploration of CPU-Accelerator Architectures ER - TY - JOUR AU - Schumacher, Tobias AU - Süß, Tim AU - Plessl, Christian AU - Platzner, Marco ID - 2201 JF - Int. Journal of Reconfigurable Computing (IJRC) KW - funding-altera TI - FPGA Acceleration of Communication-bound Streaming Applications: Architecture Modeling and a 3D Image Compositing Case Study ER - TY - CONF AU - Grad, Mariusz AU - Plessl, Christian ID - 2198 T2 - Proc. Reconfigurable Architectures Workshop (RAW) TI - Just-in-time Instruction Set Extension – Feasibility and Limitations for an FPGA-based Reconfigurable ASIP Architecture ER - TY - CONF AU - Lübbers, Enno AU - Platzner, Marco AU - Plessl, Christian AU - Keller, Ariane AU - Plattner, Bernhard ID - 2223 SN - 1-60132-140-6 T2 - Proc. Int. Conf. on Engineering of Reconfigurable Systems and Algorithms (ERSA) TI - Towards Adaptive Networking for Embedded Devices based on Reconfigurable Hardware ER - TY - CONF AU - Grad, Mariusz AU - Plessl, Christian ID - 2216 T2 - Proc. Int. Conf. on ReConFigurable Computing and FPGAs (ReConFig) TI - Pruning the Design Space for Just-In-Time Processor Customization ER - TY - CONF AU - Grad, Mariusz AU - Plessl, Christian ID - 2224 SN - 1-60132-140-6 T2 - Proc. Int. Conf. on Engineering of Reconfigurable Systems and Algorithms (ERSA) TI - An Open Source Circuit Library with Benchmarking Facilities ER - TY - CONF AU - Andrews, David AU - Plessl, Christian ID - 2220 SN - 1-60132-140-6 T2 - Proc. Int. Conf. on Engineering of Reconfigurable Systems and Algorithms (ERSA) TI - Configurable Processor Architectures: History and Trends ER - TY - GEN ED - Plaks, Toomas P. ED - Andrews, David ED - DeMara, Ronald ED - Lam, Herman ED - Lee, Jooheung ED - Plessl, Christian ED - Stitt, Greg ID - 2222 SN - 1-60132-140-6 TI - Proc. Int. Conf.
on Engineering of Reconfigurable Systems and Algorithms (ERSA) ER - TY - CONF AU - Beisel, Tobias AU - Niekamp, Manuel AU - Plessl, Christian ID - 2226 SN - 978-1-4244-6965-9 T2 - Proc. Int. Conf. on Application-Specific Systems, Architectures, and Processors (ASAP) TI - Using Shared Library Interposing for Transparent Acceleration in Systems with Heterogeneous Hardware Accelerators ER - TY - CONF AU - Keller, Ariane AU - Plattner, Bernhard AU - Lübbers, Enno AU - Platzner, Marco AU - Plessl, Christian ID - 2206 SN - 978-1-4244-8864-3 T2 - Proc. IEEE Globecom Workshop on Network of the Future (FutureNet) TI - Reconfigurable Nodes for Future Networks ER - TY - CONF AU - Woehrle, Matthias AU - Plessl, Christian AU - Thiele, Lothar ID - 2227 SN - 978-1-4244-7911-5 T2 - Proc. Int. Conf. Networked Sensing Systems (INSS) TI - Rupeas: Ruby Powered Event Analysis DSL ER - TY - CONF AU - Kenter, Tobias AU - Platzner, Marco AU - Plessl, Christian AU - Kauschke, Michael ED - Hammami, Omar ED - Larrabee, Sandra ID - 2228 T2 - Proc. Workshop on Architectural Research Prototyping (WARP), International Symposium on Computer Architecture (ISCA) TI - Performance Estimation for the Exploration of CPU-Accelerator Architectures ER - TY - GEN AB - Wireless Sensor Networks (WSNs) are unique embedded computation systems for distributed sensing of a dispersed phenomenon. While being a strongly concurrent distributed system, its embedded aspects with severe resource limitations and the wireless communication requires a fusion of technologies and methodologies from very different fields. As WSNs are deployed in remote locations for long-term unattended operation, assurance of correct functioning of the system is of prime concern. Thus, the design and development of WSNs requires specialized tools to allow for testing and debugging the system. To this end, we present a framework for analyzing and checking WSNs based on collected events during system operation. It allows for abstracting from the event trace by means of behavioral queries and uses assertions for checking the accordance of an execution to its specification. The framework is independent from WSN test platforms, applications and logging semantics and thus generally applicable for analyzing event logs of WSN test executions. AU - Woehrle, Matthias AU - Plessl, Christian AU - Thiele, Lothar ID - 2353 KW - Rupeas KW - DSL KW - WSN KW - testing TI - Rupeas: Ruby Powered Event Analysis DSL ER - TY - CONF AB - Mapping applications that consist of a collection of cores to FPGA accelerators and optimizing their performance is a challenging task in high performance reconfigurable computing. We present IMORC, an architectural template and highly versatile on-chip interconnect. IMORC links provide asynchronous FIFOs and bitwidth conversion which allows for flexibly composing accelerators from cores running at full speed within their own clock domains, thus facilitating the re-use of cores and portability. Further, IMORC inserts performance counters for monitoring runtime data. In this paper, we first introduce the IMORC architectural template and the on-chip interconnect, and then demonstrate IMORC on the example of accelerating the k-th nearest neighbor thinning problem on an XD1000 reconfigurable computing system. Using IMORC's monitoring infrastructure, we gain insights into the data-dependent behavior of the application which, in turn, allow for optimizing the accelerator. 
AU - Schumacher, Tobias AU - Plessl, Christian AU - Platzner, Marco ID - 2350 KW - IMORC KW - interconnect KW - performance SN - 978-1-4244-4450-2 T2 - Proc. Int. Symp. on Field-Programmable Custom Computing Machines (FCCM) TI - IMORC: Application Mapping, Monitoring and Optimization for High-Performance Reconfigurable Computing ER - TY - CONF AB - In this work we present EvoCache, a novel approach for implementing application-specific caches. The key innovation of EvoCache is to make the function that maps memory addresses from the CPU address space to cache indices programmable. We support arbitrary Boolean mapping functions that are implemented within a small reconfigurable logic fabric. For finding suitable cache mapping functions we rely on techniques from the evolvable hardware domain and utilize an evolutionary optimization procedure. We evaluate the use of EvoCache in an embedded processor for two specific applications (JPEG and BZIP2 compression) with respect to execution time, cache miss rate and energy consumption. We show that the evolvable hardware approach for optimizing the cache functions not only significantly improves the cache performance for the training data used during optimization, but that the evolved mapping functions generalize very well. Compared to a conventional cache architecture, EvoCache applied to test data achieves a reduction in execution time of up to 14.31% for JPEG (10.98% for BZIP2), and in energy consumption by 16.43% for JPEG (10.70% for BZIP2). We also discuss the integration of EvoCache into the operating system and show that the area and delay overheads introduced by EvoCache are acceptable. AU - Kaufmann, Paul AU - Plessl, Christian AU - Platzner, Marco ID - 2262 KW - EvoCache KW - evolvable hardware KW - computer architecture T2 - Proc. NASA/ESA Conference on Adaptive Hardware and Systems (AHS) TI - EvoCaches: Application-specific Adaptation of Cache Mapping ER - TY - CONF AU - Beutel, Jan AU - Gruber, Stephan AU - Hasler, Andi AU - Lim, Roman AU - Meier, Andreas AU - Plessl, Christian AU - Talzi, Igor AU - Thiele, Lothar AU - Tschudin, Christian AU - Woehrle, Matthias AU - Yuecel, Mustafa ID - 2352 KW - WSN KW - PermaSense SN - 978-1-4244-5108-1 T2 - Proc. Int. Conf. on Information Processing in Sensor Networks (IPSN) TI - PermaDAQ: A Scientific Instrument for Precision Sensing and Data Recovery in Environmental Extremes ER - TY - CONF AU - Schumacher, Tobias AU - Süß, Tim AU - Plessl, Christian AU - Platzner, Marco ID - 2238 KW - IMORC KW - graphics SN - 978-0-7695-3917-1 T2 - Proc. Int. Conf. on ReConFigurable Computing and FPGAs (ReConFig) TI - Communication Performance Characterization for Reconfigurable Accelerator Design on the XD1000 ER - TY - CONF AU - Schumacher, Tobias AU - Plessl, Christian AU - Platzner, Marco ID - 2261 KW - IMORC KW - NOC KW - KNN KW - accelerator SN - 1946-1488 T2 - Proc. Int. Conf. on Field Programmable Logic and Applications (FPL) TI - An Accelerator for k-th Nearest Neighbor Thinning Based on the IMORC Infrastructure ER - TY - CONF AB - In this paper, we introduce the Woolcano reconfigurable processor architecture. The architecture is based on the Xilinx Virtex-4 FX FPGA and leverages the Auxiliary Processing Unit (APU) as well as the partial reconfiguration capabilities to provide dynamically reconfigurable custom instructions. We also present a hardware tool flow that automatically translates software functions into custom instructions and a software tool flow that creates binaries using these instructions. 
While previous research on processors with reconfigurable functional units has been performed predominantly with simulation, the Woolcano architecture allows for exploring dynamic instruction set extension with commercially available hardware. Finally, we present a case study demonstrating a custom floating-point instruction generated with our approach, which achieves a 40x speedup over software-emulated floating-point operations and a 21% speedup over the Xilinx hardware floating-point unit. AU - Grad, Mariusz AU - Plessl, Christian ID - 2263 SN - 1-60132-101-5 T2 - Proc. Int. Conf. on Engineering of Reconfigurable Systems and Algorithms (ERSA) TI - Woolcano: An Architecture and Tool Flow for Dynamic Instruction Set Extension on Xilinx Virtex-4 FX ER - TY - CONF AU - Woehrle, Matthias AU - Plessl, Christian AU - Lim, Roman AU - Beutel, Jan AU - Thiele, Lothar ID - 2370 KW - WSN KW - testing KW - verification SN - 978-0-7695-3158-8 T2 - IEEE Int. Conf. on Sensor Networks, Ubiquitous, and Trustworthy Computing (SUTC) TI - EvAnT: Analysis and Checking of event traces for Wireless Sensor Networks ER - TY - CONF AU - Schumacher, Tobias AU - Meiche, Robert AU - Kaufmann, Paul AU - Lübbers, Enno AU - Plessl, Christian AU - Platzner, Marco ID - 2364 SN - 1-60132-064-7 T2 - Proc. Int. Conf. on Engineering of Reconfigurable Systems and Algorithms (ERSA) TI - A Hardware Accelerator for k-th Nearest Neighbor Thinning ER - TY - CONF AU - Schumacher, Tobias AU - Plessl, Christian AU - Platzner, Marco ID - 2372 KW - IMORC KW - IP core KW - interconnect T2 - Many-core and Reconfigurable Supercomputing Conference (MRSC) TI - IMORC: An infrastructure for performance monitoring and optimization of reconfigurable computers ER - TY - GEN AU - Beutel, Jan AU - Plessl, Christian AU - Woehrle, Matthias ID - 2394 TI - Increasing the Reliability of Wireless Sensor Networks with a Unit Testing Framework ER - TY - CONF AU - Woehrle, Matthias AU - Plessl, Christian AU - Beutel, Jan AU - Thiele, Lothar ID - 2392 KW - WSN KW - testing KW - distributed KW - embedded SN - 978-1-59593-694-3 T2 - Proc. Workshop on Embedded Networked Sensors (EmNets) TI - Increasing the Reliability of Wireless Sensor Networks with a Distributed Testing Framework ER - TY - CONF AU - Beutel, Jan AU - Dyer, Matthias AU - Lim, Roman AU - Plessl, Christian AU - Woehrle, Matthias AU - Yuecel, Mustafa AU - Thiele, Lothar ID - 2393 KW - WSN KW - testing KW - verification SN - 1-4244-1231-5 T2 - Proc. Int. Conf. Networked Sensing Systems (INSS) TI - Automated Wireless Sensor Network Testing ER - TY - THES AB - In this thesis, we propose to use a reconfigurable processor as the main computation element in embedded systems for applications from the multimedia and communications domain. A reconfigurable processor integrates an embedded CPU core with a Reconfigurable Processing Unit (RPU). Many of our target applications require real-time signal processing of data streams and exhibit a high computational demand. The key challenge in designing embedded systems for these applications is to find an implementation that satisfies the performance goals and is adaptable to new applications, while the system cost is minimized. Implementations that solely use an embedded CPU are likely to miss the performance goals. Application-Specific Integrated Circuit (ASIC)-based coprocessors can be used for some high-volume products with fixed functions, but fall short for systems with varying applications.
We argue that a reconfigurable processor with a coarse-grained, dynamically reconfigurable array of modest size provides an attractive implementation platform for our application domain. The computationally intensive application kernels are executed on the RPU, while the remaining parts of the application are executed on the CPU. Reconfigurable hardware allows for implementing application-specific coprocessors with high performance, while the function of the coprocessor can still be adapted due to its programmability. So far, reconfigurable technology is used in embedded systems primarily with static configurations, e.g., for implementing glue logic, replacing ASICs, and for implementing fixed-function coprocessors. Changing the configuration at runtime enables a number of interesting application modes, e.g., on-demand loading of coprocessors and time-multiplexed execution of coprocessors, which is commonly denoted as hardware virtualization. While the use of static configurations is well understood and supported by design tools, the role of dynamic reconfiguration has not been well investigated yet. Current application specification methods and design tools do not provide an end-to-end tool flow that considers dynamic reconfiguration. A key idea of our approach is to reduce system cost by keeping the size of the reconfigurable array small and to use hardware virtualization techniques to compensate for the limited hardware resources. The main contribution of this thesis is the codesign of a reconfigurable processor architecture named Zippy, the corresponding hardware and software implementation tools, and an application specification model which explicitly considers hardware virtualization. The Zippy architecture is widely parametrized and allows for specifying a whole family of processor architectures. The implementation tools are also parametrized and can target any architectural variant. We evaluate the performance of the architecture with a system-level, cycle-accurate cosimulation framework. This framework enables us to perform design-space exploration for a variety of reconfigurable processor architectures. With two case studies, we demonstrate that hardware virtualization on the Zippy architecture is feasible and enables us to trade off performance for area in embedded systems. Finally, we present a novel method for optimal temporal partitioning of sequential circuits, which is an important form of hardware virtualization. The method, based on slowdown and retiming, allows us to decompose any sequential circuit into a number of smaller, communicating subcircuits that can be executed on a dynamically reconfigurable architecture. AU - Plessl, Christian ID - 2404 KW - Zippy SN - 978-3-8322-5561-3 TI - Hardware virtualization on a coarse-grained reconfigurable processor ER - TY - CONF AB - This paper presents a novel method for optimal temporal partitioning of sequential circuits for time-multiplexed reconfigurable architectures. The method is based on slowdown and retiming and maximizes the circuit's performance during execution while restricting the size of the partitions to respect the resource constraints of the reconfigurable architecture. We provide a mixed integer linear program (MILP) formulation of the problem, which can be solved exactly. In contrast to related work, our approach optimizes performance directly, takes structural modifications of the circuit into account, and is extensible.
We present the application of the new method to temporal partitioning for a coarse-grained reconfigurable architecture. AU - Plessl, Christian AU - Platzner, Marco AU - Thiele, Lothar ID - 2401 KW - temporal partitioning KW - retiming KW - ILP T2 - Proc. Int. Conf. on Field Programmable Technology (ICFPT) TI - Optimal Temporal Partitioning based on Slowdown and Retiming ER - TY - CONF AB - This paper motivates the use of hardware virtualization on coarse-grained reconfigurable architectures. We introduce Zippy, a coarse-grained multi-context hybrid CPU with architectural support for efficient hardware virtualization. The architectural details and the corresponding tool flow are outlined. As a case study, we compare the non-virtualized and the virtualized execution of an ADPCM decoder. AU - Plessl, Christian AU - Platzner, Marco ID - 2411 KW - Zippy T2 - Proc. Int. Conf. on Application-Specific Systems, Architectures, and Processors (ASAP) TI - Zippy – A coarse-grained reconfigurable array with support for hardware virtualization ER - TY - JOUR AB - Reconfigurable architectures that tightly integrate a standard CPU core with a field-programmable hardware structure have recently been receiving increased attention. Estimating the impact of architectural design decisions on the overall system performance is a challenging task. In this paper, we first present a framework for the cycle-accurate performance evaluation of hybrid reconfigurable processors on the system level. Then, we discuss a reconfigurable processor for data-streaming applications, which attaches a coarse-grained reconfigurable unit to the coprocessor interface of a standard embedded CPU core. By means of a case study we evaluate the system-level impact of certain design features for the reconfigurable unit, such as multiple contexts, register replication, and hardware context scheduling. The results illustrate that a system-level evaluation framework is of paramount importance for studying the architectural trade-offs and optimizing design parameters for reconfigurable processors. AU - Enzler, Rolf AU - Plessl, Christian AU - Platzner, Marco ID - 2412 IS - 2-3 JF - Microprocessors and Microsystems KW - FPGA KW - reconfigurable computing KW - co-simulation KW - Zippy TI - System-level performance evaluation of reconfigurable processors VL - 29 ER - TY - CONF AB - In this paper we give an introduction to the virtualization of hardware on reconfigurable devices. We identify three main approaches, denoted as temporal partitioning, virtualized execution, and virtual machine. For each virtualization approach, we discuss the application models, the required execution architectures, the design tools and the run-time systems. Then, we survey a selection of important projects in the field. AU - Plessl, Christian AU - Platzner, Marco ID - 2415 KW - hardware virtualization T2 - Proc. Int. Conf. on Engineering of Reconfigurable Systems and Algorithms (ERSA) TI - Virtualization of Hardware – Introduction and Survey ER - TY - CONF AB - This paper presents TKDM, a PC-based high-performance reconfigurable computing environment. The TKDM hardware consists of an FPGA module that uses the DIMM (dual inline memory module) bus for high-bandwidth and low-latency communication with the host CPU. The system's firmware is integrated with the Linux host operating system and offers functions for data communication and FPGA reconfiguration. The intended use of TKDM is that of a dynamically reconfigurable co-processor for data streaming applications.
The system's firmware can be customized for specific application domains to provide simple and easy-to-use programming interfaces. AU - Plessl, Christian AU - Platzner, Marco ID - 2418 KW - coprocessor KW - DIMM KW - memory bus KW - FPGA KW - high performance computing T2 - Proc. Int. Conf. on Field Programmable Technology (ICFPT) TI - TKDM – A Reconfigurable Co-processor in a PC's Memory Slot ER - TY - JOUR AB - Wearable computers are embedded into the mobile environment of their users. A design challenge for wearable systems is to combine the high performance required for tasks such as video decoding with the low energy consumption required to maximise battery runtimes and the flexibility demanded by the dynamics of the environment and the applications. In this paper, we demonstrate that reconfigurable hardware technology is able to answer this challenge. We present the concept and the prototype implementation of an autonomous wearable unit with reconfigurable modules (WURM). We discuss experiments that show the uses of reconfigurable hardware in WURM: ASICs-on-demand and adaptive interfaces. Finally, we present an experiment with an operating system layer for WURM. AU - Plessl, Christian AU - Enzler, Rolf AU - Walder, Herbert AU - Beutel, Jan AU - Platzner, Marco AU - Thiele, Lothar AU - Tröster, Gerhard ID - 2419 IS - 5 JF - Personal and Ubiquitous Computing TI - The Case for Reconfigurable Hardware in Wearable Computing VL - 7 ER - TY - JOUR AB - This paper presents the acceleration of minimum-cost covering problems by instance-specific hardware. First, we formulate the minimum-cost covering problem and discuss a branch & bound algorithm to solve it. Then we describe instance-specific hardware architectures that implement branch & bound in 3-valued logic and use reduction techniques similar to those found in software solvers. We further present prototypical accelerator implementations and a corresponding design tool flow. Our experiments reveal significant raw speedups of up to five orders of magnitude for a set of smaller unate covering problems. Provided that hardware compilation times can be reduced, we conclude that instance-specific acceleration of hard minimum-cost covering problems will lead to substantial overall speedups. AU - Plessl, Christian AU - Platzner, Marco ID - 2420 IS - 2 JF - Journal of Supercomputing KW - reconfigurable computing KW - instance-specific acceleration KW - minimum covering SN - 0920-8542 TI - Instance-Specific Accelerators for Minimum Covering VL - 26 ER - TY - CONF AB - In contrast to processors, current reconfigurable devices totally lack programming models that would allow for device-independent compilation and forward compatibility. The key to overcoming this limitation is hardware virtualization. In this paper, we resort to a macro-pipelined execution model to achieve hardware virtualization for data streaming applications. As a hardware implementation we present a hybrid multi-context architecture that attaches a coarse-grained reconfigurable array to a host CPU. A co-simulation framework enables cycle-accurate simulation of the complete architecture. As a case study we map an FIR filter to our virtualized hardware model and evaluate different designs. We discuss the impact of the number of contexts and the feature of context state on the speedup and the CPU load. AU - Enzler, Rolf AU - Plessl, Christian AU - Platzner, Marco ID - 2421 KW - Zippy KW - multi-context KW - FPGA T2 - Proc. Int. Conf.
on Field Programmable Logic and Applications (FPL) TI - Virtualizing Hardware with Multi-Context Reconfigurable Arrays VL - 2778 ER - TY - CONF AB - Reconfigurable computing architectures aim to dynamically adapt their hardware to the application at hand. As research shows, the time it takes to reconfigure the hardware forms an overhead that can significantly impair the benefits of hardware customization. Multi-context devices are one promising approach to overcome the limitations posed by long reconfiguration times. In contrast to more traditional reconfigurable architectures, multi-context devices hold several configurations on-chip. On demand, the device can quickly switch to another context. In this paper we present a co-simulation environment to investigate design trade-offs for hybrid multi-context architectures. Our architectural model comprises a reconfigurable unit closely coupled to a CPU core. As a case study, we discuss the implementation of a FIR filter partitioned into several contexts. We outline the mapping process and present simulation results for single- and multi-context reconfigurable units coupled with both embedded and high-end CPUs. AU - Enzler, Rolf AU - Plessl, Christian AU - Platzner, Marco ID - 2422 KW - Zippy KW - co-simulation SN - 1-932415-05-X T2 - Proc. Int. Conf. on Engineering of Reconfigurable Systems and Algorithms (ERSA) TI - Co-simulation of a Hybrid Multi-Context Architecture ER - TY - CONF AB - Wearable computers are embedded into the mobile environment of the human body. A design challenge for wearable systems is to combine the high performance required for tasks such as video decoding with low energy consumption required to maximize battery runtimes and the flexibility demanded by the dynamics of the environment and the applications. In this paper, we demonstrate that reconfigurable hardware technology is able to answer this challenge. We present the concept and the prototype implementation of an autonomous wearable unit with reconfigurable modules (WURM). We discuss two experiments that show the uses of reconfigurable hardware in WURM: ASICs-on-demand and adaptive interfaces. Finally, we develop and evaluate task placement techniques used in the operating system layer of WURM. AU - Plessl, Christian AU - Enzler, Rolf AU - Walder, Herbert AU - Beutel, Jan AU - Platzner, Marco AU - Thiele, Lothar ID - 2423 KW - wearable computing SN - 0-7695-1816-8 T2 - Proc. Int. Symp. on Wearable Computers (ISWC) TI - Reconfigurable Hardware in Wearable Computing Nodes ER - TY - CONF AB - Recent generations of high-density and high-speed FPGAs provide a sufficient capacity for implementing complete configurable systems on a chip (CSoCs). Hybrid CPUs that combine standard CPU cores with reconfigurable coprocessors are an important subclass of CSoCs. With partially reconfigurable FPGAs, coprocessors can be loaded on demand while the CPU remains running. However, the lack of high-level design tools for partial reconfiguration makes practical implementations a challenging task. In this paper, we introduce a design flow to implement hybrid processors on Xilinx Virtex. The design flow is based on two techniques, virtual sockets and feed-through components, and can efficiently generate partial configurations from industry-quality cores. We discuss the design flow and present a fully operational audio streaming prototype to demonstrate its feasibility. AU - Dyer, Matthias AU - Plessl, Christian AU - Platzner, Marco ID - 2424 KW - partial reconfiguration T2 - Proc. Int. 
Conf. on Field Programmable Logic and Applications (FPL) TI - Partially Reconfigurable Cores for Xilinx Virtex VL - 2438 ER - TY - CONF AB - We present instance-specific custom computing machines for the set covering problem. Four accelerator architectures are developed that implement branch & bound in 3-valued logic and many of the deduction techniques found in software solvers. We use set covering benchmarks from two-level logic minimization and Steiner triple systems to derive and discuss experimental results. The resulting raw speedups are on the order of four orders of magnitude on average. Finally, we propose a hybrid solver architecture that combines the raw speed of instance-specific reconfigurable hardware with flexible bounding schemes implemented in software. AU - Plessl, Christian AU - Platzner, Marco ID - 2425 T2 - Proc. Int. Symp. on Field-Programmable Custom Computing Machines (FCCM) TI - Custom Computing Machines for the Set Covering Problem ER - TY - CONF AB - In this paper we present instance-specific accelerators for minimum-cost covering problems. We first define the covering problem and discuss a branch & bound algorithm to solve it. Then we describe an instance-specific hardware architecture that implements branch & bound in 3-valued logic and uses reduction techniques usually found in software solvers. Results for small unate covering problems reveal significant raw speedups. AU - Plessl, Christian AU - Platzner, Marco ID - 2428 KW - minimum covering KW - accelerator KW - funding-sundance T2 - Proc. Int. Conf. on Engineering of Reconfigurable Systems and Algorithms (ERSA) TI - Instance-Specific Accelerators for Minimum Covering ER - TY - JOUR AU - Plessl, Christian AU - Wilde, Erik ID - 2429 JF - iX TI - Server-Side-Techniken im Web – ein Überblick ER - TY - GEN AB - In this report, the design and implementation of an instance-specific accelerator for solving minimum covering problems are presented. After an introduction to configurable computing in general, the minimum covering problem is defined and a branch and bound algorithm to solve it in software is presented. The remainder of the report shows how this branch and bound algorithm can be adapted to hardware. Specifically, it is stressed how the various sophisticated strategies for deducing conditions for variables used by software solvers can be adapted to hardware and how a system that uses 3-valued logic to solve this problem can be designed. In addition to these considerations focusing on the architecture of the system, some important details of the actual implementation are given. A prototype has been implemented to show the feasibility of the concept and to gain information about the speed and size of the hardware implementation. Cycle-accurate simulations for a set of benchmark problems have been performed to determine the performance of the accelerator. The speed of the resulting accelerators has been compared to the runtime of a reference software solver (espresso), and the resulting speedups have been calculated. I have shown that a raw speedup of several orders of magnitude can be achieved for many problems; for some problems no speedup is achieved yet. After a discussion of the results, ideas for future work are presented. AU - Plessl, Christian ID - 2430 TI - Reconfigurable Accelerators for Minimum Covering ER - TY - CONF AB - In this paper, we present the analysis of applications from the domain of handheld and wearable computing.
This analysis is the first step towards deriving and evaluating design parameters for dynamically reconfigurable processors. We discuss the selection of representative benchmarks for handhelds and wearables and group the applications into multimedia, communications, and cryptography programs. We simulate the applications on a cycle-accurate processor simulator and gather statistical data such as instruction mix, cache hit rates and memory requirements for an embedded processor model. A breakdown of the executed cycles into different functions identifies the most compute-intensive code sections - the kernels. Then, we analyze the applications and discuss parameters that strongly influence the design of dynamically reconfigurable processors. Finally, we outline the construction of a parameterizable simulation model for a reconfigurable unit that is attached to a processor core. AU - Enzler, Rolf AU - Platzner, Marco AU - Plessl, Christian AU - Thiele, Lothar AU - Tröster, Gerhard ID - 2432 KW - benchmark T2 - Reconfigurable Technology: FPGAs and Reconfigurable Processors for Computing and Communications III TI - Reconfigurable Processors for Handhelds and Wearables: Application Analysis VL - 4525 ER - TY - GEN AU - Plessl, Christian AU - Maurer, Simon ID - 2433 KW - co-design KW - speech processing TI - Hardware/Software Codesign in Speech Compression Applications ER -