Embedded computing for high performance : efficient mapping of computations using customization, code transformations and compilation / João M.P. Cardoso, José Gabriel F. Coutinho, Pedro C. Diniz.

By: Cardoso, João M. P. [author]
Contributor(s): Coutinho, José Gabriel de Figueiredo [author] | Diniz, Pedro C. [author]
Material type: Text
Publisher: Cambridge, MA : Morgan Kaufmann Publishers, an imprint of Elsevier, [2017]
Copyright date: ©2017
Description: 1 online resource (xxi, 297 pages) : illustrations (some color)
Content type: text
Media type: computer
Carrier type: online resource
ISBN: 9780128041994; 0128041994
Subject(s): Embedded computer systems | High performance computing | COMPUTERS -- Computer Literacy | COMPUTERS -- Computer Science | COMPUTERS -- Data Processing | COMPUTERS -- Hardware -- General | COMPUTERS -- Information Technology | COMPUTERS -- Machine Theory | COMPUTERS -- Reference
Genre/Form: Electronic books
Additional physical formats: Print version: Embedded computing for high performance
DDC classification: 004.16
LOC classification: TK7895.E42
Online resources: ScienceDirect
Contents:
Front Cover; Embedded Computing for High Performance: Efficient Mapping of Computations Using Customization, Code Transformations and Com ... ; Copyright; Dedication; Contents; About the Authors; Preface; Acknowledgments; Abbreviations; Chapter 1: Introduction; 1.1. Overview; 1.2. Embedded Systems in Society and Industry; 1.3. Embedded Computing Trends; 1.4. Embedded Systems: Prototyping and Production; 1.5. About LARA: An Aspect-Oriented Approach; 1.6. Objectives and Target Audience; 1.7. Complementary Bibliography; 1.8. Dependences in Terms of Knowledge; 1.9. Examples and Benchmarks.
1.10. Book Organization; 1.11. Intended Use; 1.12. Summary; References; Chapter 2: High-performance embedded computing; 2.1. Introduction; 2.2. Target Architectures; 2.2.1. Hardware Accelerators as Coprocessors; 2.2.2. Multiprocessor and Multicore Architectures; 2.2.3. Heterogeneous Multiprocessor/Multicore Architectures; 2.2.4. OpenCL Platform Model; 2.3. Core-Based Architectural Enhancements; 2.3.1. Single Instruction, Multiple Data Units; 2.3.2. Fused Multiply-Add Units; 2.3.3. Multithreading Support; 2.4. Common Hardware Accelerators; 2.4.1. GPU Accelerators.
2.4.2. Reconfigurable Hardware Accelerators; 2.4.3. SoCs With Reconfigurable Hardware; 2.5. Performance; 2.5.1. Amdahl's Law; 2.5.2. The Roofline Model; 2.5.3. Worst-Case Execution Time Analysis; 2.6. Power and Energy Consumption; 2.6.1. Dynamic Power Management; 2.6.2. Dynamic Voltage and Frequency Scaling; 2.6.3. Dark Silicon; 2.7. Comparing Results; 2.8. Summary; 2.9. Further Reading; References; Chapter 3: Controlling the design and development cycle; 3.1. Introduction; 3.2. Specifications in MATLAB and C: Prototyping and Development; 3.2.1. Abstraction Levels.
3.2.2. Dealing With Different Concerns; 3.2.3. Dealing With Generic Code; 3.2.4. Dealing With Multiple Targets; 3.3. Translation, Compilation, and Synthesis Design Flows; 3.4. Hardware/Software Partitioning; 3.4.1. Static Partitioning; 3.4.2. Dynamic Partitioning; 3.5. LARA: A Language for Specifying Strategies; 3.5.1. Select and Apply; 3.5.2. Insert Action; 3.5.3. Exec and Def Actions; 3.5.4. Invoking Aspects; 3.5.5. Executing External Tools; 3.5.6. Compilation and Synthesis Strategies in LARA; 3.6. Summary; 3.7. Further Reading; References; Chapter 4: Source code analysis and instrumentation.
4.1. Introduction; 4.2. Analysis and Metrics; 4.3. Static Source Code Analysis; 4.3.1. Data Dependences; 4.3.2. Code Metrics; 4.4. Dynamic Analysis: The Need for Instrumentation; 4.4.1. Information From Profiling; 4.4.2. Profiling Example; 4.5. Custom Profiling Examples; 4.5.1. Finding Hotspots; 4.5.2. Loop Metrics; 4.5.3. Dynamic Call Graphs; 4.5.4. Branch Frequencies; 4.5.5. Heap Memory; 4.6. Summary; 4.7. Further Reading; References; Chapter 5: Source code transformations and optimizations; 5.1. Introduction; 5.2. Basic Transformations; 5.3. Data Type Conversions; 5.4. Code Reordering.
Summary: Embedded Computing for High Performance: Design Exploration and Customization Using High-level Compilation and Synthesis Tools provides a set of real-life example implementations that migrate traditional desktop systems to embedded systems. Working with popular hardware, including Xilinx and ARM, the book offers a comprehensive description of techniques for mapping computations expressed in programming languages such as C or MATLAB to high-performance embedded architectures consisting of multiple CPUs, GPUs, and reconfigurable hardware (FPGAs). The authors demonstrate a domain-specific language (LARA) that facilitates retargeting to multiple computing systems using the same source code. In this way, users can decouple original application code from transformed code and enhance productivity and program portability. After reading this book, engineers will understand the processes, methodologies, and best practices needed for the development of applications for high-performance embedded computing systems.
Holdings
Item type: Ebooks
Current library: Mysore University Main Library
Status: Not for loan
Barcode: EBKELV965

Includes bibliographical references and index.

Online resource; title from PDF title page (EBSCO, viewed June 26, 2017).

