Data movement between the memory units and the compute units of current computing systems is a major performance and energy bottleneck. From large-scale servers to mobile devices, data movement costs dominate computation costs in terms of both performance and energy consumption. For example, data movement between main memory and the processing cores accounts for 62% of the total system energy in consumer applications. This bottleneck is an undesired consequence of the dichotomy between memory and processor, and it greatly limits the energy efficiency and performance of modern computing systems.
Many modern and important workloads, such as machine learning, computational biology, graph processing, databases, video analytics, and real-time data analytics, suffer greatly from the data movement bottleneck. These workloads are characterized by irregular memory accesses, relatively low data reuse, low cache line utilization, low arithmetic intensity (i.e., the ratio of operations per byte accessed), and large datasets that greatly exceed the main memory size. The computation in these workloads usually cannot amortize the data movement costs. To alleviate this bottleneck, we need a paradigm shift from the traditional processor-centric design, where all computation takes place in the compute units, to a data-centric design, where processing elements are placed closer to or inside the memory where the data resides. This computing paradigm is known as Processing-in-Memory (PIM).
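To make "low arithmetic intensity" concrete, the sketch below (an illustrative example, not part of the course material) shows a streaming vector addition in C, a typical memory-bound kernel: it performs one arithmetic operation per element while moving 12 bytes, so its runtime and energy are dominated by data movement rather than by computation. This is exactly the kind of kernel that benefits from placing computation near the data.

```c
#include <stdio.h>
#include <stdlib.h>

#define N (1u << 24)  /* 16M elements per array (~64 MB), far larger than typical caches */

/* Streaming vector addition: per element, 1 addition but 12 bytes of traffic
 * (load a[i], load b[i], store c[i] with 4-byte ints), i.e., an arithmetic
 * intensity of roughly 0.08 operations/byte. Such kernels are memory-bound. */
void vec_add(const int *a, const int *b, int *c, size_t n) {
    for (size_t i = 0; i < n; i++)
        c[i] = a[i] + b[i];
}

int main(void) {
    int *a = malloc(N * sizeof(int));
    int *b = malloc(N * sizeof(int));
    int *c = malloc(N * sizeof(int));
    if (!a || !b || !c) return 1;

    for (size_t i = 0; i < N; i++) { a[i] = (int)i; b[i] = (int)(2 * i); }

    vec_add(a, b, c, N);

    /* Arithmetic intensity = operations performed / bytes moved */
    double ops   = (double)N;                     /* one add per element    */
    double bytes = (double)N * 3 * sizeof(int);   /* two loads + one store  */
    printf("Arithmetic intensity: %.3f ops/byte\n", ops / bytes);

    free(a); free(b); free(c);
    return 0;
}
```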
This is your perfect P&S if you want to become familiar with the main PIM technologies, which represent "the next big thing" in Computer Architecture. You will work hands-on with the first real-world PIM architecture, explore different PIM architecture designs for important workloads, and develop tools to enable research on future PIM systems. Projects in this course span software and hardware, as well as the software/hardware interface. You can work on developing and optimizing new workloads for the first real-world PIM hardware, explore new PIM designs in simulators, or pursue another direction that advances our understanding of the PIM paradigm.
Prerequisites of the course:
The course is conducted in English.
The course has two main parts:
1. Short lectures on different aspects of processing-in-memory.
2. Hands-on project: Each student develops his/her own project.
| Role | Name | E-mail | Office |
|---|---|---|---|
| Lead Supervisor | Juan Gómez Luna | juan.gomez@inf.ethz.ch | ETZ H 64 |
| Supervisor | Haiyu Mao | haiyu.mao@inf.ethz.ch | ETZ H 64 |
| Supervisor | Geraldo Francisco De Oliveira Junior | geraldod@inf.ethz.ch | ETZ H 64 |
| Supervisor | Konstantinos Kanellopoulos | konstantinos.kanellopoulos@inf.ethz.ch | ETZ H 64 |
| Supervisor | Nika Mansouri Ghiasi | mnika@student.ethz.ch | ETZ H 64 |
| Week | Date | Livestream | Meeting | Learning Materials | Assignments |
|---|---|---|---|---|---|
| W1 | 05.10 Tue. | Live | M1: P&S PIM Course Presentation (PDF) (PPT) | Required Materials, Recommended Materials | HW 0 Out |
| W2 | 12.10 Tue. | Live | M2: Real-World PIM Architectures (PDF) (PPT) | | |
| W3 | 19.10 Tue. | Live | M3: Real-World PIM Architectures II (PDF) (PPT) | | |
| W4 | 26.10 Tue. | Live | M4: Real-World PIM Architectures III (PDF) (PPT) | | |
| W5 | 02.11 Tue. | Live | M5: Real-World PIM Architectures IV (PDF) (PPT) | | |
| W6 | 09.11 Tue. | Live | M6: End-to-End Framework for Processing-using-Memory (PDF) (PPT) | | |
| W7 | 16.11 Tue. | Live | M7: How to Evaluate Data Movement Bottlenecks (PDF) (PPT) | | |
| W8 | 23.11 Tue. | Live | M8: Programming PIM Architectures (PDF) (PPT) | | |
| W9 | 30.11 Tue. | Live | M9: Benchmarking and Workload Suitability on PIM (PDF) (PPT) | | |
| W10 | 07.12 Tue. | Live | M10: Bit-Serial SIMD Processing using DRAM (PDF) (PPT) | | |
| W11 | 14.12 Tue. | Live | M11: Synchronization Support for PIM Architectures (PDF) (PPT) | | |
| W12 | 21.12 Tue. | Premiere | M12: How to Enable the Adoption of PIM? (PDF) (PPT) | | |