Data-Centric Architectures: Fundamentally Improving Performance and Energy (227-0085-37L)

Course Description

Data movement between the memory units and the compute units of current computing systems is a major performance and energy bottleneck. From large-scale servers to mobile devices, data movement costs dominate computation costs in terms of both performance and energy consumption. For example, data movement between the main memory and the processing cores accounts for 62% of the total system energy in consumer applications. As a result, the data movement bottleneck is a huge burden that greatly limits the energy efficiency and performance of modern computing systems. This bottleneck is an undesired effect of the dichotomy between memory and the processor, i.e., the strict separation of where data is stored from where it is processed.
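
To make this cost asymmetry concrete, the short program below compares the energy of a simple arithmetic operation with the energy of fetching its operands from off-chip DRAM. The per-operation energy values are illustrative assumptions (round numbers in the spirit of published technology estimates), not figures taken from this course.

  /* Back-of-the-envelope comparison of compute vs. data movement energy.
   * The per-operation energies are ASSUMED round numbers for illustration,
   * not measurements from this course or its papers. */
  #include <stdio.h>

  int main(void) {
      const double add_energy_pj  = 0.1;    /* assumed: one 32-bit integer add          */
      const double dram_energy_pj = 650.0;  /* assumed: one 32-bit off-chip DRAM access */

      /* A kernel that fetches two operands from DRAM for every add spends far
       * more energy moving data than computing on it. */
      double movement_pj = 2.0 * dram_energy_pj;
      printf("data movement / compute energy: %.0fx\n", movement_pj / add_energy_pj);
      return 0;
  }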

Many modern and important workloads such as machine learning, computational biology, graph processing, databases, video analytics, and real-time data analytics suffer greatly from the data movement bottleneck. These workloads are characterized by irregular memory accesses, relatively low data reuse, low cache line utilization, low arithmetic intensity (i.e., ratio of operations per accessed byte), and large datasets that greatly exceed the main memory size. The computation in these workloads usually cannot amortize the data movement costs. To alleviate this data movement bottleneck, we need a paradigm shift from the traditional processor-centric design, where all computation takes place in the compute units, to a more data-centric design, where processing elements are placed closer to or inside the memory where the data resides. This paradigm of computing is known as Processing-in-Memory (PIM).
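
As a concrete illustration of low arithmetic intensity, consider the generic streaming kernel below (not one of the course workloads): element-wise vector addition performs a single operation for every 12 bytes moved between memory and the processor, so its performance is bounded by memory bandwidth rather than by compute throughput.

  /* Low-arithmetic-intensity (memory-bound) kernel: element-wise vector add.
   * Each iteration performs 1 floating-point add but moves 12 bytes
   * (read a[i], read b[i], write c[i]), i.e., ~0.08 operations per byte. */
  #include <stddef.h>

  void vec_add(const float *a, const float *b, float *c, size_t n) {
      for (size_t i = 0; i < n; i++)
          c[i] = a[i] + b[i];
  }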

This is your perfect P&S if you want to become familiar with the main PIM technologies, which represent “the next big thing” in Computer Architecture. You will work hands-on with the first real-world PIM architecture, explore different PIM architecture designs for important workloads, and develop tools to enable research on future PIM systems. Projects in this course span software and hardware as well as the software/hardware interface. You can potentially work on developing and optimizing new workloads for the first real-world PIM hardware, explore new PIM designs in simulators, or do something else that advances our understanding of the PIM paradigm.
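
To give a flavor of the hands-on part, the sketch below shows a minimal host-side offload flow, assuming the UPMEM SDK (a commercially available processing-near-memory platform used in prior work listed under the learning materials). The number of DPUs, the "./kernel_dpu" binary, and the "buffer" symbol are hypothetical placeholders; a real program also needs a matching DPU-side kernel.

  /* Minimal host-side offload sketch, assuming the UPMEM SDK (<dpu.h>).
   * "./kernel_dpu" and the "buffer" symbol are hypothetical placeholders. */
  #include <dpu.h>
  #include <stdint.h>

  int main(void) {
      static uint32_t host_buf[2048];                       /* data to process near memory */
      struct dpu_set_t set;

      DPU_ASSERT(dpu_alloc(1, NULL, &set));                 /* reserve one DPU             */
      DPU_ASSERT(dpu_load(set, "./kernel_dpu", NULL));      /* load the DPU kernel         */
      DPU_ASSERT(dpu_copy_to(set, "buffer", 0, host_buf, sizeof(host_buf)));
      DPU_ASSERT(dpu_launch(set, DPU_SYNCHRONOUS));         /* compute near the data       */
      DPU_ASSERT(dpu_copy_from(set, "buffer", 0, host_buf, sizeof(host_buf)));
      DPU_ASSERT(dpu_free(set));
      return 0;
  }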

Prerequisites of the course:

  • Digital Design and Computer Architecture (or equivalent course).
  • Familiarity with C/C++ programming.
  • Interest in future computer architectures and computing paradigms.
  • Interest in discovering why things do or do not work and solving problems.
  • Interest in making systems efficient and usable.

The course is conducted in English.

The course has two main parts:
1. Weekly lectures on processing-in-memory.
2. Hands-on project: Each student develops his/her own project.

Course description page Moodle

Mentors

Role             Name                                  E-mail                               Office
Lead Supervisor  Mohammad Sadrosadati                  mohammad.sadrosadati@safari.ethz.ch  ETZ F 76
Lead Supervisor  Geraldo Francisco De Oliveira Junior  geraldod@inf.ethz.ch                 ETZ F 76
Supervisor       Ismail Emir Yuksel                                                         ETZ F 78
Supervisor       Kangqi Chen                                                                ETZ F 76
Supervisor       Rakesh Nadig                                                               ETZ F 76

Lecture Video Playlist on YouTube

Fall 2024 Meetings/Schedule

Week  Date        Livestream  Meeting                                                                   Learning Materials                         Assignments
W1    10.10 Thu.  Live        M1: P&S PIM Course Presentation (PDF) (PPT)                               Required Materials, Recommended Materials  HW 0 Out
W2    16.10 Wed.  Live        M2: How to Evaluate Data Movement Bottlenecks (PDF) (PPT)
W3    23.10 Wed.  Live        M3: Processing-Near-Memory (PDF) (PPT)
W4    30.10 Wed.  Live        M4: Processing-Using-Memory for Data Manipulation (PDF) (PPT)
W5    06.11 Wed.  Live        MICRO 2024: Memory-Centric Computing Tutorial
W6    13.11 Wed.  Live        M5: Processing-Using-Memory for Bulk Bitwise Operations (PDF) (PPT)
W7    20.11 Wed.  Live        M6: Processing-Using-Memory for Bulk Bitwise Operations (II) (PDF) (PPT)
W8    27.11 Wed.  Live        M7: Processing-Using-Memory in Real DRAM Chips (PDF) (PPT)
W9    04.12 Wed.  Live        M8: In-Flash Bulk Bitwise Operations (PDF) (PPT)
W10   11.12 Wed.  Live        M9: PIM Adoption & Programmability (PDF) (PPT)

Past Lecture Video Playlists on YouTube

Learning Materials

Meeting 1: Required Materials

  • Ghose, S., Boroumand, A., Kim, J. S., Gómez-Luna, J., and Mutlu, O. Processing Data Where It Makes Sense: Enabling In-Memory Computation (summary paper about recent research in PIM). Microprocessors and Microsystems, 2019.
  • Mutlu, O. Memory-Centric Computing. Keynote talk at the Thoughtworks Engineering for Research Symposium (E4R), February 2022.

Meeting 1: Recommended Materials

  • Mutlu, O., Ghose, S., Gómez-Luna, J., and Ausavarungnirun, R. A Modern Primer on Processing in Memory. In Emerging Computing: From Devices to Systems, 2023.
  • Ghose, S., Boroumand, A., Kim, J. S., Gómez-Luna, J., and Mutlu, O. Processing-in-memory: A workload-driven perspective (summary paper about recent research in PIM). IBM Journal of Research and Development, 2019.
  • Gómez-Luna, J., El Hajj, I., Fernandez, I., Giannoula, C., Oliveira, G. F., and Mutlu, O. Benchmarking a New Paradigm: Experimental Analysis and Characterization of a Real Processing-in-Memory System. IEEE Access, 2022.
  • Giannoula, C., Fernandez, I., Gómez-Luna, J., Koziris, N., Goumas, G., and Mutlu, O. SparseP: Towards Efficient Sparse Matrix Vector Multiplication on Real Processing-In-Memory Architectures. SIGMETRICS 2022.
  • Olgun, A., Gómez-Luna, J., Kanellopoulos, K., Salami, B., Hassan, H., Ergin, O., and Mutlu, O. PiDRAM: A Holistic End-to-end FPGA-based Framework for Processing-in-DRAM. ACM TACO, 2022.

More Learning Materials

Assignments

HW0: Student Information (Due: 16.10)
