Exploring the Processing-in-Memory Paradigm for Future Computing Systems

Course Description

Data movement between the memory units and the compute units of current computing systems is a major performance and energy bottleneck. From large-scale servers to mobile devices, data movement costs dominate computation costs in terms of both performance and energy consumption. For example, data movement between the main memory and the processing cores accounts for 62% of the total system energy in consumer applications. As a result, the data movement bottleneck greatly limits the energy efficiency and performance of modern computing systems. This bottleneck is an undesired effect of the dichotomy between the processor and memory, which forces data to constantly move between the two before it can be processed.

Many modern and important workloads such as machine learning, computational biology, graph processing, databases, video analytics, and real-time data analytics suffer greatly from the data movement bottleneck. These workloads are characterized by irregular memory accesses, relatively low data reuse, low cache line utilization, low arithmetic intensity (i.e., ratio of operations per accessed byte), and large datasets that greatly exceed the main memory size. The computation in these workloads usually cannot compensate for the data movement costs. In order to alleviate this data movement bottleneck, we need a paradigm shift from the traditional processor-centric design, where all computation takes place in the compute units, to a more data-centric design, where processing elements are placed near or inside the memory where the data resides. This paradigm of computing is known as Processing-in-Memory (PIM).
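To make the notion of low arithmetic intensity concrete, here is a minimal C sketch, assuming 32-bit floats and a simple streaming vector addition (an illustrative kernel, not a workload named above). It shows why such code is bound by data movement: each addition moves 12 bytes, so the intensity is only about 0.08 operations per accessed byte.

  #include <stdio.h>
  #include <stdlib.h>

  /* Streaming vector addition: c[i] = a[i] + b[i].
     Each element performs one addition but moves 12 bytes
     (two 4-byte loads and one 4-byte store), giving an
     arithmetic intensity of roughly 1/12, i.e., ~0.08 ops/byte.
     At such low intensity, runtime and energy are dominated by
     data movement rather than by computation. */
  static void vector_add(const float *a, const float *b, float *c, size_t n) {
      for (size_t i = 0; i < n; i++)
          c[i] = a[i] + b[i];
  }

  int main(void) {
      size_t n = 1 << 20;  /* 1 Mi elements */
      float *a = calloc(n, sizeof *a);
      float *b = calloc(n, sizeof *b);
      float *c = malloc(n * sizeof *c);
      if (!a || !b || !c) return 1;

      vector_add(a, b, c, n);

      double ops   = (double)n;                      /* one add per element  */
      double bytes = (double)n * 3 * sizeof(float);  /* two loads + one store */
      printf("arithmetic intensity: %.3f ops/byte\n", ops / bytes);

      free(a); free(b); free(c);
      return 0;
  }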

This is the perfect P&S for you if you want to become familiar with the main PIM technologies, which represent “the next big thing” in Computer Architecture. You will work hands-on with the first real-world PIM architecture, explore different PIM architecture designs for important workloads, and develop tools to enable research on future PIM systems. Projects in this course span software and hardware, as well as the software/hardware interface. You can potentially work on developing and optimizing new workloads for the first real-world PIM hardware, explore new PIM designs in simulators, or do something else that advances our understanding of the PIM paradigm.
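For a flavor of what programming a real PIM system can look like, below is a minimal host-side sketch, assuming the UPMEM SDK (one commercially available PIM system covered in the Real-World PIM Architectures lectures; the course text above does not name a specific vendor). The kernel binary name ("./checksum_dpu"), the DPU-side symbols ("buffer", "result"), and the buffer size are illustrative placeholders, not part of the course material.

  #include <dpu.h>      /* UPMEM host-side SDK header (assumed installed) */
  #include <stdint.h>
  #include <stdio.h>

  #define DPU_BINARY "./checksum_dpu"   /* hypothetical DPU kernel binary */
  #define BUF_WORDS  1024

  int main(void) {
      struct dpu_set_t set, dpu;
      uint32_t input[BUF_WORDS] = {0};

      /* Allocate all available DPUs and load the (hypothetical) kernel. */
      DPU_ASSERT(dpu_alloc(DPU_ALLOCATE_ALL, NULL, &set));
      DPU_ASSERT(dpu_load(set, DPU_BINARY, NULL));

      /* Copy input data to every DPU; "buffer" must match a variable
         declared in the DPU-side program. */
      DPU_ASSERT(dpu_copy_to(set, "buffer", 0, input, sizeof(input)));

      /* Run the kernel on all DPUs and wait for completion. */
      DPU_ASSERT(dpu_launch(set, DPU_SYNCHRONOUS));

      /* Read one result word back from each DPU. */
      DPU_FOREACH(set, dpu) {
          uint32_t result;
          DPU_ASSERT(dpu_copy_from(dpu, "result", 0, &result, sizeof(result)));
          printf("partial result: %u\n", result);
      }

      DPU_ASSERT(dpu_free(set));
      return 0;
  }

The point of the sketch is the structure, which Meeting 8 (Programming PIM Architectures) discusses in depth: the host explicitly allocates in-memory processors, moves data to them, launches a kernel, and gathers partial results back.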

Prerequisites of the course:

  • Digital Design and Computer Architecture (or equivalent course).
  • Familiarity with C/C++ programming.
  • Interest in future computer architectures and computing paradigms.
  • Interest in discovering why things do or do not work and solving problems.
  • Interest in making systems efficient and usable.

The course is conducted in English.

The course has two main parts:
1. Short lectures on different aspects of processing-in-memory.
2. Hands-on project: Each student develops his/her own project.

Course description page: Moodle

Mentors

Lecture Video Playlist on YouTube

Fall 2021 Meetings/Schedule

Week | Date | Livestream | Meeting | Learning Materials | Assignments
W1 | 05.10 Tue. | Live | M1: P&S PIM Course Presentation (PDF) (PPT) | Required Materials, Recommended Materials | HW 0 Out
W2 | 12.10 Tue. | Live | M2: Real-World PIM Architectures (PDF) (PPT) | |
W3 | 19.10 Tue. | Live | M3: Real-World PIM Architectures II (PDF) (PPT) | |
W4 | 26.10 Tue. | Live | M4: Real-World PIM Architectures III (PDF) (PPT) | |
W5 | 02.11 Tue. | Live | M5: Real-World PIM Architectures IV (PDF) (PPT) | |
W6 | 09.11 Tue. | Live | M6: End-to-End Framework for Processing-using-Memory (PDF) (PPT) | |
W7 | 16.11 Tue. | Live | M7: How to Evaluate Data Movement Bottlenecks (PDF) (PPT) | |
W8 | 23.11 Tue. | Live | M8: Programming PIM Architectures (PDF) (PPT) | |
W9 | 30.11 Tue. | Live | M9: Benchmarking and Workload Suitability on PIM (PDF) (PPT) | |
W10 | 07.12 Tue. | Live | M10: Bit-Serial SIMD Processing using DRAM (PDF) (PPT) | |
W11 | 14.12 Tue. | Live | M11: Synchronization Support for PIM Architectures (PDF) (PPT) | |
W12 | 21.12 Tue. | Live | M12: How to Enable the Adoption of PIM? (PDF) (PPT) | |

Learning Materials

Meeting 1: Required Materials

  • Processing Data Where It Makes Sense: Enabling In-Memory Computation (summary paper about recent research in PIM):
  • Processing Data Where It Makes Sense in Modern Computing Systems: Enabling In-Memory Computation (keynote talk ICCD 2019):

Meeting 1: Recommended Materials

  • Processing-in-memory: A workload-driven perspective (summary paper about recent research in PIM):
  • Computation in Memory (Professor Onur Mutlu, lecture, Fall 2019):
  • Computation in Memory II (Professor Onur Mutlu, lecture, Fall 2019):
  • Computation in Memory III (Professor Onur Mutlu, lecture, Fall 2019):

More Learning Materials

Assignments

HW0: Student Information (Due: 12.10)
