Exploring the Processing-in-Memory Paradigm for Future Computing Systems

Course Description

Data movement between the memory units and the compute units of current computing systems is a major performance and energy bottleneck. From large-scale servers to mobile devices, data movement costs dominate computation costs in terms of both performance and energy consumption. For example, data movement between main memory and the processing cores accounts for 62% of the total system energy in consumer applications. The data movement bottleneck thus greatly limits the energy efficiency and performance of modern computing systems. It is an undesired consequence of the strict separation between memory and processing units in conventional system designs.

Many modern and important workloads, such as machine learning, computational biology, graph processing, databases, video analytics, and real-time data analytics, suffer greatly from the data movement bottleneck. These workloads are characterized by irregular memory accesses, relatively low data reuse, low cache line utilization, low arithmetic intensity (i.e., few operations per accessed byte), and large datasets that greatly exceed the main memory size. The computation in these workloads usually cannot amortize the data movement costs. To alleviate this data movement bottleneck, we need a paradigm shift from the traditional processor-centric design, where all computation takes place in the compute units, to a data-centric design, where processing elements are placed close to or inside the memory where the data resides. This computing paradigm is known as Processing-in-Memory (PIM).
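
As an illustrative example of low arithmetic intensity (not taken from the course materials), consider a simple vector addition in C: each element requires one addition but about 12 bytes of memory traffic for 32-bit data, i.e., roughly 1 operation per 12 accessed bytes, so performance is dominated by data movement rather than computation.

  #include <stddef.h>
  #include <stdint.h>

  /* Vector addition: one add per element, but ~12 bytes of memory traffic
   * per element with 32-bit data (load a[i], load b[i], store c[i]).
   * Arithmetic intensity ~ 1 op / 12 bytes: the cores spend most of their
   * time waiting for data to move to/from memory, not computing. */
  void vec_add(const int32_t *a, const int32_t *b, int32_t *c, size_t n)
  {
      for (size_t i = 0; i < n; i++)
          c[i] = a[i] + b[i];
  }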

This is the perfect P&S for you if you want to become familiar with the main PIM technologies, which represent “the next big thing” in Computer Architecture. You will work hands-on with the first real-world PIM architecture, explore different PIM architecture designs for important workloads, and develop tools to enable research on future PIM systems. Projects in this course span software and hardware as well as the software/hardware interface. You can potentially work on developing and optimizing new workloads for the first real-world PIM hardware, explore new PIM designs in simulators, or do something else that advances our understanding of the PIM paradigm.
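
To give a taste of the hands-on part, the following is a minimal sketch of host-side code for a real PIM system, assuming the UPMEM SDK's C host API (dpu_alloc, dpu_load, dpu_copy_to, dpu_launch, dpu_free); the number of DPUs, the kernel binary name, and the "buffer" symbol are hypothetical placeholders, not part of the course materials.

  #include <dpu.h>      /* UPMEM SDK host-side API (assumed available) */
  #include <stdint.h>

  #define NR_DPUS 64                 /* hypothetical number of DPUs to use   */
  #define DPU_BINARY "./pim_kernel"  /* hypothetical pre-compiled DPU kernel */

  int main(void)
  {
      struct dpu_set_t set;
      uint32_t input[1024] = {0};    /* host-side input buffer */

      /* Allocate a set of DPUs and load the DPU kernel binary onto them. */
      DPU_ASSERT(dpu_alloc(NR_DPUS, NULL, &set));
      DPU_ASSERT(dpu_load(set, DPU_BINARY, NULL));

      /* Broadcast the input data from host memory to the DPUs' memory.
       * "buffer" is a symbol assumed to be declared in the DPU kernel. */
      DPU_ASSERT(dpu_copy_to(set, "buffer", 0, input, sizeof(input)));

      /* Run the kernel on all DPUs and wait until they finish. */
      DPU_ASSERT(dpu_launch(set, DPU_SYNCHRONOUS));

      DPU_ASSERT(dpu_free(set));
      return 0;
  }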

Prerequisites of the course:

  • Digital Design and Computer Architecture (or equivalent course).
  • Familiarity with C/C++ programming.
  • Interest in future computer architectures and computing paradigms.
  • Interest in discovering why things do or do not work and solving problems.
  • Interest in making systems efficient and usable.

The course is conducted in English.

The course has two main parts:
1. Short lectures on different aspects of processing-in-memory.
2. Hands-on project: Each student develops his/her own project.

Course description page (Moodle)

Mentors

Lecture Video Playlists on YouTube

Spring 2022 Meetings/Schedule

Week | Date | Livestream | Meeting | Learning Materials | Assignments
W1 | 10.03 Thu. | Live | M1: P&S PIM Course Presentation (PDF) (PPT) | Required Materials, Recommended Materials | HW 0 Out
W2 | 15.03 Tue. | | Hands-on Project Proposals | |
W2 | 17.03 Thu. | Premiere | M2: Real-world PIM: UPMEM PIM (PDF) (PPT) | |
W3 | 24.03 Thu. | Live | M3: Real-world PIM: Microbenchmarking of UPMEM PIM (PDF) (PPT) | |
W4 | 31.03 Thu. | Live | M4: Real-world PIM: Samsung HBM-PIM (PDF) (PPT) | |
W5 | 07.04 Thu. | Live | M5: How to Evaluate Data Movement Bottlenecks (PDF) (PPT) | |
W6 | 14.04 Thu. | Live | M6: Real-world PIM: SK Hynix AiM (PDF) (PPT) | |
W7 | 21.04 Thu. | Premiere | M7: Programming PIM Architectures (PDF) (PPT) | |
W8 | 28.04 Thu. | Premiere | M8: Benchmarking and Workload Suitability on PIM (PDF) (PPT) | |
W9 | 05.05 Thu. | Premiere | M9: Real-world PIM: Samsung AxDIMM (PDF) (PPT) | |
W10 | 12.05 Thu. | Premiere | M10: Real-world PIM: Alibaba HB-PNM (PDF) (PPT) | |
W11 | 19.05 Thu. | Live | M11: SpMV on a Real PIM Architecture (PDF) (PPT) | |
W12 | 26.05 Thu. | Live | M12: End-to-End Framework for Processing-using-Memory (PDF) (PPT) | |
W13 | 02.06 Thu. | Live | M13: Bit-Serial SIMD Processing using DRAM (PDF) (PPT) | |
W14 | 09.06 Thu. | Live | M14: Analyzing and Mitigating ML Inference Bottlenecks (PDF) (PPT) | |
W15 | 15.06 Thu. | | M15: In-Memory HTAP Databases with HW/SW Co-design (PDF) (PPT) | |

Learning Materials

Meeting 1: Required Materials

  • Processing Data Where It Makes Sense: Enabling In-Memory Computation (summary paper about recent research in PIM)
  • Processing Data Where It Makes Sense in Modern Computing Systems: Enabling In-Memory Computation (keynote talk, ICCD 2019)

Meeting 1: Recommended Materials

  • Processing-in-memory: A workload-driven perspective (summary paper about recent research in PIM)
  • Computation in Memory (Professor Onur Mutlu, lecture, Fall 2019)
  • Computation in Memory II (Professor Onur Mutlu, lecture, Fall 2019)
  • Computation in Memory III (Professor Onur Mutlu, lecture, Fall 2019)

More Learning Materials

Assignments

HW0: Student Information (Due: 17.03)
