A sampling of our research areas includes:
A. New Memory & Storage Architectures
DRAM (Dynamic Random Access Memory) is the predominant technology used for computer memory. It faces significant challenges in technology scaling, reliability, data retention, latency, bandwidth and power consumption. These challenges greatly affect the performance, energy, security/safety/reliability & scalability of computing platforms and applications. We work to rigorously understand and solve them via novel techniques across the computing stack. To this end, we build hardware infrastructures (see cover figure) and follow two key directions.
A.1. Fundamentally Better DRAM Architectures
We research all aspects of improving DRAM. Two examples:
RowHammer. We experimentally demonstrated, analyzed and proposed solutions for the RowHammer problem, which affects most modern DRAM chips. We were the first to show that, by repeatedly accessing a DRAM row, one can induce errors (bit flips) in physically adjacent rows. A malicious attacker can use this to circumvent memory protection and gain complete control over an otherwise-secure system. RowHammer is the first example of a circuit-level failure mechanism that causes a practical, widespread system security vulnerability. Our RowHammer work (ISCA’14’20, S&P’20, HPCA’21) continues to have widespread impact on the security & hardware communities. E.g., our work led to the inclusion of new tests in widely-used memtest programs; Apple cited our work in its security releases; Intel implemented our major solution; and our 2020 works re-ignited industry-wide task groups working to solve RowHammer.
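For intuition, here is a minimal user-space sketch of the hammering access pattern at the heart of the problem. It is illustrative only: the addresses X and Y must map to different rows of the same DRAM bank, which a real experiment arranges using knowledge of the platform's DRAM address mapping.

    #include <stdint.h>
    #include <emmintrin.h> /* _mm_clflush (x86) */

    /* Alternately read two addresses that map to different rows of the
     * same DRAM bank, flushing them from the cache after each read so
     * that every read opens (ACTIVATEs) a DRAM row. After enough
     * iterations, cells in rows physically adjacent to X's and Y's rows
     * may flip, i.e., RowHammer errors. */
    static void hammer(volatile uint8_t *x, volatile uint8_t *y, long iters)
    {
        for (long i = 0; i < iters; i++) {
            (void)*x;                      /* ACTIVATE X's row */
            (void)*y;                      /* ACTIVATE Y's row */
            _mm_clflush((const void *)x);  /* force next read to DRAM */
            _mm_clflush((const void *)y);
        }
    }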
Scalable DRAM. We pioneer architectural research on solving critical DRAM scaling problems (refresh, latency, variability, power, energy, reliability) by analyzing real chips. Our work improves DRAM along all of these dimensions, with large impact: e.g., Intel & Samsung advocated several of our ideas for future DRAM standards, and our work on eliminating memory refresh and data retention failures influenced academic & industrial directions (e.g., works on DRAM Error Correcting Codes won Best Paper Awards at DSN’19 & MICRO’20).
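To see why refresh alone is a scaling problem, a back-of-the-envelope calculation helps: a refresh command is issued every tREFI and blocks the rank for tRFC, and tRFC grows with chip density. The numbers below are representative JEDEC DDR4 values, not measurements from our papers.

    #include <stdio.h>

    /* Fraction of time a DRAM rank is blocked by refresh:
     * one refresh command every tREFI, each taking tRFC. */
    int main(void)
    {
        const double tREFI_ns = 7812.5; /* 64 ms window / 8192 commands */
        const double tRFC_ns[] = { 160.0, 260.0, 350.0 }; /* DDR4 tRFC */
        const char *density[] = { "2 Gb", "4 Gb", "8 Gb" };

        for (int i = 0; i < 3; i++)
            printf("%s chips: rank busy refreshing %.1f%% of the time\n",
                   density[i], 100.0 * tRFC_ns[i] / tREFI_ns);
        return 0; /* prints ~2.0%, ~3.3%, ~4.5% */
    }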
A.2. Enabling Emerging Memory Technologies
Emerging technologies, e.g., Phase Change Memory (PCM), magnetic memory and memristors, have promising properties as memory/storage devices but also significant downsides. We do research to enable & exploit such technologies by designing intelligent architectures around them. Our work served as a precursor to Intel’s Optane Memory and other technologies being designed for hybrid memory systems.
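To make "intelligent architectures" concrete, below is a minimal sketch of one recurring idea in hybrid memory designs: keep frequently-accessed (hot) pages in small-but-fast DRAM and cold pages in large-but-slow PCM. All names and the simple threshold policy are illustrative simplifications, not the exact mechanisms from our papers.

    #include <stdbool.h>
    #include <stdint.h>

    /* Illustrative hot/cold page placement for a DRAM+PCM hybrid memory.
     * A real design tracks access counts in hardware and also weighs
     * migration cost, row buffer locality, write intensity, etc.; this
     * sketch shows only the basic threshold-based decision. */
    enum medium { IN_DRAM, IN_PCM };

    struct page {
        uint64_t access_count; /* accesses in the current epoch */
        enum medium where;
    };

    #define HOT_THRESHOLD 64 /* illustrative accesses/epoch cutoff */

    static bool should_migrate(const struct page *p)
    {
        bool hot = p->access_count >= HOT_THRESHOLD;
        return (hot && p->where == IN_PCM)    /* promote hot page  */
            || (!hot && p->where == IN_DRAM); /* demote cold page  */
    }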
B. Data-Centric Architectures: Processing-in-Memory Paradigm
Modern computing systems are processor-centric, i.e., overwhelmingly designed to move data to computation. This greatly exacerbates performance, energy and scalability problems because data movement is orders of magnitude more costly than computation (in latency & energy).
We do research to fundamentally change the design paradigm of computers: to enable computation near data, i.e., Processing-in-Memory (PIM). PIM places computation in or near where data is stored (inside memory chips, in the logic layers of 3D-stacked memory, in memory controllers), so that data movement is greatly reduced. We develop two new approaches to PIM: 1) processing using memory, which exploits the analog operational properties of memory to perform massively-parallel processing inside memory arrays; 2) processing near memory, which exploits 3D-stacking technologies to provide logic close to memory.
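To illustrate the first approach, the sketch below contrasts a conventional bulk bitwise AND, where every word of both operands crosses the memory channel to the CPU and back, with a hypothetical processing-using-memory primitive that commands DRAM to combine two rows in place. pim_bulk_and is a made-up name standing in for the kind of command an in-DRAM computation substrate would expose; it does not exist on commodity hardware.

    #include <stddef.h>
    #include <stdint.h>

    /* Processor-centric bulk AND: all data of A and B moves over the
     * memory channel and through the cache hierarchy, even though the
     * computation per word is trivial. */
    static void cpu_bulk_and(uint64_t *dst, const uint64_t *a,
                             const uint64_t *b, size_t words)
    {
        for (size_t i = 0; i < words; i++)
            dst[i] = a[i] & b[i];
    }

    /* Hypothetical processing-using-memory primitive: the memory
     * controller instructs DRAM to AND two rows into a destination row
     * using the analog behavior of simultaneous row activation, so the
     * data never leaves the memory chip. */
    extern void pim_bulk_and(uint64_t dst_row, uint64_t src_row_a,
                             uint64_t src_row_b);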
Our group pioneers modern PIM research. We tackle essentially all aspects of how to enable and design PIM systems: cross-layer research, design, and adoption challenges in devices, architecture, systems, applications & programming models. Our work has influenced academia and industry (e.g., ISCA’15; see Fig. 1). We work closely with industry (e.g., UPMEM, Google, Microsoft, Facebook, ASML, SRC) to enable adoption of our new PIM paradigms.
C. Fast & Efficient Genome Analysis, Medicine, and Machine Learning
Genome analysis is the foundation of many scientific and medical discoveries, and a key enabler of personalized medicine. Current systems are too slow & too energy-inefficient. Our goal is to design fundamentally better genome analysis systems, enabling decisions within seconds/minutes (vs. days/weeks), using minimal energy. Such systems can revolutionize medicine, public health and scientific discovery. To this end, we develop novel algorithms & architectures. We do leading research in fast DNA read mapping [NatureGenetics’09, Bioinformatics’15’17’19’20] and approximate string matching [MICRO’20]; see Fig. 2. Our efforts are expanding to more dimensions, e.g., privacy, security & mobile/embedded genomics.
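For background, the approximate string matching problem underlying read mapping is classically solved with the edit (Levenshtein) distance recurrence sketched below; a mapper accepts a candidate location if the distance between the read and a reference window is under an error threshold. This O(m*n) baseline is shown for intuition only; our techniques accelerate and filter it rather than compute it naively.

    #include <stdio.h>
    #include <string.h>

    /* Textbook edit distance: minimum number of single-character
     * insertions, deletions and substitutions turning a into b. */
    static int edit_distance(const char *a, const char *b)
    {
        int m = (int)strlen(a), n = (int)strlen(b);
        int prev[256], curr[256]; /* assumes n < 256 for brevity */

        for (int j = 0; j <= n; j++) prev[j] = j;
        for (int i = 1; i <= m; i++) {
            curr[0] = i;
            for (int j = 1; j <= n; j++) {
                int sub = prev[j - 1] + (a[i - 1] != b[j - 1]);
                int del = prev[j] + 1, ins = curr[j - 1] + 1;
                curr[j] = sub < del ? (sub < ins ? sub : ins)
                                    : (del < ins ? del : ins);
            }
            memcpy(prev, curr, sizeof(int) * (size_t)(n + 1));
        }
        return prev[n];
    }

    int main(void)
    {
        /* one 'C' inserted into the read: distance 1 */
        printf("%d\n", edit_distance("GATTACA", "GACTTACA"));
        return 0;
    }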
Our research is supported by several industry partners including:
Alibaba
ASML
Facebook
FUTUREWEI
Huawei
Google
HiSilicon
Intel
imec
Microsoft
SRC
VMware
We would like to thank all of our industry partners for their new and continued support of our research.
Fun SAFARI fact: Have you ever wondered what SAFARI actually means? You are not alone. Many people have asked us, and we would like to share the meaning of SAFARI with you. SAFARI is the name first given to the research group Onur Mutlu started at Carnegie Mellon University in 2009. It originally stood for the research vision of the group at the time: SAfe, FAir, Robust and Intelligent computer architectures! The vision still forms a part of the research we do, but the group’s focus has expanded over the years. Onur likes to think about the group’s research as a SAFARI for new ideas and breakthroughs in computer architecture and bioinformatics.

Figure 1: Our Tesseract Processing-in-Memory system for Graph Processing (ISCA’15) provides more than 13X performance improvement and 8X energy reduction over state-of-the-art systems. Many works have built on Tesseract, which provides a blueprint for future PIM systems.

Figure 2: Our bioinformatics work covers the entire genome analysis pipeline. Collectively, our algorithm-architecture co-design techniques provide >100X performance improvement & energy reduction over state-of-the-art systems. Figure replicated from our IEEE Micro 2020 invited paper “Accelerating Genome Analysis: A Primer on an Ongoing Journey”.