Buzzwords

Buzzwords are terms mentioned during the lectures that are particularly important to understand thoroughly. This page tracks the buzzwords for each lecture and can be used as a reference for finding gaps in your understanding of the course material.

Lecture 1 (17.09 Thu.)

  • Computer Architecture
  • Redundancy
  • Bahnhof Stadelhofen
  • Santiago Calatrava
  • Oculus
  • Design constraints
  • Falling Water
  • Frank Lloyd Wright
  • Sustainability
  • RowHammer
  • Opportunities at the Bottom
  • Opportunities at the Top
  • Evaluation criteria for designs
    • Functionality
    • Reliability
    • Space requirement
    • Expandability
  • Principled design
  • Role of the (Computer) Architect
  • Systems programming
  • Digital design
  • Levels of transformation
    • Algorithm
    • System software
    • Instruction Set Architecture (ISA)
    • Microarchitecture
    • Logic
  • Abstraction layers
  • Hamming code
  • Hamming distance (see the sketch after this list)
  • User-centric view
  • Productivity
  • Multi-core systems
  • Caches
  • DRAM memory controller
  • DRAM banks
  • Energy efficiency
  • Memory performance hog
  • Slowdown
  • Consolidation
  • QoS guarantees
  • Unfairness
  • Row decoder
  • Column address
  • Row buffer hit/miss
  • Row buffer locality
  • FR-FCFS
  • Stream/Random access patterns
  • Memory scheduling policies
  • Scheduling priority
  • DRAM cell
  • Access transistor
  • DRAM refresh
  • DRAM retention time
  • Variable retention time
  • Retention time profile
  • Manufacturing process variation
  • Bloom filter
  • Data pattern dependence
  • Variable retention time
  • Error Correcting Codes (ECC)
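
A minimal sketch of Hamming distance (the number of positions at which two equal-length sequences differ), illustrating the term above; the function name and example values are ours, not from the lecture:

  # Hamming distance: count of differing positions in two equal-length sequences
  def hamming_distance(a, b):
      assert len(a) == len(b)
      return sum(x != y for x, y in zip(a, b))

  print(hamming_distance("10011010", "10101010"))  # -> 2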

Lecture 2a (19.09 Fri.)

  • Levels of transformation
  • Abstraction layers
  • Multi-core systems
  • Single-core
  • Interface
  • DRAM
  • Caches
  • Memory controller
  • Parallel processing
  • GPU
  • Slowdown
  • Quality of service
  • DRAM Bank
  • Row buffer
  • Row hit/miss
  • FR-FCFS
  • DRAM scheduling
  • Random memory accesses
  • Sequential memory accesses
  • Memory performance hog
  • Compiler

Lecture 2b (19.09 Fri.)

  • Data retention
  • Memory refresh
  • DRAM cell
  • DRAM capacitor
  • Manufacturing process variation
  • Bloom filter (see the sketch after this list)
  • Hash function
  • Data pattern dependence
  • Variable retention time (VRT)
  • Error Correcting Code (ECC)
  • Retention failure
  • Flash memory
  • Retention errors
  • NAND Flash
  • P/E cycle
  • SSDs
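
A minimal Bloom-filter sketch illustrating the "Bloom filter" and "hash function" terms above; the class layout, hash construction, and parameter values are illustrative assumptions, not the design from the lecture:

  import hashlib

  class BloomFilter:
      """k hash functions set k bits per inserted key; a query reports
      'possibly present' only if all k bits are set, so false positives
      are possible but false negatives are not."""
      def __init__(self, m_bits=1024, k_hashes=3):
          self.m, self.k = m_bits, k_hashes
          self.bits = bytearray(m_bits)

      def _positions(self, key):
          for i in range(self.k):
              h = hashlib.sha256(f"{i}:{key}".encode()).digest()
              yield int.from_bytes(h[:8], "big") % self.m

      def add(self, key):
          for p in self._positions(key):
              self.bits[p] = 1

      def maybe_contains(self, key):
          return all(self.bits[p] for p in self._positions(key))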

Lecture 3a (24.09 Thu.)

  • Genome analysis
  • DNA
  • Cell information
  • Genetic content
  • Human genome
  • DNA genotypes
  • RNA
  • Protein / Phenotypes
  • Adenine (A), Thymine (T), Guanine (G), Cytosine (C)
  • Supercoiled
  • Chromosomes
  • HeLa cells (Henrietta Lacks)
  • Reference genome
  • Sequence alignment
  • High-throughput sequencing (HTS)
  • Read mapping
  • Hash-based seed-and-extend
  • K-mers
  • Burrows-Wheeler Transform
  • Ferragina-Manzini Index (FM-index)
  • Edit distance (see the sketch after this list)
  • Match / Mismatch
  • Deletion / Insertion / Substitution
  • Dynamic programming
  • MrFAST
  • Verification
  • Seed filtering
  • Adjacency filtering
  • Cheap k-mer selection
  • FastHASH
  • Pre-alignment filtering
  • Hamming distance
  • Shifted Hamming distance
  • Needleman-Wunsch
  • Neighborhood map
  • GateKeeper
  • Magnet
  • Slider
  • GRIM-filter
  • Apollo
  • Hercules
  • 3D-stacked memory (HMC)
  • Nanopore genome assembly
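
A minimal dynamic-programming sketch of edit distance (counting insertions, deletions, and substitutions between two sequences), illustrating the "Edit distance" and "Dynamic programming" terms above; this is a generic Levenshtein computation, not the exact formulation used in the lecture:

  def edit_distance(a, b):
      # dp[j] holds the edit distance between the processed prefix of a
      # and the first j characters of b (single-row DP).
      dp = list(range(len(b) + 1))
      for i, ca in enumerate(a, 1):
          prev, dp[0] = dp[0], i
          for j, cb in enumerate(b, 1):
              prev, dp[j] = dp[j], min(dp[j] + 1,          # deletion
                                       dp[j - 1] + 1,      # insertion
                                       prev + (ca != cb))  # substitution / match
      return dp[-1]

  print(edit_distance("kitten", "sitting"))  # -> 3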

Lecture 3b (24.09 Thu.)

  • Fundamentally Secure/Reliable/Safe Architectures
  • Fundamentally Energy-Efficient Architectures
  • Memory-centric (Data-centric) Architectures
  • Fundamentally Low-Latency Architectures
  • Architectures for Genomics, Medicine, Health
  • Genome Sequence Analysis
  • Reference Genome
  • Read Mapping
  • Read Alignment/Verification
  • Edit Distance
  • In-Memory DNA Sequence Analysis
  • Memory Bottleneck
  • Main Memory
  • Storage (SSD/HDD)
  • The Memory Capacity Gap
  • DRAM Capacity, Bandwidth & Latency
  • Flash Memory
  • RowHammer
  • Non-Volatile Memory (NVM) (e.g., PCM, STT-RAM, ReRAM, 3D XPoint)
  • Emerging Memory Technologies
  • 3D-Stacked DRAM
  • Hybrid Main Memory
  • System-Memory Co-Design
  • Microarchitecture
  • Memory-Centric System Design
  • Memory Interference
  • Memory Controllers

Lecture 4a (25.09 Fri.)

  • Memory problem
  • DRAM
  • System-memory co-design
  • Heterogeneous memories
  • Memory scaling
  • Memory-centric system design
  • Waste management
  • Reliability
  • Intelligent memory controllers
  • Computations close to data
  • Emerging memory technologies
  • Resistive memory technologies
  • Non-volatile
  • Phase Change Memory (PCM)
  • 3D XPoint
  • Hybrid Memories
  • Error Tolerance
  • Tolerant data
  • Vulnerable data
  • ECC
  • Heterogeneous-Reliability Memory
  • Memory Interference
  • QoS-aware memory
  • Fairness
  • SLA (Service Level Agreement)
  • Performance loss
  • Resource partitioning/prioritization
  • DRAM controllers
  • Machine learning
  • DRAM scaling

Lecture 4b (25.09 Fri.)

  • Rowhammer
  • Security
  • Safety
  • Bit flip
  • Maslow's Hierarchy
  • Charge-based memory
  • Data retention
  • Flash memory
  • Disturbance errors
  • Hammered row
  • Victim row
  • Electrical interference
  • Cell-to-cell coupling
  • Security attack
  • kernel privileges
  • Page Table Entry (PTE)
  • Electromagnetic coupling
  • Conductive bridges
  • Hot-Carrier injection
  • Aggressor row
  • Refresh rate
  • Data pattern
  • Victim cells
  • weak cells
  • ECC
  • SECDED
  • Variable retention time
  • Rowhammer solutions
  • PARA (Probabilistic Adjacent Row Activation) (see the sketch below)
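
A minimal sketch of the PARA idea named above: on every row activation, refresh the physically adjacent rows with a small probability, so no row can be hammered many times without its neighbors being refreshed. This is an illustrative model; the probability value and function names are assumptions:

  import random

  P = 0.001  # per-activation neighbor-refresh probability (assumed value)

  def on_activate(row, refresh_row):
      if random.random() < P:
          refresh_row(row - 1)   # refresh the neighbor above
          refresh_row(row + 1)   # refresh the neighbor below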

Lecture 5a (01.10 Thu.)

  • RowHammer in 2020
  • Secure/Reliable/Safe Architectures
  • Intelligent Controller
  • TRR (Target Row Refresh)-protected DRAM chip
  • Many-sided RowHammer attack
  • DDR
  • Aggressor row
  • In-DRAM TRR
  • Sampler
  • Inhibitor
  • In-DRAM ECC

Lecture 5b (01.10 Thu.)

  • Dense DRAM chip
  • RowHammer mitigation mechanism
  • Vulnerable chip
  • RowHammer characterization
  • RowHammer vulnerability
  • DRAM testing infrastructure
  • DDR3
  • DDR4
  • LPDDR4
  • DRAM refresh
  • DRAM calibration event
  • Refresh window
  • Retention failure
  • Aggressor Row
  • Victim Row
  • Data pattern
  • Hammer Count
  • RowHammer bit flip rate
  • Technology node generation
  • Row Distance
  • MPKI
  • Normalized System Performance
  • DRAM-system cooperation
  • Profiling mechanism

Lecture 5c (01.10 Thu.)

  • Rowhammer Experimental Analysis
  • Reliability Challenges
  • Security Challenges
  • Large-scale failure analysis
  • SSD error analysis
  • Retention errors
  • DRAM Process Scaling
  • 3D NAND Flash Reliability
  • Byzantine Generals Problem
  • Rowhammer Retrospective

Lecture 6 (08.10 Thu.)

  • Data Movement
  • Processing in memory (PIM)
  • In-memory computation/processing
  • Near-data processing (NDP)
  • UPMEM Processing-in-DRAM Engine
  • 3D-stacked memory
  • RowClone
  • Gather/Scatter DRAM
  • Bulk data copy and initialization
  • In-Memory copy
  • Intra-subarray
  • Inter-bank
  • Memory as an accelerator
  • Low-cost Inter-linked subarrays (LISA)
  • Fine-Grained In-DRAM Copy (FIGARO)
  • Network-On-Memory
  • Bulk Bitwise in-DRAM Computation (Ambit)
  • Intelligent Memory Device
  • ComputeDRAM
  • Dual Contact Cell
  • New memory technologies

Lecture 7 (09.10 Fri.)

  • Near-Data Processing
  • RowClone
  • Ambit
  • ComputeDRAM
  • Pinatubo
  • RowHammer
  • 3D-Stacked Logic+Memory
  • In-Memory Graph Processing
  • Tesseract
  • Consumer Devices
  • Data Movement Bottleneck
  • TensorFlow Mobile
  • Chrome Browser
  • Video Playback and Capture
  • GPU Processing
  • Transparent Offloading and Mapping (TOM)
  • Linked Data Structures
  • Dependent Cache Misses
  • Runahead Execution
  • Climate Modeling
  • Approximate String Matching
  • Time Series Analysis
  • PIM-Enabled Instructions
  • Simple PIM Operations
  • Code and Data Mapping
  • Offloading Critical Code
  • Offloading Prefetch Mechanisms
  • Data Coherence Support
  • Minimal Data Movement
  • Genome Read In-Memory (GRIM) Filter
  • UPMEM
  • Principled Design

Lecture 8 (15.10 Thu.)

  • Bioinformatics
  • Genome Analysis
  • DNA Testing
  • Chromosomes
  • Genome-Wide Association Study (GWAS)
  • SNPs
  • Personalized Medicine
  • Phenotypes
  • Privacy-Preserving Genome Analysis
  • SARS-CoV-2
  • Microbiome Profiling
  • High-Throughput Sequencers
  • Genome Sequencing
  • PacBio
  • DNA
  • Reference Genome
  • Metagenomics
  • Read Mapping
  • Hashing
  • Seeds
  • Read Alignment
  • Smith-Waterman
  • Hamming distance
  • Accelerating Read Mapping
  • Seed Filtering
  • Pre-alignment Filtering
  • Read Alignment Acceleration
  • FastHASH
  • Cheap K-mer Selection
  • GateKeeper
  • Shifted Hamming Distance
  • FPGA
  • Shouji
  • SneakySnake
  • Hybrid Memory Cube
  • Ambit
  • RowClone
  • Pinatubo
  • Tesseract
  • GRIM-Filter
  • GenCache
  • Darwin
  • GenASM
  • AirLift
  • UPMEM

Lecture 10 (22.10 Thu.)

  • Maslow’s Hierarchy
  • Low latency
  • Memory bottleneck
  • Data-centric (Memory-centric) architectures
  • DRAM
  • DDR3
  • 3D-Stacked DRAM
  • Runahead Execution
  • Sense Amplifier
  • DRAM cell
  • DRAM bank
  • DRAM chip
  • Tiered Latency DRAM
  • Variable Latency DRAM
  • CROW (The Copy Row Substrate)
  • CLR-DRAM
  • SALP
  • Global Row-buffer
  • Local Row-buffer
  • Timing margins
  • Process variation
  • Worst-case
  • Adaptive-latency
  • DRAM characterization
  • SoftMC
  • Restore time
  • AL-DRAM
  • Latency variation
  • Flexible-Latency DRAM
  • Solar DRAM
  • Physical Unclonable Function (PUF)
  • True Random Number Generator
  • Refresh Latency
  • ChargeCache
  • Vampire DRAM

Lecture 11a (29.10 Thu.)

  • Memory Controller
  • DRAM Latency
  • DRAM Throughput
  • Phase Change Memory
  • Spin-Transfer Torque Magnetic Memory
  • Flash Memory
  • Solid-State Drive (SSD)
  • SSD Controller
  • Error-Correcting Code (ECC)
  • Wear Leveling
  • Garbage Collection
  • Voltage Optimization
  • Page Remapping
  • DRAM Types
  • DDR (Double Data Rate)
  • LPDDR (Low-Power DDR)
  • GDDR (Graphics DDR, for high bandwidth)
  • eDRAM (Embedded DRAM)
  • RLDRAM (Reduced-Latency DRAM)
  • 3D Stacked DRAM
  • WIO (Wide I/O)
  • HBM (High-Bandwidth Memory)
  • HMC (Hybrid Memory Cube)
  • Ramulator
  • DRAM Controller
  • DRAM Request
  • Request Buffer
  • FCFS (First Come First Served)
  • FR-FCFS (First-Ready, First Come First Served) (see the sketch after this list)
  • Row Buffer Management Policy
  • Open-Row Policy
  • Closed-Row Policy
  • DRAM Power Management
  • DRAM Timing Constraints
  • DRAM Refresh
  • Quality of Service (QoS)
  • Memory Contention
  • Subarray-Level Parallelism
  • Main Memory Interference
  • Self-Optimizing DRAM Controller
  • Reinforcement Learning
  • Self-Optimizing Computing Architecture
  • Data-Driven Computing Architecture
  • Intelligent Architecture
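
A minimal sketch of FR-FCFS request selection for one bank, illustrating the scheduling and row-buffer-policy terms above: among buffered requests, row-buffer hits (requests to the currently open row) are preferred, and ties are broken by age. The field names are ours, not a specific controller's, and a non-empty request buffer is assumed:

  def fr_fcfs_pick(request_buffer, open_row):
      # request_buffer: list of (arrival_time, row) pairs for one bank
      hits = [r for r in request_buffer if r[1] == open_row]
      candidates = hits if hits else request_buffer
      return min(candidates, key=lambda r: r[0])  # oldest request first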

Lecture 11b (29.10 Thu.)

  • Resource Sharing
  • Multi-Core Systems
  • Partitioning
  • Resource Contention
  • Performance isolation
  • Quality of service (QoS)
  • Fairness
  • Shared Cache
  • Shared Resource Management
  • Inter-Thread/Application Interference
  • Unfair Slowdown
  • Memory Performance Attack
  • Memory Performance Hog
  • Stream Access
  • Random Access
  • Memory Scheduling Policy
  • Denial of Service (DoS)
  • Service-Level Agreement (SLA)
  • Distributed DoS
  • Networked Multi-Core Systems
  • Interconnect
  • QoS-Aware Memory Systems
  • Prioritization
  • Data Mapping
  • Core/Source Throttling
  • Application/Thread Scheduling
  • QoS-Aware Memory Scheduling
  • DRAM-Related Stall Time
  • Memory Slowdown
  • Stall-Time Fair Memory Scheduler (STFM)
  • Parallelism-Aware Batch Scheduling (PAR-BS)
  • Memory-Level Parallelism (MLP)
  • Out-of-Order Execution
  • Non-Blocking Cache
  • Runahead Execution
  • Bank-Level Parallelism
  • Request Batching
  • ATLAS (Adaptive per-Thread Least Attained Service Scheduling)
  • Thread Cluster Memory Scheduling (TCM)
  • Starvation
  • MPKI (Misses per Kiloinstruction)
  • Row-Buffer Locality

Lecture 12a (30.10 Fri.)

  • Error-correcting code
  • Hamming code
  • BCH code
  • Reed-Solomon code
  • On-Die ECC
  • Rank-Level ECC
  • ECC encoder
  • ECC decoder
  • Parity-check bits
  • Error syndrome (see the sketch after this list)
  • Error characterization
  • SAT solver
  • Data-retention error
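
A minimal sketch of single-error correction with a Hamming(7,4) code, illustrating the encoder/decoder, parity-check-bit, and error-syndrome terms above; the bit ordering is the textbook layout, not a specific on-die or rank-level ECC organization:

  def hamming74_encode(d):                      # d = [d1, d2, d3, d4]
      d1, d2, d3, d4 = d
      p1 = d1 ^ d2 ^ d4
      p2 = d1 ^ d3 ^ d4
      p3 = d2 ^ d3 ^ d4
      return [p1, p2, d1, p3, d2, d3, d4]       # codeword positions 1..7

  def hamming74_syndrome(c):                    # c = 7-bit codeword
      s1 = c[0] ^ c[2] ^ c[4] ^ c[6]            # checks positions 1,3,5,7
      s2 = c[1] ^ c[2] ^ c[5] ^ c[6]            # checks positions 2,3,6,7
      s3 = c[3] ^ c[4] ^ c[5] ^ c[6]            # checks positions 4,5,6,7
      return s1 + 2 * s2 + 4 * s3               # position of flipped bit, 0 if none

  cw = hamming74_encode([1, 0, 1, 1])
  cw[4] ^= 1                                    # inject a single-bit error at position 5
  print(hamming74_syndrome(cw))                 # -> 5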

Lecture 12b (30.10 Fri.)

  • Capacity-latency tradeoff
  • Open-bitline architecture
  • Charge sharing
  • Charge restoration
  • Refresh latency
  • Refresh rate

Lecture 12c (30.10 Fri.)

  • Virtual memory
  • Virtual address space
  • Page table
  • Address translation overhead
  • Virtual machines
  • Heterogeneous memory
  • Data mapping
  • Data migration
  • Virtual Block Interface
  • Memory translation layer

Lecture 12d (30.10 Fri.)

  • Processing in Memory
  • Real world PIM architecture
  • Accelerator model
  • UPMEM DIMM
  • DPU
  • Tasklet
  • Parallel Reduction (see the sketch below)
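
A minimal sketch of the parallel-reduction idea mentioned above: each worker sums its own chunk, and the partial sums are then combined pairwise in a logarithmic number of steps. This is a generic illustration in Python, not UPMEM's DPU/tasklet API:

  def parallel_reduce(values, n_workers=4):
      chunk = (len(values) + n_workers - 1) // n_workers
      partial = [sum(values[i * chunk:(i + 1) * chunk]) for i in range(n_workers)]
      while len(partial) > 1:                   # tree-style pairwise combine
          partial = [partial[i] + (partial[i + 1] if i + 1 < len(partial) else 0)
                     for i in range(0, len(partial), 2)]
      return partial[0]

  print(parallel_reduce(list(range(10))))       # -> 45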

Lecture 13 (05.11 Thu.)

  • Memory Interference
  • Prioritization
  • Data Mapping
  • Core/Source Throttling
  • Application/Thread Scheduling
  • Memory Service Guarantees
  • Quality of Service
  • QoS-Aware Memory Systems
  • Stall-Time Fair Memory Scheduling
  • Parallelism-Aware Batch Scheduling
  • PAR-BS
  • ATLAS Memory Scheduler
  • BLISS (Blacklisting Memory Scheduler)
  • Thread Cluster Memory Scheduling
  • TCM
  • Throughput vs. Fairness
  • Clustering Threads
  • STFM
  • FR-FCFS
  • Staged Memory Scheduling
  • SMS
  • DASH
  • Current SoC Architectures
  • Strong Memory Service Guarantees
  • Predictable Performance
  • Handling Memory Interference In Multithreaded Applications
  • Barriers
  • Critical Sections
  • Data mapping
  • Memory Channel Partitioning
  • Parallel Application Memory Scheduling
  • Fairness via Source Throttling

Lecture 14 (12.11 Thu.)

  • Target metric
  • Theoretical proof
  • Analytical modeling/estimation
  • Abstraction
  • Accuracy
  • Workload
  • RTL simulations
  • Design choices
  • Cycle-level accuracy
  • Design space exploration
  • Flexibility
  • High-level simulations
  • Low-level models
  • Ramulator
  • Modular
  • Extensible
  • IPC (instructions per cycle)
  • 3D-stacked DRAM
  • DDR3
  • GDDR5
  • HBM
  • HMC
  • Wide I/O
  • LPDDR
  • Spatial locality
  • Bank-level parallelism

Lecture 15 (13.11 Fri.)

  • Emerging memory technologies
  • Charge memory
  • Resistive memory technologies
  • Phase Change Memory (PCM)
  • STT-MRAM
  • Memristor
  • RRAM/ReRAM
  • Non-volatile
  • Multi-Level Cell PCM (MLC-PCM)
  • Endurance
  • Reliability
  • Intel Optane Memory
  • 3D XPoint Technology
  • Read Asymmetry
  • Magnetic Tunnel Junction (MTJ) device
  • Hybrid main memory
  • DRAM buffer/DRAM cache
  • Data placement
  • Row buffer
  • Memory-Level Parallelism (MLP)
  • Translation Lookaside Buffer (TLB)
  • Page Table
  • In-memory bulk bitwise operations
  • In-memory crossbar array operations
  • Analog computation
  • Digital to Analog Converter (DAC)
  • Analog to Digital Converter (ADC)
  • NVM-based PIM system

Lecture 16a (19.11 Thu.)

  • Emerging memory technology
  • Flash memory
  • Memory-centric system design
  • Phase change memory
  • Charge memory
  • Resistive memory
  • Multi-level cell
  • Spin-Transfer Torque Magnetic RAM (STT-MRAM)
  • Memristors
  • Resistive RAM (RRAM or ReRAM)
  • Intel 3D XPoint
  • Capacity-latency trade-off
  • Capacity-reliability trade-off
  • Endurance
  • Magnetic Tunnel Junction (MTJ)
  • Hybrid memory
  • Write filtering
  • Data placement
  • Data access pattern
  • Row-buffer locality
  • Overall system performance impact
  • Memory-Level Parallelism (MLP)
  • Utility-based hybrid memory management
  • Hybrid Memory Systems
  • Large (DRAM) Cache
  • TIMBER
  • Two-Level Memory/Storage model
  • Volatile data
  • Persistent data
  • Single-level store
  • Unified Memory and storage
  • The Persistent Memory Manager (PMM)
  • ThyNVM

Lecture 16b (19.11 Thu.)

  • Heterogeneity
  • Asymmetry in design
  • Amdahl's Law (see the worked example after this list)
  • Synchronization overhead
  • Load imbalance overhead
  • Resource sharing overhead
  • IBM Power4
  • IBM Power5
  • Niagara Processor
  • Performance vs. parallelism
  • Asymmetric Chip Multiprocessor (ACMP)
  • MorphCore
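
A minimal worked example of Amdahl's Law as mentioned above: if a fraction f of a program is parallelizable across N cores, the overall speedup is 1 / ((1 - f) + f / N), so the serial bottleneck quickly dominates. The example values are ours:

  def amdahl_speedup(f, n):
      return 1.0 / ((1.0 - f) + f / n)

  print(amdahl_speedup(0.9, 16))  # ~6.4x even with 16 cores: the 10% serial part dominates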

Lecture 17 (20.11 Fri.)

  • Amdahl's Law
  • Parallelizable fraction of a program
  • Serial bottleneck
  • Synchronization overhead
  • Load imbalance overhead
  • Resource sharing overhead
  • Critical section
  • Asymmetric multi-core (ACMP)
  • Symmetric CMP (SCMP)
  • Accelerated Critical Sections (ACS)
  • Selective Acceleration of Critical Sections (SEL)
  • Critical Section Request Buffer (CSRB)
  • Cache misses for private data
  • Cache misses for shared data
  • Equal-area comparison
  • Bottleneck Identification and Scheduling (BIS)
  • Thread waiting cycles (TWC)
  • Bottleneck Table (BT)
  • Scheduling Buffers (SB)
  • Acceleration Index Tables (AIT)
  • The critical path
  • Feedback-Directed Pipelining (FDP)
  • Comprehensive fine-grained bottleneck acceleration
  • Lagging threads
  • Multiple applications
  • Criticality of code segments
  • Utility-Based Acceleration (UBA)
  • Global criticality of the segment
  • Fraction of execution time spent on segment
  • Local speedup of the segment
  • Data marshaling
  • Staged execution model
  • Segment spawning
  • Producer-Consumer Pipeline Parallelism
  • Locality of inter-segment data
  • Generator instruction
  • Marshal buffer
  • Pipeline parallelism
  • Aggressive stream prefetcher
  • Energy expended per instruction (EPI)
  • Dynamic voltage frequency scaling (DVFS)

Lecture 18 (26.11 Thu.)

  • Memory latency
  • DRAM Latency
  • Latency Reduction
  • Latency Tolerance
  • Latency Hiding
  • Caching
  • Prefetching
  • Multithreading
  • Out-of-order Execution
  • Software prefetching
  • Hardware prefetching
  • Execution-based prefetchers
  • Next-Line Prefetchers
  • Stride Prefetchers (see the sketch after this list)
  • Stream Buffers
  • Feedback-Directed Prefetching
  • Content Directed Prefetching
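
A minimal sketch of a per-PC stride prefetcher, illustrating the "Stride Prefetchers" term above: track the last address and stride observed for each load PC, and prefetch one stride ahead once the same stride repeats. The table organization and confidence policy are illustrative assumptions, not a specific design:

  class StridePrefetcher:
      def __init__(self):
          self.table = {}                        # pc -> (last_addr, last_stride)

      def access(self, pc, addr, prefetch):
          last_addr, last_stride = self.table.get(pc, (None, None))
          if last_addr is None:
              self.table[pc] = (addr, None)
              return
          stride = addr - last_addr
          if stride != 0 and stride == last_stride:
              prefetch(addr + stride)            # same stride seen twice: prefetch ahead
          self.table[pc] = (addr, stride)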

Lecture 19a (27.11 Fri.)

  • Execution-based Prefetcher
  • Speculative thread
  • Thread-Based Pre-Execution
  • Runahead Execution
  • Address-Value Delta (AVD) Prediction
  • Multi-Core Issues in Prefetching
  • Feedback Directed Prefetching
  • Bandwidth-Efficient Prefetching
  • Coordinated Prefetcher Control
  • Prefetching in GPUs

Lecture 19b (27.11 Fri.)

  • Multiprocessing
  • Memory Consistency
  • Cache Coherence
  • SISD
  • SIMD
  • MISD
  • MIMD
  • Parallelism
  • Instruction Level Parallelism
  • Data Parallelism
  • Task Level Parallelism
  • Loosely Coupled Multiprocessors
  • Tightly Coupled Multiprocessors
  • Hardware-based Multithreading
  • Parallel Speedup
  • Superlinear Speedup
  • Utilization
  • Redundancy
  • Efficiency
  • Amdahl’s Law
  • Sequential Bottleneck
  • Synchronization
  • Load Imbalance
  • Resource Contention
  • Critical Sections
  • Barriers
  • Stages of Pipelined Programs