Buzzwords

Buzzwords are terms mentioned in lecture that are particularly important to understand thoroughly. This page tracks the buzzwords for each lecture and can be used as a reference for finding gaps in your understanding of the course material.

Lecture 1 (22.02 Thu.)

  • Principles of Design
  • Evaluation metrics
  • Cache
  • Design Tradeoff
  • Systolic Array Architecture
  • Graphics Processing Unit (GPU)
  • Single Instruction, Multiple Data (SIMD)
  • Distributed System
  • Network-on-Chip (NoC)
  • Routers (in the context of NoC)
  • Cryptographic Engine (in the context of computer architecture)
  • Network Packet
  • Floating-Point Operations per Second (FLOPS)
  • VAX Architecture
  • ALPHA Architecture
  • Bit-Serial Adder
  • Instruction Set Architecture (ISA)
  • Microarchitecture
  • Algorithm

Lecture 2 (23.02 Fri.)

  • Transformation Hierarchy
  • Microarchitecture
  • ISA
  • Power of abstraction
  • Transmeta
  • Crossing abstraction layers
  • Meltdown
  • Spectre
  • Vulnerabilities
  • Speculative Execution
  • Cache
  • Side channel attack
  • Security
  • DRAM
  • Rowhammer
  • DRAM Refresh
  • Probabilistic Adjacent Row Activation
  • Byzantine Failures

Lecture 3 (01.03 Thu.)

  • Rapid Prototyping
  • Debugging the hardware
  • Addition
  • Comparison Operation
  • Addition Operation
  • Seven Segment Display
  • Arithmetic and Logic Unit (ALU)
  • Full System Integration
  • FPGA: Field Programmable Gate Array
  • Reconfigurable
  • FPGA Building Blocks
  • Look-Up Tables (LUT)
  • Switches
  • Multiplexers
  • Hardware Description Language (HDL)
  • Computer-Aided Design (CAD) Tools
  • Logic Synthesis
  • Placement and Routing

Lecture 4 (02.03 Fri.)

  • Abstraction layers
  • Platform hierarchy
  • Meltdown
  • Spectre
  • Memory performance attacks
  • Memories forget: Refresh
  • Multi-core systems
  • Out of order execution
  • Speculative execution
  • Parallel processing
  • Memory performance hog
  • Changing priority in the OS: nice command
  • Disparity in slowdowns
  • Fairness
  • Controllable system
  • Quality of service
  • DRAM memory controller
  • Shared DRAM memory system
  • Row buffer
  • Row decoder
  • Column mux
  • FR-FCFS scheduling policy
  • Row buffer hit / conflict
  • Row buffer locality, hit rate
  • Sense amplifier
  • Row activation, opening
  • Writing back the row: closing, precharging
  • Denial of service attacks
  • Memory intensity
  • Stream / random access patterns
  • Round robin scheduling
  • DRAM cell
  • DRAM row
  • DRAM refresh
  • Charge leakage
  • Manufacturing process variation
  • Cold boot attacks
  • RAIDR
  • Bloom filter
  • Approximate set membership
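
RAIDR, listed above, uses a Bloom filter for approximate set membership over weak DRAM rows. A minimal sketch (sizes and hash derivation are illustrative choices, not from the lecture):

```python
# Minimal Bloom filter sketch: approximate set membership with
# possible false positives but no false negatives.
class BloomFilter:
    def __init__(self, num_bits=64, num_hashes=3):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = 0  # bit vector packed into one integer

    def _positions(self, item):
        # Derive k bit positions from Python's built-in hash (illustrative).
        for i in range(self.num_hashes):
            yield hash((i, item)) % self.num_bits

    def add(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def might_contain(self, item):
        # False => definitely not inserted; True => probably inserted.
        return all((self.bits >> pos) & 1 for pos in self._positions(item))

bf = BloomFilter()
for row in (3, 17, 42):   # e.g. IDs of "weak" DRAM rows
    bf.add(row)
```

A query answers either "definitely not in the set" or "probably in the set"; that asymmetry is what makes the compact representation safe for tracking rows that need frequent refresh.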

Lecture 5 (08.03 Thu.)

  • Bloom Filter
  • Approximate set membership
  • Hash function
  • False positive, false negative
  • DRAM row
  • DRAM refresh
  • Big data
  • Double-precision floating point arithmetic
  • On-chip communication
  • Off-chip link
  • FPGAs
  • Heterogeneous Processors and Accelerators
  • Persistent memory/storage
  • Hybrid Main Memory
  • General Purpose GPU
  • Computing, Communication, Storage (memory)
  • Transistors
  • Logic gates
  • Digital circuit
  • Boolean algebra
  • MOS Transistors, n-type, p-type
  • Closed circuit, open circuit
  • Gate, Drain, Source
  • Complementary MOS (CMOS) Technology
  • Pull-up and pull-down networks
  • Inverter
  • CMOS NOT Gate
  • CMOS NAND Gate
  • CMOS AND Gate
  • Buffer
  • XOR, OR, NOR, XNOR
  • Moore’s Law
  • Transistor shrinking
  • Input/Output
  • Combinational Logic
  • Sequential Logic
  • Boolean equations
  • Functional/Timing Specification
  • Full adder
  • Axioms
  • Laws and theorems
  • Duality
  • Simplification theorems
  • DeMorgan’s Law
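
Two of the items above, DeMorgan's Law and the full adder, can be checked exhaustively over their truth tables; a quick sketch:

```python
# Exhaustive truth-table check of DeMorgan's Law and its dual:
# NOT(a AND b) == (NOT a) OR (NOT b)
# NOT(a OR b)  == (NOT a) AND (NOT b)
def demorgan_holds():
    for a in (False, True):
        for b in (False, True):
            if (not (a and b)) != ((not a) or (not b)):
                return False
            if (not (a or b)) != ((not a) and (not b)):
                return False
    return True

def full_adder(a, b, cin):
    # Boolean equations for a 1-bit full adder (inputs are 0/1).
    s = a ^ b ^ cin                            # sum bit
    cout = (a & b) | (a & cin) | (b & cin)     # carry-out (majority)
    return s, cout
```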

Lecture 6 (09.03 Fri.)

  • DDRx protocol/interface
  • DRAM
  • Memory controller
  • Synchronous / Asynchronous interfaces
  • Read / write / refresh latencies
  • ACK: acknowledge, NACK: not-acknowledge
  • Complementary Logic
  • Pull-up and pull-down networks
  • Latency of serial and parallel transistors
  • Pseudo-nMOS Logic
  • Static / Dynamic power consumption
  • Capacitance, frequency, and voltage
  • Leakage current
  • DeMorgan's law
  • Truth table
  • Boolean function
  • Minterm / Maxterm
  • Canonical form
  • SoP: Sum of Products = Disjunctive normal form = Minterm expansion
  • Standard shorthand notation
  • Alternative canonical form
  • PoS: Product of Sums = Conjunctive normal form = Maxterm expansion
  • Maxterm shorthand notation
  • Combinational building blocks / modules
  • Decoder
  • Instruction
  • OpCode
  • MUX: Multiplexer = Selector
  • Full adder
  • Carry bit, sum bit
  • PLA: Programmable logic array
  • LUT: Look up table
  • Logical (functional) completeness
  • Karnaugh Map (K-Map)
  • Logic simplification
  • Uniting theorem
  • “On set”
  • Gray code
  • Bit value X: Don't care
  • Bit value Z: Floating signal = High impedance = High-Z
  • BCD: Binary coded decimal
  • HDL: Hardware description language
  • Synthesis
  • Verilog
  • VHDL
  • Primitive gates
  • Modules
  • Top-down / Bottom-up design methodologies
  • Top-level module, sub-module, leaf cell
  • Bus
  • Bit slicing
  • Concatenation
  • Duplication
  • Structural (gate-level) description
  • Behavioral / functional description
  • Instantiation
  • Bitwise operators
  • Reduction operators
  • Conditional assignments
  • Precedence of operators
  • Tri-state buffer
  • Explicit coding
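
The Sum-of-Products (minterm expansion) idea above amounts to enumerating the "on set" of a truth table; a sketch:

```python
# Sum-of-Products sketch: list the minterms (rows where F = 1) of a
# Boolean function given as a truth table.
def minterms(truth_table):
    # truth_table: list of output bits, indexed by the input combination
    # interpreted as a binary number (standard shorthand notation).
    return [i for i, out in enumerate(truth_table) if out]

# Example: 2-input XOR has truth table [0, 1, 1, 0], so its on-set is
# {1, 2}, i.e. F = m1 + m2 = A'B + AB'.
```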

Lecture 7 (15.03 Thu.)

  • Sequential Circuit
  • Storage Element
  • R-S (Reset Set) Latch
  • Transparent
  • Gated D Latch
  • Register
  • Multi-ported memory / register file
  • Metastability
  • Memory
  • Address
  • Addressability
  • Address Space
  • Wordline
  • Address Decoder
  • Multiplexer
  • State
  • Clock
  • Finite State Machine (FSM)
  • Moore Machine
  • Mealy Machine
  • Next state logic
  • State register
  • Transition diagram
  • State encoding
  • Fully encoded
  • 1-hot encoded
  • Output encoded
  • Output logic
  • D Flip Flop
  • Master latch
  • Slave latch
  • Positive edge
  • Edge-triggered device
  • Verilog
  • Always block
  • Sensitivity list
  • Posedge
  • Blocking assignment
  • Non-blocking assignment
  • Asynchronous reset
  • Synchronous reset
  • Glitches
  • Case statement
  • Rising edge
  • Falling edge
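
A Moore machine from the list above (output a function of the current state only) can be sketched in software; here a hypothetical detector that outputs 1 after seeing two consecutive 1s:

```python
# Moore FSM sketch: next-state logic + state register + output logic.
NEXT_STATE = {            # (state, input) -> next state
    ("S0", 0): "S0", ("S0", 1): "S1",
    ("S1", 0): "S0", ("S1", 1): "S2",
    ("S2", 0): "S0", ("S2", 1): "S2",
}
OUTPUT = {"S0": 0, "S1": 0, "S2": 1}   # Moore: depends on state alone

def run_fsm(inputs, state="S0"):
    outputs = []
    for bit in inputs:
        state = NEXT_STATE[(state, bit)]   # next state logic
        outputs.append(OUTPUT[state])      # output logic
    return outputs
```

A Mealy version would key `OUTPUT` on (state, input) instead, which is the essential difference between the two machine types.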

Lecture 8 (16.03 Fri.)

  • Area
  • Speed / Throughput
  • Power / Energy
  • Design time
  • Circuit timing
  • Combinational circuit timing
  • Combinational circuit delay
  • Contamination delay
  • Propagation delay
  • Longest / Shortest path
  • Critical path
  • Glitch
  • Fixing glitches with K-map
  • Sequential circuit timing
  • D flip-flop
  • Setup / Hold / Aperture time
  • Metastability
  • Non-deterministic convergence
  • Contamination delay clock-to-q
  • Propagation delay clock-to-q
  • Correct sequential operation
  • Hold time constraint
  • Timing analysis
  • Clock skew
  • Safe timing
  • Circuit verification
  • High level design
  • Circuit level
  • Functional equivalence
  • Functional tests
  • Timing constraints
  • Functional verification
  • Testbench
  • Device under test (DUT)
  • Simple / Self-checking / Automatic testbench
  • Waveform diagrams
  • Waveform diagrams
  • Clock generation
  • Golden model
  • Timing verification
  • Timing report / summary
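
The sequential timing terms above combine into the standard setup-time constraint: the clock period must cover the clock-to-q propagation delay, the critical-path combinational delay, and the setup time, with clock skew eating into the budget. A sketch with illustrative numbers:

```python
# Setup-time constraint sketch:
# T_clk >= t_pcq + t_pd + t_setup (+ skew margin)
def min_clock_period(t_pcq, t_pd, t_setup, t_skew=0.0):
    return t_pcq + t_pd + t_setup + t_skew

def max_frequency_ghz(t_pcq_ns, t_pd_ns, t_setup_ns, t_skew_ns=0.0):
    # All delays in nanoseconds -> frequency in GHz.
    return 1.0 / min_clock_period(t_pcq_ns, t_pd_ns, t_setup_ns, t_skew_ns)

# Illustrative (not from the lecture): t_pcq = 0.3 ns, critical path
# = 1.5 ns, setup = 0.2 ns  ->  T_clk >= 2.0 ns, f_max = 0.5 GHz.
```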

Lecture 9 (22.03 Thu.)

  • Von-Neumann Model
  • Instruction Set Architecture (ISA)
  • MIPS
  • LC-3
  • LC-3b
  • Assembly Language
  • Microprogramming
  • Single-Cycle Microarchitecture
  • Multi-Cycle Microarchitecture
  • Addressing Mode
  • Instruction
  • Operate instruction
  • Movement instruction
  • Control Flow instruction
  • Addressability
  • Word-addressable
  • Byte-addressable
  • Address Space
  • Little Endian
  • Big Endian
  • Memory Address Register
  • Memory Data Register
  • Functional Unit
  • Arithmetic and Logical Unit (ALU)
  • General Purpose Register
  • Register File
  • Function Return Value
  • Function Argument
  • Stack Pointer
  • Frame Pointer
  • Function Return Address
  • I/O Peripheral
  • Instruction Register
  • Instruction Pointer / Program Counter
  • Memory Load
  • Memory Store
  • Instruction Cycle
  • Opcode
  • Operand
  • Semantic Gap
  • Immediate Operand
  • Register Operand
  • Memory Addressing Mode
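
Little and big endian (listed above) fix the byte order of multi-byte values in byte-addressable memory; the standard library makes the difference visible:

```python
import struct

# Byte order of the 32-bit value 0x12345678 in memory:
little = struct.pack("<I", 0x12345678)   # least significant byte first
big    = struct.pack(">I", 0x12345678)   # most significant byte first
```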

Lecture 10 (23.03 Fri.)

  • Instruction Set Architecture (ISA)
  • LC-3
  • MIPS
  • Assembly
  • Von Neumann model
  • Instruction cycle
  • Instruction
  • Operate instruction
  • Data movement instruction
  • Control flow instruction
  • Unary/binary operation
  • Literal or immediate
  • Addressing mode
  • PC-relative addressing mode
  • Indirect addressing mode
  • Base+offset addressing mode
  • Immediate addressing mode
  • Source/destination register
  • Machine code
  • Conditional branch
  • Jump
  • Condition codes
  • Loop
  • LC-3 data path
  • Assembly programming
  • Programming constructs
  • Sequential construct
  • Conditional construct
  • Iterative construct
  • OS service call
  • End Of Text (EOT)
  • Sentinel
  • Debugging
  • Interactive debugging
  • Breakpoint
  • If-else statement
  • While loop
  • For loop
  • Arrays in MIPS
  • Function call
  • Caller/callee
  • Arguments/return value
  • Stack
  • Preserved/nonpreserved registers

Lecture 11 (29.03 Thu.)

  • Microarchitecture
  • Von Neumann model
  • Stored program computer
  • Sequential instruction processing
  • Instruction pointer (program counter)
  • Control transfer instructions
  • Control flow order
  • Instruction fetch
  • Dataflow model
  • Dataflow token
  • Out-of-order execution
  • Instruction and data caches
  • Programmer visible state
  • Single-cycle machine
  • Multi-cycle machine
  • Datapath
  • Control logic
  • Pipelined datapath and control
  • Register file
  • Instruction fetch
  • Instruction decode
  • Register file writeback
  • ALU (Arithmetic Logic Unit)
  • Multiplexer (MUX)
  • Instruction types (R-type, I-type, J-type)
  • Source/destination register
  • Immediate value
  • Sign-extension
  • Conditional branch instruction
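
Sign-extension (above) widens an immediate while preserving its two's-complement value; a sketch:

```python
# Sign-extension sketch: interpret a `bits`-wide two's-complement
# value, replicating the sign bit into the upper positions.
def sign_extend(value, bits):
    sign_bit = 1 << (bits - 1)
    return (value & (sign_bit - 1)) - (value & sign_bit)
```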

Lecture 12 (12.04 Thu.)

  • ALU: Arithmetic-Logic Unit
  • Single-cycle MIPS Datapath
  • Control signals
  • Datapath configuration
  • Control logic
  • Hardwired control (combinational)
  • Sequential/Microprogrammed control
  • Performance analysis
  • CPI: Cycles per Instruction
  • Critical path
  • Slowest instruction
  • Execution time of an instruction / of a program
  • Fetch, decode, evaluate address, fetch operands, execute, store result
  • Magic memory
  • Instruction memory and data memory
  • REP MOVS and INDEX instructions
  • Microarchitecture design principles
  • Bread and butter (common case) design and Amdahl's law
  • Balanced Design
  • Key system design principles: keep it simple, keep it low cost
  • Multi-cycle microarchitectures
  • Overhead of register setup/hold times
  • Main controller FSM
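
Two of the ideas above reduce to short formulas: Amdahl's law (behind common-case design) and the execution-time equation from performance analysis. A sketch:

```python
# Amdahl's law sketch: overall speedup when a fraction f of execution
# time is sped up by factor s ("bread and butter" / common case design).
def amdahl_speedup(f, s):
    return 1.0 / ((1.0 - f) + f / s)

# Execution time = instruction count x CPI x clock cycle time.
def execution_time(insts, cpi, cycle_time):
    return insts * cpi * cycle_time
```

Note the diminishing returns: speeding up half the program infinitely still caps the overall speedup at 2x.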

Lecture 13 (13.04 Fri.)

  • ALU
  • Benchmarks (e.g., SPECINT2000)
  • Branch condition
  • Clock cycle time
  • Condition codes
  • Conditional branch (BR)
  • Control Store
  • Control block
  • Control signals
  • Critical path design
  • Cycles Per Instruction (CPI)
  • Data path
  • Execution time
  • Finite State Machine (FSM)
  • Gating
  • Hardware bugs
  • ISA
  • Instruction Register (IR)
  • Instruction Set Architecture (ISA)
  • LC-3b
  • LC-3b state machine
  • Loading
  • MIPS FSM
  • Memory
  • Memory Address Register (MAR)
  • Memory Data Register (MDR)
  • Memory Mapped I/O
  • Microcode updates
  • Microinstruction
  • Microprogrammed Multi-Cycle microarchitecture
  • Microprogrammed multi-cycle machine
  • Microprogramming
  • Microsequencer
  • Microsequencing
  • Multi-cycle critical path
  • Multi-cycle microarchitecture
  • Multi-cycle performance
  • Next-state control signals
  • Performance analysis
  • Program counter (PC)
  • Register file
  • Single-bus data path design
  • Single-cycle critical path
  • Single-cycle microarchitecture
  • State variables
  • States
  • Tri-state buffer
  • Variable Latency Memory
  • Write-enable signal to register
  • u-ISA

Lecture 14 (19.04 Thu.)

  • Pipelining
  • Limited concurrency
  • Idle resources
  • Throughput
  • Latency
  • Independent instructions
  • Ideal pipeline
  • Independent operations
  • Partitionable suboperations
  • Latch delay
  • Pipeline cost
  • Instruction processing cycle
  • Pipeline stages
  • Pipeline registers
  • Steady state (full pipeline)
  • Control points / control signals
  • Pipelined control signals
  • Pipeline stalls
  • Resource contention
  • Data and control dependences
  • Long-latency operations
  • Register file
  • Data dependences
    • Flow dependences
    • Output dependences
    • Anti dependences
  • Interlocking
  • Scoreboarding
  • Dependence detection
  • Data forwarding / bypassing
  • Control dependence
  • RAW dependence handling
  • Bubbles
  • Compile-time data dependence analysis
  • Compile-time detection and elimination
  • NOP
  • Stalling hardware
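
The flow/anti/output dependences above can be detected by comparing the register operands of an instruction pair; a sketch with hypothetical (destination, sources) tuples:

```python
# Dependence detection sketch between an earlier instruction i and a
# later instruction j, each given as (dest_reg, [src_regs]).
def dependences(i, j):
    dest_i, srcs_i = i
    dest_j, srcs_j = j
    return {
        "flow (RAW)":   dest_i in srcs_j,   # j reads what i writes
        "anti (WAR)":   dest_j in srcs_i,   # j writes what i reads
        "output (WAW)": dest_i == dest_j,   # both write the same register
    }

# ADD r1, r2, r3 followed by SUB r4, r1, r5 -> flow dependence on r1.
```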

Lecture 15 (20.04 Fri.)

  • Data dependences
  • Stalling
  • Stalling hardware
  • Hazard unit
  • Control dependences
  • Branch misprediction penalty
  • Instructions flushing
  • Early branch resolution
  • Data forwarding
  • Branch prediction
  • Pipelined performance
  • SPECINT2006 benchmark
  • Average CPI
  • Software-based interlocking
  • Hardware-based interlocking
  • Pipeline bubbles
  • Software-based instruction scheduling
  • Hardware-based instruction scheduling
  • Static / dynamic scheduling
  • Variable-length operation latency
  • Profiling
  • Multi-cycle execution
  • Exceptions
  • Interrupts
  • Precise exceptions / interrupts
  • Instruction retiring
  • Exception handling
  • Precise exceptions in pipelining
  • Reorder buffer (ROB)
  • ROB entry
  • Content Addressable Memory (CAM)
  • Register Alias Table (RAT)
  • Register renaming
  • Output dependences
  • Anti dependences
  • In-order pipeline

Lecture 16 (26.04 Thu.)

  • Pipeline
  • Branch condition
  • In-order pipeline with reorder buffer
  • Exceptions
  • Branch misprediction
  • Register renaming
  • Output and anti dependencies
  • Architectural register
  • Physical register
  • Register Alias Table (RAT)
  • Reorder Buffer (ROB)
  • Content Addressable Memory (CAM)
  • Indirection
  • History file
  • Future file
  • Checkpointing
  • Dispatch
  • Reservation station
  • Independent instructions
  • Variable load latency
  • Dispatch stalls
  • In-order dispatch
  • Out-of-order dispatch
  • Compile-time code scheduling/reordering
  • Value prediction
  • Fine-grained multithreading
  • Tag / Source tag
  • Wake up and select
  • Tomasulo’s algorithm
  • Tag broadcast
  • Dataflow graph

Lecture 17 (27.04 Fri.)

  • Two humps: scheduling and reordering
  • OoO: Out of Order Execution
  • Tomasulo's algorithm
  • RATs: Register Alias Tables (aka register maps)
  • FERAT: FrontEnd Register Alias Table
  • BERAT/ARAT: BackEnd/Architecture Register Alias Table
  • PRF: Physical Register File
  • Tag/value broadcast
  • Reservation station
  • Instruction window
  • Memory disambiguation / unknown address problem
  • Store - Load dependency
  • LQ/SQ: Load Queue / Store Queue
  • Data forwarding between stores and loads
  • Content Addressable Search
  • Range Search
  • Age-Based Search
  • Reorder buffer
  • ILP: Instruction Level Parallelism
  • Dataflow at ISA level
  • Superscalar execution

Lecture 18 (03.05 Thu.)

  • Branch prediction
  • Control dependence
  • Branch direction
  • Branch target address
  • Branch misprediction
  • Misprediction penalty
  • Branch resolution latency
  • Branch delay slot
  • Taken branch
  • Not-taken branch
  • Predicate combining
  • Predicated execution
  • Direct branch
  • Indirect branch
  • Branch target buffer (BTB)
  • Static branch prediction
  • Always not-taken prediction
  • Always taken prediction
  • Backward taken, forward not taken (BTFN) prediction
  • Profile-based prediction
  • Program-based prediction
  • Programmer-based prediction
  • Dynamic branch prediction
  • Last time predictor
  • Branch history table
  • Two-bit counter based prediction (bimodal prediction)
  • Two-level prediction
  • Global branch correlation
  • Local branch correlation
  • Two-level global branch prediction
  • Global history register
  • Pattern history table
  • Gshare predictor
  • Tournament predictor
  • Hybrid branch predictors
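
The two-bit counter (bimodal) prediction above keeps one saturating counter per branch; a sketch:

```python
# Two-bit saturating counter predictor sketch: counter in 0..3,
# predict taken when counter >= 2. The hysteresis means a single
# mispredict does not immediately flip the prediction.
class TwoBitCounter:
    def __init__(self, state=1):      # start weakly not-taken
        self.state = state

    def predict(self):
        return self.state >= 2        # True => predict taken

    def update(self, taken):
        if taken:
            self.state = min(3, self.state + 1)
        else:
            self.state = max(0, self.state - 1)
```

A gshare predictor would index a table of such counters by the PC XORed with the global history register instead of by the PC alone.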

Lecture 19 (04.05 Fri.)

  • GPU-based RowHammer
  • Out-of-Order Execution
  • Single-cycle Microarchitectures
  • Multi-cycle and Microprogrammed Microarchitectures
  • Pipelining
  • Issues in Pipelining
  • Control and Data Dependence Handling
  • State maintenance and recovery
  • Very Long Instruction Word (VLIW)
  • Fine-grained multithreading
  • SIMD processing
  • Vector and array processors
  • GPUs
  • Decoupled access execute
  • Systolic arrays
  • Dataflow
  • Superscalar execution
  • Instruction-level concurrency
  • Tournament predictor
  • Branch penalty
  • Predictor tables
  • Hybrid branch predictors
  • Loop branch detector and predictor
  • Perceptron branch predictor
  • Hybrid history length based predictor
  • Intel Pentium M Predictors
  • Binary classifier
  • Global History Register (GHR)
  • Perceptron weights
  • Bias weight
  • Branch confidence
  • Dual-path execution
  • Dynamic predication
  • Control Dependences
  • Branch delay slot
  • Predicated execution
  • Multipath execution
  • Unconditional branch
  • Conditional branch
  • Predicate combining
  • Misprediction
  • Adaptivity
  • Jump tables
  • Interface calls
  • Virtual function calls
  • Branch Target Buffer (BTB)
  • Reduced Instruction Set Computer (RISC)
  • Lockstep execution
  • Static Instruction Scheduling
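
The perceptron branch predictor above treats prediction as binary classification over the branch history; a sketch (the training threshold and encodings are illustrative):

```python
# Perceptron predictor sketch: y = w0 (bias weight) + sum(w_i * h_i),
# where h_i is +1 for a taken and -1 for a not-taken history bit.
def perceptron_predict(weights, history):
    y = weights[0] + sum(w * h for w, h in zip(weights[1:], history))
    return y >= 0, y              # predict taken when y >= 0

def perceptron_train(weights, history, taken, threshold=3):
    # Train on a mispredict or when the output magnitude is small.
    pred, y = perceptron_predict(weights, history)
    t = 1 if taken else -1
    if pred != taken or abs(y) <= threshold:
        weights[0] += t
        for i, h in enumerate(history):
            weights[i + 1] += t * h
    return weights
```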

Lecture 20 (11.05 Fri.)

  • Throwhammer: RowHammer over the network
  • SIMD processing
  • GPU
  • Regular parallelism
  • Single Instruction Single Data (SISD)
  • Single Instruction Multiple Data (SIMD)
  • Multiple Instruction Single Data (MISD)
  • Systolic array
  • Streaming processor
  • Multiple Instruction Multiple Data (MIMD)
  • Multiprocessor
  • Multithreaded processor
  • Data parallelism
  • Array processor
  • Vector processor
  • Very Long Instruction Word (VLIW)
  • Vector register
  • Vector control register
  • Vector length register (VLEN)
  • Vector stride register (VSTR)
  • Prefetching
  • Vector mask register (VMASK)
  • Vector functional unit
  • CRAY-1
  • Seymour Cray
  • Memory interleaving
  • Memory banking
  • Vector memory system
  • Scalar code
  • Vectorizable loops
  • Vector chaining
  • Multi-ported memory
  • Vector stripmining
  • Gather/Scatter operations
  • Masked vector instructions
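
Vector stripmining (above) processes a loop of arbitrary trip count in strips of at most the maximum vector length, setting the vector length register per strip; a sketch:

```python
# Vector stripmining sketch: split N iterations into strips of at most
# MVL elements; the final strip sets VLEN to the remainder.
def stripmine(n, mvl):
    strips = []
    start = 0
    while start < n:
        vlen = min(mvl, n - start)   # VLEN register value for this strip
        strips.append((start, vlen))
        start += vlen
    return strips
```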

Lecture 21 (17.05 Thu.)

  • SIMD processing
  • GPU
  • Flynn’s taxonomy
  • Systolic arrays
  • Micron's Automata Processor
  • VLIW
  • Array processor
  • Vector processor
  • Row/Column major
  • Sparse vector
  • Gather/Scatter operations
  • Address indirection
  • Data parallelism
  • Vector register
  • Vector instruction
  • Vector functional units
  • Memory banks
  • Vectorizable loop
  • Vector Instruction Level Parallelism
  • Automatic code vectorization
  • SIMD ISA extensions
  • Intel Pentium MMX
  • Multimedia registers
  • Programming model
  • Sequential
  • Single-Instruction Multiple Data (SIMD)
  • Multi-threaded
  • Single-Program Multiple Data (SPMD)
  • Execution model
  • Single-Instruction Multiple Thread (SIMT)
  • Warp (wavefront)
  • Warp-level FGMT
  • Shader core
  • Scalar pipeline
  • Latency hiding
  • Interleave warp execution
  • Warp instruction level parallelism
  • Warp-based SIMD vs. Traditional SIMD
  • Control flow path
  • Branch divergence
  • SIMD utilization
  • Dynamic warp formation

Lecture 22 (18.05 Fri.)

  • GPGPU programming
  • NVIDIA Volta
  • Inherent parallelism
  • Data parallelism
  • GPU main bottlenecks
  • CPU-GPU data transfers
  • DRAM memory
  • Task offloading
  • Serial code (host)
  • Parallel code (device)
  • Bulk synchronization
  • Transparent scalability
  • Memory hierarchy
  • CUDA programming language
  • OpenCL
  • Indexing and memory access
  • Streaming multiprocessor (SM)
  • Streaming processor (SP)
  • Memory coalescing
  • Shared memory tiling
  • Bank conflict
  • Padding
  • GPU computing
  • GPU kernel
  • Massively parallel sections
  • Shared memory
  • Data transfers
  • Kernel launch
  • Latency hiding
  • Occupancy
  • Data reuse
  • SIMD utilization
  • Atomic operations
  • Histogram calculation
  • CUDA streams
  • Asynchronous transfers
  • Overlap of communication and computation

Lecture 23a (24.05 Thu.)

  • Systolic Arrays
  • High concurrency
  • Balanced computation and I/O memory bandwidth
  • Simple, regular design
  • Processing Elements
  • Decoupled Access Execute (DAE)
  • Image processing
  • Convolution
  • Convolutional layers
  • Convolutional Neural Network
  • AlexNet
  • ImageNet
  • GoogLeNet
  • Stream processing
  • Pipeline parallelism
  • Staged execution
  • WARP Computer
  • Tensor Processing Unit
  • Astronautics ZS-1
  • Loop unrolling
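
Convolution (above) is the multiply-accumulate pattern that systolic arrays such as the TPU pipeline across processing elements; a 1-D sketch:

```python
# 1-D convolution sketch (no padding): each output element is a dot
# product of the kernel with a sliding window of the input.
def conv1d(signal, kernel):
    n, k = len(signal), len(kernel)
    return [
        sum(signal[i + j] * kernel[j] for j in range(k))
        for i in range(n - k + 1)
    ]
```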

Lecture 23b (24.05 Thu.)

  • Memory
  • Virtual memory
  • Physical memory
  • Load/store data
  • Random Access Memory (RAM)
  • Static RAM (SRAM)
  • Dynamic RAM (DRAM)
  • Memory array
  • Decoder
  • Wordline
  • Memory bank
  • Sense amplifier

Lecture 24 (25.05 Fri.)

  • Destructive reads
  • Refresh
  • Capacitor and logic manufacturing technologies
  • DRAM vs SRAM
  • Mature and immature memory technologies
    • Flash
    • Phase Change Memory
    • Magnetic RAM
    • Resistive RAM
  • Memory hierarchy
  • Temporal locality
  • Spatial locality
  • Caching basics
  • Caching in a pipelined design
  • Hierarchical latency analysis
  • Access latency and miss penalty
  • Hit-rate, miss-rate
  • Prefetching
  • Cache line, cache block
  • Placement
  • Replacement
  • Granularity of management
  • Write policy
  • Separation of instruction and data
  • Tag store and data store
  • Cache bookkeeping
  • Tag - index - byte in block
  • Direct mapped cache
  • Conflict misses
  • Set associativity
  • Ways in cache
  • Fully associative cache
  • Degree of associativity
  • Insertion, promotion, and eviction (replacement)
  • Replacement policies
    • Random
    • FIFO
    • Least recently used
    • Not most recently used
    • Least frequently used
  • Implementing LRU
  • Set thrashing
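
The "tag - index - byte in block" decomposition above can be sketched for a direct-mapped cache; the block size and set count below are illustrative:

```python
# Address decomposition sketch for a direct-mapped cache:
# | tag | index | byte-in-block |. Sizes must be powers of two.
def split_address(addr, block_bytes=64, num_sets=128):
    offset_bits = block_bytes.bit_length() - 1   # log2(block size)
    index_bits = num_sets.bit_length() - 1       # log2(number of sets)
    offset = addr & (block_bytes - 1)
    index = (addr >> offset_bits) & (num_sets - 1)
    tag = addr >> (offset_bits + index_bits)
    return tag, index, offset
```

Two addresses with the same index but different tags compete for the same block; that is the source of conflict misses, which set associativity reduces.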

Lecture 25a (31.05 Thu.)

  • Cache Tag
  • Tag Store Entry
  • Valid Bit
  • Dirty Bit
  • Replacement Policy Bit
  • Write-Back Cache
  • Write-Through Cache
  • Cache Coherence
  • Cache Consistency
  • Write Combining
  • (No-)Allocate on Write Miss
  • First-Level Cache
  • Second-Level Cache
  • Last-Level Cache
  • Sub-blocked (Sectored) Caches
  • Instruction Cache
  • Data Cache
  • Unified Instruction and Data Cache
  • Cache Management Policy
  • Cache Hit/Miss Rate
  • Cache Block Size
  • Critical-Word First
  • Working Set
  • Set Associativity
  • Compulsory Cache Miss
  • Capacity Cache Miss
  • Conflict Cache Miss
  • Loop Interchange
  • Loop Fusion
  • Array Merging
  • Shared vs. Private Caches
  • Cache Contention
  • Performance Isolation
  • Quality of Service
  • Starvation
  • Dynamic Cache Partitioning

Lecture 25b (31.05 Thu.)

  • Virtual Memory
  • Physical Memory
  • Virtual Memory Address
  • Physical Memory Address
  • Code/Data Relocation
  • Memory Isolation
  • Memory Protection
  • Code/Data Sharing
  • Address Indirection
  • Virtual Address Translation
  • x86 Linear Address
  • Virtual Memory Page
  • Physical Memory Frame
  • Page Size
  • Page Table
  • Demand Paging
  • Page Replacement
  • Page Granularity
  • Virtual Page Number
  • Physical Frame Number
  • Page Fault
  • Translation Lookaside Buffer (TLB)
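
Virtual address translation (above) splits an address into a virtual page number and a page offset, then maps the VPN through the page table; a sketch assuming an illustrative 4 KiB page size:

```python
# Virtual-to-physical translation sketch with 4 KiB pages: the page
# offset is unchanged; the virtual page number (VPN) is looked up in
# the page table to obtain the physical frame number (PFN).
PAGE_SIZE = 4096                      # 4 KiB pages -> 12 offset bits

def translate(vaddr, page_table):
    vpn = vaddr // PAGE_SIZE
    offset = vaddr % PAGE_SIZE
    if vpn not in page_table:
        raise KeyError("page fault: VPN %d not mapped" % vpn)
    pfn = page_table[vpn]
    return pfn * PAGE_SIZE + offset
```

A TLB is simply a small cache of these VPN-to-PFN mappings so that the page table is not walked on every access.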
buzzword.txt · Last modified: 2019/02/12 16:34 by 127.0.0.1