  
===== Lecture 12a (30.10 Fri.) =====
  * Error-correcting code
  * Hamming code (see the sketch after this list)
  * BCH code
  * Reed-Solomon code
  * On-Die ECC
  * Rank-Level ECC
  * ECC encoder
  * ECC decoder
  * Parity-check bits
  * Error syndrome
  * Error characterization
  * SAT solver
  * Data-retention error
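The Hamming code, parity-check bits, and error syndrome entries lend themselves to a small worked example. Below is a minimal sketch of Hamming(7,4) encoding and single-bit error correction; the bit layout and variable names are illustrative, not taken from the lecture.

<code c>
#include <stdio.h>

/* Minimal Hamming(7,4) sketch: encode 4 data bits into a 7-bit codeword,
 * then recompute the parity checks on a (possibly corrupted) codeword.
 * A non-zero error syndrome gives the 1-based position of a single flipped bit. */

/* codeword layout, positions 1..7: p1 p2 d1 p3 d2 d3 d4 (index 0 unused) */
static void encode(const int d[4], int c[8]) {
    c[3] = d[0]; c[5] = d[1]; c[6] = d[2]; c[7] = d[3];
    c[1] = c[3] ^ c[5] ^ c[7];   /* p1 checks positions 1,3,5,7 */
    c[2] = c[3] ^ c[6] ^ c[7];   /* p2 checks positions 2,3,6,7 */
    c[4] = c[5] ^ c[6] ^ c[7];   /* p3 checks positions 4,5,6,7 */
}

static int syndrome(const int c[8]) {
    int s1 = c[1] ^ c[3] ^ c[5] ^ c[7];
    int s2 = c[2] ^ c[3] ^ c[6] ^ c[7];
    int s3 = c[4] ^ c[5] ^ c[6] ^ c[7];
    return (s3 << 2) | (s2 << 1) | s1;    /* 0 = no error detected */
}

int main(void) {
    int d[4] = {1, 0, 1, 1};
    int c[8] = {0};
    encode(d, c);
    c[6] ^= 1;                             /* inject a single-bit error at position 6 */
    int pos = syndrome(c);
    printf("error syndrome points to position %d\n", pos);   /* prints 6 */
    if (pos) c[pos] ^= 1;                  /* correct the flipped bit */
    return 0;
}
</code>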
===== Lecture 12b (30.10 Fri.) =====
  * Capacity-latency tradeoff
...
  * Parallel Reduction
  
===== Lecture 13 (05.11 Thu.) =====
  * Memory Interference
  * Prioritization
  * Data Mapping
  * Core/Source Throttling
  * Application Thread Scheduling
  * Memory Service Guarantees
  * Quality of Service
  * QoS-Aware Memory Systems
  * Stall-Time Fair Memory Scheduling
  * Parallelism-Aware Batch Scheduling
  * PAR-BS
  * ATLAS Memory Scheduler
  * BLISS (Blacklisting Memory Scheduler)
  * Thread Cluster Memory Scheduling
  * TCM
  * Throughput vs. Fairness
  * Clustering Threads
  * STFM
  * FR-FCFS (see the sketch after this list)
  * Staged Memory Scheduling
  * SMS
  * DASH
  * Current SoC Architectures
  * Strong Memory Service Guarantees
  * Predictable Performance
  * Handling Memory Interference In Multithreaded Applications
  * Barriers
  * Critical Sections
  * Data mapping
  * Memory Channel Partitioning
  * Parallel Application Memory Scheduling
  * Fairness via Source Throttling
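FR-FCFS (First-Ready, First-Come-First-Served) above can be summarized in a few lines of C. This is a minimal sketch of the selection rule only, with hypothetical request and bank structures; a real scheduler also handles timing constraints and command types.

<code c>
#include <stddef.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative-only sketch of FR-FCFS request selection:
 * 1) prefer requests that hit the currently open row of their bank (row hits),
 * 2) among equals, prefer the oldest request (first-come first-served).
 * Structures and field names are assumptions, not from the lecture. */

typedef struct {
    int      bank;
    uint64_t row;
    uint64_t arrival_time;   /* lower = older */
} mem_request_t;

/* open_row[b] holds the row currently open in bank b */
int frfcfs_pick(const mem_request_t *q, size_t n, const uint64_t *open_row) {
    int best = -1;
    bool best_hit = false;
    for (size_t i = 0; i < n; i++) {
        bool hit = (q[i].row == open_row[q[i].bank]);
        if (best < 0 ||
            (hit && !best_hit) ||                    /* a row hit beats a row miss */
            (hit == best_hit &&
             q[i].arrival_time < q[best].arrival_time)) {  /* older request wins ties */
            best = (int)i;
            best_hit = hit;
        }
    }
    return best;   /* index of the request to schedule next, or -1 if the queue is empty */
}
</code>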

===== Lecture 14 (12.11 Thu.) =====
  * Target metric
  * Theoretical proof
  * Analytical modeling/estimation
  * Abstraction
  * Accuracy
  * Workload
  * RTL simulations
  * Design choices
  * Cycle-level accuracy
  * Design space exploration
  * Flexibility
  * High-level simulations
  * Low-level models
  * Ramulator
  * Modular
  * Extensible
  * IPC (instructions per cycle)
  * 3D-stacked DRAM
  * DDR3
  * GDDR5
  * HBM
  * HMC
  * Wide I/O
  * LPDDR
  * Spatial locality
  * Bank-level parallelism

===== Lecture 15 (13.11 Fri.) =====
  * Emerging memory technologies
  * Charge memory
  * Resistive memory technologies
  * Phase Change Memory (PCM)
  * STT-MRAM
  * Memristor
  * RRAM/ReRAM
  * Non-volatile
  * Multi-Level Cell PCM (MLC-PCM)
  * Endurance
  * Reliability
  * Intel Optane Memory
  * 3D-XPoint Technology
  * Read Asymmetry
  * Magnetic Tunnel Junction (MTJ) device
  * Hybrid main memory
  * DRAM buffer/DRAM cache
  * Data placement
  * Row buffer
  * Memory-Level Parallelism (MLP)
  * Translation Lookaside Buffer (TLB)
  * Page Table
  * In-memory bulk bitwise operations
  * In-memory crossbar array operations
  * Analog computation
  * Digital to Analog Converter (DAC)
  * Analog to Digital Converter (ADC)
  * NVM-based PIM system

===== Lecture 16a (19.11 Thu.) =====
  * Emerging memory technology
  * Flash memory
  * Memory-centric system design
  * Phase change memory
  * Charge memory
  * Resistive memory
  * Multi-level cell
  * Spin-Transfer Torque Magnetic RAM (STT-MRAM)
  * Memristors
  * Resistive RAM (RRAM or ReRAM)
  * Intel 3D XPoint
  * Capacity-latency trade-off
  * Capacity-reliability trade-off
  * Endurance
  * Magnetic Tunnel Junction (MTJ)
  * Hybrid memory
  * Write filtering
  * Data placement
  * Data access pattern
  * Row-buffer locality
  * Overall system performance impact
  * Memory-Level Parallelism (MLP)
  * Utility-based hybrid memory management
  * Hybrid Memory Systems
  * Large (DRAM) Cache
  * TIMBER
  * Two-Level Memory/Storage model
  * Volatile data
  * Persistent data
  * Single-level store
  * Unified Memory and storage
  * The Persistent Memory Manager (PMM)
  * ThyNVM

===== Lecture 16b (19.11 Thu.) =====
  * Heterogeneity
  * Asymmetry in design
  * Amdahl's Law
  * Synchronization overhead
  * Load imbalance overhead
  * Resource sharing overhead
  * IBM Power4
  * IBM Power5
  * Niagara Processor
  * Performance vs. parallelism
  * Asymmetric Chip Multiprocessor (ACMP)
  * MorphCore
===== Lecture 17 (20.11 Fri.) =====
  * Amdahl's Law (see the worked example after this list)
  * Parallelizable fraction of a program
  * Serial bottleneck
  * Synchronization overhead
  * Load imbalance overhead
  * Resource sharing overhead
  * Critical section
  * Asymmetric multi-core (ACMP)
  * Symmetric CMP (SCMP)
  * Accelerated Critical Sections (ACS)
  * Selective Acceleration of Critical Sections (SEL)
  * Critical Section Request Buffer (CSRB)
  * Cache misses for private data
  * Cache misses for shared data
  * Equal-area comparison
  * Bottleneck Identification and Scheduling (BIS)
  * Thread waiting cycles (TWC)
  * Bottleneck Table (BT)
  * Scheduling Buffers (SB)
  * Acceleration Index Tables (AIT)
  * The critical path
  * Feedback-Directed Pipelining (FDP)
  * Comprehensive fine-grained bottleneck acceleration
  * Lagging threads
  * Multiple applications
  * Criticality of code segments
  * Utility-Based Acceleration (UBA)
  * Global criticality of the segment
  * Fraction of execution time spent on segment
  * Local speedup of the segment
  * Data marshaling
  * Staged execution model
  * Segment spawning
  * Producer-Consumer Pipeline Parallelism
  * Locality of inter-segment data
  * Generator instruction
  * Marshal buffer
  * Pipeline parallelism
  * Aggressive stream prefetcher
  * Energy expended per instruction (EPI)
  * Dynamic voltage frequency scaling (DVFS)
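Amdahl's Law, the parallelizable fraction, and the serial bottleneck above reduce to one formula: speedup(n) = 1 / ((1 - p) + p/n) for parallelizable fraction p on n cores. A small worked example with illustrative numbers (the fraction 0.95 is an assumption, not a lecture value):

<code c>
#include <stdio.h>

/* Amdahl's Law: with parallelizable fraction p and n cores,
 * speedup(n) = 1 / ((1 - p) + p / n).
 * As n grows, speedup is bounded by 1 / (1 - p): the serial bottleneck. */
static double amdahl_speedup(double p, double n) {
    return 1.0 / ((1.0 - p) + p / n);
}

int main(void) {
    double p = 0.95;                        /* 95% of the work parallelizes */
    int cores[] = {1, 4, 16, 64, 1024};
    for (int i = 0; i < 5; i++)
        printf("n = %4d  speedup = %.2f\n", cores[i], amdahl_speedup(p, cores[i]));
    printf("upper bound (n -> inf) = %.2f\n", 1.0 / (1.0 - p));   /* 20.00 */
    return 0;
}
</code>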

===== Lecture 18 (26.11 Thu.) =====

  * Memory latency
  * DRAM Latency
  * Latency Reduction
  * Latency Tolerance
  * Latency Hiding
  * Caching
  * Prefetching
  * Multithreading
  * Out-of-order Execution
  * Software prefetching
  * Hardware prefetching
  * Execution-based prefetchers
  * Next-Line Prefetchers
  * Stride Prefetchers (see the sketch after this list)
  * Stream Buffers
  * Feedback-Directed Prefetching
  * Content Directed Prefetching
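The stride prefetcher entry above is easy to make concrete. The following is a minimal, illustrative sketch of a PC-indexed stride prefetcher; the table size, confidence threshold, and the issue_prefetch() hook are assumptions, not details from the lecture.

<code c>
#include <stdint.h>

/* Illustrative-only sketch of a PC-indexed stride prefetcher. Each table entry
 * remembers the last address and stride seen for a load PC; two matching
 * strides in a row trigger a prefetch of addr + stride. */

#define TABLE_SIZE 256

typedef struct {
    uint64_t last_addr;
    int64_t  stride;
    int      confidence;      /* 0..2; prefetch when >= 2 */
} stride_entry_t;

static stride_entry_t table[TABLE_SIZE];

/* Hypothetical hook into the memory system; a no-op stub here. */
static void issue_prefetch(uint64_t addr) { (void)addr; }

void stride_prefetcher_access(uint64_t pc, uint64_t addr) {
    stride_entry_t *e = &table[pc % TABLE_SIZE];
    int64_t stride = (int64_t)(addr - e->last_addr);

    if (stride != 0 && stride == e->stride) {
        if (e->confidence < 2) e->confidence++;   /* same stride again: gain confidence */
    } else {
        e->stride = stride;
        e->confidence = 0;                        /* stride changed: re-train */
    }
    e->last_addr = addr;

    if (e->confidence >= 2)
        issue_prefetch(addr + (uint64_t)e->stride);   /* prefetch one stride ahead */
}
</code>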

===== Lecture 19a (27.11 Fri.) =====

  * Execution-based Prefetcher
  * Speculative thread
  * Thread-Based Pre-Execution
  * Runahead Execution
  * Address-Value Delta (AVD) Prediction
  * Multi-Core Issues in Prefetching
  * Feedback Directed Prefetching
  * Bandwidth-Efficient Prefetching
  * Coordinated Prefetcher Control
  * Prefetching in GPUs

===== Lecture 19b (27.11 Fri.) =====

  * Multiprocessing
  * Memory Consistency
  * Cache Coherence
  * SISD
  * SIMD
  * MISD
  * MIMD
  * Parallelism
  * Instruction Level Parallelism
  * Data Parallelism
  * Task Level Parallelism
  * Loosely Coupled Multiprocessors
  * Tightly Coupled Multiprocessors
  * Hardware-based Multithreading
  * Parallel Speedup
  * Superlinear Speedup
  * Utilization
  * Redundancy
  * Efficiency
  * Amdahl’s Law
  * Sequential Bottleneck
  * Synchronization
  * Load Imbalance
  * Resource Contention
  * Critical Sections
  * Barriers
  * Stages of Pipelined Programs

===== Lecture 20 (03.12 Thu.) =====

  * Memory ordering
  * Memory consistency
  * Parallel computer architecture
  * Multiprocessor operation
  * MIMD (multiple instruction, multiple data) machine
  * Performance-correctness trade-off
  * Cache coherence
  * Ordering of operations
  * Local ordering
  * Global ordering
  * Memory fence instruction
  * Out-of-order execution
  * Mutual exclusion (see the sketch after this list)
  * Protecting shared data
  * Critical section
  * Sequential consistency
  * Weaker memory consistency
  * Dataflow processor
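Mutual exclusion, critical sections, and memory fences above fit together in one tiny example. A minimal sketch using C11 atomics (the choice of API is an assumption; the lecture does not prescribe one):

<code c>
#include <stdatomic.h>

/* Minimal sketch of mutual exclusion for a critical section: a test-and-set
 * spinlock built on C11 atomics. Acquire/release ordering plays the role of
 * the memory fences that keep accesses to the shared data from being
 * reordered out of the critical section. */

static atomic_flag lock = ATOMIC_FLAG_INIT;
static int shared_counter = 0;              /* the protected shared data */

static void lock_acquire(void) {
    /* spin until we atomically change the flag from clear to set */
    while (atomic_flag_test_and_set_explicit(&lock, memory_order_acquire))
        ;                                   /* busy-wait */
}

static void lock_release(void) {
    atomic_flag_clear_explicit(&lock, memory_order_release);
}

void increment(void) {
    lock_acquire();
    shared_counter++;                       /* critical section: only one thread at a time */
    lock_release();
}
</code>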

===== Lecture 21 (04.12 Fri.) =====

  * Cache coherence
  * Memory consistency
  * Shared memory model
  * Software coherence
    * Coarse-grained (page-level)
    * Non-cacheable
    * Fine-grained (cache flush)
  * Hardware coherence
  * Valid/invalid
  * Write propagation
  * Write serialization
  * Update vs. Invalidate
  * Snoopy bus
  * Directory
    * Exclusive bit
  * Directory optimizations (bypassing)
  * Snoopy cache
  * Shared bus
  * VI protocol
  * MSI (Modified, Shared, Invalid) (see the sketch after this list)
  * Exclusive state
  * MESI (Modified, Exclusive, Shared, Invalid)
  * Illinois Protocol (MESI)
  * Broadcast
  * Bus request
  * Downgrade/upgrade
  * Snoopy invalidation
  * Cache-to-cache transfer
  * Writeback
  * MOESI (Modified, Owned, Exclusive, Shared, Invalid)
  * Directory coherence
  * Race conditions
  * Totally-ordered interconnect
  * Directory-based protocols
  * Set inclusion test
  * Linked list
  * Bloom filters
  * Contention resolution
  * Ping-ponging
  * Synchronization
  * Shared-data-structure
  * Token Coherence
  * Coherence for NDAs (Near-Data Accelerators)
  * Optimistic execution
  * Signature
  * Commit/re-execute
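The MSI entry above can be illustrated with its per-line state machine. The following is a minimal, illustrative sketch of snoopy MSI transitions; the event names and the bus_send() hook are assumptions, and writebacks, races, and transient states are omitted.

<code c>
/* Illustrative-only sketch of the per-cache-line state machine of a snoopy
 * MSI protocol: transitions on local processor reads/writes and on requests
 * snooped from the shared bus. */

typedef enum { INVALID, SHARED, MODIFIED } msi_state_t;

typedef enum {
    PR_RD, PR_WR,             /* requests from the local processor   */
    BUS_RD, BUS_RDX           /* requests snooped from the shared bus */
} msi_event_t;

/* Hypothetical hook that broadcasts a request on the bus; a no-op stub here. */
static void bus_send(msi_event_t e) { (void)e; }

msi_state_t msi_next_state(msi_state_t s, msi_event_t e) {
    switch (s) {
    case INVALID:
        if (e == PR_RD) { bus_send(BUS_RD);  return SHARED;   }  /* read miss  */
        if (e == PR_WR) { bus_send(BUS_RDX); return MODIFIED; }  /* write miss */
        return INVALID;
    case SHARED:
        if (e == PR_WR)  { bus_send(BUS_RDX); return MODIFIED; } /* upgrade to write */
        if (e == BUS_RDX) return INVALID;       /* another core wants exclusive access */
        return SHARED;                          /* PR_RD and BUS_RD: stay shared */
    case MODIFIED:
        if (e == BUS_RD)  return SHARED;        /* downgrade; data supply not shown */
        if (e == BUS_RDX) return INVALID;       /* snoopy invalidation */
        return MODIFIED;
    }
    return s;
}
</code>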