Lecture 7

Edge Computing – Introductory Pre-Read

1. What is Edge Computing?

Edge computing refers to performing computation closer to where data is generated (sensors, users, devices), rather than relying solely on distant cloud servers. This reduces latency, lowers bandwidth usage, and improves responsiveness for real-time applications such as autonomous vehicles, IoT analytics, and AR/VR.
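The latency and bandwidth argument can be made concrete with a toy back-of-the-envelope calculation. All numbers below (RTTs, frame size, filtering ratio) are illustrative assumptions, not measurements:

```python
# Toy comparison of cloud vs. edge latency and uplink bandwidth.
# Every constant here is an assumed, illustrative value.

CLOUD_RTT_MS = 80.0   # assumed WAN round trip to a distant cloud region
EDGE_RTT_MS = 5.0     # assumed round trip to a nearby edge node
FRAME_KB = 500        # assumed size of one camera frame
FPS = 30              # frames per second from one sensor

def uplink_kbps(frames_per_sec: int, frame_kb: float) -> float:
    """Bandwidth needed to ship frames upstream, in kilobits per second."""
    return frames_per_sec * frame_kb * 8

raw = uplink_kbps(FPS, FRAME_KB)       # ship every frame to the cloud
filtered = raw * 0.02                  # edge discards ~98%, forwards 2%

print(f"cloud RTT: {CLOUD_RTT_MS} ms, edge RTT: {EDGE_RTT_MS} ms")
print(f"raw uplink: {raw / 1000:.1f} Mbit/s, "
      f"after edge filtering: {filtered / 1000:.1f} Mbit/s")
```

Even with these rough numbers, the pattern is clear: filtering at the edge cuts both the round trip (tens of milliseconds down to single digits) and the upstream bandwidth by one to two orders of magnitude.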

Key outcomes for students:

  • Understand why the cloud is not enough for latency-critical systems.
  • Recognize where “the edge” fits in the cloud–edge–device continuum.
  • Identify typical edge hardware, software stacks, and deployment architectures.

2. Foundational Topics to Review

2.1 Cloud Computing Basics

  • Centralized architecture
  • Virtual machines vs. containers
  • Scalability and elasticity
  • Latency constraints in wide-area networks

2.2 Internet of Things (IoT) Fundamentals

  • Sensor/actuator data lifecycles
  • Resource constraints (power, compute, bandwidth)
  • Local vs. remote processing
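The "local vs. remote processing" trade-off above can be sketched in a few lines: an edge node reduces a window of raw sensor samples to a compact summary and forwards only that summary upstream. The function and field names are illustrative, not from any particular framework:

```python
# Sketch: process sensor readings locally, forward only a summary.
from statistics import mean

def summarize_window(samples: list[float]) -> dict:
    """Reduce a window of raw samples to a compact summary dict."""
    return {
        "n": len(samples),
        "mean": mean(samples),
        "min": min(samples),
        "max": max(samples),
    }

window = [21.4, 21.6, 21.5, 29.9, 21.5]  # e.g. temperature readings
summary = summarize_window(window)
# One small dict crosses the network instead of len(window) raw samples.
print(summary)
```

The anomalous 29.9 reading still surfaces upstream via the `max` field, which is the usual argument for summarizing at the edge rather than simply downsampling.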

2.3 Distributed Systems Concepts

  • Client–server vs. distributed execution
  • Data locality and communication overhead
  • Eventual consistency vs. strong consistency
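For the consistency bullet, one minimal building block of eventually consistent replication is a last-writer-wins (LWW) register: replicas exchange (timestamp, value) pairs and keep the newer write, so they converge regardless of merge order. This is a sketch of the general idea, not any specific system's protocol:

```python
# Minimal last-writer-wins (LWW) register merge. Timestamps here are
# logical counters; real systems use vector or hybrid clocks.
def merge(a: tuple[int, str], b: tuple[int, str]) -> tuple[int, str]:
    """Each replica state is (timestamp, value); keep the newer write."""
    return a if a[0] >= b[0] else b

replica1 = (3, "v3")
replica2 = (5, "v5")

# Merging is commutative, so both replicas converge to the latest write.
assert merge(replica1, replica2) == merge(replica2, replica1) == (5, "v5")
```

Strong consistency, by contrast, would block the write at timestamp 5 until a quorum of replicas acknowledged it.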

2.4 Networking Essentials for Edge

  • 4G/5G/6G edge connectivity
  • Round-trip time (RTT) and jitter
  • Multi-access edge computing (MEC)
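RTT and jitter are easy to compute from a handful of ping samples. The sketch below uses the standard deviation of RTT samples as a simple jitter metric (other definitions exist, e.g. RTP's interarrival jitter); the sample values are illustrative:

```python
# Sketch: RTT statistics and jitter from per-ping round-trip times (ms).
from statistics import mean, pstdev

rtts_ms = [5.1, 5.3, 4.9, 6.0, 5.2]  # e.g. repeated pings to an edge node

avg_rtt = mean(rtts_ms)
jitter = pstdev(rtts_ms)  # one common definition: std. dev. of RTT samples

print(f"avg RTT: {avg_rtt:.2f} ms, jitter: {jitter:.2f} ms")
```

For latency-critical workloads, a low average RTT with high jitter can be worse than a slightly higher but stable RTT, which is why both appear in the review list above.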

3. Recommended Pre-Read Articles & Papers

Introductory

  1. "The Emergence of Edge Computing" – IEEE Internet Computing
    A short overview of motivations and early architectures.

  2. NVIDIA Edge Computing Overview (Developer Docs)
    Clear visual explanations of edge hardware and applications.