Modern hardware is increasingly diverse, spanning CPUs, GPUs, FPGAs, and more, with novel architectures constantly emerging. To differentiate these devices, each is typically built around an architecture optimized for particular classes of application or patterns of parallelism. Numerous computational cores, varying degrees of hardware vectorization, and other architectural degrees of freedom exist across the many hardware options. The need to utilize this diverse hardware efficiently has led to the emergence of a wide variety of programming models, execution models, and languages, and has simultaneously produced a complex landscape of confused and often conflicting terminology and abstractions. This reality makes it challenging for developers to comprehend and then choose a programming model that fits their applications and mental model, particularly when more than one target architecture or vendor is of interest.

This talk strives to untangle the landscape of modern parallel programming models, to help developers understand how the models and options relate to each other, and to frame how to think about their specific algorithms when expressing them in code. Although experienced developers typically understand much of the terminology and the relationships between models, a holistic presentation of the material has strong value, as evidenced by feedback from parallel programming experts who have seen previews of this presentation.

To begin, a brief overview will frame parallel programming and offload compute programming models, followed by a characterization of the Single Program Multiple Data (SPMD) abstract model and the power it exhibits when mapping to multiple classes of architecture. We will discuss how fundamental design decisions within a compiler impact the mapping from source code to an underlying programming model, highlighting that the same code can be lowered to multiple models. This is particularly relevant in the presence of vector data types, which permit multiple interpretations and are a common cause of confusion. A core element of the presentation is a decomposition of how the programming model and the design assumptions of a compiler are ideally understood together by developers, to streamline the creation and tuning of performant code. SPMD and explicit Single Instruction Multiple Data (SIMD) programming models will be discussed relative to the Khronos OpenCL and SYCL standards, as well as to OpenMP and CUDA, with the aim of clarifying the concepts and models for developers working in specific languages.

The talk will conclude with an overview of an experimental extension to SYCL that proposes a mechanism for mixing SPMD and explicit SIMD programming styles with clear semantics and boundaries in code. The talk will show that providing clear transition points with well-defined semantics can enable expert tuning at the granularity of a single line of code, without breaking the SPMD programming abstraction used by the rest of a kernel.
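To make the vector-type ambiguity concrete, the following minimal SYCL sketch (illustrative only, not taken from the talk; the data sizes and kernel are hypothetical) shows a kernel whose sycl::float4 arithmetic a compiler could lower either across work-items or onto SIMD lanes within a single work-item:

    #include <sycl/sycl.hpp>
    #include <vector>

    int main() {
      constexpr size_t N = 1024;
      std::vector<sycl::float4> data(N, sycl::float4{1.f, 2.f, 3.f, 4.f});

      sycl::queue q;
      {
        sycl::buffer<sycl::float4, 1> buf{data.data(), sycl::range<1>{N}};
        q.submit([&](sycl::handler &h) {
          sycl::accessor acc{buf, h, sycl::read_write};
          // SPMD: the kernel body describes the work of ONE work-item,
          // and N logical instances of it are launched.
          h.parallel_for(sycl::range<1>{N}, [=](sycl::id<1> i) {
            // The float4 below admits two hardware mappings:
            //  1. Vectorize ACROSS work-items (SIMT-style), treating the
            //     vector as "horizontal" data private to this work-item.
            //  2. Map the four vector elements onto SIMD lanes WITHIN
            //     this work-item (explicit-SIMD-style).
            // The same source can be lowered to either model; the
            // compiler's choice changes how developers should reason
            // about performance.
            acc[i] = acc[i] * 2.0f;
          });
        });
      } // Buffer destruction waits for the kernel and copies results back.
    }

Which interpretation a given toolchain applies is exactly the kind of compiler design assumption that, per the discussion above, developers ideally understand alongside the programming model itself.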
Parallel programming models such as SPMD and SIMD are critical in the modern landscape of heterogeneous compute architectures. When coupled with decisions made during the implementation of specific compilers, developers face a complex task in understanding how concepts and hardware mappings interact. This talk describes the most common programming models exposed through SYCL, OpenCL, OpenMP, and CUDA, with the intent of clarifying misconceptions and confusion about the mapping of software to hardware. Attendees will leave the presentation with a holistic understanding of how SPMD and SIMD-like programming models fit together, and how they relate to the code that many of us write day to day.

Speaker: Michael Kinsner (Intel)
Co-Authors: Ben Ashbaugh, James Brodman, Greg Lueck, John Pennycook, and Roland Schulz (Intel)