AMD 2015 Annual Report, page 11

Heterogeneous System Architecture (HSA) is an industry standard that provides an overarching design for
combinations of CPU and GPU processor cores to operate as a unified, integrated engine that shares system
responsibilities and resources. We are a founding member of the HSA Foundation, a non-profit organization
established to define and promote this open, standards-based approach to heterogeneous computing.
Heterogeneous computing elevates the GPU to the same level as the CPU for memory access, queuing and
execution, making it a true “compute core.” This capability allows software programmers to develop
applications that more fully utilize the capabilities of the graphics compute core.
Graphics Processing Unit (GPU). A GPU is a programmable logic chip that renders images, animations
and video and is increasingly being used to handle general computing tasks. GPUs are located on plug-in
cards, as a discrete processor or in a chipset on the motherboard, or on the same chip as the CPU. GPUs
perform parallel operations on data and are essential to presenting computer-generated images on the
screen and to decoding and rendering animations and video. The more sophisticated the GPU, the higher
the resolution and the faster and smoother the motion. GPUs on stand-alone cards or discrete GPUs on the
motherboard typically include their own memory, while GPUs in the chipset or on the CPU chip share main
memory with the CPU.
In addition to graphics processing, GPUs perform parallel operations on multiple sets of data and are
increasingly used as vector processors for non-graphics applications that require repetitive computations,
such as supercomputing, deep neural networks and various embedded applications.
Chipset. A chipset is a generic term referring to a collection of system-level components that manage data
flow among a microprocessor or microprocessors, memory and peripherals (such as CD-ROM drives, DVD
drives and USB peripherals). Chipsets perform essential logic functions, balance a system’s performance and
provide system control and power management functions. Some chipsets have graphics capabilities by including
an integrated graphics processor (IGP) within the chipset; a chipset with an IGP is known as an IGP chipset.
IGP chipsets can offer a lower-cost, reduced-power alternative to a discrete GPU and are often used in smaller
form factors. Systems that are powered by an APU or by a CPU and discrete GPU combination often do not have
a chipset and instead use an AMD Controller Hub chip to perform the functions of a chipset. As a result, we
believe that either an APU and AMD Controller Hub chip combination or an SoC, which already includes a
chipset, will eventually displace IGP chipsets in the market.
Our x86 Microprocessor and Chipset Products
Our microprocessors are incorporated into computing platforms, which are a collection of technologies that
are designed to work together to provide a more complete computing solution and to enable and advance the
computing components. We believe that integrated, balanced computing platforms consisting of microprocessors,
chipsets and GPUs (either as discrete GPUs or integrated into an APU or SoC) that work together at the system
level bring end users improved system stability, increased performance and enhanced power efficiency. In
addition, we believe our customers also benefit from an all-AMD platform (consisting of an APU or CPU, a
discrete GPU and a chipset or an AMD Fusion Controller Hub chip), as we are able to optimize interoperability,
provide our customers a single point of contact for the key platform components and enable them to bring the
platforms to market faster in a variety of client and server system form factors.
We currently base our microprocessors and chipsets on the x86 instruction set architecture and AMD’s
Direct Connect Architecture, which connects an on-chip memory controller and input/output (I/O) channels
directly to one or more microprocessor cores. We typically integrate two or more processor cores onto a single
die, and each core has its own dedicated cache, which is memory that is located on the semiconductor die,
permitting quick access to frequently used data and instructions. Some of our microprocessors have additional
levels of cache such as L2, or second-level cache, and L3, or third-level cache, to enable faster data access and
higher performance.