AMD 2014 Annual Report

Computing and Graphics
The x86 Microprocessor and Chipset Markets
Central Processing Unit (CPU). A microprocessor is an IC that serves as the CPU of a computer. It
generally consists of hundreds of millions or billions of transistors that process data and control other devices in
the system, acting as the “brain” of the computer. The performance of a microprocessor is a critical factor
impacting the performance of computing and entertainment platforms, such as desktop PCs, notebooks, tablets
and workstations. The principal elements used to measure CPU performance are work-per-cycle (or how many
instructions are executed per cycle), clock speed (representing the rate at which a CPU’s internal logic operates,
measured in units of gigahertz, or billions of cycles per second) and power consumption. Other factors impacting
microprocessor performance include the number and type of cores in a microprocessor, the bit rating of the
microprocessor, memory size and data access speed.
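As an illustrative sketch (not figures from the report), peak instruction throughput can be approximated as work-per-cycle multiplied by clock speed; the values below are hypothetical examples, not AMD product specifications:

    # Illustrative only: approximate peak throughput of a hypothetical CPU.
    instructions_per_cycle = 4          # work-per-cycle (IPC), assumed value
    clock_speed_hz = 3.0e9              # 3 GHz clock, assumed value
    peak_instructions_per_second = instructions_per_cycle * clock_speed_hz
    print(f"{peak_instructions_per_second:.2e} instructions per second")  # 1.20e+10

Real-world performance also depends on the other factors noted above, such as core count, memory size and data access speed.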
Developments in IC design and manufacturing process technologies have resulted in significant advances in
microprocessor performance. As businesses and consumers require greater performance from their computer
systems due to the growth of digital data and increasingly sophisticated software applications, semiconductor
companies are designing and developing multi-core microprocessors, where multiple processor cores are placed
on a single die or in a single processor. Multi-core microprocessors offer enhanced overall system performance
and efficiency because computing tasks can be spread across two or more processing cores, each of which can
execute a task at full speed. Multi-core microprocessors can increase performance of a computer system without
greatly increasing the total amount of power consumed and the total amount of heat emitted. Businesses and
consumers also require computer systems with improved power management technology, which helps them to
reduce the power consumption of their computer systems and lower total cost of ownership.
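A minimal sketch of how a workload can be spread across multiple processing cores, using Python's standard multiprocessing module rather than any AMD-specific software, is shown below:

    # Minimal sketch: spreading independent, compute-bound tasks across CPU cores.
    from multiprocessing import Pool, cpu_count

    def simulate_task(n):
        # Stand-in for a compute-bound task; here, a simple sum of squares.
        return sum(i * i for i in range(n))

    if __name__ == "__main__":
        workloads = [200_000] * 8                  # eight independent tasks
        with Pool(processes=cpu_count()) as pool:  # one worker per available core
            results = pool.map(simulate_task, workloads)
        print(len(results), "tasks completed across", cpu_count(), "cores")

Because each task runs on its own core, total throughput scales with the number of cores rather than with clock speed alone.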
Accelerated Processing Unit (APU) and System-on-Chip (SoC). Consumers increasingly demand
computing devices, including desktop and notebook PCs and smaller form factors, such as tablets and 2-in-1s
(PCs that can function as either a notebook or a tablet), with improved end-user experience, system performance
and energy efficiency. Consumers also continue to demand thinner and lighter mobile devices, with better
performance and longer battery life. We believe that a computing architecture that optimizes the use of its
components can provide these improvements.
An APU is a processing unit that integrates a CPU and a GPU onto one chip (or one piece of silicon), along
with, in some cases, other special-purpose components. This integration enhances system performance by
“offloading” selected tasks to the best-suited component (i.e., the CPU or the GPU) to optimize component use,
increasing the speed of data flow between the CPU and GPU through shared memory and allowing the GPU to
function as both a graphics engine and an application accelerator. Having the CPU and GPU on the same chip
also typically improves energy efficiency by, for example, eliminating connections between discrete chips.
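A conceptual sketch of this kind of offloading appears below; cpu_multiply, gpu_multiply and dispatch_multiply are hypothetical placeholders used for illustration, not an actual AMD or HSA programming interface:

    # Conceptual sketch: routing work to the best-suited component.
    # cpu_multiply and gpu_multiply are placeholders for real CPU and GPU code
    # paths; no actual GPU API is invoked in this sketch.

    def cpu_multiply(a, b):
        # Latency-oriented, serial path (placeholder implementation).
        return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)] for row in a]

    def gpu_multiply(a, b):
        # Throughput-oriented, data-parallel path; in this sketch it simply
        # falls back to the CPU implementation.
        return cpu_multiply(a, b)

    def dispatch_multiply(a, b, gpu_present=True, parallel_threshold=64):
        # Offload large, highly parallel work to the GPU; keep small or
        # branch-heavy work on the CPU.
        if gpu_present and len(a) >= parallel_threshold:
            return gpu_multiply(a, b)
        return cpu_multiply(a, b)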
An SoC is a type of IC with a CPU, GPU and other components, such as a memory controller and peripheral
management, comprising a complete computing system on a single chip. By combining all of these elements on a
single chip, an SoC improves system performance and energy efficiency in much the same way an APU does.
Heterogeneous System Architecture (HSA) is an industry standard defining an overarching design in which
combinations of CPU and GPU processor cores operate as a unified, integrated engine that shares system
responsibilities and resources. We are a founding member of the HSA Foundation, a non-profit organization
established to define and promote this open standards-based approach to heterogeneous computing.
Heterogeneous computing elevates the GPU to the same level as the CPU for memory access,
queuing and execution. In other words, rather than having a CPU as a master and various other processors as
subordinates, these CPU and GPU processing units can be referred to as “compute cores” (where each core is
capable of running at least one process in its own context and virtual memory space, independently from other
cores). Heterogeneous computing also allows software programmers to develop applications that more fully
utilize the compute capabilities of APUs and SoCs.
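A minimal sketch of this shared-queue model, in illustrative Python rather than the actual HSA runtime, is shown below; the CPU and GPU "compute cores" drain one common work queue instead of a CPU master handing work to subordinate devices:

    # Illustrative sketch of the HSA-style queuing model (not the HSA runtime API):
    # heterogeneous "compute cores" pull tasks from one shared queue.
    import queue
    import threading

    work = queue.Queue()
    for task_id in range(16):
        work.put(task_id)

    def compute_core(name):
        while True:
            try:
                task = work.get_nowait()
            except queue.Empty:
                return
            _ = sum(i for i in range(10_000))  # placeholder work in the core's own context
            print(f"{name} finished task {task}")

    cores = [threading.Thread(target=compute_core, args=(f"cpu-core-{i}",)) for i in range(2)]
    cores += [threading.Thread(target=compute_core, args=(f"gpu-core-{i}",)) for i in range(2)]
    for c in cores:
        c.start()
    for c in cores:
        c.join()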