Accelerated Processing Unit (APU)
While general purpose computer architectures based on the x86 architecture are sufficient for many
customers, we believe that an architecture that optimizes the use of a CPU and GPU for a given workload can
provide a substantial improvement in user experience, performance and energy efficiency. As the volume of
digital media increases, we believe end users can benefit from an accelerated computing architecture. An
accelerated computing architecture enables “offloading” of selected tasks, thereby optimizing the use of multiple
computational units such as the CPU and GPU, depending on the application or workload. For example, serial
workloads are better suited for CPUs, while highly parallel tasks may be better performed by a GPU. Our AMD
APU combines our CPU and GPU onto a single piece of silicon. We believe that high performance computing
workloads, workloads that are visual in nature and even traditional applications such as photo and video editing
or other multi-media applications can benefit from our accelerated computing architecture.
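
As a simplified illustration of this division of labor, the C++ sketch below keeps a dependency-carried (serial) calculation on a single thread while spreading an independent, element-wise (parallel) calculation across worker threads. The threads merely stand in for a GPU's many compute units; the example is not drawn from AMD's products or software.

    #include <algorithm>
    #include <cstddef>
    #include <thread>
    #include <vector>

    // Serial workload: each iteration depends on the previous result, so it
    // maps naturally onto a single high-performance CPU core.
    double serial_chain(const std::vector<double>& x) {
        double acc = 0.0;
        for (double v : x) acc = acc * 0.5 + v;   // loop-carried dependency
        return acc;
    }

    // Data-parallel workload: every element is independent, so the work can
    // be split across many compute units (threads here stand in for a GPU).
    void parallel_scale(std::vector<double>& x, double k, unsigned workers) {
        std::vector<std::thread> pool;
        const std::size_t chunk = (x.size() + workers - 1) / workers;
        for (unsigned w = 0; w < workers; ++w) {
            const std::size_t begin = w * chunk;
            const std::size_t end = std::min(x.size(), begin + chunk);
            if (begin >= end) break;
            pool.emplace_back([&x, k, begin, end] {
                for (std::size_t i = begin; i < end; ++i) x[i] *= k;
            });
        }
        for (std::thread& t : pool) t.join();
    }

    int main() {
        std::vector<double> data(1 << 20, 1.0);
        unsigned workers = std::thread::hardware_concurrency();
        if (workers == 0) workers = 4;            // count may be unknown
        parallel_scale(data, 2.0, workers);       // the "offloaded" parallel part
        return serial_chain(data) > 0.0 ? 0 : 1;  // the serial part stays on one core
    }
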
Microprocessor Products
We currently design, develop and sell microprocessor products for desktop PCs, notebooks, tablets, hybrids,
servers and embedded products. Our microprocessors and chipsets are incorporated into computing platforms
that also include GPUs and core software to enable and advance the computing components. A platform is a
collection of technologies that are designed to work together to provide a more complete computing solution. We
believe that integrated, balanced platforms consisting of microprocessors, chipsets and GPUs that work together
at the system level bring end users improved system stability, increased performance and enhanced power
efficiency. Furthermore, by combining all of these elements onto a single piece of silicon as an APU or an SOC,
we believe system performance and power efficiency are further improved. An SOC is a type of IC with a CPU,
GPU and other components, such as a memory controller and peripheral management, comprising a complete
computing system on a single chip. In addition to the enhancements at the end-user level, we believe our
customers also benefit from an all-AMD platform, as we are able to provide them with a single point of contact
for the key platform components and enable them to bring the platforms to market faster in a variety of client and
server system form factors.
Our CPUs and APUs are currently manufactured primarily using 65 nanometer (nm), 45nm, 40nm, 32nm
and 28nm process technologies. We currently base our microprocessors and chipsets on the x86 instruction set
architecture and AMD’s Direct Connect Architecture, which connects an on-chip memory controller and input/
output, or I/O, channels directly to one or more microprocessor cores. We typically integrate two or more
processor cores onto a single die, and each core has its own dedicated cache, which is memory that is located on
the semiconductor die, permitting quicker access to frequently used data and instructions. Some of our
microprocessors have additional levels of cache such as L2, or second-level cache, and L3, or third-level cache,
to enable faster data access and higher performance.
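
The value of on-die caches can be illustrated with a generic, non-AMD-specific C++ sketch: walking the same matrix in row-major order reuses data already resident in cache lines, while walking it column-major forces far more trips to main memory, which typically shows up as a large timing difference.

    #include <chrono>
    #include <cstddef>
    #include <cstdio>
    #include <vector>

    int main() {
        const std::size_t n = 4096;
        std::vector<int> m(n * n, 1);
        long long sum = 0;

        auto t0 = std::chrono::steady_clock::now();
        for (std::size_t i = 0; i < n; ++i)       // row-major walk: consecutive
            for (std::size_t j = 0; j < n; ++j)   // elements share cache lines
                sum += m[i * n + j];
        auto t1 = std::chrono::steady_clock::now();

        for (std::size_t j = 0; j < n; ++j)       // column-major walk: large
            for (std::size_t i = 0; i < n; ++i)   // strides defeat the cache
                sum += m[i * n + j];
        auto t2 = std::chrono::steady_clock::now();

        using ms = std::chrono::milliseconds;
        std::printf("row-major:    %lld ms\n",
                    static_cast<long long>(std::chrono::duration_cast<ms>(t1 - t0).count()));
        std::printf("column-major: %lld ms\n",
                    static_cast<long long>(std::chrono::duration_cast<ms>(t2 - t1).count()));
        std::printf("checksum: %lld\n", sum);     // keeps the loops observable
        return 0;
    }
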
Energy efficiency and power consumption continue to be key design principles for our products. We focus
on continually improving power management technology, or “performance-per-watt.” To that end, we offer
CPUs, APUs and chipsets with features that we have designed to reduce system-level energy consumption, with
multiple low power states which utilize lower clock speeds and voltages that reduce processor power
consumption during both active and idle times. We design our CPUs and APUs to be compatible with operating
system software such as the Microsoft® Windows® family of operating systems, Linux®, NetWare®, Solaris and
UNIX.
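
On a Linux system that exposes the standard cpufreq sysfs interface, the dynamic clock scaling behind these low-power states can be observed directly. The short C++ sketch below simply reads those files; the paths are the stock kernel interface, not an AMD-specific tool, and may be absent on some systems.

    #include <fstream>
    #include <iostream>
    #include <string>

    // Reads one line from a sysfs file, or reports that it is unavailable.
    static std::string read_value(const std::string& path) {
        std::ifstream f(path);
        std::string v;
        if (f && std::getline(f, v)) return v;
        return "unavailable";
    }

    int main() {
        const std::string base = "/sys/devices/system/cpu/cpu0/cpufreq/";
        std::cout << "governor:           " << read_value(base + "scaling_governor") << '\n'
                  << "current freq (kHz): " << read_value(base + "scaling_cur_freq") << '\n'
                  << "minimum freq (kHz): " << read_value(base + "scaling_min_freq") << '\n'
                  << "maximum freq (kHz): " << read_value(base + "scaling_max_freq") << '\n';
        return 0;
    }
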
Our AMD family of APUs represents a new approach to processor design and software development,
delivering serial, parallel and visual compute capabilities for high definition (HD) video, 3D and data-intensive
workloads in the APU. APUs combine high-performance serial and parallel processing cores with other special-
purpose hardware accelerators. We design our APUs for improved visual computing, security, performance-per-
watt and smaller device form factors. Having the CPU and GPU on the same chip reduces the system power and
bill-of-materials, speeds the flow of data between the CPU and GPU through shared memory and allows the GPU
to function as both a graphics engine and an application accelerator in highly efficient computing platforms.
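
The benefit of shared memory can be sketched without any GPU programming interface at all: in the illustrative C++ example below, ordinary system-memory copies stand in for the transfers a discrete graphics card would require, while the in-place path models the APU's shared-buffer flow. The example is purely schematic and is not AMD code.

    #include <chrono>
    #include <cstdio>
    #include <vector>

    // Stand-in "kernel": any element-wise operation on a frame of data.
    static void process(std::vector<float>& buf) {
        for (float& v : buf) v = v * 0.9f + 0.1f;
    }

    int main() {
        // One 64 MB "frame" of data.
        std::vector<float> frame(16 * 1024 * 1024, 1.0f);

        // Discrete-accelerator style: copy the frame out to separate device
        // memory, process it there, and copy the result back.
        auto t0 = std::chrono::steady_clock::now();
        std::vector<float> device_copy = frame;   // host -> device transfer (modeled)
        process(device_copy);
        frame = device_copy;                      // device -> host transfer (modeled)
        auto t1 = std::chrono::steady_clock::now();

        // APU-style shared memory: CPU and GPU code touch the same buffer,
        // so the transfers disappear.
        process(frame);
        auto t2 = std::chrono::steady_clock::now();

        using ms = std::chrono::milliseconds;
        std::printf("copy-in / copy-out path: %lld ms\n",
                    static_cast<long long>(std::chrono::duration_cast<ms>(t1 - t0).count()));
        std::printf("shared-buffer path:      %lld ms\n",
                    static_cast<long long>(std::chrono::duration_cast<ms>(t2 - t1).count()));
        return 0;
    }
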