AMD 2011 Annual Report Download - page 10

Accelerated Processing Unit (APU)

While general purpose computer architectures based on the x86 architecture are sufficient for many
customers, we believe that an architecture that optimizes the use of a CPU and graphics processing unit, or GPU,
for a given workload can provide a substantial improvement in user experience, performance and energy
efficiency. As the volume of digital media increases, we believe many customers can benefit from an accelerated
computing architecture. An accelerated computing architecture enables “offloading” of selected tasks, thereby
optimizing the use of multiple computational units such as the CPU and GPU, depending on the application or
workload. For example, serial workloads are better suited for CPUs, while highly parallel tasks may be better
performed by a GPU. Our AMD Accelerated Processing Unit, or APU, combines our CPU and GPU onto a
single piece of silicon. We believe that high performance computing workloads, workloads that are visual in
nature and even traditional applications such as photo and video editing or other multi-media applications can
benefit from our accelerated computing architecture.
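
The division of labor described above (serial work on the CPU cores, data-parallel work on the GPU) is what heterogeneous programming interfaces such as OpenCL expose to software developers. The following is an illustrative sketch only, not text from the report: a minimal OpenCL 1.x host program in C that offloads a data-parallel vector addition to a GPU device and falls back to the CPU device if no GPU is present. The vadd kernel, the array size, and the omission of error handling and resource cleanup are all simplifications.

    #define CL_TARGET_OPENCL_VERSION 120
    #include <CL/cl.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Data-parallel kernel: each work-item adds one pair of elements. */
    static const char *kernel_src =
        "__kernel void vadd(__global const float *a,"
        "                   __global const float *b,"
        "                   __global float *c) {"
        "    size_t i = get_global_id(0);"
        "    c[i] = a[i] + b[i];"
        "}";

    int main(void) {
        enum { N = 1 << 20 };                 /* one million elements */
        size_t bytes = N * sizeof(float);
        float *a = malloc(bytes), *b = malloc(bytes), *c = malloc(bytes);
        for (int i = 0; i < N; ++i) { a[i] = (float)i; b[i] = 2.0f * (float)i; }

        cl_platform_id platform;
        cl_device_id device;
        clGetPlatformIDs(1, &platform, NULL);
        /* Prefer a GPU for the parallel work; fall back to the CPU device. */
        if (clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL) != CL_SUCCESS)
            clGetDeviceIDs(platform, CL_DEVICE_TYPE_CPU, 1, &device, NULL);

        cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);
        cl_command_queue queue = clCreateCommandQueue(ctx, device, 0, NULL);

        /* Copy the inputs into device-visible buffers. */
        cl_mem da = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, bytes, a, NULL);
        cl_mem db = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, bytes, b, NULL);
        cl_mem dc = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, bytes, NULL, NULL);

        cl_program prog = clCreateProgramWithSource(ctx, 1, &kernel_src, NULL, NULL);
        clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
        cl_kernel vadd = clCreateKernel(prog, "vadd", NULL);
        clSetKernelArg(vadd, 0, sizeof da, &da);
        clSetKernelArg(vadd, 1, sizeof db, &db);
        clSetKernelArg(vadd, 2, sizeof dc, &dc);

        /* Launch N work-items and read the result back to host memory. */
        size_t global = N;
        clEnqueueNDRangeKernel(queue, vadd, 1, NULL, &global, NULL, 0, NULL, NULL);
        clEnqueueReadBuffer(queue, dc, CL_TRUE, 0, bytes, c, 0, NULL, NULL);

        printf("c[42] = %.1f (expected 126.0)\n", c[42]);
        return 0;                             /* cleanup omitted in this sketch */
    }

The same kernel source runs unmodified on either device type; only the device chosen at run time changes, which is the essence of the offloading model described above.
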

Microprocessor Products (CPUs and APUs)

We currently design, develop and sell microprocessor products for servers, desktop PCs and mobile devices,
including mobile PCs and tablets. Our microprocessors and chipsets are incorporated into computing platforms
that also include GPUs and core software to enable and advance the computing components. A platform is a
collection of technologies that are designed to work together to provide a more complete computing solution. We
believe that integrated, balanced platforms consisting of microprocessors, chipsets and GPUs that work together
at the system level bring end users improved system stability, increased performance and enhanced power
efficiency. Furthermore, by combining all of these elements onto a single piece of silicon as an APU, we believe
system performance and power efficiency are further improved. In addition to the enhancements at the end-user
level, our customers also benefit from an all-AMD platform, as we are able to provide them with a single point of
contact for the key platform components and enable them to bring the platforms to market faster in a variety of
client and server system form factors.

Our CPUs and APUs are manufactured primarily using 45-nanometer (nm), 40nm, and 32nm process
technology. We currently base our microprocessors and chipsets on the x86 instruction set architecture and
AMD’s Direct Connect Architecture, which connects an on-chip memory controller and input/output, or I/O,
channels directly to one or more microprocessor cores. We typically integrate two or more processor cores onto a
single die, and each core has its own dedicated cache, which is memory that is located on the semiconductor die,
permitting quicker access to frequently used data and instructions. Some of our microprocessors have additional
levels of cache such as L2, or second level cache, and L3, or third level cache, to enable faster data access and
higher performance.
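
As an illustration only (not from the report), the benefit of on-die cache is visible from ordinary software: an access pattern that reuses data already held in cache runs measurably faster than one that keeps missing. The C sketch below assumes a POSIX system with clock_gettime; the array size and the 16-element (64-byte) stride are arbitrary choices.

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N (1 << 24)   /* 16M ints = 64 MiB, far larger than a typical last-level cache */

    static double seconds(void) {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (double)ts.tv_sec + (double)ts.tv_nsec * 1e-9;
    }

    int main(void) {
        int *data = malloc(N * sizeof *data);
        for (int i = 0; i < N; ++i) data[i] = i;

        long long sum = 0;
        double t0 = seconds();
        /* Sequential pass: consecutive ints share 64-byte cache lines,
           so most accesses hit in cache. */
        for (int i = 0; i < N; ++i)
            sum += data[i];

        double t1 = seconds();
        /* Strided passes: each pass touches one int per 64-byte line and the
           array is swept 16 times, so a line loaded in one pass is usually
           evicted before its remaining ints are needed. */
        for (int j = 0; j < 16; ++j)
            for (int i = j; i < N; i += 16)
                sum += data[i];

        double t2 = seconds();
        printf("sequential: %.3f s   strided: %.3f s   (sum=%lld)\n",
               t1 - t0, t2 - t1, sum);
        free(data);
        return 0;
    }

On a processor with the multi-level caches described above, the first loop typically completes several times faster even though both loops perform the same number of additions.
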
Energy efficiency and power consumption continue to be key design principles for our products. We focus
on continually improving power management technology, or “performance-per-watt.” To that end, we offer
CPUs, APUs and chipsets with features that we have designed to reduce system level energy consumption, with
multiple levels of lower clock speed and voltage states that reduce processor power consumption during idle
times. We design our CPUs and APUs to be compatible with operating system software such as the Microsoft®
Windows® family of operating systems, Linux®, NetWare®, Solaris and UNIX. Our CPUs and chipsets support
multiple generations of HyperTransport™ technology, which is a high-bandwidth communications interface that
enables higher levels of multi-processor performance and scalability over traditional front side bus-based
microprocessor technology.
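
As an illustration only (not from the report), the reduced clock and voltage states mentioned above can be observed on a Linux system through the kernel's cpufreq interface. The sysfs path in the sketch below is a Linux convention rather than an AMD-specific interface, and it assumes the cpufreq driver is loaded.

    #include <stdio.h>

    int main(void) {
        /* Current operating frequency of core 0, reported by the Linux
           cpufreq subsystem in kHz. */
        FILE *f = fopen("/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq", "r");
        if (!f) {
            perror("open scaling_cur_freq");
            return 1;
        }
        long khz = 0;
        if (fscanf(f, "%ld", &khz) == 1)
            printf("cpu0 is currently running at %ld MHz\n", khz / 1000);
        fclose(f);
        return 0;
    }

Reading this value while the system is idle and again under load shows the core stepping between its low-power and full-speed states.
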
Our AMD family of APUs represents a new approach to processor design and software development,
delivering serial, parallel and visual compute capabilities for HD video, 3D and data-intensive workloads in the
APU. APUs combine high-performance serial and parallel processing cores with other special-purpose hardware
accelerators. We design our APUs for improved visual computing, security, performance-per-watt and smaller
device form factors. Having the CPU and GPU on the same chip reduces the system power and bill-of-materials,