Advanced Micro Devices, Inc. (AMD) Arete Tech Conference (Transcript)

Advanced Micro Devices, Inc. (NASDAQ:AMD) Arete Tech Conference Call December 6, 2022 10:10 AM ET

Company Participants

Mark Papermaster – Chief Technology Officer

Ruth Cotter – Senior Vice President, Marketing, Communications & Investor Relations

Conference Call Participants

Brett Simpson – Arete Research

Brett Simpson

I wanted to introduce the CTO of AMD, Mark Papermaster. Mark, thanks for coming on today.

Mark Papermaster

Right. Thanks for having me.

Brett Simpson

And we also have Ruth Cotter, who you all know is Senior VP, Marketing, Communications and IR at AMD. Thanks for coming on, Ruth.

Ruth Cotter

Great. Thanks for having us, Brett. We appreciate it.

Brett Simpson

And for this session, we’re going to focus on the future of compute from AMD’s perspective, and who better to lay this out than Mark. I think we all know Mark led the Zen architecture overhaul at AMD. And it feels like we’re about to enter a new architecture phase, having watched AMD acquire Pensando and Xilinx and talk much more about its data center GPU strategy. So we’re going to dig into all these subjects in the next 45 minutes. And so thanks, everyone, for dialing in. So maybe as an intro, Mark, for historical context, can we just start with the journey over the last 4 or 5 years: how you thought through the Zen architecture changes, and maybe some of the highlights that have led to the success across the business of late.

Mark Papermaster

Thanks, Brett. When you look at the journey that we’ve been on at AMD, it is a story about a return to high performance and a focus on high-performance computing across our product portfolio. If you go back some years, just 7, 8, 9 years ago, and look at that period, the vast majority of our revenue was PC. So we were very concentrated in where our business was. And also, if you look at the history of AMD, there had been cycles where a new product would gain a lot of traction and then there would be a gap until the next highly competitive, leadership product. And so the focus on high performance was equally a focus on execution, laying the foundations of a culture of execution that’s steeped across the engineering teams. And that execution mantra is married with a very thoughtful road map process: really listening to our customers, understanding where high performance could make a difference for them, either delivering more compute efficiency or better experiences.

And so that has been the underpinning across really everything we’ve done. And it starts with CPU, because CPU is foundational in terms of how AMD is viewed by our customers. And the CPU, of course, drives how well we compete in server, how well we compete across the PC market, how well we compete in embedded solutions. So it’s a very important IP. And that, as you mentioned, was why the Zen architecture was so key. It was a decision that we made really about 10 years ago to start a new architecture that was high performance but could also scale. It wasn’t about designing one generation of microprocessor but instead a family of microprocessors. So we had from the outset an idea about how we would approach the design to deliver that performance, and an idea about the enhancements that could be made in future generations. And then even the way that we structured the team was such that while any one generation was in design, being completed, and brought out into the market, there would always be a next generation already in an advanced phase of design. That way, there would not be a gap in the cadence at which we brought new CPU microprocessors out to the market.

And we’ve delivered on that. We’ve just launched our fourth generation of Zen, and it’s our strongest yet. We launched it midyear in high-performance desktop. And just last month, we launched Zen 4 in EPYC, so a fourth generation of EPYC, our server line, and it has been incredibly well received because the value proposition it brings to the market is very clear: a commanding total cost of ownership advantage. So what’s really been key is that investment, first, in CPU; secondly, in the culture of execution that we have; and thirdly, in how we put all the pieces together (the CPU, the GPU, our acceleration IP, and our software stack), because we’re very solution focused. Those are really the key elements of how we have driven our journey back to high performance, and not just high performance but being a bankable supplier that brings that high performance to our customers generation after generation, as scheduled and as committed.

Brett Simpson

And Mark, given the results you’ve seen over the last 2 or 3 generations of Zen, can you maybe just thumbnail the market share as you see it today in CPU, for notebooks, for desktops, but particularly for servers, just so we get a sense, as you go into the Zen 4 era, of where we are today and can look at potential gains going forward.

Mark Papermaster

Sure. When you look at the progress of those 4 generations of Zen processors, you really can see the market share gain. The market needs performance, they want computing efficiency, and we’ve been rewarded with market share gains. If you look over the last 4 years, our x86 unit share overall went from 14% to 30% today, and our x86 revenue share went from 6% in 2018 to 25% today. And that’s across both PC and data center. On the server side, we were at just 3% in 2018; we had launched our new server line in 2017 and had about 3% unit share in 2018. Today that unit share is 18%, but the revenue share is now 28%, and that’s indicative of the kind of performance we’re bringing. So we’re doing extremely well in the mid to upper areas of the product stack, where that performance delivers an even more evident, compelling advantage to our customers.

Brett Simpson

And this is before we get the Genoa ramp going as well. So…

Mark Papermaster

That’s correct.

Brett Simpson

Yes, a huge achievement. Fantastic. I wanted to spend a bit of time, Mark, just on the sort of strategic changes you see ahead in compute, particularly in the data center. It looks like we’re on the cusp of a major architecture change. And we can see you’ve acquired Pensando and Xilinx, which clearly signals intent as well. So what’s your perspective on how this is going to play out over the next 2 or 3 years? And what are your big customers asking you for in terms of road map and platforms going forward?

Mark Papermaster

Thanks, Brett. When you look at the data center, the first thing to remember is that the fundamentals of the x86 CPU as the dominant computation platform aren’t changing overnight. There are workloads that can take advantage of more tailored acceleration, and that will grow over time. But at AMD, we’re not going to lose focus on the fundamentals of having a very high-performance, general-purpose processor that scales every generation and delivers total cost of ownership. So that remains today. Genoa, our fourth-generation EPYC that I mentioned a moment ago, brought a huge improvement in computing density: it goes from 64 cores in a single socket to 96 cores.

And then with simultaneous multithreading, you can actually double the number of working threads active at any one time, so it’s actually 192. And that brings a tremendous advantage, because we did that with a gain of 48% in performance per watt generationally, in terms of energy efficiency. So from a sustainability standpoint it’s also a huge advantage. And we leveraged the 5-nanometer node in doing so. So when you think about what data centers need, those fundamentals of general-purpose computing improvements, generation over generation, don’t change.

People need that TCO advantage and they want gains in sustainability. And as significant as those changes were, and the competitive advantage that the fourth-generation EPYC gave us, it’s not enough given the trends in computing, because there are new workloads that need specialty computing, acceleration beyond what a CPU alone can deliver, working alongside the CPU. And so what we’re seeing clearly is the need for accelerators. Our GPU and what we’ve done with the AMD Instinct line is a great example of where we’ve brought that GPU compute hardware to bear. It’s not our first Instinct product, but the Instinct MI250 that we’re shipping in production now is, in fact, the computation underneath the world’s largest supercomputer at Oak Ridge National Lab, an exascale-class supercomputer. And that computing is now paired with a software stack that we’ve matured.

And if you look, for instance, at PyTorch, you’ll see that ROCm, our software stack, has moved out of beta. It’s a production software stack. So you’re able to accelerate key workloads like AI training, or the dense AI inferencing that occurs at the data center on workloads like OpenAI’s natural language generation, some of these really heavy-lifting AI workloads in the cloud. So that’s one class of acceleration where the industry is going, and we’re providing those capabilities with our Instinct line.
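
To make “production software stack” concrete: a ROCm build of PyTorch exposes AMD GPUs through PyTorch’s standard device API (the ROCm backend reuses the “cuda” device name), so framework code runs unchanged. A minimal sketch, assuming a ROCm-enabled PyTorch install and an AMD GPU:

```python
# Minimal sketch: running a PyTorch workload on an AMD GPU via ROCm.
# Assumes a ROCm build of PyTorch; the ROCm backend is exposed through
# the same "cuda" device name that CUDA builds use, so code ports as-is.
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torch.nn.Linear(1024, 1024).to(device)  # toy model
x = torch.randn(64, 1024, device=device)        # toy batch

with torch.no_grad():
    y = model(x)

print(y.shape, y.device)  # torch.Size([64, 1024]) on the GPU if present
```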

And then beyond that, we’re seeing that inferencing is going to be needed across the portfolio. The Xilinx acquisition brings a strong set of adaptive computing used across data center applications, and it leverages an AI engine, a very efficient, specialized AI inference accelerator. In fact, we’re putting that same AI acceleration into our notebook line, so it will be in our next-generation notebooks as well. And then you talked about networking. As these workloads need to scale out very, very efficiently, with the size of AI models increasing exponentially, we’ve added networking IP through acquisitions.

First with Xilinx: they had acquired Solarflare, and with Solarflare came deep experience in networking. Those Xilinx products are used, for instance, in high-frequency trading, where you need very, very low latency networking connections. That acquisition closed in February of this year. And then there’s the acquisition of Pensando, which brings a highly programmable smart network interface card. So you have the efficiency and flexibility, within that networking connectivity, of a highly programmable engine. There are actually 144 of these small packet engines, programmed in a language called P4, which is now becoming the dominant programming language for this type of smart NIC. It allows us to give data center operators tremendous flexibility to layer microservices right onto that network connectivity.

So you can offload a CPU; you can get about a 20% offload of a CPU in a data center just by having that smart NIC be the workhorse. But much more than that, you can add services for software-defined networking, you can put in firewall services and many more microservices with the software libraries that our Pensando team is creating, which allow a very, very quick transition to bring these services to bear.
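
For a sense of the match-action abstraction being described, here is a toy Python model. P4 itself is a separate domain-specific language compiled onto the DPU’s packet engines; the field names and actions below are illustrative, not Pensando’s actual tables:

```python
# Toy Python model of a P4-style match-action table (illustrative only;
# real Pensando DPUs run compiled P4 on dedicated packet engines).

def drop(pkt):
    """Firewall-style action: discard the packet."""
    return None

def forward(port):
    """Build an action that sets the packet's egress port."""
    def action(pkt):
        pkt["egress_port"] = port
        return pkt
    return action

# Exact-match table on (dst_ip, dst_port) -> action. The DPU's value is
# that tables and actions can be reprogrammed in the field, in software.
table = {
    ("10.0.0.5", 443): forward(port=7),
    ("10.0.0.9", 22): drop,
}

def pipeline(pkt):
    key = (pkt["dst_ip"], pkt["dst_port"])
    action = table.get(key, forward(port=0))  # default: send to port 0
    return action(pkt)

print(pipeline({"dst_ip": "10.0.0.5", "dst_port": 443}))
# {'dst_ip': '10.0.0.5', 'dst_port': 443, 'egress_port': 7}
```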

Brett Simpson

And I guess, bringing all that together, Mark, how do you think about the adoption of accelerators in servers through this decade, and how do you think about DPU adoption? And then when we bring this all together: servers traditionally have been a couple of CPUs for AMD, maybe $2,000 chips. But when we look at these new workloads and the adoption of DPUs and accelerators, we’re talking about much more silicon content going into a server. How would you frame that? Are we going into a very strong content growth window for semiconductors generally as all these different compute engines start to get configured together?

Mark Papermaster

Yes, absolutely. The bottom line is that all the traditional workloads haven’t gone away. Every business needs to close its books every quarter. Inventory has to be managed. Whatever your business is, you’re running the computation to design the next widget that your company is creating. So all of that base computing, everything your head of infrastructure is managing, how they’re handling their ERP and all the rest of their tasks, none of that changes. What’s happening, Brett, is that layered on top of that is now a set of smarts that every company needs to add: they have to put their data to work. In the midst of doing everything they’ve been doing, they’ve created a treasure trove of data, and you have to make that data work for you, give you predictability in the trends, and let you optimize your operations. And really, that applies to almost any facet of your business.

If it’s creating data, you can put that data to work, applying AI techniques and analytics techniques to make yourself more productive. And so that is a whole new set of computation that’s being added. That’s in industrial and workplace applications, but you’re seeing the same thing in the PC, and the same thing in gaming, where AI is being used to enhance the very experience that you and I are having over this video conference. What if we wanted to blur the background? Well, that’s an AI process that can blur the background and not even tax the CPU as it runs that AI inference application. So AI is the added workload across pretty much every application, commercial as well as consumer, that’s going to drive the next wave. And it does drive more silicon content, because you need accelerators to get this new workload done.

So we’re very excited about that at AMD, and that is why we have made the investments to double down over the last 5 years as we started shipping our Zen product line. That’s why we’ve added to our GPU, not just for graphics and the great headway we’ve made over the years with our history of Radeon graphics, but adding a whole new line with Instinct to be a GPU compute accelerator that addresses these trends, and then the acquisitions of Xilinx and Pensando. So the TAM for the industry is growing, and our serviceable TAM, with the investments and acquisitions we’ve made at AMD, has grown very significantly.

We feel that we are poised very well to bring these solutions together, because they can benefit from enhanced connectivity. But we won’t force an AMD-only solution; we always want our customers to have choice. So our products can be used standalone. But when you put them together and optimize, particularly for the heavy-lifting data center applications, we are ensuring that our products give you more scale and more productivity as they work together.

Brett Simpson

And I’ll come to AI in a second, but just to round off the DPU side of things, how do you see adoption of DPUs in servers? Is this going to be a one-for-one relationship, where, say, in 4 to 5 years’ time almost all servers will have an independent DPU? Or is it going to play out in a different way? I’m just keen to get your perspective on that.

Mark Papermaster

Yes. We’re getting great traction right off the bat with Pensando. They had already won the attention of the industry prior to our acquisition. They were one of a very small set of vendors working with VMware on Project Monterey, which lowers the barrier to adoption of these DPUs, because with Monterey, which will be coming out next year, you have built into VMware the ability to take advantage of these microservices on the DPUs. And Pensando, with the added muscle that AMD brings to bear, can now scale. We’ve brought the whole economy of scale of our supply chain and our broader engineering resources to support Pensando. And so what we’re seeing is that the initial deployments we have in hyperscale and in enterprise are, in fact, growing. And your question is, will it become one-to-one?

Eventually, I believe it will be indispensable, because providers across the data center simply need to optimize and tailor for the tasks they have at hand, and what the DPU allows you to do is rapidly tailor and reconfigure for the workload that you’re running. It used to be that you had a generic x86 CPU in a generic network: you go up to top of rack, you fan out to end of row. If you go back 5 or 6 years, that’s how all of us built up our data centers. You just can’t get the necessary compute efficiency in the data center going forward with that sort of generic, homogeneous approach. It takes tailoring the configuration to the workload at hand, and that’s where the DPU is highly effective.

And I go back to that programmability. You can take the same piece of silicon, and with the massive packet engine and the P4 programmability that we have built into our Pensando DPU, that same piece of silicon completely changes its role with software. Again, we have a whole library of preprogrammed functions to really facilitate adapting it to the task at hand.

Brett Simpson

Interesting. And I guess we’re going to hear more about that next year as you get into full commercial rollout of Pensando.

Mark Papermaster

Absolutely.

Brett Simpson

So let’s talk more about AI, Mark, and the AMD strategy around AI, because from a training perspective, we haven’t yet seen an Instinct GPU cluster, for example, running large language models today. How does this change in 2023, 2024? How does AMD break into this market? And do you need to be doing bleeding-edge, very large language models out of the gate? Or do you slowly build into this?

Mark Papermaster

Right. It’s a great question and a timely question, because we’re at an inflection point, literally right now at AMD, in terms of GPU acceleration for the large training and large cluster inferencing that’s needed in the data center for large language models. Look at what is going on with OpenAI and how these large models are used to get incredible accuracy of natural language generation or image content. Look at DALL-E and what it’s able to do, creating these incredible images given just text hints as to what the user is looking for. So the capability is there, the accuracy is there, and it really does rely on a highly programmable yet highly compute-efficient GPU engine. And why do I say this is the inflection point for us at AMD? It’s because we have been working for years to get both our hardware and software capabilities to the point where they can take on these largest tasks.

And we changed our strategy: we were going broadly across the market with our Instinct line, and now we’re focusing more exactly on these very compute-intensive, large-model workloads, because rather than needing to create software that supports the A to Z of GPU data center computations, we’ve narrowed that application stack. That has let us put a laser focus on getting our ROCm software stack to production level, and we’ve made tremendous strides over the last year. We’re now at a point where we have full production support across the 2 most popular frameworks, PyTorch and TensorFlow. We’re also working on ONNX, bringing ONNX capability to the same level. And with that, we had an announcement from Microsoft just in the recent quarter: they have our Instinct MI250 stood up on Azure and are running their production training workloads. So that’s milestone number one.

I’d say, ring the bell, we’re out of the gate, and we’re incredibly focused here. We’re invested here. That announcement from Microsoft was on the Instinct MI250, and we already have the next-generation Instinct MI300, which will power the next huge Department of Energy supercomputer, code-named El Capitan, at Lawrence Livermore National Lab. That design is design complete; it’s back in the lab and we’re bringing it up. So we’re on track to bring that next generation out, and it will bring leadership AI capability to the market, as well as high-performance computing, HPC, particularly in these national lab applications. The Instinct MI300 was equally designed for AI prowess. And when you marry that with bringing our software stack up to production level, we’re out of the starting gate in a big way.

Brett Simpson

Interesting. And you mentioned ROCm a little earlier, Mark. What role does software play in the strategy? And has everything matured around ROCm to do the PyTorch and TensorFlow integration that you talked about?

Mark Papermaster

Yes. We have really partnered with our customers. Going back over the last 4-plus years, we’ve listened to where we need to make our software stack more robust. Where do we need to focus? How do we make sure that we have a release structure for that software such that you have completely bankable quality at every new release, like we do for every other software product that we release at AMD? So we got a lot of good feedback on where we needed to focus, and in particular on which software libraries we needed to develop for the AMD hardware underneath, so that our customers can simply take the applications they have and run. Those applications might have been written on a competitor’s code base, so you have to have a portability tool. We have it, and it’s GPU to GPU, so you can, in a straightforward way, port from a competitor’s GPU onto our GPU compute for the data center. But we had to add the enabling libraries, and that’s what we’ve done, and that’s why we’re able to be stood up on Azure today and are working with other hyperscalers as well.

So look, the market wants competition. And no one will paint a yellow brick road for you to be that competitive solution. But customers will tell you where the bar is and what you need to do. We’ve gotten that strong input from our customers, we’ve executed, and as I said, we’re now out of the gate with competitive solutions. So we’re very excited about this product line moving forward.

Brett Simpson

And Mark, yesterday we spoke to some of the private AI silicon or platform companies, SambaNova and Cerebras, and they were saying that they want to get into the business of licensing pretrained models: the idea that you take these large language models and serve them up as a service, or license them to the industry. Is that a business that AMD wants to get into as well? Do you want to be offering enterprises pretrained models or subscription services to speed up their time to market, if you like?

Mark Papermaster

I think there’s a lot of learning the whole industry will do in terms of the commercial applications in the AI space, and it’s going to be a big TAM; per my comments earlier, AI is entering almost every application. Where we’re focused at this time is where we have the strongest differentiation, the strongest value for our end customers. And that is really leveraging what very, very few in the industry can do. Today, that’s ourselves and our chief GPU compute competitor: building a massively dense but highly programmable GPU engine with a software stack that can take on these biggest training and inferencing workloads. And once you train with a given approach, the inferencing wants to be done on like models. So as we become that supplier for large-model training and inference, it allows us to take the learning and ensure that inference then runs well wherever it runs. It might be in other areas of the data center, running on our EPYC CPUs.

It might be on edge devices running our Xilinx adaptive compute, or on endpoint devices running our embedded Ryzen line, or Ryzen right in a PC. The learning that we’re getting on these most demanding workloads, how to train and then how to make sure that the resulting trained model can run very, very effective inference from the cloud to the edge to the client endpoint, is incredibly valuable for us. So we’re playing to our strengths and we’re listening to our customers on how we can deliver the best solutions in that way.

Brett Simpson

And from an inference perspective, you mentioned Xilinx comes into play here, and you mentioned the AI engine that’s going to start to get embedded into your APUs. How about servers? Does the Genoa server platform have the AI engine embedded as well? Is there an inference story where you’ve got embedded accelerators to drive a lot of that inference capability?

Mark Papermaster

Yes, it’s a great question. If you look at inferencing today, over 90% of it is done on x86 CPUs, because that’s the compute engine you have in your data center. And we’ve focused on really accelerating that. We’ve added software support on our CPUs that we call ZenDNN; it’s math kernel library acceleration. And in our fourth-generation EPYC, code-named Genoa, we just added support for the vector neural network instruction, VNNI. So we’re getting a big speedup where you can take advantage of that instruction set. And then you mentioned the AI engine that Xilinx has already had in production. Well, they’ve invested for years in the inference acceleration software stack associated with that. We’ve taken that Vitis AI stack, and it’s now offered to all of our customers running inference on our EPYC CPUs. You can run that optimization: it takes a network, prunes it, and optimizes it such that it can get a 2x to 5x speedup of inferencing applications on that Zen CPU in EPYC.
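
As a rough illustration of the int8 inference path being described: quantizing a model’s weights to 8 bits lets the CPU backend use int8 dot-product instructions such as AVX-512 VNNI where the hardware supports them. A minimal sketch using PyTorch’s stock dynamic quantization (the ZenDNN and Vitis AI flows mentioned above are separate tools and are not shown here):

```python
# Minimal sketch: int8 dynamic quantization of a model's Linear layers.
# Quantized kernels can use 8-bit dot-product instructions (e.g. AVX-512
# VNNI) where the CPU supports them. This uses PyTorch's stock API, not
# AMD's ZenDNN or Vitis AI flows.
import torch

model = torch.nn.Sequential(
    torch.nn.Linear(512, 512),
    torch.nn.ReLU(),
    torch.nn.Linear(512, 10),
).eval()

qmodel = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 512)
with torch.no_grad():
    print(qmodel(x).shape)  # torch.Size([1, 10]); int8 matmuls inside
```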

So it’s a combination of hardware enablement, new instructions that optimize performance, and major investments in the software stack. We are extremely pleased with the value proposition that we’re offering on the x86 EPYC servers that we have such a vast deployment of in the industry.

Question-and-Answer Session

Q – Brett Simpson

Yes, excellent. Maybe this is a good time to open up for Q&A. I have my colleague, Jim Fontanelli, standing by, and Ruth’s also standing by. Jim, do we have any questions for Mark?

Unidentified Analyst

Yes, we do. We’ve got a few queued up. The first one is on how your advanced packaging road map, with particular reference to hybrid bonding and 3D, intersects with the product road maps you’ve been talking about with Brett over the past minutes, and how you see adoption of that hybrid bonding packaging technology over the next couple of years.

Mark Papermaster

Yes. Thanks, Jim. The packaging technology investment has been huge for us at AMD. We’ve had an investment there for years; we were one of the first to adopt advanced packaging. Our first product was in 2015 on the GPU side, for graphics, where we used 2.5D to connect HBM, high-bandwidth memory, silicon to silicon to the GPU. Xilinx had done the same thing; they were a first adopter using 2.5D, which really allowed them to have a leadership gate count in FPGA and adaptive computing. And we’ve kept that investment going. You asked specifically about hybrid bonding. We’re shipping hybrid bonding in production today. Because of the very high adoption rate that we’ve had of our AMD EPYC server line, we’ve added product swim lanes. We have our base offering; I’ll just take our fourth-gen EPYC that we just announced as an example.

So you have Genoa, our 96-core base offering. Coming out in the first half of 2023, we have our dense-core offering, which goes head-to-head with instances like Amazon’s, really for throughput computing, offering real density of computing where you may not need the peak frequency on that CPU. In the second half of next year, we have Siena, which is an EPYC offering tailored to telco. And we will have Genoa-X, where the X means that we get very accelerated compute performance by vertically stacking cache memory over the CPU using hybrid bonding. With that hybrid bonding, we have Milan-X already in production today, and Genoa-X actually uses that same hybrid bonding capability to vertically stack cache above the CPU. And it’s incredibly effective because it has excellent electrical characteristics.

When you stack vertically, it’s not only a silicon-to-silicon connection, but the traversal length is so short, the resistance and capacitance so low, that you add very, very little incremental energy, and that cache performs just as if it were in monolithic silicon, yet it’s not: it’s stacked vertically, providing you that advantage. I think you’re going to see more and more of that. What we’ve said is that we have made these investments such that, with hybrid bonding and the 2.5D techniques that we’ve been perfecting over the years, the stage is set for the package to be the integration point of multiple technologies. And we’ve led the way there. If you look at the Instinct MI250 that we’re shipping today, it uses an elevated fan-out bridge that connects laterally. But there’s nothing that stops us in the future from combining that lateral connectivity with hybrid bonding and vertical connectivity.

So that’s in our road map, and it really is an enabler for us in terms of the flexibility of the future of computing. And that is heterogeneously bringing accelerators together with the CPU in very efficient but also flexible ways, so that the package can be tailored, with different packaging techniques and different accelerator elements, to the compute need at hand.

Unidentified Analyst

Super. A follow-on question, I guess, prompted by the news today from TSMC regarding U.S. manufacturing. The question is: how does a company like AMD, which has had so much success of late, reorient itself to the reality of U.S.-China relations and the scenarios around Taiwan that might impact the business going forward?

Mark Papermaster

Yes. There’s clearly a geopolitical backdrop for the entire industry in this regard, in terms of how you think about mitigation capabilities if there’s any supply chain disruption. So all of us in the industry are looking at making sure we have geographic diversity in our supply chain. TSMC is our number one semiconductor supplier. They’ve been such a great, deep partner for us for so many years; it goes all the way back to the ATI days, and ATI was acquired by AMD in 2006. And in high performance, we set out to partner even more deeply with TSMC, and we’ve done that. You’ve seen that we’ve been a real leading-edge partner for them on their new nodes in high performance. So it is about geographic diversity. We certainly have been talking with TSMC; we’re a top customer of theirs and very pleased to be participating. Lisa Su is in Arizona today with the TSMC senior executive team, and we’re very pleased with the announcement of both 4-nanometer and 3-nanometer capability that they’re building up in Arizona. We look forward to taking advantage of that geographic diversity that TSMC is bringing to bear.

Unidentified Analyst

I might hand back to Brett.

Brett Simpson

Thanks, Jim. I had one I wanted to sneak in at the end, if that’s okay, Mark. There are 2 markets where AMD has, I would say, significant long-term potential to address, where maybe historically it’s not been a big focus: telco and autos. Can you talk a little bit about your strategic thinking as CTO of AMD? How do you think about autos? I know you have a good relationship with Tesla, and so does Xilinx. How do you think about the future of the car and how AMD sits there? And then, again thinking about all the IP that’s been developed inside AMD, how do you get more relevant in telco, aside from just the Siena server, if you like?

Mark Papermaster

Absolutely. Both those markets point to why the marriage of AMD and Xilinx was so great: the portfolios are complementary, and those 2 markets are shining examples. Let’s start with automotive. Automotive for AMD prior to the Xilinx acquisition was a narrow market. We were looking for partners that could really take advantage of our kind of high performance. With Tesla, we had a long relationship; we had been talking to them for years, and they reached a point where they really wanted the kind of experience we’re bringing to PCs, where you have great computing efficiency but also a great visual and audio experience with Ryzen-based compute.

So our embedded Ryzen was a great fit, and it really is a differentiator; we’re happy to partner with Tesla. But Xilinx brought a rich history not only in sensors and supporting LiDAR, but also in adaptive compute that already supports the safety requirements of automotive. They had already taken the step of building in the safety islands that are required for a dedicated automotive compute hub. So now, when you bring those 2 portfolios together, we really have a full offering that is not just infotainment but extends through ADAS and the sensor control hubs.

And likewise in telco: pre-acquisition of Xilinx, as the old AMD, we served the control plane. It was a very important market for us to attack. But now, with that deep adaptive compute all the way through the RF offerings that the Xilinx portfolio brings, we are an end-to-end player. So both automotive and telco, as you say, go beyond classic AMD. Beyond what we do with embedded infotainment in automotive, and beyond what Siena does as it expands our control plane and CPU offerings, we now have in both an end-to-end portfolio offering that’s getting a very strong customer reception.

Brett Simpson

Yes. Very good. Jim, I think we have one more question we can maybe squeeze in, hopefully, in a couple of minutes.

Unidentified Analyst

Yes. It’s a follow-on to the hybrid bonding question. The question is: how should we think about the value share between AMD and TSMC in creating advanced packaging? TSMC has said adoption of chiplets and 3D packaging broadens into 2-nanometer. Does that narrow AMD’s advantage in chiplets or broaden it, given you’ve got the most experience?

Mark Papermaster

Do you want to repeat that? What is broadening our capabilities?

Unidentified Analyst

Your experience with production hybrid bonding and 3D packaging. Do you get a greater advantage as we go into 2-nanometer? TSMC has obviously said advanced packaging, and particularly hybrid bonding, is going to see broader adoption in compute at 2-nanometer. You’re already there. How does your advantage look as TSMC broadens out its offering into 2-nanometer?

Mark Papermaster

Yes. We haven’t announced the specifics of our product plans in 2-nanometer yet, but you can certainly look at the trend. New nodes bring the old, historical Moore’s Law advantages of higher density and lower power, but they cost more. And that’s been one of the big impetuses for us to be a leader in chiplets. We already have 40 chiplet designs that are either shipping in the market or have completed the design phase. So we’re absolutely a leader in the chiplet approach. What you see is us leveraging our Infinity architecture, which gave us the modularity that’s been so key to our ability to move quickly at AMD and to adapt our solutions to specific customer needs.

Leveraging that Infinity architecture, we start with proprietary chiplet-to-chiplet interconnects, but we’re adding UCIe capabilities to our road map, which will give us industry-standard chiplet-to-chiplet interconnects. And we architect each product so that the subset of elements that can really benefit from, and justify the cost of, that cutting-edge new node goes onto that island, that chiplet. And we keep the elements that can do just fine in N-minus-1 or even N-minus-2 semiconductor nodes in those technologies. Then we leverage either lateral connectivity in the package or vertical hybrid bonding to architect the most performant but also the most cost-effective solution. So now it’s about variables: we have more variables at play, more tools in our tool chest, to be able to optimize for our customers.
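
The partitioning logic described here can be sketched as a toy cost model; all numbers below are invented for illustration, and real trade-offs also involve yield, power, and packaging cost:

```python
# Toy cost model for the chiplet partitioning logic described above.
# All numbers are invented for illustration.
COST_PER_MM2 = {"N": 3.0, "N-1": 1.8}  # hypothetical $ per mm^2 by node

# Each block: (area in mm^2, benefits from the leading-edge node?)
blocks = {
    "cpu_cores": (70, True),     # logic scales well on new nodes
    "l3_cache": (40, False),     # SRAM scales poorly
    "io_ddr_pcie": (90, False),  # analog/IO barely scales
}

def die_cost(monolithic: bool) -> float:
    if monolithic:  # everything forced onto the leading-edge node
        return sum(area for area, _ in blocks.values()) * COST_PER_MM2["N"]
    # Chiplets: leading edge only where it pays, older node elsewhere.
    return sum(
        area * COST_PER_MM2["N" if wants_n else "N-1"]
        for area, wants_n in blocks.values()
    )

print(f"monolithic: ${die_cost(True):.0f}, chiplet: ${die_cost(False):.0f}")
# monolithic: $600, chiplet: $444
```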

And lastly, we stood up a division of the company called Strategic Silicon Solutions, or S-cubed. That team has grown out of, and been modified from, the team that used to do the semi-custom solutions like the Microsoft Xbox and Sony PlayStation solutions. They can still do those monolithic solutions, but they’ve added to their tool chest to be the division of the company that can integrate tailored, custom solutions for end customers with these 2.5D and 3D packaging techniques. It can be with our IP or our customers’ IP. That’s the kind of flexibility we get with our chiplet approach going forward.

Brett Simpson

Okay. I think with that, we’ll call time. I think we squeezed every last second, Mark, so I really appreciate it. Ruth, thanks very much for coming on. Great discussion.

Ruth Cotter

Thanks, Brett. Appreciate it.

Mark Papermaster

Thanks.
