Western Digital Corporation (WDC) Management Presents at Arete Tech Conference (Transcript)

Western Digital Corporation (NASDAQ:WDC) Arete Tech Conference December 5, 2022 10:10 AM ET

Company Participants

Dr. Siva Sivaram – President, Technology and Strategy

Gabriel Ho – Director of Investor Relations

Conference Call Participants

Nam Hyung Kim – Arete Research

Nam Hyung Kim

Hi, good morning. Also good afternoon to those joining from Europe. This is Nam Kim. I am very pleased to introduce Dr. Siva Sivaram, President of Technology and Strategy at Western Digital. Dr. Sivaram has more than 35 years in semiconductor technology and manufacturing, and has previously held executive positions at Intel, Matrix Semiconductor and SanDisk. We also have Gabriel Ho from the IR team. So hi, Dr. Sivaram and Gabriel. Great to have you here today.

Dr. Siva Sivaram

Nam, it’s wonderful to be with you.

Nam Hyung Kim

Thank you. Thank you. I will ask Dr. Sivaram a list of questions, then later open up for Q&A. Investors can ask questions or send them to our team through e-mail, so we can read your question at that time. So the first question I would like to ask is this: WDC — previously SanDisk, with Toshiba — you invented NAND flash technology, floating gate based NAND technology, and then you moved to a charge trap based 3D NAND structure. However, some companies, like Intel, [indiscernible] still use floating gate based 3D NAND. As a technology expert, what drove your decision to move from floating gate to charge trap? Do you think floating gate based 3D NAND can be sustainable? If not, then why does floating gate based NAND seem easier for [indiscernible] like QLC versus charge trap? I just want to have your opinion on floating gate versus charge trap.

Dr. Siva Sivaram

So Nam, when we make technology decisions like this, you look at it from both sides: the market and the broad-based applications we can develop, and the cost and ease of process development in the fab. You have to look at both sides when you make the decision. As you said, we originally invented NAND flash technology. Floating gate technology has been around for a very, very long time: you store charge on a gate isolated by an oxide layer. 2D NAND was always floating gate. But the number of electrons available for multi-bit processing was getting lower and lower. By the time we reached the limit of 2D NAND, we had maybe 10, 12 electrons effectively per state to work with. It was getting to be very, very narrow.

When we moved to 3D NAND, you have this technology where you are now serving broad-based applications, all the way from consumer to mobile to client to enterprise, each one having different needs. One needs to be low cost, one needs to be high density, one needs to be low power — these kinds of many differences. We look at the application to decide how we are going to develop. So in doing 3D NAND, you stack many, many layers — oxide, nitride, oxide, nitride — and then you etch them all together.

The ease of processing with charge trap is a lot better than floating gate. You are able to provide higher density, more layers, more bits — even 3 bits, 4 bits per cell is much easier to do on charge trap than on floating gate, which is running lower and lower on available electron count and requires a very complex process. That's why the entire industry moved to charge trap; only Intel decided to stay on floating gate. I cannot speak to why they decided to stay that way, but Intel has a very long history with floating gate. In their NOR flash, they had the ETOX process for a very long time. So I cannot speak for them, but all of the rest of the industry has moved to charge trap, which is the technology there for the long term for all of us. Floating gate is no longer a viable technology for 3D NAND.

Nam Hyung Kim

Okay, okay. Understood. And now another question, on something people are always curious about. Investors are always interested in the storage transition from HDD to SSD. I think WD is the most qualified company to answer this question, since you make both products and understand the technology very well.

I understand a lot of cloud companies are currently trying to use more and more SSDs, as there are so many benefits like power consumption, footprint and speed. At the same time, an enormous amount of 3D NAND capacity would be required to replace HDDs in mass storage. So what's your view on SSD penetration into mass storage in the cloud in the future?

I know many people believe cost per bit is the key criterion, but I feel we need to think about broader issues beyond cost per bit. So can you share your view on where we are and how long it takes to replace some degree of HDD with SSD? Or can SSD ever replace HDD in the mass [indiscernible] space?

Dr. Siva Sivaram

Yes. This question, obviously, as you said, Nam, gets asked very often.

Nam Hyung Kim

Yes.

Dr. Siva Sivaram

And that's because people are used to thinking about what happened in client PCs, where people slowly replaced their hard drive storage with solid state. These days, when you go buy a PC or laptop, you buy only solid state drives; there's a very, very small percentage of hard drives still sold. So people mostly think in this replacement mentality. Let me give you the statistic. In 2021, last year, in data center storage, the mix was 88% HDD, 12% SSD.

The projection for 2026 today — 5 years out from the 2021 base — is 83% HDD, 17% SSD. So it is not like flash is not growing. The overall TAM is growing and the flash percentage is growing, but HDD is also growing. Then there is the difference in cost, especially at large enterprise capacities. Today, for example, our leading products in hard drive are 22 and 26 terabyte drives. At those kinds of densities, the cost difference is still almost 8x to 9x. And to replace that kind of volume is going to take a long time.
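The two-parallel-lines point can be made concrete with a toy calculation. The share splits (88%/12% for 2021, 83%/17% projected for 2026) come from the discussion above; the absolute exabyte totals are made-up assumptions for illustration only.

```python
# Toy model of the data-center storage mix quoted above. The share splits
# are from the transcript; the exabyte totals are invented placeholders.

def mix_eb(total_eb, hdd_share):
    """Split total shipped capacity (EB) into (HDD, SSD) exabytes."""
    hdd = total_eb * hdd_share
    return hdd, total_eb - hdd

hdd_2021, ssd_2021 = mix_eb(1000, 0.88)  # assume ~1,000 EB shipped in 2021
hdd_2026, ssd_2026 = mix_eb(2500, 0.83)  # assume TAM grows to ~2,500 EB

# Both lines grow even as SSD share rises -- "two parallel lines",
# not a replacement story.
print(f"HDD: {hdd_2021:.0f} EB -> {hdd_2026:.0f} EB")
print(f"SSD: {ssd_2021:.0f} EB -> {ssd_2026:.0f} EB")
```

With any plausible growing TAM, both absolute capacities rise even though the SSD percentage increases — which is the speaker's point.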

So in my mind, these are two parallel lines. They're going to stay parallel at least through the end of this decade. Flash is clearly growing — you're going from 12% to 17%, maybe 19% — but you are not becoming 70%, 80%. Both will grow, because both are going down in cost, capacity is growing, and overall data needs are growing. SSD will remain mostly in fast applications; HDD will always be in colder, large-data applications.

Nam Hyung Kim

Okay. But the critical reason is not just the price difference, right? I think SSDs have some endurance issues. So it is beyond price parity, right?

Dr. Siva Sivaram

It's not that. No, that's not fair to the SSDs. If you were to take, for example, the number of drive writes that a hard drive sees and apply that to an SSD, the SSD would live the same amount of time. Capacity available at a given cost is the important factor, and how fast HDD is also growing. HDD capabilities, the existing fleet and incumbency — those are all very helpful for HDD. People know how to run with HDD. Last year, AWS said in a keynote at its major conference that the king of storage is HDD. That's the way even the enterprise and data center people are still thinking.

Nam Hyung Kim

Okay. Okay. Good point. And then, we've been seeing many emerging SSD technologies recently, such as computational storage — like SSD plus SoC. With rising AI and high-performance workloads in the data center, what's your view on SSDs in the future? Do you think today's SSD technology needs to change? What do you think can change in 5 years for SSDs in the cloud space?

Dr. Siva Sivaram

Yes. Just as markets grow, there is segmentation in [indiscernible]. Each big cloud player uses SSDs in their own unique way. Someone uses them just for search, someone just for streaming, someone just for database — they are different, and each manages their fleet very differently. But more interestingly for us, there are some vectors where things are changing. Massive SSDs are a new category.

So now I can come and say I have a 50, 60 terabyte SSD — that's one category. Ultrafast SSDs with very low latencies — we are now starting to talk single-digit latencies — are a different kind of SSD. Then there is the computational storage you are talking about, where standard functions like encryption, sorting of data, or video format conversion can be embedded closer to the data, as opposed to having to move all the data back up to the compute and do it there. So you are now starting to see more compute functionality in the SSD.

Now you have a question: do I do it in each SSD, or do I put the SSDs in a fabric and have a large compute engine close to an array of SSDs? These kinds of configurations are also growing. So SSDs in a fabric, SSDs with different attach, very large SSDs with lower I/O speeds, extremely fast but smaller SSDs — many different segments of SSDs are starting to come in. SSDs used for storage versus boot SSDs used in compute — many kinds of SSDs are coming up.

Nam Hyung Kim

Yes. Okay. Okay, got it. And then let's talk about CXL. I think investors also have a lot of interest in CXL — this is a new thing coming. There's been a lot of discussion on CXL, but the public seems more focused on the DRAM side; it is a great way of pooling DRAM for different hosts. Can you explain in detail how CXL can benefit the NAND industry and WDC? When do you think CXL will contribute to your business meaningfully?

Dr. Siva Sivaram

Yes. CXL, by the way, is an area of immense interest for us. As you said, CXL now allows accelerators — whether data processing units or TPUs or whatever — to directly access DRAM in a coherent fashion. CXL 1.0 is just starting to ship, CXL 2.0 is already out there and people are starting to figure it out, and CXL 3.0 is coming in the next couple of years. What it allows us, first and foremost, is tiered memory. Pooling of memory and tiering of memory is the first step that CXL enables, which means we automatically think: okay, if I had low-latency flash, can I make that into a tier buffered in front by DRAM?

So can I have a larger DRAM cache with an SSD cache behind it? SLC NAND is still relatively cheap. So you can have, for example, 1 to 2 microsecond SLC flash — can I put that as a tier behind DRAM? The DRAM, at a nanosecond or so, gives pipelined access, and behind it sits a slightly slower SLC flash tier that still has very, very fast access. That is an important application coming up very soon.

But I think what we are more excited about is that other non-DRAM, nonvolatile memories will now become popular. Because when you have other memories developing alongside flash, you get a very fine gradation in the way memory can be used. There's SRAM, there's DRAM, there are other fast memories, plus SLC, plus MLC — you can have a full gradation of storage to use. So we are very excited about CXL 2.0 and 3.0. I don't expect it to be in volume revenue until probably 2024, 2025.
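The tiering argument above can be sketched as a back-of-envelope latency model: a DRAM tier fronting a fast SLC flash tier. The roughly 100 ns DRAM and 1-2 µs SLC figures follow the discussion loosely; the hit rate is a hypothetical illustration, not a measured number.

```python
# Back-of-envelope model of a two-tier DRAM + SLC-flash memory over CXL.
# Latency figures are loose interpretations of the discussion; the hit
# rate is an invented assumption.

def effective_latency_ns(dram_ns, flash_ns, dram_hit_rate):
    """Average access latency of a two-tier memory for a given hit rate."""
    return dram_hit_rate * dram_ns + (1.0 - dram_hit_rate) * flash_ns

# With 90% of accesses served from DRAM, the average stays in the hundreds
# of nanoseconds -- far below paying the flash latency on every access.
avg = effective_latency_ns(dram_ns=100, flash_ns=1500, dram_hit_rate=0.9)
print(f"{avg:.0f} ns average access latency")
```

The point of the tier is that a modest DRAM cache hides most of the flash latency while the flash supplies the bulk of the capacity at a much lower cost per bit.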

Nam Hyung Kim

Okay. Okay, got it. According to your slides from your Analyst Day a few months ago, the WD-Kioxia JV has the smallest die size and best CapEx efficiency. Can you share how you achieve this, and what the key to doing it is?

Dr. Siva Sivaram

This is an important area. Everybody always talks about having more layers. People come and say, I got 176, I got 232, and so on. We don't think that way. For us, more layers is bad, not good. You want to get the same bit growth and cost reduction with the fewest number of layers. And we do that by thinking of the Z axis — the number of layers — as a multiplier on the X and Y axis shrink. If I can get more X and Y axis shrink, then multiplying by the Z axis gives me an overall multiplication. The Z axis, the number of layers, is very expensive.

Nam Hyung Kim

Right.

Dr. Siva Sivaram

It's linear: every layer adds cost. You need to etch deeper, you need to do more depositions — there's no leverage there. The leverage comes from X and Y shrink. So we are very conscious about X and Y shrink. We make sure we remove overhead; we make sure we are at the maximum memory-hole density possible. So we do X shrink, Y shrink, hold Z carefully at a fixed number, and then logical scaling — I want to make sure I go from 2 to 3 to 4 to 5 bits per cell. If you keep that mentality, you come back and optimize for the lowest CapEx per percentage increase in bit growth.
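The multiplication the speaker describes can be written down directly: layer count (Z) multiplies the lateral (X/Y) shrink and the logical (bits-per-cell) scaling. All the factors below are hypothetical, chosen only to illustrate the trade-off, not actual node parameters.

```python
# Sketch of the scaling arithmetic above: Z (layers) acts as a multiplier
# on lateral density gain and bits-per-cell gain. Factors are invented.

def bit_growth(xy_density_gain, layer_gain, bits_per_cell_gain):
    """Relative bit-density gain from one node to the next."""
    return xy_density_gain * layer_gain * bits_per_cell_gain

# Same overall bit growth two ways:
# lean on lateral shrink and TLC->QLC, with only a modest layer increase...
balanced = bit_growth(xy_density_gain=1.2, layer_gain=1.3,
                      bits_per_cell_gain=4 / 3)
# ...or get it from layers alone, which costs linearly (deeper etch,
# more depositions for every added layer).
layers_only = bit_growth(xy_density_gain=1.0, layer_gain=2.08,
                         bits_per_cell_gain=1.0)

assert abs(balanced - layers_only) < 1e-6  # equal bit growth, very different CapEx
```

Because each added layer carries roughly linear cost while lateral shrink and logical scaling multiply for little incremental CapEx, holding Z down is the CapEx-efficient lever — which is the strategy being described.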

Nam Hyung Kim

Okay.

Dr. Siva Sivaram

So for the same bit growth increase, I want to minimize the CapEx. And that is done by maximizing reuse — I try to reuse from the previous node as much as possible. If I go too high, then I cannot reuse; I need the latest tools. This is how we have dramatically reduced the CapEx compared to anybody else in the industry. I showed this slide at the Investor Day: we are probably almost 30% lower than the nearest competitor.

Nam Hyung Kim

Okay. Okay, got it. And then, how much do we have left on current charge trap 3D NAND technology? I mean, each company has a different road map, and I think you point to a very interesting angle here. But do you think 300-, 500-layer 3D NAND is possible? People keep increasing the number. So where are we in terms of layers, and what will remain challenging from here? I mean, people are talking about over 200 layers, 300 layers. One of your competitors even talks about 1,000 layers, so …

Dr. Siva Sivaram

So, right now — look, in all technologies, you have a long-term vision of what the physics limitations are.

Nam Hyung Kim

Right.

Dr. Siva Sivaram

But then you focus only on the next two generations, because there are so many engineering problems to be solved in the short term. The technology I'm going to develop for next year, the following technology, and maybe the one after that, we have some vision for. For the rest, it's, "Hey, do I see some major problems there?" And we don't. In 3D NAND, we don't see a major problem — we've done two tiers, and we can now go to three tiers; that can be done.

Nam Hyung Kim

Okay.

Dr. Siva Sivaram

With respect to lateral shrink, we have now seen we can do X3, X4. We have talked about X4.5, X5. These are also in the cards. And we also see, as we said, wafer bonding in the future.

Nam Hyung Kim

Yes, okay.

Dr. Siva Sivaram

There are more tools still left in the toolkit.

Nam Hyung Kim

Yes, yes, yes. And the other thing I noticed recently — I was surprised that NAND wafer processing time has reached about 6 months today in the case of 160-, 170-layer products. It seems even worse than DRAM in terms of the growing number of wafer process steps. I remember NAND wafer processing time being much shorter a long time ago, and I wonder how NAND suppliers like WDC will try to deal with this issue. In DRAM, they use EUV to cut down some process steps. So how will you deal with increasing wafer processing time from here?

Dr. Siva Sivaram

Let me be objective about this. In logic, quick cycle time is important for prototyping; you need to make sure you get it done. In memory, as long as the fab runs full, cycle time was traditionally not so important. But of course, for yield improvement and defect reduction, cycle time does matter. During development, cycle time is very, very important, because you want to ensure that for the changes you make, you can get data back quickly. So we have developed a lot of techniques around it.

How do I process just the CMOS and simulate the back end to get data out of it? How do we process just segments of the array and make sure we get the data we need? When I put all that together, even now our cycle times are on the order of 80 days, 90 days — so 3 months is where most of the cycle time is. Now, having said that, your point is correct. Later on, when we go to wafer bonding — when the industry goes to wafer bonding — you can do the two in parallel.

The CMOS gets processed separately, the array gets processed separately. So you cut the processing time roughly one-third/two-thirds: the array takes two-thirds of the time, the CMOS takes one-third, they are done in parallel, and then they come together, right? There are many such techniques. But we do need to worry about it, because the number of steps is long; the theoretical time it takes to do the steps is very long. As you may imagine, on a 238-layer device, I have to deposit 238 pairs of oxide, nitride, oxide, nitride, right? So it takes a long time.
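The parallelism argument above reduces to a critical-path calculation: with wafer bonding, the CMOS and array wafers run concurrently, so total cycle time is the longer leg plus the bonding step. The one-third/two-thirds split is from the discussion; the 90-day baseline and 5-day bonding time are assumptions for illustration.

```python
# Sketch of the wafer-bonding cycle-time argument: CMOS and array wafers
# are processed in parallel, so the critical path is the longer leg plus
# bonding. The 90-day baseline and 5-day bond step are assumptions.

def bonded_cycle_days(sequential_days, array_fraction=2 / 3, bond_days=5):
    """Critical-path cycle time when CMOS and array wafers run in parallel."""
    array_days = sequential_days * array_fraction
    cmos_days = sequential_days * (1 - array_fraction)
    return max(array_days, cmos_days) + bond_days

# A ~90-day monolithic flow drops to roughly the array leg (~60 days)
# plus bonding, instead of the full sequential time.
print(f"{bonded_cycle_days(90):.0f} days")
```

Under these assumptions, the flow is gated by the array leg, so the saving is roughly the CMOS leg's duration minus the bonding overhead.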

Nam Hyung Kim

Yes. You briefly touched on wafer bonding. These days this is a hot topic, because it could be the next big move in this industry. So can you explain what wafer bonding is, for those who are not familiar with the term? What are the benefits and challenges of this technology? And what is the timeline for NAND wafer bonding in the industry?

Dr. Siva Sivaram

So wafer bonding is not a new technology. Wafer bonding has been in very high volume production for image sensors: CMOS image sensors have been processing the sensor array and the CMOS logic separately and bonding them together.

Nam Hyung Kim

Right.

Dr. Siva Sivaram

What you do is have exposed copper on the facing surfaces of both wafers. When you bond them together, the copper on either side attaches and provides the electrical continuity. So now, instead of one wafer being processed per die, you have two wafers being processed. In NAND, the most important problem is the CMOS: when you make the transistors and then subject them to all the back-end thermal processing, all the heat cycles at the back end, the transistors degrade.

What we do now is process the CMOS separately, so you can have very high density, very advanced, narrow CMOS, and make the array — the 3D superstructure — separately. You make the parking lot separately, the skyscraper separately, join them together in one go, polish one side of it, and that's how you get the device. The technology is known, but adapting it to high-volume manufacturing like NAND is very difficult. It's very advanced in how you lay out, how you ensure the two join with perfect yield, how you allow for redundancy. It is going to be mainstream very soon — we think this is one of those big steps NAND will take very soon.

Nam Hyung Kim

So what does it mean for the cost structure and CapEx? For example, some of your competitors may move to wafer bonding at a later stage, and some may have moved already. Do you think the [indiscernible] the better?

Dr. Siva Sivaram

Like you know it, right? Each one …

Nam Hyung Kim

What happens to the cost structure when you go to wafer bonding?

Dr. Siva Sivaram

Each to their own. We know our market, we know our applications. With wafer bonding, as you can see, you add one more wafer's cost — obviously, you've added one more wafer's cost. However, in terms of yield, and in terms of how fast I go to two tiers versus three tiers, those kinds of things may make the overall cost cheaper, so it's worth spending the extra wafer cost — let's say $50 extra on a $3,000 wafer — because you may save somewhere else. This analysis will be done by each company separately, on their own, for their applications and their densities. If you are making lower-density products, you may not want to do it; if you make high-density products, you may want to do it early, et cetera.
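The trade-off described above is a simple cost-per-good-die comparison: bonding adds an extra wafer's cost (the $50-on-$3,000 figures are taken loosely from the discussion), but a yield improvement can more than pay it back. The die count and yield numbers below are invented for illustration.

```python
# Toy cost-per-good-die comparison for wafer bonding. Wafer-cost figures
# follow the discussion loosely; die counts and yields are invented.

def cost_per_good_die(wafer_cost, dies_per_wafer, yield_frac):
    """Effective cost of each good die off a wafer."""
    return wafer_cost / (dies_per_wafer * yield_frac)

monolithic = cost_per_good_die(wafer_cost=3000, dies_per_wafer=500,
                               yield_frac=0.85)
bonded = cost_per_good_die(wafer_cost=3050, dies_per_wafer=500,
                           yield_frac=0.92)  # +$50 wafer, better yield

# In this example the yield gain outweighs the added wafer cost.
assert bonded < monolithic
print(f"${monolithic:.2f} vs ${bonded:.2f} per good die")
```

This is the analysis each company runs "on their own": whether the extra wafer cost is recovered through yield, faster tier transitions, or density gains for their particular product mix.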

Nam Hyung Kim

Okay, got it. I'm excited to see this happen in a few years. And then, the NAND [indiscernible] seems more challenging than DRAM, mainly because there are more players; certainly, it is a very tough time today. Many believe the market needs to be further consolidated. What's your view on this? What needs to happen for the market to stabilize? Do you think NAND can ever be like DRAM, which generates like 40%, 50% normalized operating profit margin?

Dr. Siva Sivaram

Nam, you've been in this business a long time, and I've been in this business a long time. You remember how bad DRAM used to be — an extremely cyclical market that could not sustain so many players. NAND is getting a lot more streamlined: the fact that SK hynix bought Intel's NAND business took one player out of the marketplace. Various export controls are rationalizing the market further as well. But even better, I'm seeing the different players play a much more rational game, rather than just trying to go grab market share.

Not everybody is trying to grab market share; there is some rationality. You saw how many people announced reductions in CapEx spending and reductions in wafer starts. Every player is starting to do it. It is a much more rational marketplace. Should there be more consolidation? As a player, I would always like there to be more, but that's not for me to decide. It is an open marketplace, and it will happen when it happens. But in the long term, rationality among the players is the best way for this industry to continue to be more profitable.

Nam Hyung Kim

Yes, yes. And then, I know WDC has been doing well recently with enterprise SSDs for cloud customers. However, some of the cloud players like AWS are making their own [indiscernible] with third-party controllers — a DIY type of enterprise SSD. I know you can still do business at the NAND wafer level, but I wonder whether cloud companies building their own SSDs can devalue NAND suppliers' business. Clearly, today only a few hyperscalers do this. So what should we make of cloud companies building their own SSDs? Is this going to be a threat to SSD suppliers?

Dr. Siva Sivaram

We look at it slightly differently. There have always been a lot of players trying to make SSDs with somebody else's NAND. There are Taiwanese players that do it, there are high-end players that do it. They have always taken a third-party controller and their own firmware, bought somebody else's NAND, and integrated them into an SSD. It has always been around. Cloud players, of course, have a lot more money and a lot more resources, and they also know exactly what their workloads are. So they would like to make their own, and that makes sense.

However, as you know, in NAND the most important thing is that the technology changes every 18 months. Every 18 months, we introduce new changes to the process, new changes to the device, new architecture changes. So it becomes important for the cloud player to follow very, very closely and then adapt to it; otherwise, they can't be cost competitive. If they are willing to do that and willing to buy NAND from us, we are happy with that too, because then they have to pay for differentiated NAND.

In the end, it will reach an equilibrium. Somebody who runs very, very high volume on one kind of SSD may just do it themselves, and we will supply NAND to them. For others, we will continue to make enterprise-class SSDs. Even today, each cloud player asks for something slightly different — it is not a standard SSD you supply to everybody. Each one wants their little twist on the SSD, so we are customizing for them anyway. Whether we are the customizer or they are the customizer, in the end they have to pay for that.

Nam Hyung Kim

Okay, got it.

Dr. Siva Sivaram

We will work with them on this.

Nam Hyung Kim

Okay. And then a couple more questions before we go into Q&A. Auto, including the data center behind it, could potentially be a huge demand driver for memory in the future. Can you talk about the opportunity in the auto space for a memory supplier like WD in a bit more detail? And when do you think auto will take a meaningful portion of the NAND market?

Dr. Siva Sivaram

Yes. I want to be very clear about what we mean by auto. In the transportation sector in general, cars by themselves are not a big consumer — there is a fixed number of cars being produced. But autonomy is a big consumer of NAND. As every means of transportation becomes more autonomous — whether long-haul trucks, fleets of delivery trucks, or cars — and more autonomy is included, then given the number of sensors and the processing involved, we think there are multiple ways this is going to increase storage.

Storage in the vehicles themselves, storage in local data centers — meaning edge data centers — and then storage in their own central cloud data centers: in all of those places, storage is going to increase. So we are going to be directly tied to the degree of autonomy in the vehicle. If you can predict for me how much autonomy there will be, I can plot for you how flash in the transportation sector goes up.

As you know, autonomy has been delayed again and again. We said that by 2020 everything would be autonomous, then we said by 2025 — autonomy is playing out very differently than the way we thought it would. We are no longer talking about fully self-driving cars all the time; we are talking about hybrid autonomy. And as those things develop, we will be a big presence. That's where the big growth will be.

Nam Hyung Kim

Okay. One of your SSD competitors recently talked about SSD as a service. An SSD supplier can study customers' storage usage and provide cloud customers a service, so customers can reduce CapEx with a more subscription-type business. It would be a new business model for sure. In this case, memory suppliers could capture higher value through recurring revenue. Do you think this can make sense? I know hyperscalers like AWS even make their own SSDs — what's your opinion on the SSD-as-a-service business model?

Dr. Siva Sivaram

Clearly, we want to experiment with many different business models. But the one thing we never want to do is compete with our customers. One of our hard drive competitors has tried this model — trying to run storage as a service in competition with their own customers. We have not done that, and we won't. We will work with our customers under any business model that works for them. If anything, the big cloud customers clearly have stronger balance sheets than we do to carry it, but we will explore. If there are second- and third-tier players that would like to do it with us, we will obviously explore those opportunities, both on the hard drive side and on the flash side — it's the same idea. And in both cases, we know a lot more about our drives than anybody else does.

In fact, on our hard drives we offer a fleet monitoring service, where we can get the real-time health of our fleet and apply that to extend drive life, et cetera. These models have always been around. How we monetize them is going to differ by customer and by the service agreements we have with them.

Nam Hyung Kim

Okay, okay. In the smartphone market, NAND content has risen to like 512 gigabytes, in some cases 1 terabyte. How much room to grow do you think we have in consumer devices like smartphones? Clearly, I don't think I need more than 1 terabyte. What's the long-term outlook and growth prospect in the smartphone market, which accounts for like 40% of NAND demand today? If the smartphone market becomes ex-growth, what's going to replace the smartphone as your main demand driver?

Dr. Siva Sivaram

Yes. There are two things you said, and I want to make sure we think through both of them. I remember when I first got my 16 gigabyte phone — I thought, wow, 16 gigabytes in my cellphone — and now I buy a terabyte. Same on my laptop: I'm much older than you, and my first laptop had a 20 megabyte hard drive. Now there are people who buy 16 terabyte hard drives for their desktop computers. So I'm never going to put a ceiling on how much storage goes into each device.

Now, having said that, you're absolutely right: mobile phones as a market are starting to mature in terms of unit growth. But the growth of data, on the other hand, has not slowed down. Whether at endpoint devices, edge devices or the cloud, overall data is still continuing to grow. Cloud growth is very big — as you know, we continue to project large cloud growth.

Nam Hyung Kim

Okay, okay.

Dr. Siva Sivaram

Edge data is growing very, very fast. And of course, at the endpoint there are so many IoT devices that need storage — whether surveillance, the automotive applications you just talked about, smart video, industrial, smart TVs — multiple new consumption mechanisms are coming up everywhere. And consumer devices are still continuing to grow, whether USB drives, SSDs, or microSDs. So the breadth of opportunities is there to replace any one market that stabilizes.

Nam Hyung Kim

Yes. One last question — maybe a near-term one — before we go to Q&A. Unlike competitors like Micron, Kioxia and so on, WD hasn't mentioned any production cut. So I just want to ask why, when overall demand is obviously very weak and supplier inventories keep rising to very high levels at this time. What are your criteria on production cuts, I'm just curious?

Dr. Siva Sivaram

Yes, this also depends on each individual vendor's particular situation. We have announced big CapEx reductions, and we announced them early enough that 6 or 8 months hence there will be a reduction in production output. We have also talked about slowing down the ramp of BiCS6, which reduces bit growth. We have not made immediate cuts, but we have our own ways of rationalizing supply. It depends on my short-term demand, where we can place bits short term and long term, and my current inventory position — those things help me make that decision. I know our partner has already announced a reduction in wafer starts; we have indicated other means of reducing bit growth.

Nam Hyung Kim

Okay, got it. Okay. My questions are over now, and I will let investors have a chance to ask questions. Jim, do you have any questions?

Unidentified Company Representative

Yes. So we had a few come in. One, just to clarify — you obviously covered the auto storage question a little earlier, but we had an autos question come in ahead of that, and there were just a couple of points to clarify. One, which I don't think you covered: what portion of SSD is autos today? And second: do you have to have automotive-grade qualification to sit in the [indiscernible] chain? So that's the first one.

Dr. Siva Sivaram

On that first question, Jim: automotive has different applications under the hood versus in other parts of the car. We do automotive certification of NAND, and that is a substantive part of our revenue. We don't break that revenue out and talk about it explicitly, but we do participate in that marketplace. Most of those devices are eMMC devices — automotive still does not really use PCIe SSDs in most segments. We have a lot more eMMC and UFS drives going into automotive, and we are qualified and shipping those in volume to multiple customers around the world.

Unidentified Company Representative

Great. Thank you. And then a second question from the floor. Again, you touched on it just at the end here, but there was a question on Kioxia, who obviously announced they started cutting wafer input. Does that mean there's no change on your side on wafer starts, and the underutilization charges in the fab apply only to Kioxia?

Dr. Siva Sivaram

On that second question: the underutilization charges will apply only to Kioxia, not to us. Now, having said that, a reduction in wafer starts is always an option for us; we will apply it at the appropriate time if needed. When and if needed, we will do the same. But we have gone a different route: as I said, we have reduced overall CapEx substantively, and we have reduced the rate at which we convert bits from BiCS5 to BiCS6. Both of those lead to lower bits going out into calendar '23.

Unidentified Company Representative

Super. That was — that’s it in terms of questions from the floor. Thank you.

Nam Hyung Kim

Okay. So maybe a final question to Gabriel. A lot of investors are worried about the [indiscernible] environment. So what's your message to investors? Any update — anything you want to add?

Gabriel Ho

Yes. Thank you. On market conditions, we are not updating or reiterating the guidance today. But [technical difficulty] since he came on board 2.5 years ago, he has separated the two businesses. And I think that [indiscernible] actually from the HDD side, and [indiscernible] on the flash side team, has added a lot of energy to the businesses.

If you look at this past fiscal quarter, the September quarter, it was an incredibly difficult environment, and we were still able to deliver revenue at the upper end of the guidance range. On the product line side, one thing we would like to highlight is that the 26 terabyte hard drive is a result of that focus and the product road map improvements we have made there. Near term, the business environment is challenging, but as demand continues to improve, maybe in the later part of the fiscal year, we see the opportunity to improve gross margin further.

And then on the flash side, we also have like 5 pillars in which to place the bits. That's part of the reason why, at this point in time, we are not reducing wafer input: we see a lot of markets — client SSD, the retail side, our emerging position in enterprise SSD, along with gaming — these are all pillars where we are able to place the bits and continue to generate cash flow. So those are some points to consider.

Nam Hyung Kim

Okay. Thank you. I think at this time I'd like to wrap up. Thank you for joining this call, everyone. Dr. Sivaram and Gabriel, thank you so much for your time — we certainly learned a lot today. Okay, the next session is NVIDIA with Mr. Ian [indiscernible]; Brett Simpson will host the session. Let's have a quick break, and please dial back in. The next session will start at 11 AM.

Question-and-Answer Session

Q –

[No formal Q&A for this event]
