Arista Networks, Inc. (ANET) Wells Fargo 6th Annual TMT Conference Transcript

Arista Networks, Inc. (NYSE:ANET) Wells Fargo 6th Annual TMT Conference November 30, 2022 5:30 PM ET

Company Participants

Anshul Sadana – COO

Conference Call Participants

Aaron Rakers – Wells Fargo

Aaron Rakers

So thanks for joining us. Looking forward to the conversation. I wanted to just start by talking a little bit about the evolution of Arista recently. If anybody's not seen the slide deck from the Analyst Day — they hosted a great Analyst Day event just a month or so ago — one of the messages I felt coming out of the company was that the company started as a product company, right, and you're evolving into a much bigger platform company.

Question-and-Answer Session

Q – Aaron Rakers

So, maybe at a high level, you can help us appreciate what that means and what that evolution looks like as we think about the next five years, or whatever the time horizon might be, on that platform journey for the company.

Anshul Sadana

Absolutely, Aaron. When we started, we were very focused on just data center switches, and all of you largely thought about us as this hardware box company. We sell boxes with some software on it, and that software is fairly complex, right. It's about EOS — the Extensible Operating System, our operating system — all together at least 25 million lines of code today, and it's highly programmable and a very, very high quality stack.

So when you look at IT operations, networking, as part of IT operations, is no longer just a silo. It has to be fully integrated with everything that the teams are doing, because provisioning and automation and security and business outcomes are all interrelated.

So the way we've developed our stack has now got to a point where EOS is truly becoming a platform. You can run it on any of the hardware we have, but in the end, you can program it. You can use it the way you want. CloudVision, which is our automation stack, is not just about automating the network. Now we are helping enterprises automate the automation. There's more integration — northbound integration — happening between CloudVision and all the other provisioning systems that exist.

So getting into the deep glue of enterprise IT, that's where this is truly becoming a platform. It's easy for people to deploy it, they love the product already, and I think there's a long way to go from here into many other use cases that are all adjacent from right here.

Aaron Rakers

Yeah, and you mentioned CloudVision, but on the EOS stickiness of what you're doing — everything wraps around EOS, right, it's an unmodified Linux kernel — how far can you take this? Maybe talk a little bit about some of the adjacencies that you're most excited about right now.

Anshul Sadana

I would say one thing about EOS that is maybe not always appreciated is how we get the high quality. The architecture is already great — it's foundational to everything that we build — but the way networking has worked in the industry for decades is you write a feature and then you throw it over the wall to a test team, and they test it and they find some bugs. They find the bugs, development fixes the bugs. Then you test a little bit more. Then you ship the product to the customer.

The way we think of this is, how do you build a test framework where the product always works? So instead of having thousands of QA engineers, the size of the test infrastructure — the QA lab at Arista today — is eight megawatts. Our software development team writes all the test cases on a fully automated infrastructure. We've scaled that to a point where we run about 150,000 test cases every day on every product, on every release we've ever shipped, and that keeps on improving on its own, because if there's a bug, you are likely to find it, and then you add the test case and that bug will never escape again to our customer. Very different than what our competition has been doing for two or three decades.

When you take that — customers love it and it just works — now look at other use cases. We constantly get pulled by our customers into other segments, saying, if I can run EOS and CloudVision, why can't you do campus networking for me? That's really how we got into campus: there was enough pull from customer interest there. The same thing is now happening a little bit in the routed WAN environment. Not just inside the building, but how do you leave the building and how do you go out?

There are good opportunities there. We've already been doing some of that work in routing with our cloud customers and some of the service providers; we can apply this to enterprises as well. The same is true in applying other layers on top — for example, Awake and NDR is a massive strength in our product portfolio.

That's never going to be the primary revenue driver, but what we do with our network detection and response system is build a network where you can detect threats based on what's going through the network, and that's just an extra layer of protection that today doesn't exist. The customers love that as well, and that's integrated into the campus products from day one.

So when you buy an Arista campus switch, it has threat detection already built in. You turn on a license and you get that feature, and so on. So I think there's a lot to be done there, and that's how we grow in the adjacencies too.

Aaron Rakers

Yeah. When you think about that competitively, the competitors are still fragmented across their product portfolios. So from a competitive perspective, the durability of what you're talking about appears extremely strong, right? You don't see the competitive landscape being able to moderate or change their path forward very effectively to compete against that single operating system approach.

Anshul Sadana

I think that's right, and we have some good competitors as well. So it's not as if this is an industry that is lax and there's no competition — it's very intense competition. But to a great extent, you're right. I think the competition's vision is always ahead of their execution. For us, our execution is ahead, and we want to keep it that way.

Aaron Rakers

So shifting gears a little bit — I don't think you'd ever get into a discussion with a financial analyst who wouldn't ask you about the demand environment. One of the things that we're seeing in networking is this question of networking spend seeming, so far, fairly resilient, while there's been some concern and choppiness in overall macro dynamics and stuff like that. So, as you look at it, how would you characterize the demand environment, and certainly segueing that into the kind of visibility that you see from the customers as you engage with them?

Anshul Sadana

Whenever there's a crisis, everyone has to prioritize, and today there might be a macro event already happening. But in the end, companies cut back on free lunch and drinks and free laundry, not on networking, because networking is essential. You cannot really run your business and your workloads without enough bandwidth and the right connectivity and security and so on in between.

We are already seeing that. Our customers are telling us this is critical to the infrastructure. In addition, I think networking is this one place where we glue everything together, and if it becomes a bottleneck, it significantly reduces the efficiency of the entire infrastructure. If you're spending so much on compute, storage and other things, you might as well spend a little bit more to have ample bandwidth in between, the right connectivity, the right segmentation, so that you don't disrupt any of your business flows. So we see some of that.

The same applies to cloud companies. The cloud companies might slow down in how many racks they're adding to their data centers depending on demand, but they do not slow down in DCI build-out. If they have to build a new region, they will do the DCI build-out — the data center interconnect — upfront to get to the right outcome for that region. So we are seeing healthy demand as a result of that for networking, and specifically our products.

Aaron Rakers

Yeah, and how about that same kind of question applied to the enterprise piece of the business? That's one interesting thing in the Arista story — a lot of people ask you about the cloud, and I'm certain that we'll talk more about the cloud, but the enterprise momentum that you've seen has been fairly remarkable. How do you characterize the enterprise demand environment that you're seeing right now?

Anshul Sadana

So look, for enterprise, there are two big pieces of our business. One is the enterprise data center. It continues to be healthy. We've seen good growth, and we continue to see good growth over there. Then you have campus, which we call an adjacency, but it's growing really well. It's starting to near roughly 10% of our annual revenue. So at some point in the near future, it may not be just an adjacency; it will become part of our core business. It's growing very, very well.

Because of the way we're exposed to enterprises — through large enterprises, which I think either feel a macro event later in the cycle or simply don't cut back as much, and through campus, where we have such little penetration and good growth — we won't feel the macro upfront. I think we'll be the last company to find out there's a macro event, not the first one. So demand has been good, and sales teams are opportunistic, right?

They'll find the customer that is still willing to spend a lot of money and chase that as the best opportunity. And what's happening in campus is, there was this debate during COVID on whether you upgrade or replace the campus or not, because people were not coming back to the office. But now people have realized that even if you come back to the office one day a week, you still need that network to work, because when you sit there, what are you going to do? You're going to video conference all day with all of your colleagues all over the world. So you have to invest in that network and infrastructure and upgrade no matter what, which is what many enterprises are doing as well.

Aaron Rakers

How about — I don't know if you guys have ever talked about this, but I want to ask the question anyway — how much of your business comes from what you would characterize as infrastructure refresh or replacement versus net-new footprint build-outs? Obviously that applies to the cloud vertical in particular, but I'm just curious how you think about that mix of business for Arista today.

Anshul Sadana

This is something that's extremely hard to measure, for us or for anyone else, so I'll put that out as a caution. Do not try to infer this into some mathematical model that immediately results in, this is what the spend is going to be. It varies quite a bit by customer as well. But the way to think about the cloud — and this applies to the enterprise as well — is, for the enterprise I think the math is fairly straightforward: data centers try to refresh every five to seven years. If there's a supply shortage, it becomes seven years. If there's no shortage, they try to do maybe five, six years. But for at least that duration, they're not going to replace prior to that.

In the campus environment, people have sweated their assets much longer. There's one analyst report we've seen where the average life of a campus switch that's deployed somewhere in the world today might be between 10 to 12 years. It's been sitting out there that long.

Cloud, on the other hand, does refresh a little bit faster — they need the efficiency too. But the way to think about the cloud is, you have DCI, the data center interconnect, and the backbone, whose primary job is to send as much traffic as possible over longer distances. Long distances could be 100 kilometers or thousands of kilometers. So when there's a newer technology that can get you from 100 to 400 gig, they deploy that quickly. But if they've just deployed 100 gig last year, they're not going to retrofit that immediately with 400 gig, as an example. They're going to wait at least three, four years before they even have cycles to go back and revisit that site.

But compute is a bit harder to understand. On average, cloud companies would also like to refresh their compute every five years or so. So on a simple math basis, one-fifth of the infrastructure should get upgraded every year. But what happens is, whatever is high-end compute being sold today, two years from now will be sold as mid-range compute, and five years from now it will be sold as low-end.

There's a lot of reuse that happens, and depending on the SKU and the architecture and whether that reuse succeeds or not, the models actually vary a lot. If on top of that you put supply shortages, refreshing became wishful thinking — I would like to refresh, but there's such a shortage. So as a result, some of these things got pushed out. It does vary by customer, but that's the overall goal they're trying to get to: refresh roughly every five, six years or so.

Aaron Rakers

So, shifting to the next topic — and I think you alluded to this a little bit in that response — you tend to get asked a lot about 400 gig. I'm curious about Arista's position in 400 gig. I think you gave some market share metrics at the Analyst Day, but maybe help us appreciate your ability to take share in the 400 gig cycle. Where do you think we're at in the 400 gig cycle? And then I'm definitely going to ask you about 800 gig and 1.6T after that.

Anshul Sadana

Right. So, a lot of assumptions in that question. I'm really glad you're asking this, because there's this perception that there's a 400 gig cycle. What if there is no 400 gig cycle? Customers — the cloud — are deploying 100 gig in high volume. We showed this at the Analyst Day as well: 100 gig actually continues for the next five to seven years. It doesn't really slow down much.

On top of that, you have 400 gig for certain use cases, and those use cases today are data center interconnect or backbone, as well as AI. The big DCI build has been going on based on the availability of 400 gig products and ZR optics, things like that. AI is somewhat newer relative to DCI, but it's still starting to happen.

But customers will continue deploying more efficient 100 gig for a couple of years to come. So 400 gig really gets layered on top. It's not a cycle by itself; it's getting added on top. And we've done phenomenally well, I would say, in our execution with our key customers — our top cloud customers and some of the tier two clouds as well. They are extremely happy with us.

This whole notion that someone puts out an announcement, and just because they finally made a product, somehow they take away 100% of the share — it's just not true. We talked about the 25 million lines of code. A lot of those lines of code were written based on requirements from the cloud companies. It would take some of our competitors a decade to catch up to all of that and the automation and the APIs and the streaming telemetry and so on. But customers do want to be multi-vendor, and often that got confused for someone else being about to take away a lot of share.

We've done very well in 400 gig so far. I think the market analyst reports have been published up to the Q2 or Q3 results of this year, and we have the number one market share in 400 gig ports globally among the OEM vendors. There are two cloud companies in the US that build their own white boxes; they continue to do so with their own 400 gig products. So if you exclude that, we are doing significantly better than our competition, while at the same time maintaining a very strong share — the number one position, almost 40% plus market share, in 100 gig — and on top of that adding 400 gig.

So I think this is good execution by the entire team at Arista, especially with the cloud vertical and some of the high-end high tech customers.

Aaron Rakers

And as a segue off that answer, the question that always seems to come up is white box competition, right? You've been fairly candid in the past about how you see it evolving, and I think you just mentioned a little bit about the lines of code and how working very tightly with these cloud customers is an important attribute when we think about that white box risk competitively. Maybe for the audience, share your thoughts on white box. How do you see that competitively, if at all?

Anshul Sadana

This discussion has been on the table since our IPO. This was the number one risk flagged at our IPO — that somehow we'll lose our business to white boxes. What you have to understand is why these cloud companies use white boxes in the first place, and for every company, the decisions they've made, at the time they made them, were for different reasons.

Google did this in 2005. Guess what? There wasn't any competition in the market at that point. They looked at one networking company and asked them, hey, can you give us a very large network at the right price? They said, we can't, so Google built it on their own. Amazon looked at this whole space in 2010. In fact, they talked to us at that time, but we were a very tiny start-up, so they didn't really think AWS could run on infrastructure from a start-up. They decided they'd build on their own, and they have religion on vertical integration, if you didn't know.

So they like to build everything on their own — buy their own planes and ships and build their own switches when they can. Good for them, right? They can get the right results if they have the scale to put the investment into that. Come 2013, 2014, when Facebook/Meta had to make that decision, they had a different viewpoint. They said, you know what, the market is a lot more competitive now.

They talked to us. They said, let's partner, and you saw the result of that in the first switch that came out around 2017, 2018 with Tomahawk 3 — a product where essentially the two companies co-developed it together. They went from build versus buy to build and buy, and they've been extremely happy with the outcome, because they are multi-sourced and they get all of their requirements met to their data center specs.

During the supply chain crisis, they were so thankful that we were there for them, to get them the supply we could and the deliveries we could and so on. Look at Microsoft, our biggest customer. They've looked at the other cloud companies too and realized, if Arista is competitive and able to supply all this gear to them and meet every use case — from top of rack to spine, to DCI, to WAN, to edge and so on — then why go through the pain of building something on their own, only to not even be as competitive? They'd be somewhat behind, and it's not worthwhile.

So all these companies have made a decision that in today's time it makes sense not to build on their own, but to buy from the industry, because the industry is extremely competitive. But the ones who were building on their own won't easily go back to buying from the industry, because they're locked into their own stack, with their own software developed over 10, 15 years. As I mentioned, there are 25 million lines of code in EOS — guess what, these cloud companies have millions of lines of code in their own stacks as well. Who's going to port all of that work? Which is why I think this entire industry remains largely status quo.

There might be a plus or minus 5% shift here and there, but it's not going to be a massive shift in either direction. I think it's a misconception for anyone to think there's a risk. If anything — and I mentioned this on one of the earnings calls as well — at least one large cloud company, for a few use cases, not everything, but for a few use cases, is considering going from white boxes to buying from the industry. So if anything, it's actually going the other way, not more towards white boxes.

Aaron Rakers

You just answered the question I was going to ask, because I was thinking, why not the reverse? So you're seeing at least one hyperscale cloud customer...

Anshul Sadana

Maybe for a few use cases, because they have additional functionality that's required that doesn't exist in their internal stack, and it would take them too long to build it. At the same time, they will have to go through the process of converting and actually using products from the outside in places they've never done so before. So they have to change their controller logic; the upstream northbound software has to adapt to that as well.

We’ll see if that happens or not, but certainly I don’t see any of these companies saying, you know what, we’re done and will only build white boxes.

Aaron Rakers

Yeah. We talked about 400 gig…

Anshul Sadana

I want to add one thing here. We had lots of one-on-ones today, and this was the number one topic — this has been the number one topic for the last ten years. So, on white boxes: for some of our largest cloud customers today, we are working with them on their architectures for 2025 to 2027, and in places where we are deeply entrenched, we are working on questions like, how do you cool a 1,000 watt chip and still keep it efficient for the customer? How do you get the signal integrity on standard PCB technology for six, seven inches of traces at 100 gig and 200 gig, which are the next-gen speeds? And our customers are amazed by the contribution that our teams are able to bring to the table.

And as a result of that, they have no interest in trying to do all of this by themselves. Many of you don't see the discussions and meetings that are happening three, five, seven years out. We are in these meetings daily, which is why we are so confident that this customer base is not going to go back to white boxes. They actually need us to develop all of this and get there as quickly as possible.

Aaron Rakers

And that’s extremely interesting. I’ve got thirty-five more questions and a little under nine minutes. So we’re going to try…

Anshul Sadana

I thought we'd only talk about white boxes.

Aaron Rakers

So I think this question is going to tie things together a little bit. At the Analyst Day, your colleague Andy, one of the cofounders of the company, gave, as always, a very good presentation overview. He talked about 800 gig and 1.6T, maybe even faster cycles, and I'm going to dovetail this into the context of AI fabric networks, right — this idea that as we see more GPUs attached in servers, they're consuming just a massive amount more bandwidth.

So maybe connect the dots there: AI fabrics, the Arista opportunity, and again, kind of help us appreciate how that's being driven by AI — you know, GPUs.

Anshul Sadana

Absolutely. Around the 2012, 2014 timeframe, IP storage over Ethernet networks was a very big deal. You had to do it in a lossless way, and 40 gig was just coming to the market but wasn't enough — the pipes got saturated very, very quickly with storage traffic. Then came 100 gig, and everyone was so relaxed: finally, there's enough network IO that I'm not congested and dropping traffic all day.

The same thing is happening with AI today. At 100 gig speeds, or even 400 gig speeds, the AI will just consume all the bandwidth, and you're still congested and dropping. The reason this matters is, the way AI works, if you have a 1,000 node cluster with 1,000 GPUs and you're doing a transformation of a data set, and one of the nodes is still not done because it's waiting for some packets to come back, all the other 999 nodes are waiting for that transaction to complete.

Facebook/Meta published a paper on this, and they showed that for many of the AI benchmarks, most of the GPUs are waiting for network IO to complete for a third of their cycles. So 33% of the GPUs are completely wasted. If you gave them more bandwidth, they could do the same job in the same amount of time with only 66% of the GPUs, or they could finish the entire job in 66% of the time with all the GPUs. Any way you look at it, it can be a lot more efficient, and a significant cost saving.

So all these companies — the AI groups within these companies — are coming to us and every other company saying, can I go faster? That's where the need for 800 gig comes up. Those same companies are sitting down with us and talking about 200 gig, which is not even coming to the market now — it will come two, three, four years from now at the best case — and they want to start designing it in now, because they know that as soon as 1.6T comes to the market, they can consume it. So it's an immense opportunity.

The AI clusters are already starting to get large, and when they get large, you need a nice systematic network that works — you can monitor it, you can provision it, you can automate it. There's nothing better than the IP leaf-spine designs we've done so far, now tuned towards AI workloads with the right monitoring and buffering and other mechanisms in there. As a result of that, we're getting pulled into a lot of these opportunities. I think AI, high-speed Ethernet and IP will all converge with every generation of technology that comes out now.

Aaron Rakers

And I think at the Analyst Day, you guys talked about that representing a $2 billion to $3 billion adjacent market opportunity for the company, arguably in the very early innings of materializing. I guess where I get confused a little bit sometimes is, how does what you're talking about on the Ethernet side relate to InfiniBand? Where does InfiniBand fit in — is it Ethernet versus InfiniBand, or do both coexist in the context of AI fabric network build-ups? What's the delineation there, if there is any?

Anshul Sadana

Well, there are certain workloads that are latency sensitive, and in HPC environments — if you look at the Top500 clusters — many of them use InfiniBand for that reason. But many of the workloads we are seeing in the large public cloud are not latency sensitive; they need a lossless network. They are IO sensitive. You cannot drop a packet.

If you give them a better Ethernet network, like we have with our AI spine, which has very deep packet buffers — to give you a contrast, an average top of rack switch today has about 32 megabytes of packet memory, and when you're trying to get all the packets through without dropping under congestion, you buffer them up in that 32 megabytes.

The AI spine we have, like the 7800, has eight gigabytes of packet memory per chip. That's a lot more packet memory than you would imagine, but you need that to have a completely lossless architecture. That's the kind of tradeoff that you're looking at. These products cost a little bit more, but if in the end you can save on GPUs, why not? So I think that's why we are headed towards these architectures, in a way that nothing else can scale to, right?

You can build a 256 node InfiniBand cluster, but if someone says, can you build me a 32,000 node cluster and operate it like a cloud, and not have to bring down the whole cluster for maintenance or operations, you need to be back in the leaf-spine type of architecture we've done — a distributed mechanism, essentially — to really scale this up.

Aaron Rakers

So it's interesting. We've obviously seen Meta make some fairly public announcements around their AI RSC deployment, a big driver of their CapEx spend, and we've recently seen NVIDIA announce a multi-year collaboration talking about many thousands of GPUs deployed in AI projects. When we see those kinds of things, should we think, hey, those are net-new adjacent network build-outs that are obviously part of that $2 billion to $3 billion TAM opportunity starting to materialize for Arista?

Anshul Sadana

Absolutely. I think as you see more AI move to the cloud, that's a great opportunity for Arista. That's, in a nutshell, how you can measure it. The specifics are different by cluster — Meta has different types of architectures, the clouds have different types of architectures, and so on.

Aaron Rakers

Okay. In the two minutes we've got left, I want to ask you about software strategy, right. At the heart of it, at the end of the day, Arista was founded on software differentiation as the strategy, and you've obviously talked about expanding into adjacencies around that core software. How does the company think about monetizing software? Is there an evolutionary path where we start to think about Arista having a software-centric subscription line-item? Just curious as to how you guys are thinking about that internally.

Anshul Sadana

Yeah. So the software line-item is actually quite significant already. But the way to think of a software subscription or a SaaS model is, if you're delivering value where the customer appreciates the subscription model — they can turn it on, turn it off, change the number of seats, the number of features and so on at any given time, and they're getting constant value every month, every quarter, every period with updates — then a subscription model is justified.

Take CloudVision: CloudVision is pretty much offered only as a subscription product to our customers. It can run on-prem or in the cloud, and when it runs in the cloud, there's significant value in how we manage CloudVision for our customers so they can manage and run and automate their infrastructure. But it's all offered as a SaaS model. We also have licenses for routing and so on that are a line-item that gets added on — if you want to turn on more functionality on the switches, you're paying more for the product as well.

What we don't like to do is an unnecessary conversion of hardware to subscription just to show it as subscription. That's essentially a leasing model. There's no real value to the customer other than telling them, you need to pay more if you keep on using the product longer. That's not what they like, because they think they're buying something that's perpetual.

So I think that's somewhere in between. We're not trying to do an artificial shift just to please Wall Street. I think it has to be organic to your business, and then the results will show. You see this in CloudVision, you see this in the Awake part of our business, you see this in our DANZ Monitoring Fabric. These are all subscription offerings.

Aaron Rakers

In the 45 seconds we do have left, is there anything that I didn't ask you — any comments you might have on supply-chain dynamics, or anything else that maybe we should have asked or should take away from this discussion?

Anshul Sadana

Look, supply will recover. We've said this on earnings calls as well — probably towards the end of '23 is our best guess, but let's see what happens to the whole world. The best opportunity in front of us is still growth. Cloud has long-term systematic growth. This is a sector that has ups and downs.

That comes with the segment; we can't ignore it. But at the same time, when I ask the cloud customers what their plans are for the next 10, 15, 20 years, they just see growth. Enterprise data center, we are still underpenetrated. Campus, we're just starting. It's a tremendous growth opportunity, and I get as excited as I was at the time of the IPO that we still see that growth and a great opportunity to keep on taking share.

Aaron Rakers

Perfect. Anshul, we’re right on time. Thank you so much for joining us.

Anshul Sadana

Thank you, Aaron.
