Who: Alysa Taylor, CVP, Azure & Industry
Event: JP Morgan Global TMC Conference
Date: May 21, 2024

Mark Murphy: Welcome, everyone. Good morning. I'm Mark Murphy, software analyst with JP Morgan. It is a great pleasure to be here this morning with Alysa Taylor, who is CVP of commercial cloud and AI with Microsoft. Alysa, I was on stage with you virtually about four years ago...

Alysa Taylor: Yes.

Mark: ...in the wake of the pandemic, and it's so nice to be here with you.

Alysa: It is nice to be back and in person.

Mark: Welcome. We really appreciate your time here. Perhaps we can just begin with a brief one-minute introduction of your background and your current role at Microsoft.

Alysa: Absolutely. As you indicated, I'm responsible for our commercial cloud and AI business. In my role at Microsoft, I work very closely with our engineering counterparts to determine what services we're going to bring to market. Then my team does all of the pricing, packaging, and go-to-market strategy across our Azure business and our global industries.

Mark: Thank you. Alysa, it's very impressive when we think back and realize that Microsoft made its first investment into OpenAI way back in 2019, about half a decade ago. At the time, the topic of generative AI really wasn't something that was mainstream, right? It became mainstream when ChatGPT was released in late 2022. Fast forward to today, and that initiative has now blossomed into a seven-point AI tailwind in the Azure business. How do you conceptualize, for this audience, the scale of the opportunity for Microsoft at this point to be in pole position for the era of AI?

Alysa: Absolutely. The interesting point is we actually brought our first set of Azure AI services to market in 2019. That was what you think about as cognitive services, traditional machine learning. What we realized is that the barrier for enterprises, having to take all of their data and do data science work on top of it, meant AI wasn't something that was widely accessible to organizations.
At the time, working with OpenAI, what we saw was the ability for these large language models to really democratize AI. To have pre-trained models that companies could simply integrate with an API, without having to do all of the heavy lift of the data science work. That was our thesis around the investment into OpenAI, and it has paid off. With the introduction of GPT coming to market and these large language models, it has done exactly that. It has allowed organizations to have AI, generative AI, accessible in a way that we haven't seen possible before.

Mark: It certainly has. In your role, Alysa, how do you convey to customers that Microsoft really should be their primary platform for all their gen AI activity moving forward? The alternative would be doing that work on a competing hyperscaler, or maybe one of these GPU-as-a-service providers. What is the marketing message around Microsoft's core differentiators that you're trying to bring to customers?

Alysa: We start with the Microsoft Cloud; we have infused AI at every layer of the Microsoft Cloud. Think about our first-party assets: Microsoft 365, Dynamics, GitHub, which is our developer service, our Power Platform, our security services. That's our first-party Copilot layer. We recently introduced Copilot Studio, which is the service that allows organizations to customize and extend our first-party Copilots. Then at Build last year (and Build kicks off again today), we introduced the Copilot stack. That's for organizations that want to build their own unique AI solutions. That's everything from the infrastructure layer and the data layer to what we do around the foundational models, as well as the AI orchestration and tool chain.
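The "integrate with an API" pattern Taylor describes, calling a hosted, pre-trained model over HTTP instead of training one, can be sketched as below. This is a minimal illustration only: the URL shape follows the general Azure OpenAI REST convention, but the endpoint, deployment name, API version, and key shown here are placeholders, not a verified configuration.

```python
import json
import urllib.request

def build_chat_request(endpoint: str, deployment: str, api_key: str, prompt: str):
    """Assemble (but do not send) a chat-completion request for a hosted LLM.

    The URL shape mirrors the Azure OpenAI REST convention; the endpoint,
    deployment name, and api-version here are illustrative placeholders.
    """
    url = (f"{endpoint}/openai/deployments/{deployment}"
           f"/chat/completions?api-version=2024-02-01")
    body = {
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
        "temperature": 0.2,
    }
    headers = {"Content-Type": "application/json", "api-key": api_key}
    # Returned unsent so the payload can be inspected; a caller would
    # dispatch it with urllib.request.urlopen(req).
    return urllib.request.Request(
        url, data=json.dumps(body).encode("utf-8"),
        headers=headers, method="POST")

req = build_chat_request("https://example-resource.openai.azure.com",
                         "my-gpt-deployment", "PLACEHOLDER_KEY",
                         "Summarize our Q4 sales notes.")
print(req.full_url.split("?")[0])
```

The point of the sketch is the division of labor: the enterprise supplies only a prompt and credentials, while the model training and data science work stay on the provider's side.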
When you ask about differentiation, it is really the completeness: everything from the first-party Copilots, the extensibility of those Copilots, and then the Copilot stack that lets organizations build their own unique AI solutions.

Mark: We're trying to track all those Build announcements in real time while we're here at the conference, and it's been really impressive what we have been able to catch on the side. When you think, Alysa, about what is going to happen with the foundation models, do you expect that we're going to see some convergence in capabilities across them? You obviously have the GPT models; Anthropic is out there, and others. Or do you suspect that we're going to see the release of GPT-5, presumably sometime fairly soon, and that this would show some kind of sustained performance differential? I'm wondering because we're trying to think through all those scenarios. In the convergence scenario, how would Microsoft perpetuate a structural advantage in AI? In other words, is it going to come down to what you're doing with first-party silicon? Would it be having a broader family across all the models? You've got the small language models. Is it going to come down to something you're doing in security and governance?

Alysa: There's a lot in there. I'll start with the model part of it. We don't believe there is one model to rule them all. We actually believe in a variety of what we call fit-to-purpose models. We have 1,700 models in our model catalog today. Those span, as you indicated, large language models, proprietary as well as open-source third-party models, and then the introduction of the small language models; Phi-3 is our open-source model that we just announced. Having this range of models, we think, allows organizations to use models for very specific purposes. We also see organizations bringing models together to drive optimal efficiency and performance.
In fact, the Microsoft Copilot is a combination of GPT-3, 3.5, 4, and Meta's Llama model. That's a great example of where, even in our first-party Copilot, we are using a combination of models for that optimal performance. That's where we are on the model side of it. Then, to your point, how do you bring governance and security into those models? That is one of the questions I get most often when I'm talking to customers: how do we govern the data that the models reason over? We introduced a product called Microsoft Purview. It is our data governance solution, and being able to use Purview to do all of the governance work is one of the most important assets for an organization. Then we are building security directly into our AI services. We introduced things like Azure AI Content Safety, which is a tool that allows organizations to both detect and mitigate biases in the model. It is ultimately the range of models, how you bring those models together, and then how you govern and secure the models.

Mark: That range and breadth is obviously quite impressive already. If we then try to think, Alysa, about the way that's manifesting in customer conversations around AI externally. We can see, again, you've got the seven-point tailwind that has developed from AI services in Azure. We can see there have been these huge announcements. We've seen it with Coca-Cola. We've seen it with Cloud Software Group. There have been a bunch of others. We don't always know exactly what it is that they're building. I thought, given you also run go-to-market for global industry, that maybe you would have a window into this to help us understand. What is a manufacturer, retailer, or an insurance firm building at the moment?

Alysa: I always start from the horizontal scenarios. I'll do that and then I'll go into industry, which is your specific question. We see probably three universal use cases across any industry.
How organizations work with their customers: generative AI in particular has enabled organizations to deliver tailored, personalized customer experiences at scale in a way that we've never seen before. There's the employee side of it: how do you make employees more productive, giving them tools, resources, and access to information? Then, on the operations side, more efficient operations and being able to rethink workflows. Those are the horizontal scenarios that cross any industry. To your very specific question, how does that translate into opportunity by industry? We see things like healthcare, where physician burnout is one of the greatest challenges. Generative AI, in combination with ambient AI, has allowed physicians to use technology to record the patient-physician interaction. The technology can then automatically analyze it and generate clinical notes. That takes a lot of the administrative burden off the physicians, which is a big contributor to physician burnout. That's a great healthcare use case. On the customer engagement side, a great example is Real Madrid. They are a Spanish football club. They have a very small set of fans that live in Spain, but over 500 million fans globally that they have been challenged to reach in near real time in a personalized way. They used AI to create their fan engagement platform. The interesting thing is that they were able not only to put the matches in the hands of their fans in near real time, but also to analyze sentiment in real time and then run targeted campaigns to their fans. The reason I love this story is that they've increased their fan profile base by 400 percent, and their top-line revenue by 30 percent. This is an area where you see both a use case in an industry and tangible results. That's the combo that we want to see.
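The real-time sentiment loop in the Real Madrid example can be illustrated with a toy version: score each incoming fan message and route the fan into a campaign bucket. The lexicon, labels, and thresholds below are invented for illustration and bear no relation to the actual platform, which would use a trained model rather than word lists.

```python
from collections import defaultdict

# Toy sentiment lexicon -- illustrative only, not a production model.
POSITIVE = {"goal", "win", "amazing", "love", "great"}
NEGATIVE = {"loss", "boring", "bad", "miss", "awful"}

def score(message: str) -> int:
    """Net sentiment: +1 per positive word, -1 per negative word."""
    words = message.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def route_fans(stream):
    """Bucket fans by live sentiment so campaigns can be targeted."""
    buckets = defaultdict(list)
    for fan_id, message in stream:
        s = score(message)
        label = "celebrate" if s > 0 else "re-engage" if s < 0 else "neutral"
        buckets[label].append(fan_id)
    return buckets

stream = [(1, "What a great match, love this team"),
          (2, "Boring match and bad defending"),
          (3, "Kickoff is at nine")]
print(dict(route_fans(stream)))
```

The production version would run continuously over a message stream, but the shape is the same: score, bucket, then target each bucket with a different campaign.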
The last example I'll give you is in the automotive space. Volvo is a great example. They used a combination of cognitive services and generative AI to digitize all of their invoices. If you think about not only invoicing their customers but all of the tracking that goes along with auto maintenance, they estimated that their new operational platform took out 850 manual hours per month. These are the industry use cases where we see the technology coming to bear to solve a problem or create an opportunity, with actual top-line or bottom-line results associated with it.

Mark: Yeah, I'm impressed by the range and the number of layers where that activity is occurring. Obviously you're laying out something that's pretty tangible across some huge organizations in terms of the ROI. That might be a helpful lead-in to the next question, where we think about the investment that Microsoft is putting into this. Your CapEx will have risen from, let's say, roughly $25 billion to probably $60 to $70 billion, right? That will have happened in the course of a few years. Amy Hood, the CFO of the company, has been very clear that the investments are based upon demand signals. We've been hearing this repeatedly. At the same time, there are questions, certainly in the media: is AI demand for real, or are we going to find out somehow that people are overbuilding? What other signals could you help us with that you're seeing that may give us comfort that the CapEx surge is well informed and not speculative, right? That we're going to have this kind of monetization several years into the future.

Alysa: It's a very important question. We look at demand in three different aspects. The first is customer demand: do we see the inbound of customers that want to leverage the new AI services that we've put at every layer of the stack, as I talked about?
We look at customer demand both from those that are coming in at a pilot phase, and then whether that pilot is translating into an at-scale deployment, because both of those components are incredibly important. They're not just experimenting; they're actually taking it from experimentation into full-scale deployment. The other dimension that we look at is our ecosystem. As you know, Microsoft is a very ecosystem-driven company. We look at the number of partners within our ecosystem that are getting AI specializations and where they are bringing in new customers. Within our Azure AI services, we have 53,000 active customers. A third of those, in the last year, are new to Azure. That is a great signal that we are not only bringing in existing customers but new customers as well. Then the last dimension that we look at is customer commitment. Do we have customers making long-term commitments to the Microsoft services, to the Microsoft platform? Our hundred-million-dollar-plus contracts have increased 80 percent year over year. Those dimensions, customer demand, our ecosystem, and long-term customer commitments, are how we triangulate demand. Then, weekly, as a senior leadership team, we look at that demand against supply. It is an ongoing, very fluid situation that we manage.

Mark: Big commitments, long term. The projects are moving from pilots to full deployment, and then you're seeing all this partner buy-in. It seems like pretty good triangulation. Let's spend a moment talking about the macro. We haven't heard Microsoft yet call out any kind of pivot in macro demand. Yet we think back to this March quarter: we had both commercial bookings and Azure grow 31 percent year over year. They're healthy results, and both of those saw acceleration. There wasn't much acceleration across the software industry, but there was there. How would you characterize, Alysa, business willingness to invest at the moment?
If I were to say it this way: is there at least an improved sense of stability out there that might be helping on the margin?

Alysa: Let's look at it as a continuum, and I'll go back to the start of the pandemic, right? We had organizations move entire groups, like customer service, to a remote capacity. There was an intense capital investment around digitizing foundational things: customer service environments, supporting employees from a hybrid perspective. Then we came out of the pandemic and we were in a state of cost optimization. How do organizations take all of that investment that they made in their digitization, in the new digital foundation, their IT investments, and make sure they right-size it? We worked very closely with our customers to make sure that we were hand in hand, helping optimize their environment. We are now in a place where they're putting new investment into generative AI, as in some of the examples I have given. There is the question of how AI plays into their IT investments, but you also see that translate into what I would call core IT spend: migrations, continuing to migrate on-prem data; app development, building new applications, where the change has become a pivot to intelligent app development with the onset of generative AI; data, bringing together disparate data sets into an enterprise-wide data architecture; and continuing to invest in developers and making sure that developers are as productive as possible. It is both a spike in what we're seeing around generative AI and a translation into the core IT functions across migration, app dev, data, and developers.

Mark: That's encouraging.

Alysa: It is. We are also encouraged.

Mark: Then let's go a little deeper into that in terms of the workload migrations.
When we go back and look at our recent survey, Microsoft partners were calling for an uptick in Azure growth over the next 12 months. That is quite rare, because it is just such a large-scale business where you get some law of large numbers. We looked at what happened subsequently: Azure growth, at 31 percent, accelerated by three points. Is there anything else you would call out that is aligning to drive this rebound in Azure growth that we're seeing?

Alysa: We talked about what customers were doing in that time frame. I would say, at that same period of time, we looked internally at where we had placed our investments, particularly on the Azure and industry side. The reality is, as Azure had grown as a platform, we had invested in a number of different areas. We took that moment to say: where should we be focused to have the greatest addressable market, and where do we have the greatest strength? We went from what I would say was probably too many areas of dispersed focus into a very highly focused go-to-market. The core areas we look at are around migration. We brought new migration tooling to bear and put new programs in market in the last year. We've actually had over 10,000 projects come through our migration program, which is called Azure Migrate and Modernize. It became: how do we make sure that we are working hand in hand with our customers on migration? On data, making sure that we were bringing new capabilities both to our analytical databases and to our operational databases; we really started to think about data, particularly in the era of AI. Bringing new services into our app dev portfolio, not only on the generative AI stack, but also things like GitHub Copilot, for developers to be able to code faster and more efficiently. Then lastly, in the hybrid space, we introduced at Ignite this past fall the notion of an adaptive cloud, centering on Azure Arc as the central control plane.
That allows organizations to manage not only their on-prem environments but their cloud and multi-cloud environments as well. We believe we have one of the strongest hybrid solutions in market. That's where we're focused, and that's where we spend all of our time: in those areas, both in terms of where we are innovating at the product level and how we are bringing those to market.

Mark: I want to come back to that, especially on the data and analytics in the Fabric layer, in just a moment. To round out the thought on the migrations: you mentioned an incredible stat a moment ago, Alysa, the number of hundred-million-dollar Azure deals being up 80 percent year over year. Our work was actually signaling an improvement in these larger cloud migrations beginning in the back half of the March quarter itself. What is your view on the rate and pace of those types of migrations? Because it's such a big revenue driver. Do you feel that enterprises are back in an investment mood as it relates to their cloud spend?

Alysa: Definitely. There are two vectors we see for why customers migrate. The first is particularly with AI. We have a saying: you migrate to innovate. Your AI solution is only as good as your underlying data, and that data has to be in a cloud-based environment. You see organizations migrating their data to be able to apply the new AI services on it, and the more information, the more data you have, the richer your AI solution. We have seen the onset of AI help fuel our migration efforts, which is fantastic to see. The second dimension is cost, and how organizations continue to optimize for cost. Migration has been a key component of that. Sapiens is a great example. They are an insurance provider serving over 600 insurers across 30 countries. They knew that they had on-prem data in different pockets serving different countries.
They migrated over to Azure with Azure Arc, as I talked about, keeping some aspects of their platform on prem and bringing the majority of it into the cloud. They actually have a multi-cloud strategy. They're using Arc as the central control plane to govern their IT and then serve those insurers across their global capacity. They were able to take 40 percent of their operational cost out of the bottom line. That is an example of where you're migrating, you're aggregating, and you're using a central IT environment to bring down cost.

Mark: Part of our core thesis, Alysa, has been that Microsoft might at some point end up seeing what we were calling an Azure halo effect. That would stem from, again, this early category leadership in generative AI that goes back at least as far as 2019. We have heard some feedback that there could be some companies out there that had been, let's say, previously sole-sourced on AWS or somewhere else. They may be thinking of a little different future roadmap, because there could be a little more consideration of Azure as a result of these moves you've been making. Is any of that tangible to you? Do you think that you could gain a greater share of cloud workloads because companies are going to align to your architectural view?

Alysa: That's one of the very exciting things that we're seeing, because you could just use the API into the foundational models and that's it. We actually are seeing organizations start with the API, bringing their unstructured data into a blob storage type capacity, but then moving into more sophisticated analytical data services. Obviously, if you're building an app, you're bringing that into an operational data service. In fact, of the 53,000 Azure AI customers that I talked about, as I said, a third of those are new to Azure, but half of them are actually using our data services as well.
It's a good stat that shows customers are not just using the APIs but also bringing their data into the Microsoft platform. We're pulling through from the base integration into the foundational models to actually pulling in our data services as well. So, to your question, the answer is yes: we are seeing customers come to Azure that were not previously Azure customers, and using services beyond just the core AI services.

Mark: There's adoption of so many services. But then we think back to the recent earnings call, where Amy made a comment that near-term AI demand is a bit higher than Microsoft's available capacity, right? The concept of capacity constraints came up a bit there. Can you unpack that for us? One of the questions we get is: should we be somewhat handicapping the forward Azure AI services estimates due to supply constraints? Or do you think that this is something that can be overcome fairly rapidly?

Alysa: Amy uses that word "bit" very intentionally, because, as we talked about, we have the triangulation that we do on the demand side: the customer inbound, the long-term commitments, and the ecosystem. As I indicated, we do that week over week. I would say we are conservative in our demand forecasting. We do that intentionally, because we then take that demand and marry it against the supply. Because we make sure that we are conservative in our demand forecasting, we have a bit of supply constraint, but it's nothing material. I would say it has no impact on the future for us.

Mark: OK, we'll try to be a bit cognizant of that going forward in our model. Let's think about Microsoft Fabric. We do hear quite often that what a company is going to need to do is right-size enterprise data in the age of AI and then clean up that estate to feed it into these large language models.
You have Fabric, which is a newer analytics platform, and it's definitely been at the forefront of the discussions lately on the earnings calls. There was a comment about it reaching over 11,000 paid customers in less than a year from launch. Can you walk us through the customer interest in the Fabric product? Should we think about Microsoft truly positioning to be an end-to-end AI platform, with Fabric integrated with Azure?

Alysa: Definitely on the integrated AI platform side; you'll see we are building the different AI components in across all of our services. Specific to Fabric, we had a thesis about 18 months ago that organizations would want a more unified environment: to bring in the different analytics services, aggregate their disparate data into a unified data lake, and then bring AI services directly into that. This was a bet that we took over a year and a half ago. We introduced Fabric in preview at Build a year ago. It was about the unification of the services into a SaaS environment with a unified business model. Those were three major changes for us in how we came to market from an analytics standpoint. It brought together things like our real-time monitoring, BI, and data warehousing, all of that, into this notion of Fabric, aggregating into a data lake called OneLake. Then we have one meter that goes against it, where before it was all different services that you would bring together yourself. We introduced Fabric, and we came to general availability this fall. We've been in market less than a year and we have 11,000 paid customers. A great example is Dener Motorsport. They monitor racing cars in real time, and as you can imagine, detecting anomalies in the car is quite important. They adopted Fabric.
Prior to bringing their data together into Fabric and doing real-time monitoring and analytics on it, they had about a 30-minute window before they would know if there was an anomaly with the car. They report that today they're at less than 2 minutes. That's the benefit of being able to aggregate into this OneLake environment, bring the different analytical services across it, and then ultimately do things like vector search and build out those AI solutions. We are integrating AI at the Fabric core, as well as bringing our AI services directly into Fabric.

Mark: Fabric and OneLake are having that type of impact. We've spent a lot of time, Alysa, so far talking about the software stack, and we haven't really gotten into the hardware side, right? Sometimes we like to consider the back end that is supporting this whole prior discussion. Going back to late last year, Microsoft announced a couple of very important innovations, Azure Maia and Azure Cobalt, which are chip innovations. Could you walk us through how Microsoft is innovating with first-party silicon now? And what is going to be the benefit of having this tightly integrated hardware and software stack?

Alysa: For the foundational models, the AI platform that they run on is important, because they are only as efficient and effective as the infrastructure underneath. I talked about the stack; at the core of that is our AI infrastructure. As you indicated, we have brought first-party silicon to market, but it is to complement the investments that we have with NVIDIA and AMD. It is about a portfolio of GPUs and CPUs. We talk about our AI platform as a systems approach: bringing together Maia, which is our AI accelerator, Cobalt, which is our CPU, and our investments with NVIDIA and AMD.
Then we wrap that with networking investments, as well as the newly discussed liquid cooling, to bring together an AI infrastructure that is the most performant for our AI solutions to run on top of. All of this is invisible to the customer. When you are a customer and you go in and select whatever Azure service you want to run, we on the back end are firing across our different silicon investments, again with that updated networking and storage capacity, so that all the end customer sees is the best price to performance. We manage the system on the back end, and it really is an integrated system. It isn't about one chip versus the other, NVIDIA versus AMD. It's the portfolio, and we do the network load balancing to provide the end customer the best price and performance.

Mark: Can you bridge that through to what it's going to mean to developers? We know there's quite a focus on developer tools. You're talking about abstracting all this complexity away from the customer. How do you think, at a high level, about the ability to attract the world's developers and have them build the next generation of all these intelligent apps?

Alysa: Obviously, developers are at the center and core of all of this. When we think about our developer ecosystem, we have, over the years, invested in the best tooling and the best tool chain for developers. We have GitHub, which has 100 million developers; it is literally the home of open-source development. We have Visual Studio, the Visual Studio IDE plus VS Code, which has 40 million active developers. Then I talked about Copilot Studio, which is our low-code extensible platform for both building new AI copilots and extending our first-party Copilots. In less than a year, it has 30,000 active organizations.
We have this full range of the tool chain for developers, and we are actually announcing, as of about three minutes ago, new enhancements for GitHub Copilot for Azure, which allow developers to use natural language to code in GitHub Copilot and then use Azure Resource Manager to deploy directly into Azure. That connects our 100-million-strong ecosystem of developers to build an AI solution and then deploy it directly into Azure. Among the enhancements, we're also bringing the AI toolkit to Visual Studio: bringing AI development into our already existing developer base and into the DevSecOps tools and coding tools that they use. It's a continued investment for us.

Mark: It's moving rapidly. Alysa, in closing, as you think about the year ahead, what are you most excited about?

Alysa: We've talked a lot about the innovation across the portfolio, but ultimately it comes down to what industries are doing with it, what organizations are able to innovate. Being in technology right now, we're seeing the adoption of AI services actually happening faster than cloud computing or smartphone adoption. It's an incredible pace. The thing I get most excited about is that, while a lot of this we talk about at the organizational level, there's a human element to it. We see developers who are more satisfied with their work using GitHub Copilot than they have ever been. You see individual knowledge workers being more productive, with some of the mundane tasks being taken out. It's a unique time: technology innovation at a pace we've never seen, and human satisfaction actually going up. It's a really unique time in the industry.

Mark: The pace and the scale and the linkage back to the mission of the company is really incredible to behold at this moment. Alysa, I cannot thank you enough for taking the time to be here with us.

Alysa: Thank you. [applause]


Disclaimer

Microsoft Corporation published this content on 21 May 2024 and is solely responsible for the information contained therein. Distributed by Public, unedited and unaltered, on 22 May 2024 22:02:09 UTC.