Amazon Web Services (AWS) isn’t betting on a single AI model to dominate the future. Instead, it’s positioning itself as the go-to platform for a wide range of AI models, giving customers the flexibility to choose the right tools for their needs. This approach sets AWS apart from other major players like Microsoft, Google, and Meta, which are competing head-to-head on large language models (LLMs). AWS’s goal is to be both an AI platform for training models and a marketplace for selling them.
AWS CEO Matt Garman emphasized at the Goldman Sachs 2024 Communacopia and Technology Conference that no single AI model will remain dominant forever. AWS aims to offer “a bunch of models” to help customers easily access and integrate the latest AI technologies. Its Bedrock service already provides a wide selection of LLMs, including Meta’s Llama 3, Anthropic’s Claude 3, and Amazon’s own Titan models, allowing businesses to build their own AI-powered applications.
This strategy was on display recently when AWS and Oracle announced a partnership, despite their 15-year rivalry in cloud services. AWS is focusing on collaboration rather than direct competition to diversify its revenue streams and expand its AI offerings. AWS projects $105 billion in revenue for 2024, about 17% of Amazon’s overall revenue.
Some perceive AWS as lagging behind in AI compared to Microsoft, which is known for its close partnership with OpenAI. However, Garman countered this view, explaining that AWS’s methodical pace is intentional. Rather than rushing chatbot products to market as Microsoft has, AWS has prioritized building solid AI infrastructure that will support enterprise-level applications over the long term.
AWS’s focus on creating a strong AI foundation has contributed to Amazon’s stock performance, which has risen 18% year-to-date, outpacing both the Nasdaq index and competitors like Microsoft and Google.