Amazon reported strong first quarter earnings, driven by growth in its Amazon Web Services (AWS) cloud business and in AI services.
“It was a good start to the year across the business, and you can see that in both our customer experience improvements and financial results,” said Andy Jassy, Amazon President and CEO. “The combination of companies renewing their infrastructure modernization efforts and the appeal of AWS’s AI capabilities is reaccelerating AWS’s growth rate (now at a $100 billion annual revenue run rate); our Stores business continues to expand selection, provide everyday low prices, and accelerate delivery speed (setting another record on speed for Prime customers in Q1) while lowering our cost to serve; and, our Advertising efforts continue to benefit from the growth of our Stores and Prime Video businesses. It’s very early days in all of our businesses and we remain excited by how much more we can make customers’ lives better and easier moving forward.”
Amazon's investment in AI services is substantial, as detailed in its report. Highlights include:
- Shared how customers and partners are using AWS generative artificial intelligence (generative AI) services to deliver new customer experiences, accelerate employee productivity, and transform operations.
- Siemens is integrating Amazon Bedrock into its low-code development platform Mendix to allow thousands of companies across multiple industries to create and upgrade applications with the power of generative AI.
- Philips is using AWS HealthImaging and Amazon Bedrock to scale digital pathology, helping labs and healthcare organizations improve diagnostics, increase productivity, accelerate research, and address complex medical cases, like cancer care.
- Accenture and Anthropic are collaborating with AWS to help organizations—especially those in highly regulated industries like healthcare, public sector, banking, and insurance—responsibly adopt and scale generative AI technology with Amazon Bedrock. This collaboration will help organizations like the District of Columbia Department of Health speed innovation and improve customer service and productivity, while keeping data private and secure.
- BT Group, a multinational communications company, provided Amazon Q Developer to 1,200 of its engineers, generating more than 100,000 lines of code in its first four months and automating approximately 12% of the repetitive and time-consuming work done by software engineers using the platform.
- The Deutsche Fußball Liga (DFL) named AWS the Official Generative AI Provider of the DFL, and is using AI on AWS to generate metadata that allows the league, clubs, and media partners to more efficiently search for content in more than 210,000 hours of video footage.
- Audi used Amazon SageMaker and Amazon OpenSearch Service to build a generative AI chatbot to improve its enterprise search experience and help employees find and navigate internal documentation, reducing search time from hours to a few seconds and improving productivity.
- Konica Minolta Healthcare, a medical diagnostic imaging and healthcare information technology company, announced an enterprise imaging cloud platform using AWS HealthImaging to reduce image storage costs, enable access from almost anywhere, and accelerate data retrieval.
- Dana-Farber Cancer Institute built a new research solution on Amazon Bedrock that helps clinicians interpret complex lab results to diagnose medical conditions, assess treatment efficacy, and determine next steps in clinical care.
- Enterprise healthcare software provider symplr announced a generative AI solution that uses Amazon Bedrock and Amazon Q to connect and optimize hospital operations so clinicians and administrators can quickly access relevant data, simplifying staff scheduling and other administrative processes.
- Choice Hotels became the first hotel company to migrate its entire system infrastructure to the cloud. The move to AWS is part of Choice Hotels’ long-term technology roadmap and adoption of AI, and involves decommissioning more than 3,700 servers, retiring over 300 applications, and migrating more than 250 applications.
- Mobile network operator Tele2 launched an Internet of Things customer support solution, powered by Amazon Bedrock, to help agents provide faster and more detailed conversational responses to customers across multiple channels.
- Vonage, a cloud communications provider, announced a new anti-fraud solution with AWS generative AI capabilities to enable businesses to better protect themselves from mobile fraud while improving customer experiences.
- Genomics England, a human genome research organization, is using Amazon Bedrock to help researchers identify associations between genetic variants and medical conditions by quickly processing millions of pages of scientific literature, with the goal of informing future genetic tests and improving human health.
- AI biotechnology company Owkin selected AWS as its primary cloud provider to develop generative AI applications and accelerate drug discovery and development, empowering scientists and researchers to access, analyze, and manage vast amounts of data efficiently and securely in the cloud.
- Parsyl, a data-powered insurer, is using Amazon Bedrock to turn unwieldy customer information, like email attachments in different formats, into usable data to help identify patterns of risk for businesses transporting goods.
- Announced the general availability of Amazon Q, the most capable generative AI-powered assistant for accelerating software development and leveraging companies’ internal data.
- On the software development side, Amazon Q not only generates highly accurate code, but also tests code, debugs coding conflicts, and transforms code from one form to another (today, developers can save months using Q to move from older versions of Java to newer, more secure and capable ones; in the near future, Q will help developers transform their .NET code as well). Amazon Q Developer Agents perform multi-step planning and reasoning, allowing developers to string together multiple requests and have Q implement them.
- On the internal data side, most companies have large troves of data that reside in wikis, intranet pages, Salesforce, storage repositories like Amazon S3, and many other data stores and SaaS apps that are hard to access. This makes answering straightforward questions about company policies, products, business results, code, people, and many other topics hard and frustrating. Q makes this much simpler. Customers can point Q at all of their enterprise data repositories, and it will search this data, summarize it logically, analyze trends, and engage in dialogue with users about it. With the launch of a powerful new capability called Q Apps, employees can now describe, in natural language, what apps they want to build on top of this internal data, and Q Apps will quickly generate that app. This will make it much easier for internal teams to build useful apps from their own data.
- Delivered a number of innovations in Amazon Bedrock (AWS’s generative AI service that enables customers to leverage an existing large language model, customize it with their own data, and use the easiest and best available features to deploy secure, high-quality, low-latency, cost-effective production generative AI apps). Tens of thousands of organizations worldwide are using Amazon Bedrock. These new Bedrock capabilities include:
- The general availability of Anthropic’s Claude 3 family of vision-enabled foundation models (FMs)—Opus, Sonnet, and Haiku—which are the best-performing models in the world right now and provide industry-leading accuracy, performance, speed, and cost. Claude 3 Opus has set a new standard, outperforming other models available today in the areas of reasoning, math, and coding. Amazon Bedrock became the first managed service to add Claude 3 models, and continues to provide customers with the widest choice of high-performing, fully managed large language models (LLMs) and FMs.
- The general availability of Meta Llama 3 models, a collection of pretrained and instruction fine-tuned LLMs that come in two sizes—8B and 70B. The new models demonstrate significant improvements over previous versions due to vastly increased training data and scale. This collection of models supports a broad range of use cases, like text summarization and classification, sentiment analysis, language translation, and code generation.
- The addition of FMs from leading AI startup Mistral AI. Mistral Large, the latest and most advanced LLM from Mistral AI, provides top-tier reasoning capabilities for complex multilingual reasoning tasks. Mistral AI’s Mixtral 8x7B and Mistral 7B models can summarize, answer questions, and help organize information with their deep understanding of text structure and architecture.
- The availability of new Cohere models (Command R and Command R+) on Bedrock.
- New first-party Amazon Titan models, including Amazon Titan Text Embeddings V2 and the general availability of Amazon Titan Image Generator.
- The general availability of Model Evaluation, which is the fastest way for organizations to analyze, compare, and select models on Amazon Bedrock, reducing time from weeks to hours spent evaluating models so customers can bring new applications and experiences to market faster.
- The general availability of Guardrails, which provides customers with best-in-class technology to easily implement safeguards to remove personal and sensitive information, profanity, specific words, and harmful content. Guardrails offers industry-leading safety protection on top of the native capabilities of FMs, helping customers block up to 85% of harmful content. Guardrails is the only solution offered by a major cloud provider that allows customers to have built-in and custom safeguards in a single offering, and it works with all LLMs in Amazon Bedrock, as well as fine-tuned models.
- The new Custom Model Import capability that makes it simple for customers to import models from SageMaker (or elsewhere) into Bedrock before deploying their application—enabling customers to take advantage of all the Bedrock features that make it so much easier to build high quality production-grade generative AI apps.
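The report does not include code, but as an illustrative sketch of how an application calls one of these Bedrock-hosted models, the snippet below builds an Anthropic Messages-format request body for Bedrock's InvokeModel API. The model ID and `anthropic_version` string are assumptions based on Bedrock's public API conventions, not taken from the report, and the actual AWS call is shown only as a comment since it requires credentials.

```python
import json

# Assumed Bedrock model ID for Claude 3 Sonnet; check the Bedrock
# console for the IDs available in your region.
MODEL_ID = "anthropic.claude-3-sonnet-20240229-v1:0"

def build_request(prompt: str, max_tokens: int = 512) -> str:
    """Serialize a Messages-format request body for InvokeModel."""
    body = {
        "anthropic_version": "bedrock-2023-05-31",  # assumed version tag
        "max_tokens": max_tokens,
        "messages": [
            {"role": "user", "content": [{"type": "text", "text": prompt}]}
        ],
    }
    return json.dumps(body)

# With boto3 installed and AWS credentials configured, the call would be
# roughly:
#   client = boto3.client("bedrock-runtime")
#   resp = client.invoke_model(modelId=MODEL_ID, body=build_request("Hi"))

request = build_request("Summarize our Q1 results in two sentences.")
payload = json.loads(request)
print(payload["messages"][0]["role"])  # -> user
```

Because every Bedrock model is invoked through the same managed endpoint, swapping in a Llama 3, Mistral, or Titan model is mostly a matter of changing the model ID and the provider-specific body format.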
- Announced the extension of AWS and NVIDIA’s strategic collaboration to make AWS the best place to run NVIDIA GPUs, helping customers unlock new generative AI capabilities. AWS will bring together NVIDIA’s next-generation Blackwell platform with AWS Nitro System and AWS Key Management Service advanced security, Elastic Fabric Adapter petabit-scale networking, and Amazon EC2 UltraCluster hyper-scale clustering, enabling customers to securely build and run multi-trillion parameter LLMs faster, at massive scale, and at a lower cost than previous-generation NVIDIA GPUs on Amazon EC2. In addition, Project Ceiba, an AI supercomputer being jointly developed by AWS and NVIDIA exclusively on AWS for NVIDIA’s own AI research and development, will be built on the Blackwell platform.
- Continued to meet growing demand for AWS Trainium and Inferentia chips, which help customers maximize performance and control costs for their machine learning (ML) workloads. Customers using these purpose-built AWS AI chips include Anthropic, Databricks, Leonardo.ai, Qualtrics, Ricoh, Stockmark, Watashiha, and Vyond. In addition, Meta’s recently launched Llama 3 model is running on Trainium and Inferentia2—delivering the lowest cost to train, fine-tune, and deploy Llama 3 on AWS.
- Updated AWS Neuron (software that lets customers use popular frameworks like PyTorch to train and deploy models with minimal code changes on Amazon EC2 instances, powered by Trainium and Inferentia chips) to include features that further reduce costs for LLM inference for both external customers and internal teams at Amazon, such as Amazon Rufus.
- Continued to roll out Rufus in the Amazon Shopping app to millions of customers in the U.S. Rufus, in beta, is a new generative AI-powered shopping assistant that can help customers save time and make more informed purchase decisions by answering a variety of shopping-related questions, providing product comparisons, making recommendations, and more. Amazon improved Rufus’ answer accuracy and response speed, and added new features, including “My Orders,” which answers questions such as “when did I last order coffee?” and “what dog treats did I last order?”
- Added more generative AI features for independent sellers in the U.S. to create product listings, including a new tool that allows sellers to leverage product listings on their own websites, simply by providing Amazon with a URL. Amazon’s generative AI-based features automatically parse the information to seamlessly create high-quality, engaging listings for Amazon’s store. This feature further enhances and streamlines the process of creating product listings, saving sellers time and effort, while also developing product listings that appeal to customers and help drive sales.
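Amazon has not published how this listing tool works internally; as a rough stdlib-only sketch of the kind of extraction step such a pipeline might start with, the toy parser below pulls a title and meta description out of a product page's HTML, fields a generative model could then rewrite into a listing. Everything here (class name, sample page) is illustrative, not Amazon's actual implementation.

```python
from html.parser import HTMLParser

# Toy sketch: extract basic product fields from a seller's page.
# A real pipeline would fetch the URL and handle far messier markup.
class ProductPageParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self._in_title = False
        self.title = ""
        self.description = ""

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self._in_title = True
        elif tag == "meta":
            a = dict(attrs)
            if a.get("name") == "description":
                self.description = a.get("content", "")

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data

page = (
    '<html><head><title>Trail Kettle 1L</title>'
    '<meta name="description" content="Lightweight titanium kettle.">'
    '</head><body><p>Boils fast.</p></body></html>'
)

parser = ProductPageParser()
parser.feed(page)
print(parser.title)        # -> Trail Kettle 1L
print(parser.description)  # -> Lightweight titanium kettle.
```

The extracted fields would then be passed to a generative model as context, which is where the "high-quality, engaging listing" copy in Amazon's description would actually be produced.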