Highlights from the AWS re:Invent 2023 keynote


Amazon is growing its already formidable array of AI services, with a long list of new capabilities announced at the annual AWS re:Invent 2023 conference in Las Vegas today.

In a wide-ranging keynote session that was nearly 2.5 hours long, Adam Selipsky, CEO of Amazon Web Services (AWS), the largest cloud provider in the world by revenue, number of customers and data stored, detailed the continued progress his division has made in advancing its signature platform over the past year.

And because this is the year 2023, the vast majority of Selipsky’s keynote was about AI in one form or another. Topping the list of announcements made by the AWS CEO is the new Amazon Q (yes, another Q; no, it’s not OpenAI’s Q*, not Star Trek‘s multidimensional entity Q and not QAnon), an AI assistant that will bring generative AI automation across AWS cloud services for development, analytics and operations.

Selipsky also detailed multiple new capabilities that are now generally available in the Amazon Bedrock generative AI model service.

He was also joined by the trillion-dollar man himself, Nvidia CEO Jensen Huang, to detail how the two companies work together.

Though, not 15 minutes after Huang left the stage, Selipsky announced AWS’s own competitive Trainium2 silicon for AI training.

Overall, Selipsky emphasized that AWS is helping organizations throughout the AI lifecycle with infrastructure, models and applications.

“If you’re building your own models, AWS is relentlessly focused on everything you need: the best chips and most advanced virtualization, powerful petabyte scale networking capabilities, hyperscale clustering and the right tools to help you build,” Selipsky said.

Amazon Q is the omnipresent enterprise AI assistant you never knew you needed

The idea of an AI assistant to help with operations in the cloud is one that Microsoft has been advocating for with its Copilot approach, and Google has been advancing with its Duet AI.

AWS is now going down the same path with Amazon Q, which is being deeply integrated across multiple cloud services, including Amazon CodeWhisperer and Amazon Connect, among many others. Selipsky said that Q can answer questions, generate text and visualizations, and take actions on behalf of users.

“We expect that Amazon Q is going to save customers so much time architecting and troubleshooting and optimizing workloads,” Selipsky said.

Amazon Q will be able to provide assistance to developers working on AWS by answering questions about services and troubleshooting issues. It will also integrate with applications and business tools to enhance functionality, such as assisting contact center agents through Amazon Connect. 

On the developer side, one of the Amazon Q capabilities is called Code Transformation, and it is the first in a series of features coming to the service to help organizations migrate across different technologies. The initial capability of Amazon Q Code Transformation enabled Amazon itself to migrate 1,000 Java applications from an older version of Java to a modern version in just two days.

“That’s how long a single application upgrade used to take,” Selipsky said. “I mean, this is months, if not years of developer time saved – there are a lot of very happy people at Amazon over this.”

Selipsky said that coming soon to Amazon Q Code Transformation will be the ability to migrate .NET workloads from Windows to Linux.

“There are a lot of applications out there stuck on Windows because of the sheer effort required for making the migration and this is an opportunity for huge cost savings,” he said.

Amazon Bedrock builds on solid ground to add RAG, new customization features

Selipsky also used his keynote as an opportunity to announce a series of incremental updates to the Amazon Bedrock service that became generally available in September.

Amazon Bedrock now offers new customization capabilities that allow customers to further tailor generative AI models to their specific needs and data.

One of the new customization features enables fine-tuning models using labeled examples from a customer’s own data to teach models about domains, products, services or other business-specific information.
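As a rough sketch of what that fine-tuning workflow can look like, the boto3 call below submits a Bedrock model customization job over labeled examples stored in S3. The job name, custom model name, role ARN, S3 paths and hyperparameters are all placeholders for illustration, not values from the keynote.

```python
# Minimal sketch: fine-tune a Bedrock base model on labeled examples.
# All names, ARNs and S3 paths are placeholders.
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

response = bedrock.create_model_customization_job(
    jobName="support-kb-finetune-job",             # hypothetical job name
    customModelName="support-kb-finetuned-model",  # hypothetical model name
    roleArn="arn:aws:iam::123456789012:role/BedrockCustomizationRole",
    baseModelIdentifier="amazon.titan-text-express-v1",  # example base model
    customizationType="FINE_TUNING",
    # Labeled prompt/completion pairs drawn from the customer's own data
    trainingDataConfig={"s3Uri": "s3://example-bucket/train/labeled-examples.jsonl"},
    outputDataConfig={"s3Uri": "s3://example-bucket/output/"},
    hyperParameters={"epochCount": "2", "batchSize": "1", "learningRate": "0.00001"},
)
print(response["jobArn"])  # track the job until the tuned model is ready
```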

There is now also a retrieval-augmented generation (RAG) capability, which allows models to retrieve and consider additional information from a customer’s internal data sources when responding. The other customization feature, continued pre-training, uses large amounts of a customer’s unlabeled data to improve a model’s knowledge and reasoning abilities in their industry or field.
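The RAG capability corresponds to Bedrock’s knowledge base feature, where documents from internal data sources are indexed and retrieved at query time. A minimal sketch of a retrieval-plus-generation call through boto3 follows, assuming a knowledge base has already been created and synced; the knowledge base ID, model ARN and question are placeholders. Continued pre-training, by contrast, is submitted much like the fine-tuning job above, just with unlabeled data and a different customization type.

```python
# Minimal sketch: ask a question grounded in a Bedrock knowledge base (RAG).
# The knowledge base ID and model ARN are placeholders.
import boto3

agent_runtime = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

response = agent_runtime.retrieve_and_generate(
    input={"text": "What is our refund policy for enterprise customers?"},
    retrieveAndGenerateConfiguration={
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": "KB1234567890",  # hypothetical knowledge base ID
            "modelArn": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-v2",
        },
    },
)
print(response["output"]["text"])  # answer grounded in the retrieved documents
```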

“We want you to be able to use Bedrock to complete actions like booking travel, filing insurance claims, deploying software… and this usually requires orchestration between multiple systems that operate in your organization,” Selipsky said. “That’s why a few months ago, we introduced agents for Bedrock and today I’m happy to announce that this powerful feature is now generally available.”
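To make the agents piece concrete, the sketch below invokes an already-configured Bedrock agent through boto3 and streams back its response. The agent ID, alias ID, session ID and prompt are placeholders, and the action groups the agent orchestrates (booking systems, claims systems and so on) are assumed to be wired up separately.

```python
# Minimal sketch: call a Bedrock agent that orchestrates actions across systems.
# Agent ID, alias ID and session ID are placeholders.
import boto3

agent_runtime = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

response = agent_runtime.invoke_agent(
    agentId="AGENT123456",       # hypothetical agent ID
    agentAliasId="ALIAS123456",  # hypothetical alias ID
    sessionId="demo-session-001",
    inputText="File an insurance claim for water damage reported on November 28.",
)

# The agent's reply arrives as an event stream of completion chunks.
for event in response["completion"]:
    if "chunk" in event:
        print(event["chunk"]["bytes"].decode("utf-8"), end="")
```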

Going a step further, Selipsky also announced Amazon Bedrock Guardrails as a way to further improve safety for AI models. Guardrails let customers configure models to avoid certain topics, words or responses based on their responsible AI policies.
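The configuration below is only a sketch of how such a policy can be expressed with boto3’s create_guardrail call once the feature is available in an account; the guardrail name, denied topic, blocked word and messaging strings are placeholders meant to show the shape of a responsible AI policy, not a recommended one.

```python
# Minimal sketch: define a Bedrock guardrail that denies a topic and blocks a word.
# All names and policy entries are placeholders.
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

guardrail = bedrock.create_guardrail(
    name="responsible-ai-guardrail",  # hypothetical guardrail name
    topicPolicyConfig={
        "topicsConfig": [
            {
                "name": "investment-advice",
                "definition": "Recommendations about specific financial investments.",
                "type": "DENY",
            }
        ]
    },
    wordPolicyConfig={"wordsConfig": [{"text": "project-codename"}]},
    blockedInputMessaging="Sorry, I can't discuss that topic.",
    blockedOutputsMessaging="Sorry, I can't share that information.",
)
print(guardrail["guardrailId"])
```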

“We’re approaching the whole concept of generative AI in a fundamentally different way because we understand what it takes to reinvent how you are going to build with this technology,” Selipsky said.
