Generated on: April 18, 2025
Amazon Managed Streaming for Apache Kafka (Amazon MSK) now supports Graviton3-based M7g instances for both Standard brokers and Express brokers on MSK Provisioned clusters in the Middle East (UAE) AWS Region.
M7g instances for Standard brokers deliver up to 24% compute cost savings and up to 29% higher write and read throughput over comparable MSK clusters running on M5 instances. When you use Graviton instances with Express brokers, you can realize even greater benefits: up to 3x more throughput per broker, scaling up to 20x faster, and recovery time reduced by 90% compared to standard Apache Kafka brokers.
To learn more, check out our blogs on MSK Express brokers and M7g-based Standard brokers. To get started, visit the Amazon MSK console.
Amazon SNS now supports Internet Protocol version 6 (IPv6) for API requests, enabling you to communicate with Amazon SNS from IPv6, IPv4, or dual-stack clients using public endpoints.
Amazon SNS is a fully managed messaging service that enables publish/subscribe messaging between distributed systems, microservices, and event-driven serverless applications. The addition of IPv6 support provides customers with a vastly expanded address space, eliminating concerns about address exhaustion and simplifying network architecture for IPv6-native applications. With simultaneous support for both IPv4 and IPv6 clients on SNS public endpoints, customers can gradually transition from IPv4 to IPv6-based systems and applications without needing to switch all systems at once. This enhancement is particularly valuable for modern cloud-native applications and organizations transitioning to IPv6 as part of their modernization efforts.
To learn more about best practices for configuring IPv6 in your environment, visit the whitepaper on IPv6 in AWS. This feature is now available in all AWS commercial Regions, including the AWS China Regions, and can be used at no additional cost.
See here for a full listing of our Regions. To learn more about Amazon SNS, please refer to our Developer Guide.
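As a sketch of dual-stack access, the helper below builds the assumed dual-stack endpoint URL for SNS. The hostname follows the general AWS dual-stack pattern `{service}.{region}.api.aws`; confirm the exact SNS hostname for your Region in the AWS documentation before relying on it.

```python
def sns_dualstack_endpoint(region: str) -> str:
    """Return the assumed dual-stack (IPv4 + IPv6) endpoint URL for Amazon SNS.

    Pattern assumed from AWS's general dual-stack naming convention;
    verify the exact hostname for your Region in the SNS documentation.
    """
    return f"https://sns.{region}.api.aws"


# With boto3 installed and credentials configured, you could point a client
# at the dual-stack endpoint explicitly:
#   import boto3
#   sns = boto3.client("sns", region_name="us-east-1",
#                      endpoint_url=sns_dualstack_endpoint("us-east-1"))
# or let botocore resolve it via Config(use_dualstack_endpoint=True).
print(sns_dualstack_endpoint("us-east-1"))
```

Existing IPv4 clients need no changes; only clients that should prefer IPv6 or dual-stack resolution opt into the new endpoint.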
We are excited to announce that AWS Mainframe Modernization now offers greater control over the managed runtime environments that run modernized mainframe applications.
For both refactored and replatformed applications, you can now export data sets to an Amazon S3 bucket. Optionally, you can choose to encrypt the exported data sets. This export feature makes it easier to move data sets across environments or to archive them.
For applications refactored with AWS Blu Age, you can now restart a batch job at a specific step. This enables advanced batch operational and recovery procedures.
For applications replatformed with Rocket Software, you can now configure your managed runtime application using a base configuration compatible with Rocket Enterprise Server deployed on non-managed environments. This base configuration provides flexibility by allowing numerous advanced configuration parameters supported by Rocket Enterprise Server, such as CICS or IMS granular parameters. It also allows transferring exported configuration parameters from a Rocket Enterprise Server deployed on Amazon EC2 to an AWS Mainframe Modernization managed runtime application.
These new features are available in any AWS Region where AWS Mainframe Modernization managed runtime is already deployed. To learn more, please visit AWS Mainframe Modernization product and documentation pages.
Amazon Bedrock Knowledge Bases now extends hybrid search support to knowledge bases created using Amazon Aurora PostgreSQL and MongoDB Atlas vector stores. This capability, which can improve the relevance of retrieved results, previously worked only with Amazon OpenSearch Serverless and OpenSearch managed clusters in Bedrock Knowledge Bases.
Retrieval augmented generation (RAG) applications use semantic search, based on vectors, to search unstructured text. These vectors are created using foundation models to capture contextual and linguistic meaning within data to answer human-like questions. Hybrid search merges semantic and full-text search methods, executing dual queries and combining results. This approach improves results relevance by retrieving documents that match conceptually from semantic search or that contain specific keywords found in full-text search. The wider search scope enhances result quality, particularly for keyword-based queries.
You can enable hybrid search through the Knowledge Base APIs or through the Bedrock console. In the console, you can select hybrid search as your preferred search option within Knowledge Bases, or choose the default search option to use semantic search only. Hybrid search with Aurora PostgreSQL is available in all AWS Regions where Bedrock Knowledge Bases is available, excluding Europe (Zurich) and GovCloud (US) Regions. Hybrid search with MongoDB Atlas is available in the US West (Oregon) and US East (N. Virginia) AWS Regions. To learn more, refer to Bedrock Knowledge Bases documentation. To get started, visit the Amazon Bedrock console.
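As a minimal sketch of the API route, the helper below builds a request body for the Knowledge Bases Retrieve operation with the search type overridden to hybrid. The knowledge base ID and query are placeholders; the `overrideSearchType` field is taken from the Bedrock Agent Runtime API.

```python
import json


def build_retrieve_request(kb_id: str, query: str, use_hybrid: bool = True) -> dict:
    """Request body for the Bedrock Knowledge Bases Retrieve API.

    "overrideSearchType" selects HYBRID (semantic + full-text) or SEMANTIC.
    """
    return {
        "knowledgeBaseId": kb_id,
        "retrievalQuery": {"text": query},
        "retrievalConfiguration": {
            "vectorSearchConfiguration": {
                "numberOfResults": 5,
                "overrideSearchType": "HYBRID" if use_hybrid else "SEMANTIC",
            }
        },
    }


# With boto3 and access to a knowledge base, this body would be passed as
# keyword arguments to boto3.client("bedrock-agent-runtime").retrieve(...).
print(json.dumps(build_retrieve_request("KB123EXAMPLE", "What is our refund policy?"), indent=2))
```

Omitting `overrideSearchType` leaves the vector store's default search behavior in place.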
Amazon Lex now allows you to disable automatic intent switching during slot elicitation using request attributes. This new capability gives you more control over conversation flows by preventing unintended switches between intents while gathering required information from users. The feature helps maintain focused conversations and reduces the likelihood of interrupting the slot-collection process.
This enhancement is particularly valuable for complex conversational flows where completing the current interaction is crucial before allowing transitions to other intents. By setting certain attributes, you can ensure that your bot stays focused on collecting all necessary slots or confirmations for the current intent, even if the user's utterance matches another intent with higher confidence. This helps create more predictable and controlled conversation experiences, especially in scenarios like multi-step form filling or sequential information gathering.
This feature is supported for all languages supported by Lex and is available in all AWS Regions where Amazon Lex operates.
To learn more about controlling intent switching behavior, please reference the Lex V2 Developer Guide.
Anthropic's Claude 3.7 Sonnet hybrid reasoning model, their most intelligent model to date, is now available through cross-region inference on Bedrock in Europe (Ireland), Europe (Paris), Europe (Frankfurt), and Europe (Stockholm). Claude 3.7 Sonnet represents a significant advancement in AI capabilities, offering both quick responses and extended, step-by-step thinking made visible to the user. This new model includes strong improvements in coding and brings enhanced performance across various tasks, like instruction following, math, and physics.
Claude 3.7 Sonnet introduces a unique approach to AI reasoning by integrating it seamlessly with other capabilities. Unlike traditional models that separate quick responses from those requiring deeper thought, Claude 3.7 Sonnet allows users to toggle between standard and extended thinking modes. In standard mode, it functions as an upgraded version of Claude 3.5 Sonnet. In extended thinking mode, it employs self-reflection to achieve improved results across a wide range of tasks. Amazon Bedrock customers can adjust how long the model thinks, offering a flexible trade-off between speed and answer quality. Additionally, users can control the reasoning budget by specifying a token limit, enabling more precise cost management.
Claude 3.7 Sonnet is also available on Amazon Bedrock in the US East (N. Virginia), US East (Ohio), and US West (Oregon) regions. To get started, visit the Amazon Bedrock console. Integrate it into your applications using the Amazon Bedrock API or SDK. For more information, see the AWS News Blog and Claude in Amazon Bedrock.
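The reasoning-budget control described above maps onto the Converse API's `additionalModelRequestFields`. The sketch below assumes the EU cross-region inference profile ID shown in the comment; check the Bedrock console for the exact model identifier available in your Region.

```python
def build_converse_request(prompt: str, budget_tokens: int = 2048) -> dict:
    """Converse API request enabling Claude 3.7 Sonnet extended thinking.

    The model ID is an assumed EU cross-region inference profile; verify
    the exact identifier for your Region in the Bedrock console.
    """
    return {
        "modelId": "eu.anthropic.claude-3-7-sonnet-20250219-v1:0",
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "inferenceConfig": {"maxTokens": 4096},
        "additionalModelRequestFields": {
            # "budget_tokens" caps how many tokens the model may spend on
            # its visible step-by-step thinking before answering
            "thinking": {"type": "enabled", "budget_tokens": budget_tokens}
        },
    }


# With boto3: boto3.client("bedrock-runtime").converse(**request)
request = build_converse_request("Explain why the sky is blue.", budget_tokens=1024)
print(request["additionalModelRequestFields"])
```

Omitting the `thinking` field leaves the model in standard mode, trading depth of reasoning for lower latency and cost.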
We are excited to announce that Amazon SageMaker Studio now supports recovery mode, enabling users to regain access to their JupyterLab and Code Editor applications when configuration issues prevent normal startup.
Starting today, when users encounter application startup failures due to issues such as a corrupted Conda configuration or insufficient storage space, they can launch their application in recovery mode from the Studio UI or using the AWS CLI. When configuration issues occur, users see a warning banner with the recommended solution and can choose to run their space in recovery mode. This simplified environment provides access to essential features like the terminal and file explorer, allowing users to diagnose and fix configuration issues without administrator intervention. This functionality provides users with an important self-service mechanism, helping them minimize workspace downtime.
This feature is available in all AWS Regions where Amazon SageMaker Studio is currently available, excluding China Regions and GovCloud (US) Regions. To learn more, visit our documentation.
Amazon SageMaker Catalog, part of the next generation of Amazon SageMaker, now supports enhanced search capabilities with exact-match and partial-match functionality for technical identifiers such as column and table names. This feature allows users to perform precise searches by enclosing terms within a qualifier such as double quotes (" "), helping them quickly locate assets with exact or partial technical names. For example, analysts can find specific columns faster, stewards can validate assets using naming patterns like "audit_", and engineers can identify temporary tables with prefixes like "temp_".
Building on SageMaker Catalog’s existing keyword and semantic search, this enhancement is designed for organizations managing large-scale data catalogs with complex naming conventions. For example, searching for "customer_id" returns only those assets with an exact match, while a query like "sales_" returns assets such as sales_summary and sales_data_2024. These capabilities help users quickly locate technical assets, improve data governance by reducing errors, and enhance collaboration.
Check out the product documentation to learn more about how to set up metadata rules for subscription and publishing workflows.
The first models in the new Llama 4 herd of models—Llama 4 Scout 17B and Llama 4 Maverick 17B—are now available on AWS. You can access Llama 4 models in Amazon SageMaker JumpStart. These advanced multimodal models empower you to build more tailored applications that respond to multiple types of media. Llama 4 offers improved performance at lower cost compared to Llama 3, with expanded language support for global applications. Featuring mixture-of-experts (MoE) architecture, these models deliver efficient multimodal processing for text and image inputs, improved compute efficiency, and enhanced AI safety measures.
According to Meta, the smaller Llama 4 Scout 17B model is the best multimodal model in the world in its class and is more powerful than Meta’s Llama 3 models. Scout is a general-purpose model with 17 billion active parameters, 16 experts, and 109 billion total parameters that delivers state-of-the-art performance for its class. Scout significantly increases the context length from 128K tokens in Llama 3 to an industry-leading 10 million tokens. This opens up a world of possibilities, including multi-document summarization, parsing extensive user activity for personalized tasks, and reasoning over vast code bases. Llama 4 Maverick 17B is a general-purpose model that comes in both quantized (FP8) and non-quantized (BF16) versions, featuring 128 experts, 400 billion total parameters, and a 1 million token context length. It excels in image and text understanding across 12 languages, making it suitable for versatile assistant and chat applications.
Meta’s Llama 4 models are available in Amazon SageMaker JumpStart in the US East (N. Virginia) AWS Region. To learn more, read the launch blog and technical blog. These models can be accessed in the Amazon SageMaker Studio.
Amazon Bedrock Guardrails announces new capabilities to safely build generative AI applications at scale. These new capabilities offer greater flexibility, finer-grained control, and ease of use when applying the configurable safeguards provided by Bedrock Guardrails, aligning them with your use cases and responsible AI policies.
Bedrock Guardrails now offers a detect mode that provides a preview of the expected results from your configured policies, allowing you to evaluate the effectiveness of the safeguards before deploying them. This enables faster iteration and accelerates time-to-production: you can experiment with different combinations and strengths of policies and fine-tune your guardrails before deployment.
Guardrails now offers more configurability, with options to enable policies on input prompts, model responses, or both, a significant improvement over the previous default where policies were automatically applied to both inputs and outputs. This finer-grained control lets you apply safeguards selectively so they work for your use case.
Bedrock Guardrails offers sensitive information filters that detect personally identifiable information (PII) with two modes: Block, where requests containing sensitive information are blocked entirely, and Mask, where sensitive information is redacted and replaced with identifier tags. You can now use either mode for both input prompts and model responses, giving you the flexibility and ease of use to safely build generative AI applications at scale.
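To make the two PII modes concrete, the sketch below builds the sensitive-information portion of a CreateGuardrail request, where `ANONYMIZE` corresponds to Mask and `BLOCK` to Block. The entity types listed are examples; the full set is in the Bedrock Guardrails API reference.

```python
def pii_filter_config(action: str) -> dict:
    """Sensitive-information policy fragment for CreateGuardrail.

    action is "ANONYMIZE" (Mask: replace detected PII with identifier
    tags) or "BLOCK" (reject the request or response outright).
    """
    assert action in ("ANONYMIZE", "BLOCK")
    return {
        "sensitiveInformationPolicyConfig": {
            "piiEntitiesConfig": [
                # example entity types -- see the API reference for the full list
                {"type": "EMAIL", "action": action},
                {"type": "PHONE", "action": action},
            ]
        }
    }


# Passed alongside name, description, and other policies to
# boto3.client("bedrock").create_guardrail(...)
print(pii_filter_config("ANONYMIZE"))
```

With Mask, a response such as "Contact jane@example.com" would come back with the address replaced by an identifier tag rather than the whole response being rejected.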
These new capabilities are available in all AWS regions where Amazon Bedrock Guardrails is supported.
To learn more, see the blog post, technical documentation and the Bedrock Guardrails product page.
AWS announces the availability of Pixtral Large 25.02 in Amazon Bedrock, a 124B parameter model with multimodal capabilities that combines state-of-the-art image understanding with powerful text processing. AWS is the first cloud provider to deliver Pixtral Large 25.02 as a fully managed, serverless model. This model delivers frontier-class performance across document analysis, chart interpretation, and natural image understanding tasks, while maintaining the advanced text capabilities of Mistral Large 2.
With a 128K context window, Pixtral Large 25.02 achieves best-in-class performance on key benchmarks including MathVista, DocVQA, and VQAv2. The model features comprehensive multilingual support across dozens of languages and is trained on over 80 programming languages. Key capabilities include advanced mathematical reasoning, native function calling, JSON outputting, and robust context adherence for Retrieval Augmented Generation (RAG) applications.
Pixtral Large 25.02 is now available in Amazon Bedrock in seven AWS Regions: US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Frankfurt), Europe (Ireland), Europe (Paris), and Europe (Stockholm). For more information on supported Regions, visit the Amazon Bedrock Model Support by Regions guide.
To learn more about Pixtral Large 25.02 and its capabilities, visit the Mistral AI product page. To get started with Pixtral Large 25.02 in Amazon Bedrock, visit the Amazon Bedrock console.
Today, Amazon introduces Amazon Nova Sonic, a new foundation model that unifies speech understanding and generation into a single model, to enable human-like voice conversations in artificial intelligence (AI) applications. Amazon Nova Sonic enables developers to build real-time conversational AI applications in Amazon Bedrock, with industry-leading price performance and low latency. It can understand speech in different speaking styles and generate speech in expressive voices, including both masculine-sounding and feminine-sounding voices, in English accents including American and British. Amazon Nova Sonic's novel architecture can adapt the intonation, prosody, and style of the generated speech response to align with the context and content of the speech input. Additionally, Amazon Nova Sonic allows for function calling and knowledge grounding with enterprise data using Retrieval-Augmented Generation (RAG). Amazon Nova Sonic is developed with responsible AI in mind and features built-in protections including content moderation and watermarking.
To help developers build real-time applications with Amazon Nova Sonic, AWS is also announcing the launch of a new bidirectional streaming API in Amazon Bedrock. This API enables two-way streaming of content, which is critical for low-latency interactive communication between a human user and the AI model.
Amazon Nova Sonic can be used to voice-enable virtually any application. It has been extensively tested for a wide range of applications, including enabling customer service call automation at contact centers, outbound marketing, voice-enabled personal assistants and agents, and interactive education and language learning.
The Amazon Nova Sonic model is now available in Amazon Bedrock in the US East (N. Virginia) AWS Region. To learn more, read the AWS News Blog, Amazon Nova Sonic product page, and Amazon Nova Sonic User Guide. To get started with Amazon Nova Sonic in Amazon Bedrock, visit the Amazon Bedrock console.
At re:Invent 2024, AWS announced the preview of prompt caching, a new capability that can reduce costs by up to 90% and latency by up to 85% by caching frequently used prompts across multiple API calls. Today, AWS is launching prompt caching in general availability on Amazon Bedrock.
Prompt caching allows you to cache repetitive inputs and avoid reprocessing context such as long system prompts and common examples that help guide the model’s response. When you use prompt caching, fewer computing resources are needed to process your inputs. As a result, not only can we process your request faster, but we can also pass along the cost savings.
Amazon Bedrock is a fully managed service that offers a choice of high-performing FMs from leading AI companies via a single API. Amazon Bedrock also provides a broad set of capabilities customers need to build generative AI applications with security, privacy, and responsible AI capabilities built in. These capabilities help you build tailored applications for multiple use cases across different industries, helping organizations unlock sustained growth from generative AI while providing tools to build customer trust and data governance.
Prompt caching is now generally available for Anthropic’s Claude 3.5 Haiku and Claude 3.7 Sonnet, Nova Micro, Nova Lite, and Nova Pro models. Customers who were given access to Claude 3.5 Sonnet v2 during the prompt caching preview will retain their access, however no additional customers will be granted access to prompt caching on the Claude 3.5 Sonnet v2 model. For regional availability or to learn more about prompt caching, please see our documentation and blog.
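In the Converse API, the cacheable prefix is marked with a `cachePoint` content block, as assumed in the sketch below: everything before the checkpoint (here, a long system prompt) can be reused across calls instead of being reprocessed.

```python
def build_cached_converse_request(model_id: str, system_prompt: str,
                                  question: str) -> dict:
    """Converse request with a cache checkpoint after the system prompt.

    The cachePoint block marks the prefix up to that point as cacheable,
    so repeated calls with the same prefix reuse it rather than
    reprocessing it (block shape assumed from the Converse API).
    """
    return {
        "modelId": model_id,
        "system": [
            {"text": system_prompt},
            {"cachePoint": {"type": "default"}},  # cache everything above
        ],
        "messages": [{"role": "user", "content": [{"text": question}]}],
    }


# With boto3: boto3.client("bedrock-runtime").converse(**request)
request = build_cached_converse_request(
    "anthropic.claude-3-5-haiku-20241022-v1:0",
    "You are a support agent. Follow these 40 policy rules: ...",
    "How do I reset my password?",
)
print(request["system"][1])
```

Only the user message after the checkpoint varies between calls, which is what makes the savings compound across a busy application.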
Today, Amazon Simple Email Service (SES) launched support for adding attachments to emails when sending via SES simple sending v2 APIs. Customers can add attachments such as PDF documents to emails, or include images for rendering inline in email content without requiring image downloads. This makes it easier to send richer email content with the convenience of the SES sending APIs.
Previously, customers could send email content such as text and HTML through SES simple sending APIs. Customers did not have to worry about creating the email data structure which is actually sent to mailbox providers. However, if customers wanted to attach documents or include inline images, they had to use more complex APIs requiring construction of the email document structure prior to sending. Now, customers can add attachments in any of the SES supported MIME types (such as PDF, Word, and GIF), without knowledge of how MIME messages are constructed. This decreases code complexity and reduces time to implement sending over SES.
SES supports attachments in all AWS Regions where SES is available.
For more information, see the documentation for working with email attachments in SES.
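As a sketch of the simple-sending route, the helper below builds a SendEmail request with a PDF attachment. The field names under `Attachments` reflect this sketch's assumptions about the new API shape; consult the SES v2 API reference for the authoritative structure (the SDK handles content encoding for you).

```python
def build_send_email_request(sender: str, recipient: str,
                             pdf_bytes: bytes) -> dict:
    """SES v2 SendEmail request with a PDF attachment (assumed shape).

    Attachment field names are this sketch's assumptions; verify them
    against the SES v2 API reference.
    """
    return {
        "FromEmailAddress": sender,
        "Destination": {"ToAddresses": [recipient]},
        "Content": {
            "Simple": {
                "Subject": {"Data": "Your invoice"},
                "Body": {"Text": {"Data": "Please find your invoice attached."}},
                "Attachments": [
                    {
                        "FileName": "invoice.pdf",        # name shown to the recipient
                        "ContentType": "application/pdf",
                        "RawContent": pdf_bytes,          # raw file bytes
                    }
                ],
            }
        },
    }


# With boto3: boto3.client("sesv2").send_email(**request)
request = build_send_email_request("billing@example.com", "customer@example.com",
                                   b"%PDF-1.4 ...")
print(request["Content"]["Simple"]["Attachments"][0]["FileName"])
```

The point of the launch is that no MIME multipart assembly appears anywhere in this code; SES constructs the message structure for you.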
Amazon Simple Email Service (SES) announces that its Mail Manager email modernization and infrastructure features now accept incoming connections from customer-provisioned Virtual Private Clouds (VPCs) to Mail Manager Ingress Endpoints. This makes use of PrivateLink connectivity features already provided by AWS.
Since Mail Manager launched in mid-2024, VPC connectivity has become its most-requested new feature. Customers asking for it operate large fleets of applications hosted inside AWS and want to route all outgoing and incoming mail for those applications via Mail Manager. With VPC support via PrivateLink, those customers can now route all their outgoing mail securely, entirely within AWS, to Mail Manager, which then hands each message off to its first external destination using its ‘Send to Internet’ action or by delivering the mail to a downstream SMTP relay. To enable the feature after creating their VPC, customers create a new ‘Network’ Ingress Endpoint type and specify the VPC’s unique endpoint ID. Customers can also choose whether connections originating via PrivateLink must authenticate to the Ingress Endpoint. All VPC-enabled Mail Manager Ingress Endpoints support dual-stack (IPv4 and IPv6) connectivity by default.
Mail Manager VPC Ingress Endpoints are available in all 17 AWS Regions where Mail Manager is launched. There is no additional fee from SES to make use of this feature, though charges from AWS for VPC and PrivateLink activity may apply. Customers can learn more about SES Mail Manager by clicking here.
Cost Optimization Hub now supports DynamoDB and MemoryDB reservation recommendations. You can filter and aggregate recommendations across both services, making it easier to identify the highest cost-saving opportunities for DynamoDB and MemoryDB.
With this launch, you can view, consolidate, and prioritize reservation recommendations for DynamoDB and MemoryDB across your organization's member accounts and AWS Regions through a single dashboard. Cost Optimization Hub's comprehensive view enables you to see these recommendations as part of your total potential savings, helping you compare and prioritize them alongside other cost-saving opportunities.
DynamoDB and MemoryDB reservation recommendations are now available in Cost Optimization Hub across all AWS Regions where Cost Optimization Hub is supported.
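As a rough sketch of filtering these recommendations programmatically, the helper below builds a ListRecommendations request for the Cost Optimization Hub API. The `resourceTypes` strings and `orderBy` dimension shown are illustrative placeholders, not verified enum values; check the Cost Optimization Hub API reference for the exact names.

```python
def build_list_recommendations_request(resource_types: list) -> dict:
    """ListRecommendations request filtered to specific resource types.

    The resourceType values passed in and the orderBy dimension used here
    are HYPOTHETICAL placeholders; look up the exact enum values in the
    Cost Optimization Hub API reference.
    """
    return {
        "filter": {"resourceTypes": resource_types},
        # sort highest savings first (dimension name assumed)
        "orderBy": {"dimension": "EstimatedMonthlySavings", "order": "Desc"},
    }


# e.g. for the new reservation recommendations (hypothetical enum values):
request = build_list_recommendations_request(
    ["DynamoDbReservedCapacity", "MemoryDbReservedInstances"])
# With boto3: boto3.client("cost-optimization-hub").list_recommendations(**request)
print(request["filter"])
```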
Today, Amazon Web Services (AWS) announces the launch of two new EC2 I7ie bare metal instances. These instances are now available in US East (N. Virginia, Ohio), US West (Oregon), Europe (Frankfurt, London), and Asia Pacific (Tokyo) regions. The I7ie instances feature 5th generation Intel Xeon Scalable processors with a 3.2GHz all-core turbo frequency. Compared to I3en instances, they deliver 40% better compute performance and 20% better price performance. I7ie instances offer up to 120TB local NVMe storage density (highest in the cloud) for storage optimized instances. Powered by 3rd generation AWS Nitro SSDs, I7ie instances deliver up to 65% better real-time storage performance, up to 50% lower storage I/O latency, and 65% lower storage I/O latency variability compared to I3en instances.
EC2 bare metal instances provide direct access to the 5th generation Intel Xeon Scalable processor and memory resources. They allow EC2 customers to run applications that benefit from deep performance analysis tools, specialized workloads that require direct access to bare metal infrastructure, legacy workloads incompatible with virtual environments, and licensing-restricted business critical applications. These instances feature three Intel accelerator technologies: Intel Data Streaming accelerator (DSA), Intel In-Memory Analytics Accelerator (IAA), and Intel QuickAssist Technology (QAT). These accelerators optimize workload performance through efficient data operation offloading and acceleration.
I7ie instances offer metal-24xl and metal-48xl sizes with 96 and 192 vCPUs respectively and deliver up to 100Gbps of network bandwidth and 60Gbps of bandwidth for Amazon Elastic Block Store (EBS).
To learn more, visit the I7ie instances page.
Amazon Connect now provides the ability to dynamically set the text-to-speech (TTS) voice, language, or speaking style used in voice bots and interactive voice response (IVR). These new capabilities enable you to deliver a more personalized experience for each of your end customers. For example, you can configure the desired voice dynamically based on the primary spoken language set in a customer's profile. These capabilities are configurable in the “Set Voice” block in Amazon Connect flows, through either the drag-and-drop flow designer or the public APIs.
To learn more, see the public documentation on the Set Voice block and the Amazon Connect Administrator Guide. These features are available in all AWS Regions where Amazon Connect is available. To learn more about Amazon Connect, AWS's cloud contact center service, please visit the Amazon Connect website.
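As an illustrative sketch of what dynamic configuration looks like in the flow language, the block below references a contact attribute instead of a fixed voice name. The block shape and JSONPath are assumptions; export a flow from the Amazon Connect console to see the authoritative structure.

```python
def set_voice_block(voice_attribute_path: str = "$.Attributes.preferredVoice") -> dict:
    """Sketch of a flow-language "Set Voice" block with a dynamic voice.

    The block type, parameter name, and JSONPath used here are
    illustrative assumptions; export a real flow from the console for
    the authoritative structure.
    """
    return {
        "Type": "UpdateContactTextToSpeechVoice",
        "Parameters": {
            # resolved at runtime from a contact attribute rather than a
            # hard-coded voice name like "Joanna"
            "TextToSpeechVoice": voice_attribute_path,
        },
        "Transitions": {"NextAction": "next-action-id"},  # placeholder ID
    }


print(set_voice_block())
```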
Today, AWS announces a new fulfillment experience for container products in AWS Marketplace, enhancing the deployment and management of container-based software from AWS Partners.
The new fulfillment experience helps to reduce complexity and improve workflow efficiency by making it easier to understand available deployment options, and providing explanations of each option's purpose and implications. The fulfillment experience also offers readily accessible help resources, including detailed guides from AWS Marketplace sellers. The experience is available across all AWS Regions and in local languages, delivering a consistent experience worldwide.
To learn more about the new fulfillment experience for container products in AWS Marketplace and how it can benefit your organization, visit the AWS Marketplace Buyer Guide or start exploring container products in AWS Marketplace today.
Amazon Elastic Kubernetes Service (Amazon EKS) now offers Bottlerocket FIPS (Federal Information Processing Standards) AMIs for EKS managed node groups, helping customers to meet federal compliance requirements while leveraging the security of Bottlerocket and the operational benefits of EKS managed node groups.
Bottlerocket is a Linux-based operating system optimized for running containers that follows a minimal, immutable design for enhanced security and performance. The FIPS-enabled Bottlerocket AMIs for EKS include FIPS 140-3 validated cryptographic modules and are configured by default to use FIPS-enabled AWS service endpoints, making it easier for customers in regulated industries to run containerized workloads while maintaining compliance with federal standards.
This feature is available in the following AWS regions: US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), AWS GovCloud (US-East), AWS GovCloud (US-West). There are no additional charges for using Bottlerocket FIPS AMIs with EKS managed node groups beyond standard EKS and EC2 pricing.
To learn more, visit the Amazon EKS documentation for the Bottlerocket FIPS AMI.
Amazon Relational Database Service (Amazon RDS) for SQL Server now supports new minor versions for SQL Server 2019 (CU32 - 15.0.4430.1) and SQL Server 2022 (CU18 - 16.0.4185.3). These minor versions include performance improvements and bug fixes, and are available for SQL Server Express, Web, Standard, and Enterprise editions. Review the Microsoft release notes for CU32 and CU18 for details.
We recommend that you upgrade to the latest minor versions to benefit from the performance improvements and bug fixes. You can upgrade with just a few clicks in the Amazon RDS Management Console or by using the AWS SDK or CLI. Learn more about upgrading your database instances from the Amazon RDS User Guide.
These minor versions are available in all AWS regions where Amazon RDS for SQL Server is available. See Amazon RDS for SQL Server Pricing for pricing details and regional availability.
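The upgrade described above is a single ModifyDBInstance call, sketched below. The engine-version string format ("16.00.4185.3.v1") is assumed from RDS's usual SQL Server naming; list the valid upgrade targets for your instance with `aws rds describe-db-engine-versions` first.

```python
def build_minor_upgrade_request(instance_id: str, engine_version: str) -> dict:
    """ModifyDBInstance request for a minor engine-version upgrade.

    The version string passed in should come from
    describe-db-engine-versions; the format shown in the usage example
    below is an assumption about RDS's SQL Server naming.
    """
    return {
        "DBInstanceIdentifier": instance_id,
        "EngineVersion": engine_version,
        # apply now, or set False to defer to the next maintenance window
        "ApplyImmediately": True,
    }


# With boto3:
# boto3.client("rds").modify_db_instance(
#     **build_minor_upgrade_request("my-sqlserver-db", "16.00.4185.3.v1"))
print(build_minor_upgrade_request("my-sqlserver-db", "16.00.4185.3.v1"))
```

Deferring with `ApplyImmediately=False` avoids an unplanned restart on production instances.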
Amazon ElastiCache for Memcached now supports horizontal autoscaling, enabling you to automatically adjust the capacity of your self-designed Memcached caches without manual intervention. ElastiCache for Memcached leverages AWS Application Auto Scaling to manage the scaling process and Amazon CloudWatch metrics to determine when to scale in or out, ensuring your Memcached caches maintain steady, predictable performance at the lowest possible cost.
Hundreds of thousands of customers use ElastiCache to improve their database and application performance and optimize costs. ElastiCache for Memcached supports target tracking and scheduled auto scaling policies. With target tracking, you define a target metric and ElastiCache for Memcached adjusts resource capacity in response to live changes in resource utilization. For instance, when memory utilization rises, ElastiCache for Memcached will add nodes to your cache to increase memory capacity and reduce utilization back to the target level. This enables your cache to adjust capacity automatically to maintain high performance. Conversely, when memory utilization drops below the target amount, ElastiCache for Memcached will remove nodes from your cache to reduce over-provisioning and lower costs. With scheduled scaling, you can set specific days and times for ElastiCache to scale your cache to accommodate predictable workload capacity changes.
Horizontal autoscaling on ElastiCache for Memcached is now available in all AWS commercial regions. You can get started using the AWS Management Console, Software Development Kit (SDK), or Command Line Interface (CLI). For more information, please visit the ElastiCache features page and documentation.
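The target-tracking policy described above lives in Application Auto Scaling. The sketch below shows the general shape of such a policy; the scalable dimension and predefined metric names are illustrative placeholders, since the exact values supported for self-designed Memcached caches are listed in the ElastiCache documentation.

```python
def target_tracking_config(target_percent: float) -> dict:
    """Application Auto Scaling target-tracking configuration sketch.

    The ScalableDimension and PredefinedMetricType strings below are
    HYPOTHETICAL placeholders; the ElastiCache documentation lists the
    supported values for self-designed Memcached caches.
    """
    return {
        "ServiceNamespace": "elasticache",
        "ScalableDimension": "elasticache:cache-cluster:Nodes",  # hypothetical
        "TargetTrackingScalingPolicyConfiguration": {
            # keep memory utilization near this percentage by adding or
            # removing nodes
            "TargetValue": target_percent,
            "PredefinedMetricSpecification": {
                # hypothetical metric name -- see the ElastiCache docs
                "PredefinedMetricType": "ElastiCacheDatabaseMemoryUsagePercentage"
            },
        },
    }


# Registered and attached with boto3.client("application-autoscaling")
# via register_scalable_target(...) and put_scaling_policy(...).
print(target_tracking_config(60.0))
```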
Today, Amazon ElastiCache introduces the ability to perform vertical scaling on self-designed Memcached caches on ElastiCache. Amazon ElastiCache is a fully managed, Valkey-, Memcached- and Redis OSS-compatible service that delivers real-time, cost-optimized performance for modern applications with 99.99% availability. With this launch, you can now dynamically adjust the compute and memory resources of your ElastiCache for Memcached clusters, providing greater flexibility and scalability.
Hundreds of thousands of customers use ElastiCache to improve their database and application performance and optimize costs. With vertical scaling on ElastiCache for Memcached, you can now seamlessly scale up or down your Memcached instances to match your application's changing workload demands without disrupting your cluster architecture. You can scale up to boost performance and increase cache capacity during high-traffic periods, or scale down to optimize costs when demand is low. This enables you to align your caching infrastructure with your evolving application needs, enhancing cost efficiency and improving resource utilization.
Vertical scaling on ElastiCache for Memcached is now available in all AWS regions. You can get started using the AWS Management Console, Software Development Kit (SDK), or Command Line Interface (CLI). For more information, please visit the ElastiCache features page and documentation.
Amazon Relational Database Service (Amazon RDS) for Oracle now supports R6id and M6id instances. These instances offer up to 7.6 TB of NVMe-based local storage, making them well-suited for database workloads that require access to large amounts of intermediate data beyond the instance's memory capacity. Customers can configure their Oracle database to use the local storage for temporary tablespace and Database Smart Flash Cache.
Operations such as sorts, hash joins, and aggregations can generate large amounts of intermediate data that doesn’t fit in memory and is stored in temporary tablespace. With R6id and M6id, customers can place temporary tablespaces in local storage instead of the Amazon EBS volume attached to the instance to reduce latency, improve throughput, and lower provisioned IOPS.
Customers with Oracle Enterprise Edition license can configure Database Smart Flash Cache to use the local storage. When configured, Smart Flash Cache will use the local storage to keep frequently accessed data that doesn’t fit in memory and improve the read performance of the database.
You can launch the new instances in the Amazon RDS Management Console or using the AWS CLI. Refer to Amazon RDS for Oracle Pricing for available instance configurations, pricing details, and region availability.
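As a hedged sketch, Smart Flash Cache is typically enabled by sizing the standard Oracle parameter db_flash_cache_size in the instance's DB parameter group; the parameter group name below is an example, and the {DBInstanceStore*…} sizing formula follows the RDS parameter-formula convention, so check the RDS for Oracle documentation for the exact variable name supported:

```shell
# Size Database Smart Flash Cache against the instance store via the
# DB parameter group (group name and formula are illustrative).
aws rds modify-db-parameter-group \
  --db-parameter-group-name my-oracle-params \
  --parameters "ParameterName=db_flash_cache_size,ParameterValue={DBInstanceStore*8/10},ApplyMethod=pending-reboot"
```

The change takes effect after the next reboot, since the flash cache is configured at instance startup.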
Amazon Aurora PostgreSQL-Compatible Edition now supports PostgreSQL versions 16.8, 15.12, 14.17, and 13.20. Please note, this release supports the versions released by the PostgreSQL community on February 20, 2025, which replace the previous February 13, 2025 releases. These releases contain product improvements and bug fixes made by the PostgreSQL community, along with Aurora-specific security and feature improvements such as dynamic resizing of the allocated space for Optimized Reads-enabled temporary objects on Aurora I/O-Optimized clusters and new features for Babelfish. For more details, please refer to the release notes.
These releases are now available in all commercial AWS regions and AWS GovCloud (US) Regions, except China regions. You can initiate a minor version upgrade by modifying your DB cluster. Please review the Aurora documentation to learn more. For a full feature parity list across regions, head to our feature parity page.
Amazon Aurora is designed for unparalleled high performance and availability at global scale with full MySQL and PostgreSQL compatibility. It provides built-in security, continuous backups, serverless compute, up to 15 read replicas, automated multi-Region replication, and integrations with other AWS services. To get started with Amazon Aurora, take a look at our getting started page.
Amazon Aurora PostgreSQL-Compatible Edition now supports pgvector 0.8.0, an open-source extension for PostgreSQL for storing vector embeddings in your database. pgvector provides vector similarity search capabilities that enable Aurora to be used in generative artificial intelligence (AI) semantic search and retrieval-augmented generation (RAG) applications. pgvector 0.8.0 includes improvements to the PostgreSQL query planner’s selection of indexes when filters are present, which can deliver better query performance and improve search result quality.
pgvector 0.8.0 improves data filtering using conditions in WHERE clauses and joins, which can improve query performance and usability. Additionally, iterative index scans help prevent ‘overfiltering’, ensuring enough results are generated to satisfy the conditions of a query. If an initial index scan doesn’t satisfy the query conditions, pgvector will continue to search the index until it hits a configurable threshold. pgvector 0.8.0 also includes performance improvements for searching and building HNSW indexes.
pgvector 0.8.0 is available in Amazon Aurora clusters running PostgreSQL 16.8, 15.12, 14.17, and 13.20 and higher in all AWS Regions including AWS GovCloud (US) Regions, except China. You can initiate a minor version upgrade by modifying your DB cluster. Please review the Aurora documentation to learn more.
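A minimal sketch of the iterative scan behavior, assuming a hypothetical items table with an HNSW index on an embedding column (the setting names follow the pgvector 0.8.0 release notes and should be checked against the pgvector documentation):

```sql
-- Let the HNSW index scan keep iterating when a WHERE filter discards
-- too many candidates, instead of returning an under-filled result set.
SET hnsw.iterative_scan = relaxed_order;
SET hnsw.max_scan_tuples = 20000;  -- configurable upper bound on the scan

-- Filtered nearest-neighbor query; without iterative scans this could
-- "overfilter" and return fewer than the requested 10 rows.
SELECT id
FROM items
WHERE category = 'shoes'
ORDER BY embedding <-> '[0.1, 0.2, 0.3]'
LIMIT 10;
```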
Amazon Aurora is designed for unparalleled high performance and availability at global scale with full MySQL and PostgreSQL compatibility. It provides built-in security, continuous backups, serverless compute, up to 15 read replicas, automated multi-Region replication, and integrations with other AWS services. To get started with Amazon Aurora, take a look at our getting started page.
Amazon Relational Database Service (Amazon RDS) Custom for SQL Server now supports a new minor version for SQL Server 2019 (CU32 - 15.0.4430.1). This minor version includes performance improvements and bug fixes, and is available for SQL Server Developer, Web, Standard, and Enterprise editions. Review the Microsoft release notes for CU32 for details.
We recommend that you upgrade to the latest minor version to benefit from the performance improvements and bug fixes. You can upgrade with just a few clicks in the Amazon RDS Management Console or by using the AWS SDK or CLI. Learn more about upgrading your database instances from the Amazon RDS Custom User Guide.
This minor version is available in all AWS commercial regions where Amazon RDS Custom for SQL Server is available.
RDS Custom is a managed database service that allows customization of the underlying operating system and database environment. RDS Custom for SQL Server supports two licensing models: License Included (LI) and Bring Your Own Media (BYOM). With BYOM, customers can use their existing SQL Server licenses with Amazon RDS Custom for SQL Server. See Amazon RDS Custom Pricing for pricing details and regional availability.
Today, AWS announces an updated service level agreement (SLA) for Amazon Neptune, increasing the Monthly Uptime Percentage for Multi-AZ DB Instance, Multi-AZ DB Cluster, and Multi-AZ Graph from 99.90% to 99.99%. This enhancement reflects AWS’s continued commitment to providing a highly available and reliable graph database service for your mission-critical applications.
With this new SLA, AWS will use commercially reasonable efforts to make each Amazon Neptune Multi-AZ DB Instance, Multi-AZ DB Cluster, and Multi-AZ Graph available with a Monthly Uptime Percentage, during any monthly billing cycle, of at least 99.99%. If Neptune does not meet this Service Commitment, customers will be eligible for Service Credits as outlined in the Amazon Neptune SLA.
This improved SLA is now available in all AWS regions where Amazon Neptune is offered. For more details, visit the AWS Global Region Table and learn more about Neptune on our product page, developer resources, and documentation.
Amazon Relational Database Service (RDS) for PostgreSQL announces Amazon RDS Extended Support minor versions 11.22-rds.20250220 and 12.22-rds.20250220. We recommend that you upgrade to these versions to fix known security vulnerabilities and bugs in prior versions of PostgreSQL.
Amazon RDS Extended Support provides you more time, up to three years, to upgrade to a new major version to help you meet your business requirements. During Extended Support, Amazon RDS will provide critical security and bug fixes for your RDS for PostgreSQL databases after the community ends support for a major version. You can run your PostgreSQL databases on Amazon RDS with Extended Support for up to three years beyond a major version’s end of standard support date.
You can use automatic minor version upgrades to automatically upgrade your databases to more recent minor versions during scheduled maintenance windows. You can also use Amazon RDS Blue/Green deployments for RDS for PostgreSQL using physical replication for your minor version upgrades. Learn more about upgrading your database instances, including automatic minor version upgrades and Blue/Green Deployments in the Amazon RDS User Guide.
Amazon RDS for PostgreSQL makes it simple to set up, operate, and scale PostgreSQL deployments in the cloud. See Amazon RDS for PostgreSQL Pricing for pricing details and regional availability. Create or update a fully managed Amazon RDS database in the Amazon RDS Management Console.
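As a sketch (the instance identifier is an example value), upgrading to an Extended Support minor version is an ordinary modify call, and future minors can be delegated to the maintenance window:

```shell
# Upgrade an RDS for PostgreSQL 12 instance to the Extended Support
# minor version announced above; identifier is an example value.
aws rds modify-db-instance \
  --db-instance-identifier my-postgres-12 \
  --engine-version 12.22-rds.20250220 \
  --apply-immediately

# Or let RDS apply future minor versions automatically during
# scheduled maintenance windows.
aws rds modify-db-instance \
  --db-instance-identifier my-postgres-12 \
  --auto-minor-version-upgrade
```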
Today, AWS announces that AWS Control Tower supports an additional 223 managed Config rules in Control Catalog for various use cases such as security, cost, durability, and operations. With this launch, you can now search, discover, enable and manage these additional rules directly from AWS Control Tower and govern more use cases for your multi-account environment.
To get started, go to the Control Catalog in AWS Control Tower and search for controls with the implementation filter AWS Config; you will then see all the AWS Config rules present in the Catalog. If you find rules that are relevant for you, you can enable them directly from the AWS Control Tower console. You can also use the ListControls, GetControl, and EnableControl APIs. With this launch, we’ve updated the ListControls and GetControl APIs to support three new fields (Create Time, Severity, and Implementation) that you can use when searching for a control in Control Catalog. For example, you can now programmatically find high-severity Config rules which were created after your previous evaluation.
You can search the new AWS Config rules in all AWS Regions where AWS Control Tower is available, including AWS GovCloud (US). When you want to deploy a rule, reference the list of supported regions for that rule to see where it can be enabled. To learn more, visit the AWS Control Tower User Guide.
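For example, the new fields can be used to narrow a ListControls call. The filter shape and output field names below follow the Control Catalog API as we understand it and should be verified against the API reference:

```shell
# List controls implemented as AWS Config rules, then keep only the
# high-severity ones (filter syntax is an assumption to verify).
aws controlcatalog list-controls \
  --filter 'Implementations={Types=["AWS::Config::ConfigRule"]}' \
  --query 'Controls[?Severity==`HIGH`].[Arn,Name,CreateTime]'
```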
AWS CodeBuild now supports Node 22, Python 3.13, Go 1.24 and Ruby 3.4 in Lambda Compute. These new runtime versions are available in both x86_64 and aarch64 architectures. AWS CodeBuild is a fully managed continuous integration service that compiles source code, runs tests, and produces software packages ready for deployment.
The new Lambda Compute runtime versions are available in US East (N. Virginia), US East (Ohio), US West (Oregon), South America (São Paulo), Asia Pacific (Singapore), Asia Pacific (Tokyo), Asia Pacific (Sydney), Asia Pacific (Mumbai), Europe (Ireland), and Europe (Frankfurt).
To learn more about runtime versions provided by CodeBuild, please visit our documentation. To learn more about CodeBuild’s Lambda Compute mode, see CodeBuild’s documentation for Running builds on Lambda.
AWS CodeBuild now supports enhanced debugging experience through secure and isolated sandbox environments. You can connect to the sandbox environment via SSH clients or from an IDE to interactively troubleshoot your build or test runs. AWS CodeBuild is a fully managed continuous integration service that compiles source code, runs tests, and produces software packages ready for deployment.
With the enhanced debugging capability, you can investigate issues and validate the fixes in real-time, before committing changes to your buildspec configurations. The sandbox environment maintains a persistent file system during the debugging session, and offers the same native integration with source providers and AWS services as your build environment.
The enhanced debugging experience through sandbox environments is available in all AWS Regions where CodeBuild is offered. To learn more, please visit our documentation and pricing information. To get started with CodeBuild, visit the AWS CodeBuild product page.
AWS End User Messaging now allows customers to use Internet Protocol version 6 (IPv6) addresses for their new and existing service endpoints. Customers moving to IPv6 can simplify their network stack by running their AWS End User Messaging endpoints on a network that supports both IPv4 and IPv6.
The continued growth of the Internet, particularly in the areas of mobile applications, connected devices, and IoT, has spurred an industry-wide move to IPv6. IPv6 increases the number of available addresses by several orders of magnitude so customers will no longer need to manage overlapping address spaces in their VPCs. Customers can standardize their applications on the new version of Internet Protocol by moving their AWS End User Messaging endpoints to IPv6 with AWS CLI.
Support for IPv6 on AWS End User Messaging is available in all Regions where AWS End User Messaging is available.
The latest Well-Architected Framework update is now available in the Well-Architected Tool, featuring updates and improvements to 78 best practices that offer actionable guidance to help organizations build more secure, resilient, scalable, and sustainable workloads.
With this release, the Well-Architected Framework has refreshed 100% of each pillar, including the Reliability Pillar, with 14 of its best practices updated for the first time since major Framework improvements started in 2022.
With the refreshed AWS Well-Architected Framework, organizations can use our actionable guidance to help achieve more operable, secure, sustainable, scalable, and resilient environments and workload solutions.
The updated AWS Well-Architected Framework is available now for all AWS customers. To learn more about the AWS Well-Architected Framework, visit the AWS Well-Architected Framework documentation.
AWS announces the end of sale for AWS Elemental Link HD devices effective April 15, 2025. AWS Elemental Link UHD devices will continue to be available for purchase. To support HD content contribution workflows, Link UHD has now added HD ingest pricing, providing a seamless path for new deployments. Existing Link HD devices will continue to be supported, with Link UHD now serving as the recommended solution for both HD and UHD contribution workflows.
To enable HD pricing on Link UHD devices, you can configure the device's input resolution on the Link device configuration page when the device is not actively streaming. The configuration option provides the flexibility to optimize costs when contributing HD content through Link UHD devices.
This feature is available immediately in all AWS Regions where Link UHD is supported. The input resolution configuration option is accessible through the AWS Management Console for all Link UHD devices.
To learn more about Link UHD's HD ingest rates and configuration options, visit the AWS Elemental Link documentation. For detailed pricing information, see AWS Elemental Link input pricing.
Today, we are launching two significant updates to AWS Elemental MediaTailor pricing:
A new VOD ad insertion usage type priced at a 50% discount to live. The new pricing model aligns better with how streaming providers monetize their content. By pricing VOD ad insertion at 50% of the cost of live workflows, you can expand your ad-supported VOD libraries more cost-effectively using MediaTailor.
Elimination of the $0.75 per 1000 ads pricing tier, so live ad insertion now has a single tier at $0.50 per 1000 ads. By removing the tiered pricing model of 1 to 5M insertions and greater than 5M insertions per month, this makes it easier for you to predict your bill and get started with MediaTailor.
The following summary illustrates the changes:
Previously, ad insertion pricing was tiered: $0.75 per 1,000 ad insertions for the first 5 million insertions per month, and $0.50 per 1,000 insertions beyond that.
Under the new pricing model: live ad insertion is $0.50 per 1,000 insertions, and VOD ad insertion is $0.25 per 1,000 insertions.
Both rates now apply regardless of volume.
The new pricing is effective immediately and will be applied automatically to your bill. Customers willing to make minimum commitments of more than 60 million ad insertions per month qualify for additional discounted pricing. Contact us or your account manager to learn more.
This new pricing structure is available in all AWS Regions where MediaTailor is available. To learn more, please visit the MediaTailor product page.
AWS Transfer Family announces new configuration options for SFTP connectors, providing you more flexibility and performance when connecting with remote SFTP servers. These enhancements include support for OpenSSH key format for authentication, ability to discover remote server’s host key for validating server identity, and ability to perform concurrent remote operations for improved transfer performance.
SFTP connectors provide a fully managed and low-code capability to copy files between remote SFTP servers and Amazon S3. You can now authenticate connections to remote servers using OpenSSH keys, in addition to the existing option of using PEM-formatted keys. Your connectors can now scan the remote servers for their public host keys that are used to validate the host identity, eliminating the need for manual retrieval of this information from server administrators. To improve transfer performance, connectors can now create up to five parallel connections with remote servers. These enhancements provide you greater control when connecting with remote SFTP servers to execute file operations.
The new configuration options for SFTP connectors are available in all AWS Regions where AWS Transfer Family is available. To learn more about SFTP connectors, visit the documentation. To get started with Transfer Family’s SFTP offerings, take the self-paced SFTP workshop.
AWS Transfer Family now enables you to delete, rename, or move files on remote SFTP servers using SFTP connectors. This enhancement allows you to easily manage files and directories stored on remote SFTP file systems and keep them up to date.
SFTP connectors provide fully managed and low-code capability to copy files between remote SFTP servers and Amazon S3. You can now organize your remote directories by deleting, renaming or moving source files to archive locations after they have been copied from the remote server. This helps you ensure that your remote directories remain current and prevent duplicate file transfers during subsequent job runs. These new features add to the existing SFTP connector functionalities, including capabilities to list remote directory contents and execute bidirectional file transfers.
SFTP connectors support for deleting, renaming, and moving files on remote SFTP servers is available in all AWS Regions where AWS Transfer Family is available. To learn more about SFTP connectors, take the self-paced workshop or visit the documentation. For information on pricing, see AWS Transfer Family pricing.
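As a sketch (the connector ID and file paths are example values, and the operation names follow the Transfer Family API as we understand it), the new remote operations look like this:

```shell
# Delete a file on the remote SFTP server after it has been copied.
aws transfer start-remote-delete \
  --connector-id c-1111aaaa2222bbbb3 \
  --delete-path /outbox/orders-2025-04-18.csv

# Or move it into an archive directory on the remote server instead,
# preventing duplicate transfers on the next job run.
aws transfer start-remote-move \
  --connector-id c-1111aaaa2222bbbb3 \
  --source-path /outbox/orders-2025-04-18.csv \
  --target-path /outbox/archive/orders-2025-04-18.csv
```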
Starting today, we are making it easier for customers to understand their inter-availability zone (AZ) VPC Peering usage within the same AWS Region by introducing a new usage type in their bill. These changes won’t affect customers’ charges and will help them easily understand their VPC Peering costs, enabling them to choose the right architecture based on cost, performance, and ease of management.
VPC Peering is an Amazon VPC feature that allows customers to establish networking connection between two VPCs, helping them route traffic between two VPCs using private IPv4 or IPv6 addresses. Previously, VPC Peering usage was reported under the intra-regional Data Transfer usage, making it difficult for customers to understand their VPC Peering usage and charges. With this launch, customers can now view their VPC Peering usage using the new usage type “Region_Name-VpcPeering-In/Out-Bytes” in Cost Explorer or Cost and Usage Report. Customers do not need to make any changes to their existing VPC Peering connections to benefit from this change, as these changes will be automatically applied.
There are no changes to the pricing for data transferred over VPC Peering connections. These changes will apply to all AWS commercial and AWS GovCloud (US) Regions.
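For example, the new usage type can be pulled out of Cost Explorer with the CLI; the region prefix (USE1 for US East (N. Virginia)) and the dates below are example values:

```shell
# Daily inter-AZ VPC Peering bytes, grouped by the new usage type.
aws ce get-cost-and-usage \
  --time-period Start=2025-04-01,End=2025-04-18 \
  --granularity DAILY \
  --metrics UsageQuantity \
  --group-by Type=DIMENSION,Key=USAGE_TYPE \
  --filter '{"Dimensions":{"Key":"USAGE_TYPE","Values":["USE1-VpcPeering-In-Bytes","USE1-VpcPeering-Out-Bytes"]}}'
```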
Gateway Load Balancer (GWLB) now supports Load Balancer Capacity Unit (LCU) reservations, which allow you to proactively set a minimum bandwidth capacity for your load balancer, complementing its existing ability to auto-scale based on your traffic patterns.
Gateway Load Balancer helps you deploy, scale, and manage third-party virtual appliances. With this feature, you can reserve guaranteed capacity for anticipated traffic surges. The LCU reservation is ideal for scenarios such as onboarding and migrating new workloads to your GWLB-gated services without waiting for organic scaling, or maintaining a minimum bandwidth capacity for your firewall appliances to meet specific SLA or compliance requirements. When using this feature, you pay only for the reserved LCUs and any additional usage above the reservation. You can easily configure this feature through the ELB console or API.
The feature is available for GWLB in US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Hong Kong), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), and Europe (Stockholm) AWS Regions. This feature is not supported on Gateway Load Balancer Endpoint (GWLBe). To learn more, please refer to the GWLB documentation.
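Configuring a reservation is a single API call; the load balancer ARN and capacity value below are examples, and the command names follow the ELB capacity-reservation API as we understand it:

```shell
# Reserve a minimum capacity of 500 LCUs on a Gateway Load Balancer.
aws elbv2 modify-capacity-reservation \
  --load-balancer-arn arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/gwy/my-gwlb/0123456789abcdef \
  --minimum-load-balancer-capacity CapacityUnits=500

# Inspect the reservation's provisioning status.
aws elbv2 describe-capacity-reservation \
  --load-balancer-arn arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/gwy/my-gwlb/0123456789abcdef
```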
The Amazon Route 53 authoritative DNS service for public hosted zones is now generally available in the AWS GovCloud (US-East and US-West) Regions. With today’s release, AWS customers and AWS Partners who rely on public DNS for their applications in the AWS GovCloud (US) Regions can now take advantage of most of the features they have come to expect of Route 53 in commercial AWS Regions.
Previously, customers used Route 53 authoritative DNS served from commercial AWS Regions for routing traffic to their applications in the AWS GovCloud (US) Regions. Now, you can serve DNS queries to your public hosted zones from locations within the AWS GovCloud (US) Regions and without dependency on commercial AWS Region accounts. Features include authoritative DNS query logging, DNSSEC signing on AWS GovCloud (US) public hosted zones, and support for all Route 53 routing types except for IP-based routing. It also includes alias records to the following other AWS services: Amazon API Gateway, Amazon S3, Amazon VPC endpoints, AWS Elastic Beanstalk, and Elastic Load Balancing (ELB) load balancers.
Getting started with Route 53 in the AWS GovCloud (US) Regions is easy. All customers in the AWS GovCloud (US) Regions can use Route 53 authoritative DNS via the AWS Management Console and API in the AWS GovCloud (US-West) Region. For more information, visit the Route 53 documentation or review migration recommendations in the Route 53 Developer Guide. For details on pricing, visit section Authoritative DNS on the Route 53 pricing page.
Starting today, PartyRock supports an image playground that uses the Amazon Nova Canvas foundation model to transform your ideas into customizable images. You can access the image playground directly through the "Images" section, featuring an intuitive interface and comprehensive customization options.
This new capability enhances PartyRock's existing image generation features. While you could previously generate images using widgets in your apps, you can now also create images through the dedicated image playground. The playground offers configuration options including orientation choices (landscape, portrait, square), resolution sizes, and color guidance. The image playground comes with pre-filled prompts to help you get started, and provides suggested prompts after each generation to help refine and customize your images further.
We welcome your feedback and contributions to help shape our roadmap as we continue to enhance PartyRock's capabilities for improving everyday productivity. You can experiment with PartyRock using a free daily use grant, without worrying about exhausting free trial credits. To begin creating with the image playground, try PartyRock today.
Amazon Nova Canvas now supports fine-tuning through Amazon Bedrock, enabling customers to adapt the model to their unique datasets and brand characteristics. With custom model fine-tuning, customers can train the model on their proprietary data to generate fully customized images that align with their specific requirements and style guidelines. Fine-tuned models can be tested and deployed with provisioned throughput for consistent performance.
Text-to-image models are transforming how businesses across industries create and edit visual content. From architects visualizing floor plans to fashion designers iterating on new concepts, these models accelerate innovation by enabling rapid visualization of ideas through simple text descriptions. The technology streamlines creative workflows in manufacturing, retail, gaming, and advertising while enabling personalized customer experiences through interactive visual AI.
Fine-tuning for Amazon Nova Canvas is available in US East (N. Virginia).
To learn more about Amazon Nova Canvas, see the Amazon Nova creative models page and learn about Amazon Nova creative models responsible use of AI. To get started with Amazon Nova on Amazon Bedrock, visit the Amazon Bedrock console.
We are excited to announce Amazon Nova Reel 1.1, which enables the generation of multi-shot videos up to 2 minutes in length with style consistency across shots. In addition, Amazon Nova Reel 1.1 provides quality and latency improvements in 6-second single-shot video generation over Amazon Nova Reel 1.0.
Multi-shot video generation in Amazon Nova Reel 1.1 offers two modes: automated mode and manual mode. In automated mode, users can generate multiple 6-second videos by providing a single prompt and specifying the total video duration, up to 2 minutes. In manual mode, the model provides granular control, allowing users to input a text prompt and an optional image for each 6-second shot. Users can choose to receive either individual 6-second shots stored separately or a single stitched video in their S3 location.
Amazon Nova Reel 1.1 is available in US East (N. Virginia).
To learn more about Amazon Nova Reel 1.1, visit the Amazon Nova creative models page. And to get started with Amazon Nova on Amazon Bedrock, visit the Amazon Bedrock console.
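As a sketch of automated-mode multi-shot generation via the CLI (the bucket name and prompt are examples, and the model-input schema follows the Nova Reel documentation as we understand it):

```shell
# Start an asynchronous 2-minute multi-shot generation with Nova Reel 1.1;
# results are written to the S3 prefix in --output-data-config.
aws bedrock-runtime start-async-invoke \
  --model-id amazon.nova-reel-v1:1 \
  --model-input '{
    "taskType": "MULTI_SHOT_AUTOMATED",
    "multiShotAutomatedParams": {"text": "A day in the life of a lighthouse keeper"},
    "videoGenerationConfig": {"durationSeconds": 120, "fps": 24, "dimension": "1280x720"}
  }' \
  --output-data-config '{"s3OutputDataConfig": {"s3Uri": "s3://amzn-s3-demo-bucket/nova-reel/"}}'
```

The call returns an invocation ARN that can be polled with get-async-invoke until the video is written to S3.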
IAM Identity Center has released a new SDK plugin that simplifies AWS resource authorization for applications that authenticate with external identity providers (IdPs) such as Microsoft Entra ID, Okta, and others. The plugin, which supports trusted identity propagation (TIP), streamlines how external IdP tokens are exchanged for IAM Identity Center tokens. These tokens enable precise access control to AWS resources (e.g., Amazon S3 buckets) leveraging user and group memberships as defined in the external IdP.
The new SDK plugin automates the token exchange process, eliminating the need for complex, custom-built workflows. Once configured, it seamlessly handles IAM Identity Center token creation and the generation of user identity-aware credentials. These credentials can be used for creating identity-aware IAM role sessions when requesting access to different AWS resources. Currently available for the Java 2.0 and JavaScript v3 SDKs, this TIP plugin is AWS's recommended solution for implementing user identity-aware authorization.
IAM Identity Center enables you to connect your existing source of workforce identities to AWS once, and access the personalized experiences offered by AWS applications such as Amazon Q, define and audit user identity-aware access to data in AWS services, and manage access to multiple AWS accounts from a central place. For instructions on installation of this plug-in, see here. For an example of how Amazon Q business developers can integrate into this plugin to build user identity-aware GenAI experiences, see here. This plugin is available at no additional cost in all AWS Regions where IAM Identity Center is supported.
Amazon Security Lake customers can now use Internet Protocol version 6 (IPv6) addresses via new dual-stack endpoints to configure and manage the service. This update addresses the growing need for IPv6 adoption due to the exhaustion of available Internet Protocol version 4 (IPv4) addresses caused by continued internet growth.
Amazon Security Lake automatically centralizes security data from AWS environments, SaaS providers, on premises, and cloud sources into a purpose-built data lake stored in your account. With Security Lake, you can get a more complete understanding of your security data across your entire organization. You can also improve the protection of your workloads, applications, and data.
The new dual-stack endpoints support both IPv4 and IPv6 clients, helping you transition from IPv4 to IPv6-based systems and applications at your own pace. This approach can help you work toward IPv6 compliance requirements while reducing the need for additional networking equipment to handle address translation between IPv4 and IPv6.
Support for IPv6 on Amazon Security Lake is available in all commercial Regions and AWS GovCloud (US). To learn more on best practices for configuring IPv6 in your environment, you can visit our whitepaper on IPv6 in AWS. You can start with a 15-day free trial of Amazon Security Lake with a single-click in the AWS Management console. To learn more and get started, see the following resources:
AWS Serverless Application Model (AWS SAM) now supports the custom domain names for private REST APIs feature of Amazon API Gateway. Developers building serverless applications using SAM can now seamlessly incorporate custom domain names for private APIs directly in their SAM templates, eliminating the need to configure custom domain names separately using other tools.
API Gateway allows you to create a custom domain name, like private.example.com, for your private REST APIs, enabling you to provide API callers with a simpler and more intuitive URL. With a private custom domain name, you can reduce complexity, configure security measures with TLS encryption, and manage the lifecycle of the TLS certificate associated with your domain name. AWS SAM is a collection of open-source tools (e.g., SAM, SAM CLI) that make it easy for you to build and manage serverless applications through the authoring, building, deploying, testing, and monitoring phases of your development lifecycle. This launch enables you to easily configure custom domain names for your private REST APIs using SAM and SAM CLI.
To get started, update SAM CLI to the latest version and modify your SAM template to set the EndpointConfiguration to PRIVATE and specify a policy document in the Policy field in the Domain property of the AWS::Serverless::Api resource. SAM will then automatically generate DomainNameV2 and BasePathMappingV2 resources under AWS::Serverless::Api. To learn more, visit the AWS SAM documentation. You can learn more about custom domain names for private REST APIs in the API Gateway blog post.
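A minimal template sketch of the steps above (the domain name, certificate ARN, and VPC endpoint ID are placeholders, and the property names follow the SAM documentation as we understand it):

```yaml
Resources:
  MyPrivateApi:
    Type: AWS::Serverless::Api
    Properties:
      StageName: prod
      EndpointConfiguration:
        Type: PRIVATE
      Domain:
        DomainName: private.example.com
        CertificateArn: arn:aws:acm:us-east-1:111122223333:certificate/example
        EndpointConfiguration: PRIVATE
        # Resource policy restricting the private domain to one VPC endpoint.
        Policy:
          Version: "2012-10-17"
          Statement:
            - Effect: Allow
              Principal: "*"
              Action: execute-api:Invoke
              Resource: execute-api:/*
            - Effect: Deny
              Principal: "*"
              Action: execute-api:Invoke
              Resource: execute-api:/*
              Condition:
                StringNotEquals:
                  aws:SourceVpce: vpce-0123456789abcdef0
```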
Amazon EventBridge Archive and Replay now supports AWS Key Management Service (KMS) customer managed keys for encrypting archived events. This expands your encryption options by letting you choose between default AWS owned keys for simpler, automated data protection or customer managed keys to help meet your organization's specific security and governance requirements.
Amazon EventBridge Event Bus receives and routes events between your applications, SaaS applications, and AWS services. The Archive and Replay feature enhances this capability by allowing you to store events from an event bus and replay them later, helping you build more durable event-driven applications. You can archive events using custom filters, set flexible retention periods, and replay events to specific rules within your chosen time ranges on the original event bus. With customer managed KMS keys, you can help meet your organization's compliance and governance requirements for encrypting archived events and use AWS CloudTrail to audit and track encryption key usage.
Customer managed key support for EventBridge Archive and Replay is available in all AWS Regions where the Archive and Replay feature is offered. Using this feature incurs no additional cost, but standard AWS KMS pricing applies.
To get started, refer to the EventBridge documentation. For details about customer managed keys, see the AWS Key Management Service documentation.
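Creating an archive with a customer managed key is a single call; the bus ARN and key ARN below are example values:

```shell
# Archive all events from a custom event bus for 90 days, encrypted
# with a customer managed KMS key instead of the default AWS owned key.
aws events create-archive \
  --archive-name orders-archive \
  --event-source-arn arn:aws:events:us-east-1:111122223333:event-bus/orders-bus \
  --retention-days 90 \
  --kms-key-identifier arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab
```

Key usage by the archive can then be audited through AWS CloudTrail, as noted above.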
Starting today, Amazon S3 Express One Zone has reduced pricing for storage by 31%, PUT requests by 55%, and GET requests by 85%. In addition, S3 Express One Zone has reduced its per-gigabyte data upload and retrieval charges by 60% and now applies these charges to all bytes rather than just portions of requests exceeding 512 kilobytes.
Amazon S3 Express One Zone is a high-performance, single-Availability Zone storage class purpose-built to deliver consistent single-digit millisecond data access for your most frequently accessed data and latency-sensitive applications, such as machine learning training, analytics for live streaming events, and market analysis for financial services.
These pricing changes apply to S3 Express One Zone in all AWS Regions where the storage class is available. For updated pricing information, visit the S3 pricing page. To learn more about these pricing reductions, read the AWS News Blog, and to learn more about the S3 Express One Zone storage class, visit the product page and S3 User Guide.
Amazon FSx for NetApp ONTAP, a service that provides fully managed shared storage built on NetApp’s popular ONTAP file system, now supports ONTAP Autonomous Ransomware Protection (ARP). ARP is a NetApp ONTAP feature that proactively monitors your file system for unusual activity and automatically generates ONTAP snapshots when a potential attack is detected, helping you protect your business-critical data against a broader set of ransomware and malware attacks.
When enabled on your FSx for ONTAP storage volumes, ARP analyzes your workload access patterns to proactively detect suspicious activity that might indicate a potential ransomware or malware attack, including changes in the randomness of data in files, the usage of unusual file extensions, and surges in abnormal volume activity with encrypted data. If such activity is detected, ARP automatically creates a snapshot of the affected storage volume and generates alerts that you can monitor via EMS messages or the ONTAP CLI and REST API. By helping protect your file system against ransomware and malware attacks, ARP helps you maintain business continuity and improve data protection for your business-critical data stored on FSx for ONTAP.
NetApp Autonomous Ransomware Protection is available at no additional cost for all FSx for ONTAP file systems in every AWS Region where the service is offered. For more information on how to enable ARP on your file system, see the Amazon FSx for NetApp ONTAP documentation. For detailed information on ARP, see the NetApp ONTAP ARP documentation.
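As a rough sketch of what enabling ARP per volume looks like, the snippet below builds the request body for the ONTAP REST API's volume PATCH endpoint. The endpoint path and field names follow NetApp's ONTAP REST API conventions, and the host, credentials, and volume UUID in the commented call are placeholders; the FSx documentation linked above shows the supported workflow.

```python
def build_arp_patch(state="dry_run"):
    """Body for PATCH /api/storage/volumes/{uuid} in the ONTAP REST API.
    'dry_run' puts ARP in learning mode so it can profile normal workload
    activity; switch to 'enabled' once the learning period completes."""
    assert state in ("enabled", "disabled", "dry_run", "paused")
    return {"anti_ransomware": {"state": state}}

body = build_arp_patch("dry_run")
# import requests
# requests.patch(
#     f"https://{fsx_management_endpoint}/api/storage/volumes/{volume_uuid}",
#     json=body,
#     auth=("fsxadmin", password),
# )
```

Starting in learning mode rather than jumping straight to `enabled` reduces false-positive snapshots while ARP builds its baseline of normal activity.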
AWS ParallelCluster 3.13 is now generally available. Key features of this release include support for Ubuntu 24.04, an updated Slurm version (24.05.07), and support for Elastic Fabric Adapter (EFA)-enabled Amazon FSx for Lustre file systems on official ParallelCluster AMIs. You can use EFA with your FSx for Lustre file systems to achieve higher throughput and complete jobs faster, reducing overall costs. To get started with enabling EFA with FSx for Lustre file systems on your clusters, follow the tutorial in the ParallelCluster User Guide: Creating a cluster with an EFA-enabled FSx Lustre.
ParallelCluster is a fully-supported and maintained open-source cluster management tool that enables R&D customers and their IT administrators to operate high-performance computing (HPC) clusters on AWS. ParallelCluster is designed to automatically and securely provision cloud resources into elastically-scaling HPC clusters capable of running scientific and engineering workloads at scale on AWS.
ParallelCluster is available at no additional charge in the AWS Regions listed here, and you pay only for the AWS resources needed to run your applications. To learn more about launching HPC clusters on AWS, visit the ParallelCluster User Guide. To start using ParallelCluster, see the installation instructions for ParallelCluster UI and CLI.
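For orientation, a minimal cluster configuration with an FSx for Lustre shared filesystem and an EFA-enabled compute resource looks roughly like the sketch below. All names, subnet IDs, and capacities are illustrative, and the `ubuntu2404` Os value is an assumption based on this release's Ubuntu 24.04 support; the FSx Lustre EFA setting added in 3.13 is covered in the User Guide tutorial rather than guessed at here.

```yaml
# Illustrative ParallelCluster config sketch -- replace IDs and names with your own.
Region: us-east-1
Image:
  Os: ubuntu2404            # Ubuntu 24.04 (newly supported in 3.13; name assumed)
HeadNode:
  InstanceType: c5.xlarge
  Networking:
    SubnetId: subnet-12345678
Scheduling:
  Scheduler: slurm
  SlurmQueues:
    - Name: compute
      ComputeResources:
        - Name: c5n18xl
          InstanceType: c5n.18xlarge
          MinCount: 0       # scale to zero when idle
          MaxCount: 8
          Efa:
            Enabled: true   # EFA on the compute nodes
      Networking:
        SubnetIds:
          - subnet-12345678
SharedStorage:
  - MountDir: /fsx
    Name: fsx-scratch
    StorageType: FsxLustre
    FsxLustreSettings:
      StorageCapacity: 1200
      DeploymentType: SCRATCH_2
```

You would then create the cluster with `pcluster create-cluster --cluster-name my-cluster --cluster-configuration config.yaml`.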
Starting today, customers can use Amazon Managed Service for Apache Flink in the Mexico (Central) Region to build real-time stream processing applications.
Amazon Managed Service for Apache Flink makes it easier to transform and analyze streaming data in real time with Apache Flink. Apache Flink is an open source framework and engine for processing data streams. Amazon Managed Service for Apache Flink reduces the complexity of building and managing Apache Flink applications and integrates with Amazon Managed Streaming for Apache Kafka (Amazon MSK), Amazon Kinesis Data Streams, Amazon OpenSearch Service, Amazon DynamoDB streams, Amazon Simple Storage Service (Amazon S3), custom integrations, and more using built-in connectors.
You can learn more about Amazon Managed Service for Apache Flink here. For Amazon Managed Service for Apache Flink Region availability, refer to the AWS Region Table.
Today, Amazon Q Developer announced expanded multi-language support for the integrated development environment (IDE) and the Q Developer CLI. Supported languages include Mandarin, French, German, Italian, Japanese, Spanish, Korean, Hindi, and Portuguese, among others.
To get started, simply start a conversation with Q Developer using your preferred language. Q Developer will then automatically detect it and provide answers, code suggestions, and responses in the appropriate language, making development more accessible and efficient for global teams.
This update is available in all AWS Regions where Amazon Q Developer is available. To get started, visit Amazon Q Developer or read the blog.