Organizations are increasingly using applications with multimodal data to drive business value, improve decision-making, and enhance customer experiences. Amazon Bedrock Guardrails now supports multimodal toxicity detection for image content, enabling organizations to apply content filters to images. This new capability, now in public preview, removes the heavy lifting of building your own safeguards for image data or spending cycles on manual evaluation that can be error-prone and tedious.
Amazon Bedrock Guardrails helps customers build and scale generative AI applications responsibly across a wide range of use cases and industry verticals, including healthcare, manufacturing, financial services, media and advertising, transportation, marketing, and education. With this new capability, Amazon Bedrock Guardrails offers a comprehensive solution for detecting and filtering undesirable and potentially harmful image content while retaining safe and relevant visuals. Customers can now apply content filters to both text and image data in a single solution, with configurable thresholds to detect and filter undesirable content across categories such as hate, insults, sexual, and violence, and build generative AI applications in line with their responsible AI policies.
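As an illustration of how these configurable filters might be wired up, the following sketch uses the boto3 CreateGuardrail and ApplyGuardrail APIs. The inputModalities and outputModalities fields on each content filter, the chosen filter strengths, and all names, regions, and file paths are assumptions made for this example rather than details taken from this announcement.

```python
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")  # placeholder region

# Create a guardrail whose content filters apply to both text and image content.
# The inputModalities/outputModalities fields are assumed here based on the
# multimodal filter support described above.
guardrail = bedrock.create_guardrail(
    name="image-content-guardrail",
    description="Filters undesirable text and image content",
    contentPolicyConfig={
        "filtersConfig": [
            {
                "type": category,
                "inputStrength": "HIGH",
                "outputStrength": "HIGH",
                "inputModalities": ["TEXT", "IMAGE"],
                "outputModalities": ["TEXT", "IMAGE"],
            }
            for category in ["HATE", "INSULTS", "SEXUAL", "VIOLENCE"]
        ]
    },
    blockedInputMessaging="This input was blocked by the guardrail.",
    blockedOutputsMessaging="This output was blocked by the guardrail.",
)

# Evaluate an image independently of any model invocation using ApplyGuardrail.
runtime = boto3.client("bedrock-runtime", region_name="us-east-1")
with open("example.png", "rb") as f:  # placeholder image path
    image_bytes = f.read()

response = runtime.apply_guardrail(
    guardrailIdentifier=guardrail["guardrailId"],
    guardrailVersion="DRAFT",
    source="INPUT",
    content=[{"image": {"format": "png", "source": {"bytes": image_bytes}}}],
)
print(response["action"])  # "GUARDRAIL_INTERVENED" if the image was filtered
```

The same guardrail can also be attached to model invocations (for example, via the Converse API), so text and image content flowing through a generative AI application is evaluated against one consistent policy.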
This new capability, in preview, is available with all foundation models (FMs) on Amazon Bedrock that support images, including fine-tuned FMs, in 11 AWS Regions globally: US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), Europe (Frankfurt), Europe (London), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Tokyo), Asia Pacific (Mumbai), and AWS GovCloud (US-West).
To learn more, visit the Amazon Bedrock Guardrails product page, read the News Blog post, and see the documentation.