My application for the job of Intel CEO seems to have come a bit of a cropper. I was asked to a meeting with the board at the Santa Clara …
The post Ed Has A Melt-Down Moment appeared first on Electronics Weekly .
By admin
We are excited to announce that Amazon OpenSearch Serverless is expanding availability to the Europe (Spain) Region. OpenSearch Serverless is a serverless deployment option for Amazon OpenSearch Service that makes it simple to run search and analytics workloads without the complexities of infrastructure management. OpenSearch Serverless compute capacity used for data ingestion, search, and query is measured in OpenSearch Compute Units (OCUs). To control costs, customers can configure a maximum number of OCUs per account.
Please refer to the AWS Regional Services List for more information about Amazon OpenSearch Service availability. To learn more about OpenSearch Serverless, see the documentation.
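The per-account OCU cap mentioned above is set through the service's account-settings API. The sketch below builds the request body only; the field names (`capacityLimits`, `maxIndexingCapacityInOCU`, `maxSearchCapacityInOCU`) reflect our reading of the OpenSearch Serverless API and should be verified against the current API reference before use.

```python
import json

# Build the request body for an OpenSearch Serverless UpdateAccountSettings
# call, which caps the OCUs an account may consume. Field names are our
# assumption of the API shape; confirm against the API reference.
def ocu_cap_settings(max_indexing_ocu: int, max_search_ocu: int) -> dict:
    return {
        "capacityLimits": {
            "maxIndexingCapacityInOCU": max_indexing_ocu,
            "maxSearchCapacityInOCU": max_search_ocu,
        }
    }

settings = ocu_cap_settings(max_indexing_ocu=10, max_search_ocu=10)
print(json.dumps(settings, indent=2))
# The dict would then be passed to, e.g.,
#   boto3.client("opensearchserverless").update_account_settings(**settings)
```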
By admin
AWS Application Load Balancer (ALB) now allows customers to provide a pool of public IPv4 addresses for IP address assignment to load balancer nodes. Customers can configure a public IP Address Manager (IPAM) pool consisting of either customer-owned Bring Your Own IP (BYOIP) addresses or a contiguous IPv4 address block provided by Amazon.
With this feature, customers can optimize public IPv4 cost by using BYOIP in public IPAM pools. Customers can also simplify their enterprise allowlisting and operations by using Amazon-provided contiguous IPv4 blocks in public IPAM pools. The ALB's IP addresses are sourced from the IPAM pool and automatically switch to AWS managed IP addresses when the public IPAM pool is depleted. This intelligent switching maximizes service availability during scaling events.
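As a rough illustration of how an ALB creation request might reference an IPAM pool, the sketch below builds a request payload. The `IpamPools` / `Ipv4IpamPoolId` parameter names are assumptions based on our reading of the feature, and the subnet and pool IDs are placeholders; confirm the exact shape in the ELBv2 API reference.

```python
import json

# Hypothetical extra parameter an ALB creation call would carry to source
# its public IPv4 addresses from an IPAM pool. "IpamPools" and
# "Ipv4IpamPoolId" are assumed names; verify in the ELBv2 API reference.
def alb_request_with_ipam(name: str, subnets: list[str], ipam_pool_id: str) -> dict:
    return {
        "Name": name,
        "Type": "application",
        "Scheme": "internet-facing",
        "Subnets": subnets,
        "IpamPools": {"Ipv4IpamPoolId": ipam_pool_id},
    }

req = alb_request_with_ipam(
    "public-alb", ["subnet-0a1b", "subnet-2c3d"], "ipam-pool-0123456789abcdef0"
)
print(json.dumps(req, indent=2))
```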
The feature is available in all commercial AWS Regions and AWS GovCloud (US) Regions where Amazon VPC IP Address Manager (IPAM) is available. To learn more, please refer to the ALB Documentation.
By admin
Amazon Redshift Data API, which lets you connect to Amazon Redshift through a secure HTTPS endpoint, now supports single sign-on (SSO) through AWS IAM Identity Center. Amazon Redshift Data API removes the need to manage database drivers, connections, network configurations, and data buffering, simplifying how you access your data warehouses and data lakes.
AWS IAM Identity Center lets customers connect existing identity providers from a centrally managed location. You can now use AWS IAM Identity Center with your preferred identity provider, including Microsoft Entra ID, Okta, and Ping, to connect to Amazon Redshift clusters through Amazon Redshift Data API. This new SSO integration simplifies identity management, so that you don't have to manage separate database credentials for your Amazon Redshift clusters. Once authenticated, your authorization rules are enforced using the permissions defined in Amazon Redshift or AWS Lake Formation.
You can get started by integrating your Amazon Redshift cluster or workgroup with AWS IAM Identity Center, and then allow Amazon Redshift to access AWS services programmatically using trusted identity propagation.
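Once SSO is set up, a Data API call needs no database password. The sketch below builds the parameters for an ExecuteStatement request; the workgroup and database names are placeholders, and the Identity Center session setup is assumed to be handled by your SSO-enabled AWS credentials.

```python
import json

# Parameters for a Redshift Data API ExecuteStatement call over HTTPS.
# With the IAM Identity Center integration, the caller's SSO identity is
# propagated, so no database credentials appear in the request itself.
def execute_statement_params(workgroup: str, database: str, sql: str) -> dict:
    return {
        "WorkgroupName": workgroup,  # or ClusterIdentifier for provisioned clusters
        "Database": database,
        "Sql": sql,
    }

params = execute_statement_params("analytics-wg", "dev", "SELECT count(*) FROM sales;")
print(json.dumps(params, indent=2))
# Passed to, e.g., boto3.client("redshift-data").execute_statement(**params)
```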
This feature is available in all AWS Regions where both AWS IAM Identity Center and Amazon Redshift are available. For more information, see our documentation and blog.
By admin
AWS HealthOmics now supports the latest NVIDIA L4 and L40S graphics processing units (GPUs) and larger compute options of up to 192 vCPUs for workflows. AWS HealthOmics is a HIPAA-eligible service that helps healthcare and life sciences customers accelerate scientific breakthroughs with fully managed biological data stores and workflows. This release expands workflow compute capabilities to support more demanding workloads for genomics research and analysis.
In addition to current support for NVIDIA A10G and T4 GPUs, this release adds support for NVIDIA L4 and L40S GPUs, which enables researchers to efficiently run complex machine learning workloads such as protein structure prediction and biological foundation models (bioFMs). The enhanced CPU configurations with up to 192 vCPUs and 1,536 GiB of memory allow for faster processing of large-scale genomics datasets. These improvements help research teams reduce time-to-insight for critical life sciences work.
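For context, the GPU and vCPU selection lives in the workflow definition (a task's resource requirements), while launching a run is a single StartRun request. The sketch below builds such a request; the IDs, ARN, and S3 URI are placeholders, and the field names mirror the HealthOmics StartRun API as we understand it.

```python
import json

# Request parameters for starting a HealthOmics workflow run. The larger
# instance types are selected by the tasks inside the workflow definition,
# not by this call. All identifiers below are illustrative placeholders.
def start_run_params(workflow_id: str, role_arn: str, output_uri: str) -> dict:
    return {
        "workflowId": workflow_id,
        "roleArn": role_arn,
        "outputUri": output_uri,
        "name": "protein-structure-run",  # illustrative run name
    }

params = start_run_params(
    "1234567",
    "arn:aws:iam::111122223333:role/omics-run",
    "s3://my-bucket/runs/",
)
print(json.dumps(params, indent=2))
# Passed to, e.g., boto3.client("omics").start_run(**params)
```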
NVIDIA L4 and L40S GPUs and 128 and 192 vCPU omics instance types are now available in: US East (N. Virginia) and US West (Oregon). To get started with AWS HealthOmics workflows, see the documentation.
By admin
Today, AWS announces the general availability of GraphRAG, a capability in Amazon Bedrock Knowledge Bases that enhances Retrieval-Augmented Generation (RAG) by incorporating graph data. GraphRAG delivers more comprehensive, relevant, and explainable responses by leveraging relationships within your data, improving how Generative AI applications retrieve and synthesize information.
Since public preview, customers have leveraged the managed GraphRAG capability to get improved responses to queries from their end users. GraphRAG automatically generates and stores vector embeddings in Amazon Neptune Analytics, along with a graph representation of entities and their relationships. GraphRAG combines vector similarity search with graph traversal, enabling higher accuracy when retrieving information from disparate yet interconnected data sources.
GraphRAG with Amazon Neptune is built right into Amazon Bedrock Knowledge Bases, offering an integrated experience with no additional setup or additional charges beyond the underlying services. GraphRAG is generally available in AWS Regions where Amazon Bedrock Knowledge Bases and Amazon Neptune Analytics are both available (see the current list of supported regions). To learn more, visit the Amazon Bedrock User Guide.
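Because GraphRAG is configured on the knowledge base itself (backed by Amazon Neptune Analytics), querying it looks the same as querying any other knowledge base. The sketch below builds a RetrieveAndGenerate request; the knowledge base ID and model ARN are placeholders, and the payload shape follows the Bedrock agent runtime API as we understand it.

```python
import json

# Request shape for querying a (GraphRAG-backed) Bedrock Knowledge Base
# with RetrieveAndGenerate. The graph traversal happens server-side; the
# caller supplies only the question, knowledge base ID, and model ARN.
def rag_query(kb_id: str, model_arn: str, question: str) -> dict:
    return {
        "input": {"text": question},
        "retrieveAndGenerateConfiguration": {
            "type": "KNOWLEDGE_BASE",
            "knowledgeBaseConfiguration": {
                "knowledgeBaseId": kb_id,  # placeholder ID
                "modelArn": model_arn,     # placeholder ARN
            },
        },
    }

query = rag_query(
    "KB12345678",
    "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-sonnet-20240229-v1:0",
    "How are suppliers in region X connected to recent product recalls?",
)
print(json.dumps(query, indent=2))
# Passed to, e.g., boto3.client("bedrock-agent-runtime").retrieve_and_generate(**query)
```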
By admin
Contact Lens now enables you to create dynamic evaluation forms that automatically show or hide questions based on responses to previous questions, tailoring each evaluation to specific customer interaction scenarios. For example, when a manager answers “Yes” to the form question “Did the customer try to make a purchase on the call?”, the form automatically presents a follow-up question: “Did the agent read the sales disclosure?”. With this launch, you can consolidate evaluation forms that are applicable to different interaction scenarios into a single dynamic evaluation form which automatically hides irrelevant questions. This reduces manager effort in selecting the relevant evaluation form and determining which evaluation questions are applicable to the interaction, helping managers perform evaluations faster and more accurately.
This feature is available in all regions where Contact Lens performance evaluations are already available. To learn more, please visit our documentation and our webpage. For information about Contact Lens pricing, please visit our pricing page.
By admin
Amazon WorkSpaces Pools now offers Federal Information Processing Standard 140-2 (FIPS) validated endpoints (FIPS endpoints) for user streaming sessions. FIPS 140-2 is a U.S. government standard that specifies the security requirements for cryptographic modules that protect sensitive information. WorkSpaces Pools FIPS endpoints use FIPS-validated cryptographic standards, which may be required for certain sensitive information or regulated workloads.
To enable FIPS endpoint encryption for end-user streaming via the AWS Console, navigate to Directories and verify that the Pools directory where you want to add FIPS is in a STOPPED state and that the preferred protocol is set to TCP. Once verified, select the directory and, on the Directory Details page, update the endpoint encryption to FIPS 140-2 Validated Mode and save.
FIPS support is available for WorkSpaces Pools in four AWS Regions: AWS GovCloud (US-East), AWS GovCloud (US-West), US East (N. Virginia), and US West (Oregon). For more information about using FIPS endpoints in WorkSpaces Pools, see Configure FedRAMP authorization or DoD SRG validated for WorkSpaces Pools. For more information about how AWS supports FIPS, including a list of WorkSpaces Pools endpoints, see Federal Information Processing Standard (FIPS) 140-2.
By admin
Amazon Athena Provisioned Capacity is now available in the Asia Pacific (Mumbai) Region. Provisioned Capacity allows you to run SQL queries on dedicated serverless resources for a fixed price, with no long-term commitment, and control workload performance characteristics such as query concurrency and cost.
Athena is a serverless, interactive query service that makes it possible to analyze petabyte-scale data with ease and flexibility. Provisioned Capacity provides workload management capabilities that help you prioritize, isolate, and scale your workloads. For example, use Provisioned Capacity when you need to run a high number of queries at the same time or isolate important queries from other queries that run in the same account. To get started, use the Athena console, AWS SDK, or CLI to request capacity and then select workgroups with queries you want to run on dedicated capacity.
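The "request capacity, then assign workgroups" flow described above maps to two API calls. The sketch below builds both request payloads; the reservation and workgroup names are placeholders, and the field names follow the Athena CreateCapacityReservation / PutCapacityAssignmentConfiguration APIs as we understand them.

```python
import json

# Two request payloads sketching the Provisioned Capacity flow: reserve
# dedicated DPUs, then route named workgroups onto that reservation.
# Names are placeholders; confirm field shapes in the Athena API reference.
def capacity_reservation(name: str, target_dpus: int) -> dict:
    return {"Name": name, "TargetDpus": target_dpus}

def capacity_assignment(reservation_name: str, workgroups: list[str]) -> dict:
    return {
        "CapacityReservationName": reservation_name,
        "CapacityAssignments": [{"WorkGroupNames": workgroups}],
    }

reservation = capacity_reservation("etl-capacity", 24)
assignment = capacity_assignment("etl-capacity", ["primary", "etl-workgroup"])
print(json.dumps(reservation, indent=2))
print(json.dumps(assignment, indent=2))
# Passed to, e.g., boto3.client("athena").create_capacity_reservation(**reservation)
# and boto3.client("athena").put_capacity_assignment_configuration(**assignment)
```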
For more information on AWS Regions where Provisioned Capacity is available, see Manage query processing capacity. To learn more, visit Manage query processing capacity in the Amazon Athena User Guide and the Athena pricing page.
The announcement at the weekend that TSMC would build three more fabs in Arizona, on top of the three it had previously announced, caused a few worries in Taiwan which …
The post What If TSMC becomes ASMC? appeared first on Electronics Weekly .