Conquer Amazon AWS Certified Solutions Architect - Professional Exam SAP-C02: Your Blueprint to Cloud Mastery
A delivery company is running a serverless solution in the AWS Cloud. The solution manages user data, delivery information, and past purchase details, and consists of several microservices. The central user service stores sensitive data in an Amazon DynamoDB table. Several of the other microservices store a copy of parts of the sensitive data in different storage services.
The company needs the ability to delete user information upon request. As soon as the central user service deletes a user, every other microservice must also delete its copy of the data immediately.
Which solution will meet these requirements?
Correct : C
Set Up EventBridge Event Bus:
Step 1: Open the Amazon EventBridge console and create a custom event bus. This bus will be used to handle user deletion events.
Step 2: Name the event bus appropriately (e.g., user-deletion-bus).
Post Events on User Deletion:
Step 1: Modify the central user service to post an event to the custom EventBridge event bus whenever a user is deleted.
Step 2: Ensure the event includes relevant details such as the user ID and any other necessary metadata.
Create EventBridge Rules for Microservices:
Step 1: For each microservice that needs to delete user data, create a new rule in EventBridge that triggers on the user deletion event.
Step 2: Define the event pattern to match the user deletion event. This pattern should include the event details posted by the central user service.
Invoke Microservice Logic:
Step 1: Configure the EventBridge rule to invoke a target, such as an AWS Lambda function, which contains the logic to delete the user data from the microservice's data store.
Step 2: Each microservice should have its Lambda function or equivalent logic to handle the deletion of user data upon receiving the event.
Using Amazon EventBridge ensures a scalable, reliable, and decoupled approach to handle the deletion of user data across multiple microservices. This setup allows each microservice to independently process user deletion events without direct dependencies on other services.
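The event flow above can be sketched in Python. This is a minimal illustration, not a full implementation: the bus name, source, and detail-type values are assumptions carried over from this walkthrough, and the local `matches` helper only demonstrates how an EventBridge rule pattern selects events (in a real account, EventBridge performs this matching itself).

```python
import json

# Hypothetical event entry the central user service would publish to the
# custom "user-deletion-bus" (names taken from the walkthrough above).
def build_user_deleted_event(user_id):
    return {
        "Source": "user-service",
        "DetailType": "UserDeleted",
        "Detail": json.dumps({"userId": user_id}),
        "EventBusName": "user-deletion-bus",
    }

# Event pattern each microservice's EventBridge rule would use to match
# only user-deletion events from the central user service.
USER_DELETION_PATTERN = {
    "source": ["user-service"],
    "detail-type": ["UserDeleted"],
}

# Simplified local matcher to illustrate pattern semantics: an event
# matches when its fields appear in the pattern's allowed values.
def matches(pattern, event):
    return (event["Source"] in pattern["source"]
            and event["DetailType"] in pattern["detail-type"])
```

In production, the central user service would send the entry with `boto3.client("events").put_events(Entries=[...])`, and each microservice's rule target (for example, a Lambda function) would read `userId` from the event detail and delete its local copy of the data.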
Reference
Amazon EventBridge Documentation
To abide by industry regulations, a solutions architect must design a solution that will store a company's critical data in multiple public AWS Regions, including in the United States, where the company's headquarters is located. The solutions architect is required to provide access to the data stored in AWS to the company's global WAN network. The security team mandates that no traffic accessing this data should traverse the public internet.
How should the solutions architect design a highly available solution that meets the requirements and is cost-effective?
Correct : D
Establish AWS Direct Connect Connections:
Step 1: Set up two AWS Direct Connect (DX) connections from the company headquarters to a chosen AWS Region. This provides a redundant and high-availability setup to ensure continuous connectivity.
Step 2: Ensure that these DX connections terminate in a specific Direct Connect location associated with the chosen AWS Region.
Use Company WAN:
Step 1: Configure the company's global WAN to route traffic through the established Direct Connect connections.
Step 2: This setup ensures that all traffic between the company's headquarters and AWS does not traverse the public internet, maintaining compliance with security requirements.
Set Up Direct Connect Gateway:
Step 1: Create a Direct Connect Gateway in the AWS Management Console. This gateway allows you to connect your Direct Connect connections to multiple VPCs across different AWS Regions.
Step 2: Associate the Direct Connect Gateway with the VPCs in the various Regions where your critical data is stored. This enables access to data in multiple Regions through a single Direct Connect connection.
By using Direct Connect and Direct Connect Gateway, the company can achieve secure, reliable, and cost-effective access to data stored across multiple AWS Regions without using the public internet, ensuring compliance with industry regulations.
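The gateway setup can be sketched in boto3-style Python, assuming placeholder names and IDs. The helper functions below only assemble request parameters; the actual API calls (shown commented out) require a real account and existing Direct Connect resources.

```python
# Hypothetical helper that assembles the request used to create a
# Direct Connect gateway; the name and ASN are placeholder values.
def dx_gateway_request(name, amazon_side_asn=64512):
    return {
        "directConnectGatewayName": name,
        "amazonSideAsn": amazon_side_asn,
    }

# Request to associate the Direct Connect gateway with a VPC's virtual
# private gateway in a given Region (one association per VPC).
def dx_association_request(dx_gateway_id, virtual_gateway_id):
    return {
        "directConnectGatewayId": dx_gateway_id,
        "virtualGatewayId": virtual_gateway_id,
    }

# In a real account these parameters would be passed to boto3, e.g.:
# dx = boto3.client("directconnect")
# gw = dx.create_direct_connect_gateway(**dx_gateway_request("corp-dx-gw"))
# dx.create_direct_connect_gateway_association(
#     **dx_association_request(gw["directConnectGateway"]
#                              ["directConnectGatewayId"], "vgw-0abc..."))
```

Repeating the association call for the virtual private gateway in each Region gives the headquarters access to all Regional VPCs over the two Direct Connect connections, without touching the public internet.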
Reference
AWS Direct Connect Documentation
A company has developed an application that is running Windows Server on VMware vSphere VMs that the company hosts on premises. The application data is stored in a proprietary format that must be read through the application. The company manually provisioned the servers and the application.
As part of its disaster recovery plan, the company wants the ability to host its application on AWS temporarily if the company's on-premises environment becomes unavailable The company wants the application to return to on-premises hosting after a disaster recovery event is complete The RPO is 5 minutes.
Which solution meets these requirements with the LEAST amount of operational overhead?
Correct : B
Set Up AWS Elastic Disaster Recovery:
Navigate to the AWS Elastic Disaster Recovery (DRS) console.
Configure the Elastic Disaster Recovery service to replicate your on-premises VMware vSphere VMs to Amazon EC2 instances. This involves installing the AWS Replication Agent on your VMs.
Configure Replication Settings:
Define the replication settings, including the Amazon EC2 instance type and the Amazon EBS volume configuration. Ensure that the replication frequency meets your Recovery Point Objective (RPO) of 5 minutes.
Monitor Data Replication:
Monitor the initial data replication process in the Elastic Disaster Recovery console. Once the initial sync is complete, the status should show as 'Healthy', indicating that continuous replication is up to date and within the RPO requirement.
Disaster Recovery (Failover):
In the event of a disaster, initiate a failover from the Elastic Disaster Recovery console. This will launch the replicated Amazon EC2 instances using the Amazon EBS volumes with the latest data.
Failback Process:
Once the on-premises environment is restored, perform a failback operation to synchronize the data from AWS back to your on-premises VMware environment. Use the failback client provided by AWS Elastic Disaster Recovery to ensure data consistency and minimal downtime during the failback process.
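The RPO requirement implied above can be expressed as a small check: given the timestamp of the last replicated data, decide whether the replication lag is still inside the 5-minute window. The function name and timestamps are illustrative, not part of the Elastic Disaster Recovery API.

```python
from datetime import datetime, timedelta, timezone

RPO = timedelta(minutes=5)  # RPO stated in the scenario

def within_rpo(last_replicated_at, now=None, rpo=RPO):
    """Return True when replication lag is inside the RPO window.

    last_replicated_at: timezone-aware timestamp of the newest
    replicated data; now defaults to the current UTC time.
    """
    now = now or datetime.now(timezone.utc)
    return now - last_replicated_at <= rpo
```

In practice, Elastic Disaster Recovery replicates block changes continuously, so a healthy replication status normally keeps lag well under this 5-minute bound; a check like this would only be useful for alerting on replication stalls.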
A company provides a centralized Amazon EC2 application hosted in a single shared VPC. The centralized application must be accessible from client applications running in the VPCs of other business units. The centralized application front end is configured with a Network Load Balancer (NLB) for scalability.
Up to 10 business unit VPCs will need to be connected to the shared VPC. Some of the business unit VPC CIDR blocks overlap with the shared VPC, and some overlap with each other. Network connectivity to the centralized application in the shared VPC should be allowed from authorized business unit VPCs only.
Which network configuration should a solutions architect use to provide connectivity from the client applications in the business unit VPCs to the centralized application in the shared VPC?
Correct : B
Create VPC Endpoint Service:
In the shared VPC, create a VPC endpoint service using the Network Load Balancer (NLB) that fronts the centralized application.
Enable the option to require endpoint acceptance to control which business unit VPCs can connect to the service.
Set Up VPC Endpoints in Business Unit VPCs:
In each business unit VPC, create a VPC endpoint that points to the VPC endpoint service created in the shared VPC.
Use the service name of the endpoint service created in the shared VPC for configuration.
Accept Endpoint Requests:
From the VPC endpoint service console in the shared VPC, review and accept endpoint connection requests from authorized business unit VPCs. This ensures that only authorized VPCs can access the centralized application.
Configure Client Access:
Point the client applications in each business unit VPC at the DNS name of the interface VPC endpoint in their own VPC. Because PrivateLink traffic flows through the endpoint's elastic network interfaces rather than VPC-to-VPC routing, the overlapping CIDR blocks do not cause conflicts.
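The endpoint-acceptance step can be sketched as a simple allowlist check. The account IDs below are made-up examples; in practice, acceptance happens in the VPC endpoint service console or through the EC2 API (`accept_vpc_endpoint_connections`), and this sketch only illustrates the decision logic.

```python
# Hypothetical allowlist of AWS account IDs for the authorized business
# units (placeholder values, not real accounts).
AUTHORIZED_OWNERS = {"111111111111", "222222222222"}

def should_accept(connection_request):
    """Accept a pending endpoint connection only from an authorized owner.

    connection_request: dict with the requesting account under "owner",
    mirroring the owner field of a pending endpoint connection.
    """
    return connection_request["owner"] in AUTHORIZED_OWNERS
```

Combined with "acceptance required" on the endpoint service, this keeps connectivity restricted to the authorized business unit VPCs, satisfying the requirement in the question.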
A company is running a large containerized workload in the AWS Cloud. The workload consists of approximately 100 different services. The company uses Amazon Elastic Container Service (Amazon ECS) to orchestrate the workload.
Recently, the company's development team started using AWS Fargate instead of Amazon EC2 instances in the ECS cluster. In the past, the workload has come close to running the maximum number of EC2 instances that are available in the account.
The company is worried that the workload could reach the maximum number of ECS tasks that are allowed. A solutions architect must implement a solution that will notify the development team when Fargate reaches 80% of the maximum number of tasks.
What should the solutions architect do to meet this requirement?
Correct : B
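No walkthrough is given for this answer, but one way to implement the required notification is a CloudWatch alarm that compares the Fargate usage metric against its quota using the `SERVICE_QUOTA` metric-math function. The sketch below is hedged: the dimension values on the usage metric are assumptions and should be verified against the `AWS/Usage` namespace in your account, and the helper only assembles `put_metric_alarm` parameters.

```python
# Hedged sketch of a CloudWatch alarm definition that fires at 80% of
# the Fargate resource quota. Dimension values are assumptions; check
# the AWS/Usage metrics visible in your account before relying on them.
def fargate_quota_alarm(threshold_pct=80):
    usage_metric = {
        "Namespace": "AWS/Usage",
        "MetricName": "ResourceCount",
        "Dimensions": [
            {"Name": "Service", "Value": "Fargate"},
            {"Name": "Type", "Value": "Resource"},
            {"Name": "Resource", "Value": "OnDemand"},
        ],
    }
    return {
        "AlarmName": "fargate-task-quota-80pct",
        "Metrics": [
            # m1: raw usage from Service Quotas
            {"Id": "usage",
             "MetricStat": {"Metric": usage_metric,
                            "Period": 300, "Stat": "Maximum"},
             "ReturnData": False},
            # m2: usage as a percentage of the applied quota
            {"Id": "pct",
             "Expression": "(usage / SERVICE_QUOTA(usage)) * 100",
             "ReturnData": True},
        ],
        "Threshold": threshold_pct,
        "ComparisonOperator": "GreaterThanOrEqualToThreshold",
        "EvaluationPeriods": 1,
        # "AlarmActions": ["arn:aws:sns:..."],  # SNS topic for the dev team
    }
```

These parameters would be passed to `boto3.client("cloudwatch").put_metric_alarm(**fargate_quota_alarm())`, with an SNS topic subscribed by the development team as the alarm action.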
Total 483 questions