Master Amazon SOA-C03: AWS Certified CloudOps Engineer Practice Exams
A company's security policy prohibits connecting to Amazon EC2 instances through SSH and RDP. Instead, staff must use AWS Systems Manager Session Manager. Users report they cannot connect to one Ubuntu instance, even though they can connect to others.
What should a CloudOps engineer do to resolve this issue?
Correct : B
According to AWS Cloud Operations and Systems Manager documentation, Session Manager requires that each managed instance be associated with an IAM instance profile that grants Systems Manager core permissions. The required permissions are provided by the AmazonSSMManagedInstanceCore AWS-managed policy.
If this policy is missing or misconfigured, the Systems Manager Agent (SSM Agent) cannot communicate with the Systems Manager service, causing connection failures even if the agent is installed and running. This explains why the other instances work: those instances likely have the correct IAM role attached.
Enabling port 22 (Option A) violates the company's security policy, while configuring user names (Option C) and key pairs (Option D) are irrelevant because Session Manager operates over secure API channels, not SSH keys.
Therefore, the correct resolution is to attach or update the instance profile with the AmazonSSMManagedInstanceCore policy, restoring Session Manager connectivity.
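The fix described above can be sketched with the AWS CLI. This is an illustrative outline only; the role name, instance profile name, and instance ID below are placeholders, not values from the scenario, and the commands require valid AWS credentials to run.

```shell
# If the instance already has a role, attach the required managed policy to it:
aws iam attach-role-policy \
  --role-name MySSMInstanceRole \
  --policy-arn arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore

# If the instance has no instance profile at all, create one, add the role,
# and associate the profile with the running instance:
aws iam create-instance-profile --instance-profile-name MySSMInstanceProfile
aws iam add-role-to-instance-profile \
  --instance-profile-name MySSMInstanceProfile \
  --role-name MySSMInstanceRole
aws ec2 associate-iam-instance-profile \
  --instance-id i-0123456789abcdef0 \
  --iam-instance-profile Name=MySSMInstanceProfile
```

Restarting SSM Agent on the Ubuntu instance (for snap-based installs, `sudo snap restart amazon-ssm-agent`) makes it pick up the new role credentials sooner than waiting for its periodic refresh.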
An Amazon EC2 instance is running an application that uses Amazon Simple Queue Service (Amazon SQS) queues. A CloudOps engineer must ensure that the application can read, write, and delete messages from the SQS queues.
Which solution will meet these requirements in the MOST secure manner?
Correct : D
The most secure pattern is to use an IAM role for Amazon EC2 with the minimum required permissions. AWS guidance states: "Use roles for applications that run on Amazon EC2 instances" and "grant least privilege by allowing only the actions required to perform a task." By attaching a role to the instance, short-lived credentials are automatically provided through the instance metadata service; this removes the need to create long-term access keys or embed secrets. Granting only sqs:SendMessage, sqs:ReceiveMessage, and sqs:DeleteMessage against the specific SQS queues enforces least privilege and aligns with CloudOps security controls.
Options A and B rely on IAM user access keys, which contravene best practices for workloads on EC2 and increase credential-management risk. Option C uses a role but grants sqs:*, violating least-privilege principles. Therefore, Option D meets the security requirement with scoped, temporary credentials and precise permissions.
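A least-privilege policy document of the kind described above could look like the following sketch. The queue ARN, account ID, and file name are placeholders for illustration.

```shell
# Write a scoped policy document granting only the three SQS actions the
# application needs, against one specific queue (placeholder ARN).
cat > sqs-least-privilege.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "sqs:SendMessage",
        "sqs:ReceiveMessage",
        "sqs:DeleteMessage"
      ],
      "Resource": "arn:aws:sqs:us-east-1:123456789012:app-queue"
    }
  ]
}
EOF

# Sanity check that the document is valid JSON:
python3 -m json.tool sqs-least-privilege.json > /dev/null && echo "policy OK"
```

This document would then be attached to the EC2 instance role (for example with `aws iam put-role-policy`), so the application receives temporary credentials scoped to exactly these actions.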
References (AWS CloudOps Documents / Study Guide):
* AWS Certified CloudOps Engineer -- Associate (SOA-C03) Exam Guide -- Security & Compliance
* IAM Best Practices -- "Use roles instead of long-term access keys," "Grant least privilege"
* IAM Roles for Amazon EC2 -- Temporary credentials for applications on EC2
* Amazon SQS -- Identity and access management for Amazon SQS
A company uses Amazon ElastiCache (Redis OSS) to cache application data. A CloudOps engineer must implement a solution to increase the resilience of the cache. The solution also must minimize the recovery time objective (RTO).
Which solution will meet these requirements?
Correct : C
Comprehensive and Detailed Explanation From Exact Extract of AWS CloudOps Documents:
For high availability and fast failover, ElastiCache for Redis supports replication groups with Multi-AZ and automatic failover. CloudOps guidance states that a primary node can be paired with one or more replicas across multiple Availability Zones; if the primary fails, Redis automatically promotes a replica to primary in seconds, thereby minimizing RTO. This architecture maintains in-memory data continuity without waiting for backup restore operations. Backups (Options B and D) provide durability but require restore and re-warm procedures that increase RTO and may impact application latency. Switching engines (Option A) to Memcached does not provide Redis replication/failover semantics and would not inherently improve resilience for this use case. Therefore, creating a read replica in a different AZ and enabling Multi-AZ with automatic failover is the prescribed CloudOps pattern to increase resilience and achieve the lowest practical RTO for Redis caches.
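The Multi-AZ pattern described above can be sketched with the AWS CLI. The replication group ID and node type are placeholder values, and the commands require live AWS credentials; they are shown only to illustrate the relevant parameters.

```shell
# New replication group: one primary plus one replica, spread across AZs,
# with automatic failover and Multi-AZ enabled from the start.
aws elasticache create-replication-group \
  --replication-group-id app-cache \
  --replication-group-description "App cache with Multi-AZ failover" \
  --engine redis \
  --cache-node-type cache.t3.micro \
  --num-cache-clusters 2 \
  --automatic-failover-enabled \
  --multi-az-enabled

# Existing single-node group: add a replica, then enable Multi-AZ failover.
aws elasticache increase-replica-count \
  --replication-group-id app-cache \
  --new-replica-count 1 \
  --apply-immediately
aws elasticache modify-replication-group \
  --replication-group-id app-cache \
  --automatic-failover-enabled \
  --multi-az-enabled \
  --apply-immediately
```

With this in place, a primary failure triggers automatic promotion of the cross-AZ replica rather than a backup restore, which is what keeps the RTO low.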
References (AWS CloudOps Documents / Study Guide):
* AWS Certified CloudOps Engineer -- Associate (SOA-C03) Exam Guide -- Reliability and Business Continuity
* Amazon ElastiCache for Redis -- Replication Groups, Multi-AZ, and Automatic Failover
* AWS Well-Architected Framework -- Reliability Pillar
A CloudOps engineer configures an application to run on Amazon EC2 instances behind an Application Load Balancer (ALB) in a simple scaling Auto Scaling group with the default settings. The Auto Scaling group is configured to use the RequestCountPerTarget metric for scaling. The CloudOps engineer notices that the RequestCountPerTarget metric exceeded the specified limit twice in 180 seconds.
How will the number of EC2 instances in this Auto Scaling group be affected in this scenario?
Correct : B
Comprehensive and Detailed Explanation From Exact Extract of AWS CloudOps Documents:
With simple scaling policies, an Auto Scaling group performs one scaling activity when the alarm condition is met, then observes a default cooldown period (300 seconds) before another scaling activity of the same type can begin. CloudOps guidance explains that cooldown prevents rapid successive scale-outs by allowing time for the newly launched instance(s) to register with the load balancer and impact the metric. Even if the alarm breaches multiple times during the cooldown window, the group waits until the cooldown completes before evaluating and acting again. In this case, although RequestCountPerTarget exceeded the threshold twice within 180 seconds, the group will launch a single instance and then wait for cooldown before any additional scale-out can occur. Options A, C, and D do not reflect the behavior of simple scaling with cooldowns; A describes step/target-tracking-like behavior, and C/D are not Auto Scaling mechanics.
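The simple scaling behavior described above corresponds to a policy like the following sketch, with the default 300-second cooldown made explicit. The group and policy names are placeholders, and the command requires live AWS credentials; the CloudWatch alarm on RequestCountPerTarget would then invoke the returned policy ARN as its alarm action.

```shell
# Simple scaling: add one instance per alarm breach, then observe the
# cooldown before any further scale-out activity of the same type.
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name web-asg \
  --policy-name scale-out-one \
  --policy-type SimpleScaling \
  --adjustment-type ChangeInCapacity \
  --scaling-adjustment 1 \
  --cooldown 300
```

Because the cooldown is 300 seconds, two alarm breaches 180 seconds apart still produce only one scaling activity, matching the scenario in the question.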
References (AWS CloudOps Documents / Study Guide):
* Amazon EC2 Auto Scaling -- Simple Scaling Policies and Cooldown (User Guide)
* Elastic Load Balancing Metrics -- ALB RequestCountPerTarget (CloudWatch Metrics)
* AWS Well-Architected Framework -- Performance Efficiency & Operational Excellence
A company's website runs on an Amazon EC2 Linux instance. The website needs to serve PDF files from an Amazon S3 bucket. All public access to the S3 bucket is blocked at the account level. The company needs to allow website users to download the PDF files.
Which solution will meet these requirements with the LEAST administrative effort?
Correct : B
Per the AWS Cloud Operations, Networking, and Security documentation, the best practice for serving private S3 content securely to end users is to use Amazon CloudFront with Origin Access Control (OAC).
OAC enables CloudFront to access S3 buckets privately, even when Block Public Access settings are enabled at the account level. This allows content to be delivered globally and securely without making the S3 bucket public. The bucket policy explicitly allows access only from the CloudFront distribution, ensuring that users can retrieve PDF files only via CloudFront URLs.
This configuration offers:
* Automatic scalability through CloudFront caching
* Improved security via private access control
* Minimal administrative effort with fully managed services
Other options require manual handling or make the bucket public, violating AWS security best practices.
Therefore, Option B, using CloudFront with Origin Access Control and a restrictive bucket policy, provides the most secure, efficient, and low-maintenance CloudOps solution.
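The restrictive bucket policy described above follows the standard OAC pattern: the principal is the CloudFront service, and a SourceArn condition pins access to one specific distribution. The bucket name, account ID, and distribution ID below are placeholders for illustration.

```shell
# Bucket policy allowing object reads only from one CloudFront distribution
# via Origin Access Control (placeholder bucket and distribution ARNs).
cat > pdf-bucket-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowCloudFrontOAC",
      "Effect": "Allow",
      "Principal": { "Service": "cloudfront.amazonaws.com" },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-pdf-bucket/*",
      "Condition": {
        "StringEquals": {
          "AWS:SourceArn": "arn:aws:cloudfront::123456789012:distribution/E1234567890ABC"
        }
      }
    }
  ]
}
EOF

# Sanity check that the document is valid JSON:
python3 -m json.tool pdf-bucket-policy.json > /dev/null && echo "policy OK"
```

Because this grant is scoped to a single service principal and distribution ARN rather than to the public, it is compatible with account-level Block Public Access; applying it is a single `aws s3api put-bucket-policy` call.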
Total 165 questions