AWSSpecSpecTest2

If you use a custom DNS resolver, then GuardDuty cannot access and process data from this data source

A company has migrated most of its business to AWS Cloud using Amazon EC2 instances for Windows to host its applications. The domain services used by these applications are built on Active Directory servers which have been retained as on-premises servers. The company has issued guidelines to enable GuardDuty for all its applications. While analyzing GuardDuty reports, the security team realized that DNS logs are not being tracked/reported by GuardDuty. How will you fix this issue?
Check the permissions attached on the IAM role used by GuardDuty for accessing DNS logs
If you use a custom DNS resolver, then GuardDuty cannot access and process data from this data source
GuardDuty reports only on VPC Flow Logs, CloudTrail global events, and Kubernetes audit logs. GuardDuty does not analyze DNS logs
GuardDuty analyzes your DNS logs from the stream of data provided through the Route 53 Resolver query logging feature. Check the path specified in this configuration

Upload the threat list to an Amazon S3 bucket and share the access with the administrator account
Specify an administrator account in GuardDuty and then use the administrator account to invite other AWS accounts to become member accounts. Add the threat list to the administrator account by referencing the S3 object that contains the threat list

A company maintains separate AWS accounts for its various lines of business. All the accounts are configured with Amazon GuardDuty to detect threats and malicious activities. A partner security firm generates a common threat list quarterly and shares it with all the business lines. As a Security Engineer, how will you configure the threat list across all AWS accounts with minimum effort? (Select two)
Specify an administrator account in GuardDuty and then use the administrator account to invite other AWS accounts to become member accounts. Add the threat list to the administrator account by referencing the S3 object that contains the threat list
Upload the threat list to an Amazon S3 bucket and trigger an Amazon EventBridge event every time a new threat list is added to the bucket. Define Amazon GuardDuty as a target to EventBridge to automatically configure the threat list to the administrator account in GuardDuty
Upload the threat list to an Amazon S3 bucket and share the access with the administrator account
Configure all AWS accounts to be part of AWS Organizations and add the threat list to all members of the organization using AWS Resource Access Manager (RAM)
Upload the threat list to an Amazon S3 bucket and share the access with the organization's delegated administrator for GuardDuty
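
For reference, a minimal boto3 sketch of how the GuardDuty administrator account could register the partner's threat list once it is uploaded to S3. The bucket, object key, and set name are placeholders, not values from the question.

import boto3

guardduty = boto3.client("guardduty")

# The administrator account's detector; member accounts benefit from
# threat lists configured on the administrator account.
detector_id = guardduty.list_detectors()["DetectorIds"][0]

guardduty.create_threat_intel_set(
    DetectorId=detector_id,
    Name="partner-quarterly-threat-list",   # hypothetical name
    Format="TXT",
    Location="https://s3.amazonaws.com/example-security-bucket/threat-list.txt",  # hypothetical S3 object
    Activate=True,
)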

Create a new Amazon S3 bucket in the centralized account to store all the CloudTrail log files. Enable log file validation on all Trails in AWS accounts of all business units. Use unique log file prefixes for trails in each AWS account
Apply a bucket policy to the new centralized S3 bucket that permits the CloudTrail service to use the s3:PutObject action and the s3:GetBucketAcl action, and specify the appropriate resource ARNs for the CloudTrail trails

A company manages separate AWS accounts for each of its business units. An enhanced monitoring solution has been proposed by the security team that mandates tracking all the API calls using CloudTrail for all the AWS accounts. The centralized monitoring logs will be available in a new AWS account created for security and audit purposes. Logs of one business unit should be distinguishable from others via its own top-level prefix. Also, any updates to the log files should be traceable. As a Security Engineer, which of the following options will you combine to implement this requirement? (Select two)
Create a new Amazon S3 bucket in the centralized account to store all the CloudTrail log files. Enable log file validation on all Trails in AWS accounts of all business units. Use unique log file prefixes for trails in each AWS account
Apply a bucket policy to all S3 buckets to permit the CloudTrail service to use the s3:PutObject action, s3:GetObjectAcl action, and the s3:GetObject action. Specify the appropriate resource ARNs for the CloudTrail trails
Apply a bucket policy to the new centralized S3 bucket that permits the CloudTrail service to use the s3:PutObject action and the s3:GetBucketAcl action, and specify the appropriate resource ARNs for the CloudTrail trails
Create a new Amazon S3 bucket in the centralized account to store all the CloudTrail log files. Enable log file validation on all Trails in all AWS accounts including the centralized account. Use unique log file prefixes for trails in each AWS account
Create a new Amazon S3 bucket in each of the AWS accounts. Use S3 bucket replication to copy the CloudTrail logs to the S3 bucket in the centralized account. Enable log file validation on all Trails in all of the AWS accounts used. Use unique log file prefixes for trails in each AWS account
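
As a reference for the bucket-policy option, here is a minimal sketch (Python, boto3) of the standard CloudTrail bucket policy applied to the centralized bucket. The bucket name, account IDs, and per-unit prefixes are placeholders.

import json
import boto3

bucket = "example-central-cloudtrail-logs"            # hypothetical centralized bucket
business_unit_accounts = ["111111111111", "222222222222"]

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AWSCloudTrailAclCheck",
            "Effect": "Allow",
            "Principal": {"Service": "cloudtrail.amazonaws.com"},
            "Action": "s3:GetBucketAcl",
            "Resource": f"arn:aws:s3:::{bucket}",
        },
        {
            "Sid": "AWSCloudTrailWrite",
            "Effect": "Allow",
            "Principal": {"Service": "cloudtrail.amazonaws.com"},
            "Action": "s3:PutObject",
            # One top-level prefix per business unit keeps the logs distinguishable.
            "Resource": [
                f"arn:aws:s3:::{bucket}/unit-{acct}/AWSLogs/{acct}/*"
                for acct in business_unit_accounts
            ],
            "Condition": {"StringEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}},
        },
    ],
}

boto3.client("s3").put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))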

Enable WAF comprehensive logs that are delivered through Amazon Kinesis Firehose to a destination of your choice (Correct)
CloudTrail does not create metrics; AWS WAF reports its metrics to Amazon CloudWatch, not to CloudTrail

A Security Engineer has configured an AWS Web Application Firewall (WAF) for all the Application Load Balancers (ALBs) after getting a possible threat alert from the company's IT security department. How can the Engineer validate if the AWS WAF rules are working?
Request penetration testing for login request flooding or API request flooding, whichever is applicable for your configuration
Use iPerf Testing tool to emulate DDoS attack on the resources and check for WAF responses through Amazon CloudWatch logs
Enable WAF comprehensive logs that are delivered through Amazon Kinesis Firehose to a destination of your choice
AWS WAF reports metrics once a minute to CloudTrail. You can use statistics in Amazon CloudTrail to gather insights about the WAF responses
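
A minimal boto3 sketch of turning on web ACL logging to Kinesis Data Firehose. The ARNs are placeholders; note that the Firehose delivery stream name must start with the aws-waf-logs- prefix.

import boto3

wafv2 = boto3.client("wafv2")

wafv2.put_logging_configuration(
    LoggingConfiguration={
        # ARN of the web ACL that fronts the ALBs (placeholder value).
        "ResourceArn": "arn:aws:wafv2:us-east-1:111111111111:regional/webacl/example-acl/1111aaaa-11aa-11aa-11aa-111111aaaaaa",
        # Delivery stream name must begin with "aws-waf-logs-".
        "LogDestinationConfigs": [
            "arn:aws:firehose:us-east-1:111111111111:deliverystream/aws-waf-logs-example"
        ],
    }
)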

If "Auto remediate any non-compliant resources" isn't turned on, then the web ACL created by Firewall Manager won't be associated with in-scope resources

A Security Engineer has created a web ACL using an AWS Firewall Manager AWS WAF policy. Still, the web ACL isn't correctly associated with its in-scope resources. What could be the underlying reason for this issue?
If "Auto remediate any non-compliant resources" and "Replace web ACLs that are currently associated with in-scope resources with the web ACLs created by this policy" are turned on for a Firewall Manager AWS WAF policy, and an in-scope resource has a web ACL created by an AWS Shield Advanced policy, then it cannot be replaced by the Firewall Manager AWS WAF policy web ACL
If "Auto remediate any non-compliant resources" and "Replace web ACLs that are currently associated with in-scope resources with the web ACLs created by this policy" are turned on for a Firewall Manager AWS WAF policy, and an in-scope resource has a web ACL created by a Firewall Manager AWS WAF policy, then it gets replaced by the Firewall Manager AWS WAF policy web ACL
If "Auto remediate any non-compliant resources" isn't turned on, then the web ACL created by Firewall Manager won't be associated with in-scope resources
If only "Auto remediate any non-compliant resources" is turned on, Firewall Manager associates the web ACL with all non-compliant resources in the accounts irrespective of whether it has a web ACL associated with it

Ensure that IP addresses added in the trusted IP list are publicly routable IPv4 addresses (Correct)
Ensure that the trusted IP lists are uploaded in the same AWS Region as your GuardDuty findings (Correct)
Amazon GuardDuty only supports publicly routable IPv4 addresses in the trusted IP list. GuardDuty is Region-specific

A Security Engineer has followed the best practices to set up a trusted IP address list for Amazon GuardDuty. However, GuardDuty is generating alert findings for the configured trusted IP addresses. Which of the following checks will you perform to ensure GuardDuty works as expected? (Select two)
Ensure that IP addresses added in the trusted IP list are publicly routable IPv4 addresses
Ensure that multiple trusted IP lists per AWS account per Region have been configured
Ensure that the same IP is not enlisted on both a trusted IP list as well as a threat list, as it will be processed by the threat list on priority, thereby resulting in a finding
Ensure that the trusted IP lists are uploaded in the same AWS Region as your GuardDuty findings
Ensure that in multi-account environments, GuardDuty generates findings for member accounts based on activity that involves IP addresses from the administrator's trusted IP lists

Set up a service control policy (SCP) that prohibits changes to CloudTrail, and attach it to the developer accounts

A Security Engineer is designing a solution for a company that wants to provide developers with individual AWS accounts through AWS Organizations, while also maintaining standard security controls. Since the individual developers will have AWS account root user-level access to their own accounts, the engineer wants to ensure that the mandatory AWS CloudTrail configuration that is applied to new developer accounts is not modified. Which of the following actions meets the given requirements?
Set up a service control policy (SCP) that prohibits changes to CloudTrail, and attach it to the developer accounts
Set up a service-linked role for CloudTrail with a policy condition that allows changes only from an Amazon Resource Name (ARN) in the master account
Set up an IAM policy that prohibits changes to CloudTrail and attach it to the root user
Configure a new trail in CloudTrail from within the developer accounts with the organization trails option enabled
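
A minimal sketch of an SCP that blocks CloudTrail changes and its attachment through AWS Organizations. The policy name, OU ID, and action list are illustrative assumptions.

import json
import boto3

org = boto3.client("organizations")

scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyCloudTrailChanges",
            "Effect": "Deny",
            "Action": [
                "cloudtrail:StopLogging",
                "cloudtrail:DeleteTrail",
                "cloudtrail:UpdateTrail",
                "cloudtrail:PutEventSelectors",
            ],
            "Resource": "*",
        }
    ],
}

policy = org.create_policy(
    Name="deny-cloudtrail-changes",
    Description="Prevent developer accounts from altering the mandated trail",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)

# Attach to the OU (or account) that holds the developer accounts; the target ID is a placeholder.
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-example-12345678",
)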

If you are subscribed to AWS Shield Advanced, you can register Elastic IP addresses as Protected Resources (Correct)
When using Amazon CloudFront and AWS WAF with Amazon API Gateway, configure the cache behavior for your distributions to forward all headers to the API Gateway regional endpoint (Correct)
The security groups assigned to Application Load Balancers should be configured to not use connection tracking (Correct)

A Security Engineer is planning for a DDoS-resilient architecture for a three-tier web application. What are the best practices to consider for DDoS mitigation? (Select three)
Use Network Load Balancer to route traffic to targets based on content and accept only well-formed web requests. Network Load Balancer blocks many common DDoS attacks, such as SYN floods or UDP reflection attacks
If you are subscribed to AWS Shield Advanced, you can register Elastic IP addresses as Protected Resources
When using Amazon CloudFront and AWS WAF with Amazon API Gateway, configure the cache behavior for your distributions to forward all headers to the API Gateway regional endpoint
The security groups assigned to Application Load Balancers should be configured to not use connection tracking
Use AWS WAF to configure web access control lists (Web ACLs) on your Amazon S3 buckets with critical data to filter and block requests based on request signatures
Configure Amazon API Gateway with edge-optimized API endpoints whenever possible and associate it with your Amazon CloudFront distribution

Create your own AWS WAF rules in your web ACL to mitigate the attack (Correct)
You can contact the AWS Support Center to get help with mitigations if you're a Shield Advanced customer (Correct)

A Security Engineer noticed that an application layer (layer 7) DDoS attack is underway on one of the critical systems. What should the immediate response of the engineer be to control the damage? (Select two)
Monitor the CloudWatch metrics: The maximum size of the Auto Scaling group, Amazon EC2 instance's CPUUtilization and NetworkIn parameters to detect a DDoS attack and send an SNS notification to the security team
Create your own AWS WAF rules in your web ACL to mitigate the attack
You can contact the AWS Support Center to get help with mitigations if you're a Shield Advanced customer
Define an AWS Systems Manager document (SSM document) to block all vulnerable ports, lock public access to Amazon S3 buckets, and stop internet traffic to affected EC2 instances. Run the SSM document using the AWS Systems Manager CLI
Enable Amazon GuardDuty to automatically monitor for malicious activity and block unauthorized access
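
A minimal sketch of the kind of AWS WAF rule you could add to the web ACL during a layer 7 flood. The names and rate limit are illustrative; in practice the rule is appended to the existing web ACL's Rules list with wafv2 update_web_acl (which also requires the current LockToken).

# Rate-based rule: block any single client IP that sends more than 2,000
# requests in a rolling 5-minute window.
rate_limit_rule = {
    "Name": "rate-limit-per-ip",
    "Priority": 1,
    "Statement": {
        "RateBasedStatement": {
            "Limit": 2000,
            "AggregateKeyType": "IP",
        }
    },
    "Action": {"Block": {}},
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "RateLimitPerIP",
    },
}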

When you change a security group rule, its tracked connections are not immediately interrupted. The existing tracked connections must first become untracked before the isolation security group can fully isolate the compromised instance

A Security Engineer received a GuardDuty security alert pertaining to one of the Amazon EC2 instances that is attempting to communicate with the IP address of a remote host known to hold credentials and stolen data captured by malware. The Security Engineer immediately tried to isolate the instance by activating the isolation security group on the instance. However, within a few minutes, the engineer received a similar alert again. Which of the following represents the underlying reason for this behavior and what is the solution to remediate the issue?
When you change a security group rule, its tracked connections are not immediately interrupted. The existing tracked connections must first become untracked before the isolation security group can fully isolate the compromised instance
If you send a request from your instance, the response traffic for that request is allowed to flow in regardless of inbound security group rules. Hence, to isolate the instance, cut off Internet Gateway from the instance
When you associate multiple security groups with an instance, rules with deny access need to be mutually exclusive. Delete all the security groups and create only the isolation security group to isolate the compromised instance
When the isolation security group is unable to isolate an instance, the immediate fix is to shut down the compromised instance to cut off further damage to your AWS resources

Verify that the EC2Launch v2 service is running. Detach the EBS root volume from the instance
Launch a temporary instance and attach the volume to it as a secondary volume. Delete the .run-once file from the instance, located at %ProgramData%/Amazon/EC2Launch/state/.run-once
Reattach the volume to the original instance as the root volume and connect to the instance using its key pair to retrieve the administrator password. Connect to the instance using its current public DNS name

A Systems Administrator is no longer able to access the Windows Amazon EC2 instance because the Windows administrator password is lost. As a Security Engineer, you have been tasked with the job of resetting the password of the instance. Which of the following steps would you suggest to reset the password using EC2Launch v2? (Select three)
Verify that the EC2Launch v2 service is running. Detach the EBS root volume from the instance
Select Offline Instance Option -> Diagnose and Rescue -> Reset Administrator Password. Reattach the volume to the original instance, then restart the instance
To reset the administrator password, download the EC2Rescue for Windows Server zip file, extract the contents, and run EC2Rescue.exe
Reattach the volume to the original instance as the root volume and connect to the instance using its key pair to retrieve the administrator password. Connect to the instance using its current public DNS name
Launch a temporary instance and attach the volume to it as a secondary volume. Delete the .run-once file from the instance, located at %ProgramData%/Amazon/EC2Launch/state/.run-once
When you launch the temporary instance, to avoid disk signature collisions, you must select an AMI for the same version of Windows

Create an IAM group having the development team users. Add a customer-managed policy with Deny Effect to the group for all s3:* actions with the condition defined as "Condition" : { "BoolIfExists" : { "aws:MultiFactorAuthPresent" : "false" } }

A company uses an Amazon S3 bucket to store its business-critical data. Recently, all the members of the development team that access the given S3 bucket have been given MFA devices. A security engineer must configure permissions such that access to the given S3 bucket is allowed only after MFA authentication. How will you implement this requirement?
Create an IAM group having the development team users. Add a customer-managed policy with Deny Effect to the group for all s3:* actions with the condition defined as "Condition" : { "BoolIfExists" : { "aws:MultiFactorAuthPresent" : "false" } }
Create an IAM group having the development team users. Add a customer-managed policy with Allow Effect to the group for all s3:* actions with the condition defined as "Condition" : { "BoolIfExists" : { "aws:MultiFactorAuthPresent" : "true" } }
Create an IAM group having the development team users. Add a customer-managed policy with Deny Effect to the group for all s3:* actions with the condition defined as "Condition" : { "Bool" : { "aws:MultiFactorAuthPresent" : "false" } }
Create an IAM group having the development team users. Add a customer-managed policy with Allow Effect to the group for all s3:* actions with the condition defined as "Condition" : { "BoolIfExists" : { "aws:MultiFactorAuthPresent" : "false" } }
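
A minimal sketch of the winning option, with a hypothetical group name, policy name, and bucket name.

import json
import boto3

iam = boto3.client("iam")

deny_without_mfa = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyS3UnlessMFAAuthenticated",
            "Effect": "Deny",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::example-critical-bucket",
                "arn:aws:s3:::example-critical-bucket/*",
            ],
            # BoolIfExists also denies requests where the MFA context key is
            # missing entirely, not just requests where it is "false".
            "Condition": {"BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}},
        }
    ],
}

policy = iam.create_policy(
    PolicyName="deny-s3-without-mfa",
    PolicyDocument=json.dumps(deny_without_mfa),
)
iam.attach_group_policy(
    GroupName="development-team",            # hypothetical group
    PolicyArn=policy["Policy"]["Arn"],
)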

Create a Lambda function containing the logic to determine if a resource is compliant or non-compliant. Create a custom Config rule that uses this Lambda function as its source
Create a Lambda function that polls Config to detect non-compliant resources daily and send notifications via Amazon SNS

A company's security policy mandates enforcing VPC Flow Logs for all the VPCs defined on AWS. A Security Engineer has been tasked to automate this compliance check and subsequently inform the governance teams if any VPC is found to be non-compliant. Which steps will you combine for automating the process to meet the compliance guidelines? (Select two)
Publish VPC Flow Logs to Amazon S3 bucket and query the data with AWS Athena for determining the non-compliant resources
Create a Lambda function containing the logic to determine if a resource is compliant or non-compliant. Create a custom Config rule that uses this Lambda function as its source
Create a Lambda function that checks the AWS Athena query status on a daily basis for detecting any non-compliant resources daily and sending notifications via Amazon SNS
Create a Lambda function containing the logic to determine if a resource is compliant or non-compliant. Create an Amazon CloudWatch Event rule that triggers when the state of the earlier declared Lambda function changes to non-compliant
Create a Lambda function that polls Config to detect non-compliant resources daily and send notifications via Amazon SNS
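
A minimal sketch of what the Lambda function backing such a custom Config rule could look like, assuming a configuration-change-triggered rule scoped to AWS::EC2::VPC resources.

import json
import boto3

config = boto3.client("config")
ec2 = boto3.client("ec2")

def lambda_handler(event, context):
    # AWS Config invokes the function with the evaluated resource's
    # configuration item inside the invokingEvent payload.
    invoking_event = json.loads(event["invokingEvent"])
    item = invoking_event["configurationItem"]
    vpc_id = item["resourceId"]

    # A VPC is compliant if at least one flow log is attached to it.
    flow_logs = ec2.describe_flow_logs(
        Filters=[{"Name": "resource-id", "Values": [vpc_id]}]
    )["FlowLogs"]
    compliance = "COMPLIANT" if flow_logs else "NON_COMPLIANT"

    config.put_evaluations(
        Evaluations=[{
            "ComplianceResourceType": item["resourceType"],
            "ComplianceResourceId": vpc_id,
            "ComplianceType": compliance,
            "OrderingTimestamp": item["configurationItemCaptureTime"],
        }],
        ResultToken=event["resultToken"],
    )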

Configure AWS Security Hub to have a central dashboard for higher visibility of the environment and remediate issues quickly
Use Amazon Athena to analyze data in Amazon Simple Storage Service (Amazon S3) to retrieve any amount of data from anywhere using standard SQL
Store the data cost-effectively on Amazon S3 buckets and use Amazon Macie to automatically discover, classify and protect the highly sensitive data

A data analytics company processes the sensitive data of several financial institutions across the country. The company needs an automated and efficient way to identify sensitive information and operationalize security for its customers while keeping costs low. The solution should also have a security dashboard that aggregates alerts and facilitates automated remediation of security issues while having a complete view of the security architecture of the systems. A high-performing interactive query service is also needed for business purposes. As a Security Engineer, which options will you combine to implement a cost-optimal and high-performance solution for the given requirements? (Select three)
Use Amazon S3 buckets to store data and include S3 Intelligent-Tiering for automatic cost savings for data with unknown or changing access patterns
Use Amazon QuickSight to quickly embed interactive dashboards and visualizations into your applications without needing to build your own analytics capabilities
Configure AWS Security Hub to have a central dashboard for higher visibility of the environment and remediate issues quickly
Configure Amazon Detective to analyze, investigate, and quickly identify the root cause of potential security issues along with Amazon GuardDuty. Use Amazon GuardDuty to continuously monitor for malicious activity and unauthorized behavior to protect your AWS accounts, and Amazon Elastic Compute Cloud (EC2) workloads
Use Amazon Athena to analyze data in Amazon Simple Storage Service (Amazon S3) to retrieve any amount of data from anywhere using standard SQL
Store the data cost-effectively on Amazon S3 buckets and use Amazon Macie to automatically discover, classify and protect the highly sensitive data

Use Amazon Detective in conjunction with Amazon GuardDuty to monitor malicious activity and unauthorized behavior on the AWS resources and quickly identify the root cause of potential security issues through linked datasets

A fault management application at a company connects to several other systems to monitor the status of the systems hosting the suite of flagship applications for the company. As per the security policy of the company, CloudTrail and VPC Flow Logs have been enabled for all AWS resources. A recent internal error from the support team led to several minutes of outage on the fault management application and a few hours of analysis to understand the root cause of the error. The company is now looking for a solution that can analyze data from various logs as well as security findings to quickly triage the root cause linked to the security issues. What is the best-fit solution for the company's requirements?
Use Amazon Detective in conjunction with Amazon GuardDuty to monitor malicious activity and unauthorized behavior on the AWS resources and quickly identify the root cause of potential security issues through linked datasets
Use Amazon Inspector to automatically scan and manage the known vulnerabilities and integration with AWS Security Hub and Amazon EventBridge to automate workflows for root cause analysis
Use AWS Security Hub, a single place that aggregates, organizes, and prioritizes your security alerts, or findings, from multiple AWS services to help analyze the security data under one service for easy root cause analyses
Configure Amazon GuardDuty that continuously monitors for malicious activity and unauthorized behavior to protect your AWS accounts and workloads. Integrate it into your workflow system and initiate AWS Lambda to automatically remediate the issue

Use S3 Object Lock

A financial services company is evaluating storage options on Amazon S3 standard storage to meet regulatory guidelines. The data should be stored in such a way on S3 that it cannot be deleted until the regulatory period has expired. As an AWS Certified Security Specialist, which of the following would you recommend for the given requirement?
Activate MFA delete on the S3 bucket
Use S3 Glacier Vault Lock
Use S3 cross-Region Replication
Use S3 Object Lock
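
For reference, a minimal boto3 sketch of enabling S3 Object Lock in compliance mode. The bucket name and retention period are assumptions; Object Lock can only be turned on at bucket creation time.

import boto3

s3 = boto3.client("s3")

# Object Lock must be enabled when the bucket is created.
s3.create_bucket(
    Bucket="example-regulatory-archive",     # hypothetical bucket name
    ObjectLockEnabledForBucket=True,
)

# Default retention in COMPLIANCE mode: no user, including the root user,
# can delete or overwrite locked object versions until retention expires.
s3.put_object_lock_configuration(
    Bucket="example-regulatory-archive",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Years": 7}},
    },
)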

Create a new CMK and import the new key material into it. Point the key alias of the older CMK to the new CMK created

A financial services company is revamping its technology solutions on AWS to meet the company's new security guidelines that mandate the use of the company's own imported key material to create Customer Master Keys (CMKs) to be used with AWS services. All encryption keys must also be rotated annually. How will you implement this requirement?
Delete the old KMS key first and create a new key with the same name immediately. Import new key material into this newly created KMS key
Associate the existing CMK with the new key material and run the List operation to update the association
Create a new CMK and import the new key material into it. Point the key alias of the older CMK to the new CMK created
Enable automatic key rotation for the KMS key with imported key material. Use this method to rotate the keys annually
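
A minimal sketch of this manual-rotation pattern for imported key material. The alias name is a placeholder, and the actual import of the new key material (GetParametersForImport, wrapping the material, ImportKeyMaterial) is omitted here.

import boto3

kms = boto3.client("kms")

# New CMK that expects externally generated (imported) key material.
new_key = kms.create_key(
    Description="Annually rotated key with imported key material",
    Origin="EXTERNAL",
)
new_key_id = new_key["KeyMetadata"]["KeyId"]

# ... wrap and import the new key material into new_key_id here ...

# Repoint the existing alias so applications transparently use the new key.
kms.update_alias(
    AliasName="alias/app-data-key",          # hypothetical alias used by the applications
    TargetKeyId=new_key_id,
)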

If a user or role has an IAM permission policy that grants access to an action that is either not allowed or explicitly denied by the applicable SCPs, the user or role can't perform that action
SCPs affect all users and roles in attached accounts, including the root user
SCPs do not affect service-linked roles

A financial services company wants to develop a solution called Financial Information System (FIS) on AWS Cloud that would allow the financial institutions and government agencies to collaborate, anticipate and navigate the changing finance landscape. While pursuing this endeavor, the company would like to decrease its IT operational overhead. The solution should help the company eliminate the bottleneck created by manual provisioning of development pipelines while adhering to crucial governance and control requirements. As a means to this end, the company has set up "AWS Organizations" to manage several of these scenarios and would like to use Service Control Policies (SCP) for central control over the maximum available permissions for the various accounts in their organization. This allows the organization to ensure that all accounts stay within the organization's access control guidelines. Which of the following scenarios would you identify as correct regarding the given use-case? (Select three)
SCPs affect all users and roles in attached accounts, excluding the root user
SCPs do not affect service-linked roles
SCPs affect service-linked roles
If a user or role has an IAM permission policy that grants access to an action that is either not allowed or explicitly denied by the applicable SCPs, the user or role can still perform that action
If a user or role has an IAM permission policy that grants access to an action that is either not allowed or explicitly denied by the applicable SCPs, the user or role can't perform that action
SCPs affect all users and roles in attached accounts, including the root user

Create an encrypted snapshot of the database, share the snapshot, and allow access to the AWS Key Management Service (AWS KMS) encryption key

A financial services company wants to share sensitive accounting data that is stored in an Amazon RDS DB instance with an external auditor. The auditor has another AWS account and must own a copy of the database. Which of the following would you recommend to securely share the database with the auditor?
Create an encrypted snapshot of the database, share the snapshot, and allow access to the AWS Key Management Service (AWS KMS) encryption key
Set up a read replica of the database and configure IAM standard database authentication to grant the auditor access
Create a snapshot of the database in Amazon S3 and assign an IAM role to the auditor to grant access to the object in that bucket
Export the database contents to text files, store the files in Amazon S3, and create a new IAM user for the auditor with access to that bucket

The aws:PrincipalOrgID global condition key can be used with the Principal element in a resource-based policy with AWS KMS. You need to specify the Organization ID in the Condition element

A healthcare company has recently completed a security review that has highlighted several gaps in the security standards mandated by the company while using the AWS Key Management Service (AWS KMS) keys. As an initial step to address the gap, the security team has decided that access to AWS KMS keys should be restricted to only the principals belonging to their AWS Organizations. How will you implement this requirement?
The aws:PrincipalIsAWSService global condition key can be used with the Principal element in a resource-based policy with AWS KMS. List all the AWS account IDs in the Condition element
The aws:PrincipalOrgID global condition context key can be used to restrict access to an AWS service principal
The aws:PrincipalOrgID global condition key can be used with the Principal element in a resource-based policy with AWS KMS. You need to specify the Organization ID in the Condition element
The aws:PrincipalOrgID global condition key can be used with the Principal element in a resource-based policy with AWS KMS. List all the AWS account IDs in the Condition element
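
A minimal sketch of a key policy that limits use of the key to principals inside the organization. The account ID, organization ID, and key ID are placeholders.

import json
import boto3

kms = boto3.client("kms")

key_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # Keep administrative control with the key-owning account.
            "Sid": "EnableRootAccountAdmin",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111111111111:root"},
            "Action": "kms:*",
            "Resource": "*",
        },
        {
            # Allow cryptographic use only for principals in the organization.
            "Sid": "AllowUseByOrganizationPrincipalsOnly",
            "Effect": "Allow",
            "Principal": {"AWS": "*"},
            "Action": ["kms:Encrypt", "kms:Decrypt", "kms:GenerateDataKey*", "kms:DescribeKey"],
            "Resource": "*",
            "Condition": {"StringEquals": {"aws:PrincipalOrgID": "o-exampleorgid"}},
        },
    ],
}

kms.put_key_policy(
    KeyId="1234abcd-12ab-34cd-56ef-1234567890ab",   # placeholder key ID
    PolicyName="default",
    Policy=json.dumps(key_policy),
)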

Set up a new S3 bucket in the us-east-1 region with replication enabled from this new bucket into another bucket in us-west-1 region. Enable SSE-KMS encryption on the new bucket in us-east-1 region by using an AWS KMS multi-region key. Copy the existing data from the current S3 bucket in us-east-1 region into this new S3 bucket in us-east-1 region

A healthcare company only operates in the us-east-1 region and stores encrypted data in S3 using SSE-KMS. Since the company wants to improve the backup and recovery architecture, it wants the encrypted data in S3 to be replicated into the us-west-1 AWS region. The security policies mandate that the data must be encrypted and decrypted using the same key in both AWS regions. Which of the following represents the best solution to address these requirements?
Set up a new S3 bucket in the us-east-1 region with replication enabled from this new bucket into another bucket in us-west-1 region. Enable SSE-KMS encryption on the new bucket in us-east-1 region by using an AWS KMS multi-region key. Copy the existing data from the current S3 bucket in us-east-1 region into this new S3 bucket in us-east-1 region
Change the AWS KMS single region key used for the current S3 bucket into an AWS KMS multi-region key. Enable S3 batch replication for the existing data in the current bucket in us-east-1 region into another bucket in us-west-1 region
Set up a CloudWatch scheduled rule to invoke a Lambda function to copy the daily data from the source bucket in us-east-1 region to the destination bucket in us-west-1 region. Provide AWS KMS key access to the Lambda function for encryption and decryption operations on the data in the source and destination S3 buckets
Enable replication for the current bucket in us-east-1 region into another bucket in us-west-1 region. Share the existing AWS KMS key from us-east-1 region to us-west-1 region
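
A minimal sketch of creating the multi-Region key and its us-west-1 replica. The description is illustrative, and the S3 bucket and replication configuration are omitted.

import boto3

kms = boto3.client("kms", region_name="us-east-1")

# Primary multi-Region key in us-east-1.
primary = kms.create_key(
    Description="SSE-KMS key for replicated S3 data",
    MultiRegion=True,
)
primary_key_id = primary["KeyMetadata"]["KeyId"]

# The replica in us-west-1 shares the same key material and key ID,
# so replicated objects can be decrypted with "the same key" in both Regions.
kms.replicate_key(
    KeyId=primary_key_id,
    ReplicaRegion="us-west-1",
)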

An outbound rule must be added to the Network ACL (NACL) to allow the response to be sent to the client on the ephemeral port range

A junior developer has been asked to configure access to an Amazon EC2 instance hosting a web application. The developer has configured a new security group to permit incoming HTTP traffic from 0.0.0.0/0 and retained any default outbound rules. A custom Network Access Control List (NACL) connected with the instance's subnet is configured to permit incoming HTTP traffic from 0.0.0.0/0 and retained any default outbound rules. As a Security Engineer, which of the following solutions would you suggest if the EC2 instance needs to accept and respond to requests from the internet?
An outbound rule must be added to the Network ACL (NACL) to allow the response to be sent to the client on the ephemeral port range
Outbound rules need to be configured both on the security group and on the NACL for sending responses to the Internet Gateway
The configuration is complete on the EC2 instance for accepting and responding to requests
An outbound rule on the security group has to be configured, to allow the response to be sent to the client on the HTTP port
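
A minimal boto3 sketch of the missing NACL egress rule. The NACL ID and rule number are placeholders.

import boto3

ec2 = boto3.client("ec2")

# Allow return traffic from the web server to clients on ephemeral ports.
ec2.create_network_acl_entry(
    NetworkAclId="acl-0123456789abcdef0",    # placeholder custom NACL
    RuleNumber=100,
    Protocol="6",                             # TCP
    RuleAction="allow",
    Egress=True,
    CidrBlock="0.0.0.0/0",
    PortRange={"From": 1024, "To": 65535},
)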

You can only use access points to perform operations on objects. You can't use access points to perform Amazon S3 operations, such as modifying or deleting buckets
You can't configure Cross-Region Replication to operate through an access point
The cross-account access points don't grant access to data until you are granted permissions from the bucket owner

A media company stores all of its business data on Amazon S3 buckets. Since a massive growth in the number of customers has resulted in complicated bucket policies, the company has now hired you as an AWS Certified Security Specialist for simplifying the company's S3 buckets configuration to facilitate access for the company's customers as well as other connected applications. What are the important configuration characteristics to consider while defining access points for the S3 buckets? (Select three)
After you create an access point, you can't change its virtual private cloud (VPC) configuration from the console anymore. An AWS CLI has to be used for modifying the configuration
You can only use access points to perform operations on objects. You can't use access points to perform Amazon S3 operations, such as modifying or deleting buckets
Aliases for S3 Access Points are interchangeable with S3 bucket names. Aliases can be used as a logging destination for AWS CloudTrail logs and S3 server access logs. However, an alias cannot be used in AWS Identity and Access Management (IAM) policies
You can't configure Cross-Region Replication to operate through an access point
Access points support access over HTTP and HTTPS alone. It does not offer support over TCP and UDP protocols
The cross-account access points don't grant access to data until you are granted permissions from the bucket owner

As the CMK was deleted a day ago, it must be in the 'pending deletion' status and hence you can just cancel the CMK deletion and recover the key

A media company uses Amazon S3 to store the images uploaded by the users. These images are kept encrypted in S3 by using AWS KMS and the company manages its own Customer Master Key (CMK) for encryption. A member of the security team accidentally deleted the CMK a day ago, thereby rendering the user's photo data unrecoverable. As an AWS Certified Security Specialist, you have been tasked by the company to provide a solution for this issue. Which of the following steps would you recommend to solve this issue?
Contact AWS support to retrieve the CMK from their backup
As the CMK was deleted a day ago, it must be in the 'pending deletion' status and hence you can just cancel the CMK deletion and recover the key
The CMK can be recovered by the AWS root account user
The company should issue a notification on its web application informing the users about the loss of their data
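
A minimal sketch of the recovery, with a placeholder key ID. A key scheduled for deletion waits in the 'pending deletion' state for the configured waiting period, and a recovered key comes back disabled.

import boto3

kms = boto3.client("kms")

key_id = "1234abcd-12ab-34cd-56ef-1234567890ab"   # placeholder CMK ID

# Take the key out of the 'pending deletion' state...
kms.cancel_key_deletion(KeyId=key_id)

# ...and re-enable it, since the recovered key is left in the Disabled state.
kms.enable_key(KeyId=key_id)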

Implement local firewall rules using iptables-based restrictions on the instances
Disabling the instance metadata service entirely would break resources that rely on IMDS, and an IPS is meant for intrusion detection and prevention, not for restricting access to instance metadata

A pharmaceutical company is showcasing its new business lines and is promoting them to its partner organizations. These flagship applications are hosted on Amazon EC2 instances. The technology teams at the partner organizations are expected to access these instances for a first-hand understanding of these applications. The EC2 instances will be shared, and non-root SSH access is needed for the teams. As a Security Engineer, how will you block the EC2 instance metadata service for the given use case to avoid an assault on other AWS account resources?
Configure the instance metadata service on each instance so that users must use Instance Metadata Service Version 2 (IMDSv2). The session-oriented methods will not respond to usual request/response queries
Disable the instance metadata service on all the instances
Install intrusion prevention software (IPS) on each instance to disable access to instance metadata
Implement local firewall rules using iptables-based restrictions on the instances

If the requests are routed through a VPC endpoint, then check for any restrictions coming from the associated VPC endpoint policy
A session policy is in place and is causing an authorization issue

A project manager has connected with you for the resolution of an issue. Although an AWS Identity and Access Management (IAM) entity has admin permissions, it has received an access denied error. As an AWS Certified Security Specialist, how will you troubleshoot and resolve this issue? (Select two)
A session policy is in place and is causing an authorization issue
An Organization NACL can restrict access to member account IAM entities. Check for restrictions coming from an NACL using the management account of the Organization
If you use a permissions boundary, then the entity can only perform actions that are allowed in both the identity-based policy and the concerned resource-based policy. Check for any restrictive permissions boundary
If the requests are routed through a VPC endpoint, then check for any restrictions coming from the associated VPC endpoint policy
A resource-based policy defines the maximum permissions that an identity-based policy can grant to an entity. Check for any restrictive resource-based policies

Set up an AWS Web Application Firewall (WAF) web ACL. Create a rule to deny any requests that do not originate from the specified country. Attach the rule with the web ACL. Attach the web ACL with the ALB

A retail company has its flagship application hosted on Amazon EC2 instances that are configured in an Auto Scaling group behind a public-facing Application Load Balancer (ALB). The application should only be accessible to users from a specific country. The company also needs the ability to monitor any prohibited requests for further analysis by the security team. What will you suggest as the most optimal and low-maintenance solution for the given use case?
Set up an AWS Web Application Firewall (WAF) web ACL. Create a rule to deny any requests that do not originate from the specified country. Attach the rule with the web ACL. Attach the web ACL with the ALB
Set up an AWS WAF web ACL. Create a rule to block the requests that do not originate from the IP range defined in an IP set containing a list of IP ranges that belong to the specified country. Attach the rule with the web ACL. Attach the web ACL with the ALB
Set up AWS Shield to block any request that does not originate from the specified country. Attach AWS Shield with the ALB
Create a Global Accelerator and attach the WAF to it. Create a rule to block any requests that do not originate from the specified country. Configure the Global Accelerator to front the existing ALB

Create a new AWS account with limited privileges. Allow the newly created account to access the AWS KMS CMK key used to encrypt the EBS snapshots. Copy the encrypted snapshots to the new account on a regular basis

A retail company recently faced a cyber attack and lost all its data stored in the EBS volumes for the EC2 instances. However, the EBS snapshots were not manipulated. The company could restore the data from the EBS snapshots. However, the incident highlighted the security gaps in the current security plan. An immediate need is to protect the EBS snapshots from any manipulation or deletion. As a Security Engineer, what measures will you take to protect these snapshots encrypted with AWS KMS Customer Master Keys (CMKs)?
Use S3 Lifecycle transitions to regularly copy EBS snapshots to Amazon S3 through automation
Create a new AWS account with limited privileges. Configure the snapshot to use encryption with the default AWS-managed key while copying the encrypted snapshots to the new account. Since default encryption is used, you don't need to share access to the AWS KMS CMK key
Create a new Amazon S3 bucket. Use AWS Systems Manager to move EBS snapshots to the new S3 bucket. Use S3 lifecycle policies to move the snapshots to Amazon S3 Glacier and subsequently apply Glacier Vault policies to prevent deletion
Create a new AWS account with limited privileges. Allow the newly created account to access the AWS KMS CMK key used to encrypt the EBS snapshots. Copy the encrypted snapshots to the new account on a regular basis

Check if the IAM user credentials are stored in the .aws/credentials file. Because these credentials have higher precedence over role credentials, IAM user credentials will be used to make the API calls. Delete this credentials file

A security engineer has attached an AWS Identity and Access Management (IAM) role to an Amazon Elastic Compute Cloud (Amazon EC2) instance. Upon testing, the engineer realized that the Amazon EC2 instance makes API calls with an IAM user instead of the attached IAM role. What is the issue and how will you fix it?
The EC2 instance needs to be refreshed after attaching the necessary IAM role. Refresh the instance and the API calls will be done using the newly attached IAM role
The IAM role attached does not have enough permissions to make the API calls. Hence, the default user credentials of the instance are being used for the API calls. Add the required permissions to the role
Check if the IAM user credentials are stored in the .aws/credentials file. Because these credentials have higher precedence over role credentials, IAM user credentials will be used to make the API calls. Delete the credentials file
You cannot associate an IAM role with your Amazon Elastic Container Service (Amazon ECS) task definitions. While this association does not result in an error, the IAM role credentials are not used. Use service-based roles for container applications

When Bob uploaded the object, the upload event occurred in Bob's account and it matches the settings for Bob's trail. Bob's trail processes and logs the event. The Owner's trail settings also match the event, so the event is logged in Owner's trail too
When the Owner uploaded the object, the upload event occurs in Owner's account and it matches the settings for Owner's trail. The trail processes and logs the event in Owner's account

A security engineer has been asked to enable AWS CloudTrail trail to log data events on an S3 bucket with an empty object prefix. The S3 bucket is owned by the Owner user. Another user Bob has a separate account that has been granted access to the S3 bucket. Bob also wants to log data events for all objects in the same S3 bucket, so Bob configures a trail and specifies the same S3 bucket with an empty object prefix. Consider the following events:
Bob uploads an object to the S3 bucket with the PutObject API operation.
Owner uploads an object to the S3 bucket.
What will be the outcome of the two events defined above? (Select two)
When Bob uploaded the object, the upload event occurred in Bob's account and it matches the settings for Bob's trail. Bob's trail processes and logs the event. The Owner's trail settings do not match the event, so the event is not logged in Owner's trail
When the Owner uploaded the object, the upload event occurs in Owner's account and it matches the settings for Owner's trail. The trail processes and logs the event in Owner's account. The event also matches trail settings of Bob, so Bob's trail logs the event in Bob's account
When the Owner uploaded the object, the upload event occurs in Owner's account and it matches the settings for Owner's trail. The trail processes and logs the event in Owner's account
When Bob uploaded the object, the upload event occurred in Bob's account and it matches the settings for Bob's trail. Bob's trail processes and logs the event. The Owner's trail settings also match the event, so the event is logged in Owner's trail too
In both the upload events, the trail is logged only in the uploading user's account, i.e., Bob's upload event is only logged in Bob's account and Owner's upload event is logged only in Owner's account

Creating an Amazon Machine Image (AMI) after the CloudWatch agent is installed can lead to errors in the CloudWatch agent
IAM user or IAM role policy should include the following IAM permissions: "logs:CreateLogGroup", "logs:CreateLogStream", "logs:PutLogEvents", "logs:DescribeLogStreams"

A security engineer has configured a unified CloudWatch agent to push Amazon EC2 logs to Amazon CloudWatch Logs. However, the security team can't see any logs in the CloudWatch Logs console. Why isn't the unified CloudWatch agent pushing log events? (Select two)
IAM user or IAM role policy should include the following IAM permissions: "logs:CreateLogGroup", "logs:CreateLogStream", "logs:PutLogEvents", "logs:DescribeLogStreams"
CloudWatch agent runs into errors if run_as_user parameter is any user other than the root user
Installing the CloudWatch agent after creating the Amazon Machine Image (AMI) can lead to errors in the CloudWatch agent
If the CloudWatch Logs endpoint is configured to be a public endpoint using an internet gateway the connectivity fails. VPC endpoints have to be used to keep the log files in AWS network
Creating an Amazon Machine Image (AMI) after the CloudWatch agent is installed can lead to errors in the CloudWatch agent

The IP will be processed by the trusted IP list first, and will not generate a finding
Attach AmazonGuardDutyFullAccess managed policy to provide full access privileges to an identity to work with trusted IP lists and threat lists. You also need to add the privileges shown in the IAM statement in the question below

A security engineer has configured trusted IP lists and threat lists on Amazon GuardDuty to monitor the security of the AWS environment. Consider the following scenarios:
a) While configuring the lists the engineer mistakenly added the same IP to both lists. What is the outcome of this configuration?
b) To grant the identities full access (such as renaming, deactivating, uploading, activating, deleting) for working with trusted IP lists and threat lists, which managed policy needs to be added?
(Select two)
Attach AmazonGuardDutyFullAccess managed policy to provide full access privileges to an identity to work with trusted IP lists and threat lists
The IP will be processed by the threat IP list first, and will generate findings
The IP will be processed by the trusted IP list first, and will not generate a finding
Attach AWSServiceRoleForAmazonGuardDuty policy to your IAM entities to provide full access privileges to an identity to work with trusted IP lists and threat lists
Attach AmazonGuardDutyFullAccess managed policy to provide full access privileges to an identity to work with trusted IP lists and threat lists. You also need to add the following privileges:
{
  "Effect": "Allow",
  "Action": [
    "iam:PutRolePolicy",
    "iam:DeleteRolePolicy"
  ],
  "Resource": "arn:aws:iam::123456789123:role/aws-service-role/guardduty.amazonaws.com/AWSServiceRoleForAmazonGuardDuty"
}

Add the logs:CreateLogStream action to the second Allow statement

A security specialist with administrator permissions is using the AWS management console to access the CloudWatch logs for a Lambda function named "myFunc". However, upon choosing the option to view the logs in the AWS Lambda console, the specialist encountered an error message reading "error loading Log Streams". The specialist was unable to retrieve the logs as desired and must now find a solution to this issue. Following is an example IAM policy for the Lambda function's execution role:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "logs:CreateLogGroup",
      "Resource": "arn:aws:logs:<region>:<accountId>:*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "logs:PutLogEvents"
      ],
      "Resource": [
        "arn:aws:logs:<region>:<accountId>:log-group:/aws/lambda/myFunc:*"
      ]
    }
  ]
}
Which of the following solutions would you suggest to the specialist for addressing the issue?
Add the logs:CreateLogStream action to the second Allow statement
Add the logs:GetLogEvents action to the second Allow statement
Move the logs:CreateLogGroup action to the second Allow statement
Add the logs:DescribeLogStreams action to the second Allow statement

CloudWatch alarm might be configured to treat a missing data point the same way as a breaching data point. Configure the alarm to evaluate missing data points as NOT BREACHING

A security team configured an Amazon CloudWatch alarm to notify one of the team members when a metric breaches a defined threshold for multiple periods in a row. But, the CloudWatch alarm is notifying the team after just one breach of the threshold. What is the issue and how will you fix the CloudWatch alarm to behave as expected?
The metric must be reporting data only intermittently by design. For such metrics, the AWS Lambda function is used to send continuous data as per business logic
CloudWatch alarm might be configured to treat a missing data point the same way as a breaching data point. Configure the alarm to evaluate missing data points as NOT BREACHING
CloudWatch alarm might not be configured to treat a missing data point the same way as a breaching data point. Configure the alarm to evaluate missing data points as BREACHING
If the CloudWatch alarm is unable to access the metric to be monitored, the alarm is raised as a default behavior
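
A minimal put_metric_alarm sketch showing the relevant settings. The metric, dimensions, threshold, and SNS topic ARN are illustrative only.

import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="example-latency-alarm",
    Namespace="AWS/ApplicationELB",
    MetricName="TargetResponseTime",
    Dimensions=[{"Name": "LoadBalancer", "Value": "app/example-alb/1234567890abcdef"}],
    Statistic="Average",
    Period=60,
    EvaluationPeriods=3,
    DatapointsToAlarm=3,              # require 3 breaching periods in a row
    Threshold=1.0,
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",  # do not count missing data points as breaches
    AlarmActions=["arn:aws:sns:us-east-1:111111111111:security-team"],
)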

Configure the application security groups to ensure that only the necessary ports are open. Use Amazon Inspector to periodically scan the EC2 instances for vulnerabilities

A standard three-tier application is hosted on Amazon EC2 instances that are fronted by an Application Load Balancer. The application maintenance team has reported several small-scale malicious attacks on the application. The project manager has decided to ramp up the security of the application. As an AWS Certified Security Specialist, which of the following would you recommend as part of the best practices to scan and mitigate the known vulnerabilities?
Use AWS Key Management Services to encrypt all the traffic between the client and application servers. Configure the application security groups to ensure that only the necessary ports are open
Configure the application security groups to ensure that only the necessary ports are open. Use Amazon Systems Manager to periodically scan the EC2 instances for vulnerabilities
Install AWS Certificate Manager (ACM) SSL/TLS certificate on the EC2 instances to secure traffic moving to and from the application servers
Configure the application security groups to ensure that only the necessary ports are open. Use Amazon Inspector to periodically scan the EC2 instances for vulnerabilities

The security group of RDS should have an inbound rule from the security group of the EC2 instances in the ASG on port 5432
The security group of the EC2 instances should have an inbound rule from the security group of the ALB on port 80
The security group of the ALB should have an inbound rule from anywhere on port 443

A web application is deployed on EC2 instances running under an Auto Scaling Group. The application needs to be accessible from an Application Load Balancer that provides HTTPS termination, and accesses a PostgreSQL database managed by RDS. As an AWS Certified Security Specialist, how would you configure the security groups? (Select three)
The security group of RDS should have an inbound rule from the security group of the EC2 instances in the ASG on port 5432
The security group of the ALB should have an inbound rule from anywhere on port 80
The security group of the EC2 instances should have an inbound rule from the security group of the ALB on port 80
The security group of RDS should have an inbound rule from the security group of the EC2 instances in the ASG on port 80
The security group of the ALB should have an inbound rule from anywhere on port 443
The security group of the EC2 instances should have an inbound rule from the security group of the RDS database on port 5432

Configure the WAF web ACL to deliver logs to Amazon Kinesis Data Firehose, which should be configured to eventually store the logs in an Amazon S3 bucket. Use Athena to query the logs for errors and tracking
AWS WAF logs all web requests, but does not store logs by default, so configuring it to deliver logs to Kinesis Data Firehose ensures that logs are continuously streamed and allows for scalable, real-time streaming of logs

A web application is hosted on Amazon EC2 instances that are fronted by Application Load Balancer (ALB) configured with an Auto Scaling group (ASG). Enhanced security is provided to the ALB by AWS WAF web ACLs. As per the company's security policy, AWS CloudTrail is activated and logs are configured to be stored on Amazon S3 and CloudWatch Logs. A discount sales offer was run on the application for a week. The support team has noticed that a few of the instances have rebooted taking down the log files and all temporary data with them. Initial analysis has confirmed that the incident took place during off-peak hours. Even though the incident did not cause any sales or revenue loss, the CTO has asked the security team to fix the security error that has allowed the incident to go unnoticed and eventually untraceable. As Security Engineer, which series of steps will you implement to permanently record all traffic coming into the application?
Configure the WAF web ACL to deliver logs to Amazon CloudTrail and create a trail that applies to all Regions. This delivers log files from all Regions to an S3 bucket. Use Athena to query the logs for errors and tracking
Configure the WAF web ACL to deliver logs to Amazon Kinesis Data Firehose, which should be configured to eventually store the logs in an Amazon S3 bucket. Use Athena to query the logs for errors and tracking
Configure Elastic Load Balancing to write access logs to Amazon Kinesis Data Firehose. The logs can be further directed from Firehose into an Amazon S3 bucket for further analysis and reporting
To capture information about the IP traffic going to and from network interfaces, configure VPC Flow Logs to be directly streamed to Kinesis Data Streams and create alarms for automatic monitoring

Use Amazon Route 53 to distribute traffic
Move the static content to Amazon S3, and front this with an Amazon CloudFront distribution. Configure another layer of protection by adding AWS Web Application Firewall (AWS WAF) to the CloudFront distribution

After a recent DDoS assault, the IT security team of a media company has asked the Security Engineer to revamp the security of the application to prevent future attacks. The website is hosted on an Amazon EC2 instance and data is maintained on Amazon RDS. A large part of the application data is static and this data is in the form of images. Which of the following steps can be combined to constitute the revamped security model? (Select two)
Use Amazon Route 53 to distribute traffic
Move the static content to Amazon S3, and front this with an Amazon CloudFront distribution. Configure another layer of protection by adding AWS Web Application Firewall (AWS WAF) to the CloudFront distribution
Configure the Amazon EC2 instance with an Auto Scaling Group (ASG) to scale in case of a DDoS assault. Front the ASG with AWS Web Application Firewall (AWS WAF) for another layer of security
Use Global Accelerator to distribute traffic
Configure Amazon Inspector with AWS Security Hub to mitigate DDoS attacks by continual scanning that delivers near real-time vulnerability findings

The associated AWS Config managed rules are deleted. Any associated AWS WAF web access control lists (web ACLs) that don't contain any resources are deleted

An AWS Firewall Manager policy scope has been defined for all resources of an AWS Organization. Due to a recent organization-wide resource optimization effort, a Security Engineer is reviewing the status of several out-of-scope resources that were earlier covered under the policy. Which of the following correctly outlines the default behavior of AWS Firewall Manager for the given context?
An Application Load Balancer that's associated with a web ACL is deleted from the web ACL while the protection remains in place. Any associated AWS WAF web access control lists (web ACLs) that don't contain any resources are deleted
Any protected resource that goes out of scope is automatically disassociated and removed from protection when it leaves the policy scope
The associated AWS Config managed rules are deleted. Any associated AWS WAF web access control lists (web ACLs) that don't contain any resources are deleted
Any associated AWS WAF web access control lists (web ACLs) that don't contain any resources are deleted. Similarly, an Amazon EC2 instance is automatically disassociated from the replicated security group when it leaves the policy scope

Use the aggregator feature of AWS Config to provide access to AWS Config data to both accounts without the need to share the management account details

An AWS organization manages its security and compliance units through two different AWS accounts. Both the accounts need AWS Config configuration and compliance data from multiple AWS accounts and Regions to get a centralized view of the resource inventory. Currently, the teams use shared access to the management account to fetch the required data. To enforce enhanced security measures, the company is looking at eliminating the need to share management account credentials with the team. As a Security Engineer, how will you implement this requirement with the least time and effort?
Configure AWS Systems Manager Explorer with a customizable operations dashboard that displays information from AWS Config
Use OpsCenter capability of AWS Systems Manager with AWS Config to detect a resource that is out of compliance and automate remediation
Use the aggregator feature of AWS Config to provide access to AWS Config data to both accounts without the need to share the management account details
Use Configuration snapshots feature of AWS Config to create point-in-time capture of all your resources and their configurations. Save these snapshots in an Amazon S3 bucket and analyze the data using AWS Athena for trends and security aberrations
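
A minimal sketch of the aggregator setup: each source account authorizes the aggregation, and the security or compliance account creates the aggregator. Account IDs, Regions, and the aggregator name are placeholders.

import boto3

# Run in each source account whose data should be aggregated.
boto3.client("config").put_aggregation_authorization(
    AuthorizedAccountId="999999999999",      # the security/compliance account
    AuthorizedAwsRegion="us-east-1",
)

# Run in the security/compliance account to build the cross-account, cross-Region view.
boto3.client("config").put_configuration_aggregator(
    ConfigurationAggregatorName="org-resource-inventory",
    AccountAggregationSources=[
        {
            "AccountIds": ["111111111111", "222222222222"],
            "AllAwsRegions": True,
        }
    ],
)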

Revoke all active sessions for the IAM role. Update the S3 bucket policy to deny access to the IAM role. Remove the IAM role from the EC2 instance profile

An Amazon EC2 instance connects to an Amazon S3 bucket using an IAM role with necessary permissions. While analyzing the logs, a security engineer raised the possibility of the instance being compromised. The instance hosts a critical application and cannot be immediately terminated. As an AWS Certified Security Specialist, which of the following will you suggest as the fastest way to block further access to sensitive data from the compromised instance? Remove the IAM role from the EC2 instance profile. Block all public access to the S3 bucket. To allow access to your S3 objects to trusted entities outside your account, create a pre-signed URL through S3 Revoke all active sessions for the IAM role. Update the S3 bucket policy to deny access to the IAM role. Remove the IAM role from the EC2 instance profile Update the S3 bucket policy to deny access to the IAM role. Remove the IAM role from the EC2 instance profile. Use S3 Access Points to create permission sets that restrict access to only those within your private network Update the S3 bucket policy to deny access to the IAM role. Create an Amazon EBS snapshot of the instance and terminate the instance
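A minimal boto3 sketch of the containment steps, assuming a placeholder role name and instance profile name. The deny-by-issue-time inline policy is the same technique the console's "Revoke sessions" button applies; the S3 bucket policy deny for the role's ARN would be added alongside this.

import json
from datetime import datetime, timezone
import boto3

iam = boto3.client("iam")
ROLE_NAME = "app-instance-role"            # placeholder
INSTANCE_PROFILE = "app-instance-profile"  # placeholder

# 1) Revoke all sessions issued before now by attaching an inline deny policy.
revoke_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": "*",
        "Resource": "*",
        "Condition": {
            "DateLessThan": {
                "aws:TokenIssueTime": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
            }
        },
    }],
}
iam.put_role_policy(
    RoleName=ROLE_NAME,
    PolicyName="AWSRevokeOlderSessions",
    PolicyDocument=json.dumps(revoke_policy),
)

# 2) Detach the role from the instance profile so the (possibly compromised)
#    instance can no longer obtain fresh credentials.
iam.remove_role_from_instance_profile(
    InstanceProfileName=INSTANCE_PROFILE, RoleName=ROLE_NAME
)
# A bucket policy statement denying the role's ARN (aws:PrincipalArn) completes the lock-out.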

Install and configure the unified CloudWatch agent on the application's EC2 instance. Create a CloudWatch metric filter to monitor the application logs. Configure CloudWatch alerts based on these metrics

An application hosted on an Amazon EC2 instance writes its request logs, availability logs, and threat logs to a text file. This file is read by a custom program to track and process any security issues inferred from the logs. An increase in log data has resulted in the malfunctioning of the custom program. The company is looking at a scalable solution to collect and analyze log files. Which design will ensure that the aforementioned criteria are met with the LEAST amount of effort? Configure Amazon Inspector to collect all log files from the EC2 instance. Use Amazon EventBridge integration with Amazon Inspector to trigger a Lambda function that can read the logs and raise notifications for security events Install and configure the unified CloudWatch agent on the application's EC2 instance. Create a CloudWatch metric filter to monitor the application logs. Configure CloudWatch alerts based on these metrics Create a scheduled process to copy the application log files to AWS CloudTrail. Configure a Lambda function that processes CloudTrail logs and sends an SNS notification whenever a log file is created Create a cron job on the Amazon EC2 instance to copy the logs into the Amazon S3 bucket. Use S3 events to trigger a Lambda function that refreshes Amazon CloudWatch metrics with the log data. Set up CloudWatch alerts based on the metrics
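Once the unified CloudWatch agent ships the application log file to a log group, a metric filter turns matching lines into a metric that alarms can watch. A minimal sketch, assuming a placeholder log group name and a placeholder filter pattern (a literal "THREAT" keyword):

import boto3

logs = boto3.client("logs")

logs.put_metric_filter(
    logGroupName="/app/security",        # placeholder: log group the agent writes to
    filterName="app-threat-events",
    filterPattern='"THREAT"',            # placeholder: match lines containing THREAT
    metricTransformations=[{
        "metricName": "ThreatLogEvents",
        "metricNamespace": "App/Security",
        "metricValue": "1",
        "defaultValue": 0,
    }],
)
# A CloudWatch alarm on App/Security:ThreatLogEvents then completes the alerting.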

Set up an Amazon CloudWatch alarm that monitors Shield Advanced metrics for an active DDoS event

An e-commerce company is using Amazon Macie, AWS Shield Advanced, Amazon Inspector and AWS Firewall Manager in its AWS account. The company wants to receive alerts in case a DDoS attack occurs against the account. As an AWS Certified Security Specialist, what would you recommend? Set up an Amazon CloudWatch alarm that monitors Amazon Macie findings for an active DDoS event Set up an Amazon CloudWatch alarm that monitors AWS Firewall Manager metrics for an active DDoS event Set up an Amazon CloudWatch alarm that monitors Shield Advanced metrics for an active DDoS event Set up an Amazon CloudWatch alarm that monitors Amazon Inspector findings for an active DDoS event
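A minimal sketch of such an alarm on the Shield Advanced DDoSDetected metric; the protected resource ARN and the SNS topic ARN are placeholders.

import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="ddos-detected-web-alb",
    Namespace="AWS/DDoSProtection",
    MetricName="DDoSDetected",
    Dimensions=[{
        "Name": "ResourceArn",
        "Value": "arn:aws:elasticloadbalancing:us-east-1:111111111111:loadbalancer/app/web/abc123",  # placeholder
    }],
    Statistic="Sum",
    Period=60,
    EvaluationPeriods=1,
    Threshold=0,
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=["arn:aws:sns:us-east-1:111111111111:security-alerts"],  # placeholder topic
)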

Add launch constraint(s) to each product in the service catalog portfolio

An organization has added virtual machine images, software, and a few databases to its AWS Service Catalog. These will be used by multiple development teams to build their business workloads. The organization does not want the end users to launch and manage products using their own IAM credentials. How will you address this requirement and implement it in the LEAST POSSIBLE TIME? Use the service actions feature of the service catalog to define rules and constraints for the products in the portfolio. A CloudFormation template can also be used for easier implementation Create a new portfolio and add the new products to it. Attach a launch constraint to this portfolio Add the newly added products under a single tag. Add tag constraints to control the end user behavior and permissions on the products Add launch constraint(s) to each product in the service catalog portfolio
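A launch constraint is added per product/portfolio pair. A minimal boto3 sketch, with placeholder portfolio, product, and launch-role identifiers:

import json
import boto3

sc = boto3.client("servicecatalog")

# Products launch with this IAM role's permissions instead of the end user's own credentials.
sc.create_constraint(
    PortfolioId="port-abc123example",   # placeholder
    ProductId="prod-abc123example",     # placeholder
    Type="LAUNCH",
    Parameters=json.dumps({"RoleArn": "arn:aws:iam::111111111111:role/SCLaunchRole"}),  # placeholder role
    Description="Launch products with the SCLaunchRole service role",
)

Repeat the call for each product that needs the constraint.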

Configure CloudWatch Event to filter GuardDuty findings when a malicious activity is suspected. Configure the CloudWatch Event to invoke a Lambda function to parse the GuardDuty finding and store it in the Amazon DynamoDB table, if required After checking the existing entries in the Amazon DynamoDB table, the AWS Lambda function creates a Rule inside AWS WAF and in a VPC NACL, and a notification email is sent via Amazon Simple Notification Service (SNS). Note the detect and remediate steps in each answer.

As a Security Engineer, you have been tasked with the job of automating the detection and remediation of threats against your AWS environments using Amazon GuardDuty findings. Which steps will you follow to implement this solution most efficiently? (Select two) Configure GuardDuty to trigger an AWS Lambda function every time a finding is generated. Configure an Amazon DynamoDB table to store the data received by Lambda from GuardDuty integration Configure GuardDuty to export its findings to an Amazon S3 bucket. Configure a Lambda function to be triggered every time an object is added to the Amazon S3 bucket Configure an AWS Lambda function to create a Rule inside AWS WAF and in a VPC NACL for every GuardDuty finding and trigger an email notification via Amazon Simple Notification Service (SNS) Configure CloudWatch Event to filter GuardDuty findings when a malicious activity is suspected. Configure the CloudWatch Event to invoke a Lambda function to parse the GuardDuty finding and store it in the Amazon DynamoDB table, if required After checking the existing entries in the Amazon DynamoDB table, the AWS Lambda function creates a Rule inside AWS WAF and in a VPC NACL, and a notification email is sent via Amazon Simple Notification Service (SNS)
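The detect half of this pipeline is an EventBridge (CloudWatch Events) rule that matches GuardDuty findings. A minimal sketch with a placeholder Lambda function ARN:

import json
import boto3

events = boto3.client("events")

events.put_rule(
    Name="guardduty-finding-to-lambda",
    EventPattern=json.dumps({
        "source": ["aws.guardduty"],
        "detail-type": ["GuardDuty Finding"],
    }),
    State="ENABLED",
)
events.put_targets(
    Rule="guardduty-finding-to-lambda",
    Targets=[{
        "Id": "remediation-lambda",
        "Arn": "arn:aws:lambda:us-east-1:111111111111:function:guardduty-remediation",  # placeholder
    }],
)
# (lambda.add_permission is also required so EventBridge can invoke the function.)
# The Lambda function then parses the finding, records it in DynamoDB, updates the
# WAF web ACL / VPC NACL, and notifies the team through SNS.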

If you must retain an EC2 instance for regulatory, compliance, or legal reasons, then create an Amazon EBS snapshot before terminating the instance In the IAM console, under the Permissions tab, look for a policy named AWSExposedCredentialPolicy_DO_NOT_REMOVE. If the user has this policy attached, then rotate the access keys for the user Create new access keys and modify the application to use new ones. Deactivate the exposed account access keys immediately. Subsequently, delete the exposed keys only when you have verified the proper functioning of the application

As a Security Engineer, you received a notification from AWS about suspicious activity in your account. What are the security checks/actions that you will need to perform before responding to the AWS Support Center? (Select three) There is no need to revoke any temporary IAM security credentials If you must retain an EC2 instance for regulatory, compliance, or legal reasons, then create an Amazon EBS snapshot before terminating the instance To contain access for an IAM principal where an IAM access key has been compromised, the access key can be deactivated or deleted. It is important to note that an IAM principal can have up to five access keys at any given time If the exposed EC2 instance cannot be shut down, move it to an isolation VPC to contain the exposure of other resources while keeping the instance running In the IAM console, under the Permissions tab, look for a policy named AWSExposedCredentialPolicy_DO_NOT_REMOVE. If the user has this policy attached, then rotate the access keys for the user Create new access keys and modify the application to use new ones. Deactivate the exposed account access keys immediately. Subsequently, delete the exposed keys only when you have verified the proper functioning of the application
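A minimal boto3 sketch of two of these actions. The volume ID and user name are placeholders, and the exposed key ID uses AWS's documented example value.

import boto3

ec2 = boto3.client("ec2")
iam = boto3.client("iam")

# Preserve evidence before terminating a compromised instance.
ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",   # placeholder volume
    Description="Forensic copy taken before terminating compromised instance",
)

# Create a replacement key, then deactivate the exposed one immediately;
# delete it only after the application is verified to work with the new key.
new_key = iam.create_access_key(UserName="app-user")["AccessKey"]   # placeholder user
iam.update_access_key(
    UserName="app-user",
    AccessKeyId="AKIAIOSFODNN7EXAMPLE",  # documented example/exposed key ID
    Status="Inactive",
)
# Update the application configuration with new_key["AccessKeyId"] / new_key["SecretAccessKey"].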

Apply a custom IAM policy to restrict the permissions of the IAM role for creating EC2 instances in a specified VPC with tags using the policy condition ec2:ResourceTag to limit control to instances

As a Security Specialist, you have been asked to create an AWS Identity and Access Management (IAM) policy that explicitly grants permissions to an IAM role for creating and managing Amazon Elastic Compute Cloud (Amazon EC2) instances in a specified VPC. The policy must limit permissions so that the IAM role can only create EC2 instances with specific tags and then manage those EC2 instances in a VPC by using those tags. Which of the following solutions will meet this requirement? Apply a custom IAM policy to restrict the permissions of the IAM role for creating EC2 instances in a specified VPC with tags using the policy condition ec2:ResourceTag to limit control to instances Apply a custom IAM policy to restrict the permissions of the IAM role for creating EC2 instances in a specified VPC with tags using the policy condition aws:sourceVPC so that it also limits the instances within the specified VPC Apply a custom IAM policy to restrict the permissions of the IAM role for creating EC2 instances in a specified VPC with tags. Use policy condition ec2:CreateTags to limit control to instances Apply a custom IAM policy to restrict the permissions of the IAM role for creating EC2 instances in a specified VPC with tags. Replace the TAG-KEY or TAG-VALUE parameters with the IAM policy variable ${aws:username}
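A minimal sketch of the "manage by tag" half of such a policy, expressed as a Python dict for readability. The tag key/value (project=webapp), Region, and account ID are placeholders; the RunInstances side additionally needs aws:RequestTag and ec2:CreateAction conditions so new instances must be created with the tag.

MANAGE_BY_TAG_STATEMENT = {
    "Effect": "Allow",
    "Action": [
        "ec2:StartInstances",
        "ec2:StopInstances",
        "ec2:RebootInstances",
        "ec2:TerminateInstances",
    ],
    # Only instances in this account/Region...
    "Resource": "arn:aws:ec2:us-east-1:111111111111:instance/*",
    # ...that already carry the project=webapp tag can be managed.
    "Condition": {"StringEquals": {"ec2:ResourceTag/project": "webapp"}},
}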

Review the abuse notice and reply explaining how you will prevent the abusive activity from recurring in the future

As a security engineer for an IT company, you have received a notice from AWS that the resources for your company's AWS account were reported for abusive activity. What should be your course of action after receiving the notice? The technical support provided by AWS Trust & Safety Team is available for only Enterprise and Business accounts. Upgrade your account and then contact the AWS Trust & Safety Team for technical support. Otherwise, you need to contact the AWS Support team Review the abuse notice and reply explaining how you will prevent the abusive activity from recurring in the future The AWS Trust & Safety Team provides technical support for issues related to abusive activity. Contact the team and resolve the issue with their assistance Make sure that your instances and all applications are properly secured as per the shared responsibility model

Use Patch Manager, a capability of AWS Systems Manager to automatically scan your instances and report compliance on a schedule, install available software updates on a schedule, and scan targets on demand

As part of the organization-wide security best practices, a company has mandated that all software installed on the EC2 instances should be upgraded to its most recent authorized version every 30 days. For this requirement, the Security Administrator has to provide a weekly report that lists all the instances that do not have the latest software updates deployed. What is the most optimal way to implement this requirement? Use Change Manager, a capability of AWS Systems Manager to automatically scan your instances and report compliance on a schedule and install available software updates on a schedule through automated runbooks Use Amazon Inspector to determine the systems that do not have the latest patches applied after a time of 30 days and configure Inspector to redeploy these instances with the latest AMI version Use Version Manager, a capability of AWS Systems Manager to automatically scan your instances and report compliance on a schedule, install available software versions on a schedule, and scan targets on demand Use Patch Manager, a capability of AWS Systems Manager to automatically scan your instances and report compliance on a schedule, install available software updates on a schedule, and scan targets on demand
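Once Patch Manager scans the fleet (for example via a maintenance window running AWS-RunPatchBaseline with Operation=Scan), the per-instance compliance state can be pulled for the weekly report. A minimal sketch with placeholder instance IDs:

import boto3

ssm = boto3.client("ssm")

resp = ssm.describe_instance_patch_states(
    InstanceIds=["i-0123456789abcdef0", "i-0fedcba9876543210"]  # placeholder instance IDs
)
for state in resp["InstancePatchStates"]:
    # Instances with missing or failed patches go into the weekly report.
    if state["MissingCount"] > 0 or state["FailedCount"] > 0:
        print(state["InstanceId"], "missing:", state["MissingCount"], "failed:", state["FailedCount"])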

Use AWS Config with a managed rule cloudtrail-enabled to trigger a remediation action to fix the non-compliant status using AWS Systems Manager Automation documents

During an internal IT Audit, the security team realized that AWS CloudTrail was disabled for a few AWS Regions leading to security and audit lapses. Now, the management wants to tighten the security measures across the company. As an AWS Certified Security Specialist, you have been tasked to build a solution for automatic re-enabling of AWS CloudTrail in any AWS Region if it happens to be turned off. What is the most optimal way of addressing this requirement? Use AWS Trusted Advisor security check on AWS CloudTrail Logging to trigger a Lambda function in case logging is disabled. The Lambda function implements the functionality to enable CloudTrail logging if it is disabled Use AWS Security Hub with a managed rule cloudtrail-enabled to trigger a remediation action to fix the non-compliant status using AWS Systems Manager Automation documents Use AWS Config with a managed rule cloudtrail-enabled to trigger a remediation action to fix the non-compliant status using AWS Systems Manager Automation documents Create an Amazon CloudWatch alarm with a cloudtrail.amazonaws.com event source and a StartLogging event name to trigger an AWS Lambda function to call the StartLogging API
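A minimal sketch of the Config rule plus an automatic remediation action. The Systems Manager Automation document name ("Custom-EnableCloudTrail") and the remediation role ARN are placeholders for whichever runbook and role you use to re-enable the trail.

import boto3

config = boto3.client("config")

# Managed rule: non-compliant when no CloudTrail trail is enabled.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "cloudtrail-enabled",
        "Source": {"Owner": "AWS", "SourceIdentifier": "CLOUD_TRAIL_ENABLED"},
        "MaximumExecutionFrequency": "One_Hour",
    }
)

# Automatic remediation through an SSM Automation runbook (name and role are placeholders).
config.put_remediation_configurations(
    RemediationConfigurations=[{
        "ConfigRuleName": "cloudtrail-enabled",
        "TargetType": "SSM_DOCUMENT",
        "TargetId": "Custom-EnableCloudTrail",
        "Automatic": True,
        "MaximumAutomaticAttempts": 3,
        "RetryAttemptSeconds": 60,
        "Parameters": {
            "AutomationAssumeRole": {
                "StaticValue": {"Values": ["arn:aws:iam::111111111111:role/ConfigRemediationRole"]}
            }
        },
    }]
)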

Use AWS CloudTrail Event history to review security group changes in your AWS account Create an AWS CloudTrail trail configured to log to an Amazon Simple Storage Service (Amazon S3) bucket. Use Athena to query CloudTrail Logs over the last 30-45 days Use AWS Config to view configuration history for security groups. You must have the AWS Config configuration recorder turned on

For auditing purposes, a company needs to showcase a report of changes made to the security group(s) for an Amazon Virtual Private Cloud (Amazon VPC). What are the different ways to review security group changes in an AWS account? (Select three) Use AWS Config to view configuration history for security groups. You must have the AWS Config configuration recorder turned on Configure CloudTrail with CloudWatch Logs to monitor your trail logs and be notified when security group changes occur. CloudTrail supports sending only data events to CloudWatch Logs. Configure Amazon EventBridge to monitor management events Create an AWS CloudTrail trail configured to log to an Amazon Simple Storage Service (Amazon S3) bucket. Use Athena to query CloudTrail Logs over the last 30-45 days Use CloudTrail Lake to create trails that aggregate information from multiple AWS accounts across regions Use AWS CloudTrail Event history to review security group changes in your AWS account Use AWS AppConfig capability of AWS Systems Manager, to create, manage, and quickly deploy application configurations and track changes in them
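For the Event history option, a minimal boto3 sketch that pulls the last 30 days of one security-group mutation call (Event history retains 90 days); repeat for the other event names such as RevokeSecurityGroupIngress or ModifySecurityGroupRules.

from datetime import datetime, timedelta, timezone
import boto3

cloudtrail = boto3.client("cloudtrail")

end = datetime.now(timezone.utc)
resp = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "AuthorizeSecurityGroupIngress"}],
    StartTime=end - timedelta(days=30),
    EndTime=end,
)
for event in resp["Events"]:
    print(event["EventTime"], event.get("Username"), event["EventName"])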

Use encryption context that you can use to verify the authenticity of AWS KMS API calls, and the integrity of the ciphertext returned by the AWS Decrypt API

A project manager has connected with you for a security requirement from the client. The client wants to ensure that the authenticated encryption with associated data encryption is used when calling AWS Key Management Service (AWS KMS) Encrypt, Decrypt, and ReEncrypt APIs. As an AWS Certified Security Specialist, which of the following would you recommend to address this requirement? Use encryption context that you can use to verify the authenticity of AWS KMS API calls and the integrity of the ciphertext returned by the AWS Decrypt API Use envelope encryption strategy of AWS KMS to verify the authenticity of AWS KMS API calls and safeguard the integrity of ciphertext Use multi-factor authentication (MFA) to verify the authenticity of AWS KMS API calls and safeguard the integrity of ciphertext Use AWS CloudHSM to verify the authenticity of AWS KMS API calls and safeguard the integrity of ciphertext
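A minimal sketch of how an encryption context acts as additional authenticated data (AAD); the key ARN and the context key-value pairs are placeholders.

import boto3

kms = boto3.client("kms")
KEY_ID = "arn:aws:kms:us-east-1:111111111111:key/1234abcd-12ab-34cd-56ef-1234567890ab"  # placeholder
CONTEXT = {"department": "payments", "purpose": "card-data"}  # placeholder AAD pairs

# The context is cryptographically bound to the ciphertext...
ciphertext = kms.encrypt(KeyId=KEY_ID, Plaintext=b"secret", EncryptionContext=CONTEXT)["CiphertextBlob"]

# ...so Decrypt succeeds only when the exact same context is supplied,
# which both authenticates the call and verifies ciphertext integrity.
plaintext = kms.decrypt(CiphertextBlob=ciphertext, EncryptionContext=CONTEXT)["Plaintext"]

# Supplying a different context raises InvalidCiphertextException:
# kms.decrypt(CiphertextBlob=ciphertext, EncryptionContext={"department": "hr"})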

Use WAF geo match statement listing the countries that you want to block Use WAF IP set statement that specifies the IP addresses that you want to allow through

A social media company wants to block access to its application from specific countries; however, the company wants to allow its remote development team (from one of the blocked countries) to have access to the application. The application is deployed on EC2 instances running under an Application Load Balancer (ALB) with AWS WAF. As an AWS Certified Security Specialist, which of the following solutions will you combine to address the given use case? (Select two) Create a deny rule for the blocked countries in the NACL associated with each of the EC2 instances Use WAF IP set statement that specifies the IP addresses that you want to allow through Use WAF geo match statement listing the countries that you want to block Use ALB IP set statement that specifies the IP addresses that you want to allow through Use ALB geo match statement listing the countries that you want to block
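A minimal sketch of the two building blocks: an IP set for the development team's addresses and the two rule statements that go into the web ACL (higher-priority allow for the IP set, then a geo-match block). The IP set name, CIDR range, and country codes are placeholders.

import boto3

wafv2 = boto3.client("wafv2")

# Allow-list of the remote team's addresses; Scope is REGIONAL because the
# web ACL is associated with an Application Load Balancer.
ip_set = wafv2.create_ip_set(
    Name="dev-team-allow-list",
    Scope="REGIONAL",
    IPAddressVersion="IPV4",
    Addresses=["203.0.113.0/24"],   # placeholder CIDR
)["Summary"]

# Rule statements for the web ACL rules:
allow_dev_team = {"IPSetReferenceStatement": {"ARN": ip_set["ARN"]}}
block_countries = {"GeoMatchStatement": {"CountryCodes": ["XX", "YY"]}}  # placeholder ISO country codes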

Delete or rotate the user's key. Review the AWS CloudTrail logs in all AWS regions and delete any unauthorized resources created or updated

The IT Security team at a financial services firm has informed that a user's AWS access key has been found on the internet. As a security engineer, you must ensure that the access key is immediately disabled and the user's activities must be assessed for a potential breach. Which steps must be taken to meet the above needs? Delete or rotate the user's key. Review the AWS CloudTrail logs in all AWS regions and delete any unauthorized resources created or updated Delete the IAM user and all the resources created by the user. Create fresh user credentials and relaunch the resources from this user Call on the user to remove the access credentials from the internet. Rotate the user's key and re-deploy all the resources with the new credentials Call on the user to remove the access credentials from the internet. Report abuse to AWS Trust & Safety team

Configure an origin access identity (OAI) and associate it with the CloudFront distribution. Set up the permissions in the S3 bucket policy so that only the OAI can read the objects Create an AWS WAF ACL and use an IP match condition to allow traffic only from those IPs that are allowed in the EC2 security group. Associate this new WAF ACL with the CloudFront distribution

The development team at a company is moving the static content from the company's e-commerce website hosted on EC2 instances to an S3 bucket. The team wants to use a CloudFront distribution to deliver the static content. The security group used by the EC2 instances allows the website to be accessed by a limited set of IP ranges from the company's suppliers. Post the migration to CloudFront, access to the static content should only be allowed from the aforementioned IP addresses. Which options would you combine to build a solution to meet these requirements? (Select two) Create a new NACL that allows traffic from the same IPs as specified in the current EC2 security group. Associate this new NACL with the CloudFront distribution Configure an origin access identity (OAI) and associate it with the CloudFront distribution. Set up the permissions in the S3 bucket policy so that only the OAI can read the objects Create an AWS WAF ACL and use an IP match condition to allow traffic only from those IPs that are allowed in the EC2 security group. Associate this new WAF ACL with the CloudFront distribution Create an AWS WAF ACL and use an IP match condition to allow traffic only from those IPs that are allowed in the EC2 security group. Associate this new WAF ACL with the S3 bucket policy Create a new security group that allows traffic from the same IPs as specified in the current EC2 security group. Associate this new security group with the CloudFront distribution
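For the OAI half, a sketch of the S3 bucket policy that lets only the CloudFront origin access identity read the static objects; the bucket name and OAI ID are placeholders. Combined with the WAF IP match rule on the distribution, the S3 origin cannot be reached directly by other callers.

import json

BUCKET_POLICY = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        # Placeholder OAI principal
        "Principal": {"AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E1EXAMPLE11111"},
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::example-static-content/*",   # placeholder bucket
    }],
}
print(json.dumps(BUCKET_POLICY, indent=2))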

CloudFront can route to multiple origins based on the content type Use field-level encryption in CloudFront to protect sensitive data for specific content Use an origin group with primary and secondary origins to configure CloudFront for high-availability and failover

The development team at an e-commerce company has recently migrated to AWS Cloud from its on-premises data center. The team is evaluating CloudFront to be used as a CDN for its flagship application. The team has hired you as an AWS Certified Security Specialist to advise on CloudFront capabilities on routing and security. Which of the following would you identify as correct regarding CloudFront? (Select three) Use geo-restriction to configure CloudFront for high-availability and failover CloudFront can route to multiple origins based on the price class CloudFront can route to multiple origins based on the content type Use field-level encryption in CloudFront to protect sensitive data for specific content Use an origin group with primary and secondary origins to configure CloudFront for high-availability and failover Use KMS encryption in CloudFront to protect sensitive data for specific content

Block requests that contain a specific User-Agent in the request using custom Rules. Block requests that don't contain a User-Agent header using either AWS Managed Rules or custom rules

The latest guidelines issued by the security team at a company mandate an application to block HTTP requests that don't have a User-Agent header or have a specific User-Agent in the request. How will you block these requests using AWS WAF? Block requests that contain a specific User-Agent in the request using AWS Managed Rules. Block requests that don't contain a User-Agent header using security group rules Block requests that contain a specific User-Agent in the request using custom Rules. Block requests that don't contain a User-Agent header using either AWS Managed Rules or custom rules Block requests that contain a specific User-Agent in the request using custom rules. Block requests that don't contain a User-Agent header using security group rules Block requests that contain a specific User-Agent in the request using AWS Managed Rules. Block requests that don't contain a User-Agent header using either AWS Managed Rules or custom rules

Use the Amazon S3 console to update the prefix in the current bucket policy, and then use the CloudTrail console to specify the same prefix for the bucket in the trail

The security team at a company has recently decided that CloudTrail logs of each department will be prefixed with the department code. Currently, CloudTrail logs are created with similar names across the company with no immediate way of identifying the departments sending those logs. When the security team tried to add the prefix to the log files in the CloudTrail console, the following error popped up: 'There is a problem with the bucket policy'. How will you fix this issue? Use the Amazon S3 console to update the prefix in the current bucket policy, and then use the CloudTrail console to specify the same prefix for the bucket in the trail Manually edit your Amazon S3 bucket policy to add an aws:SourceArn condition key to the policy statement attached for CloudTrail Update the permissions of all users in the security team to AWSCloudTrail_FullAccess policy to impart all the necessary permissions to the users on CloudTrail logs Update the service-specific context keys used in the Condition element of the Amazon S3 bucket policy statements
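A sketch of the bucket policy CloudTrail expects before a prefix can be set on the trail; the bucket name, the department prefix ("finance"), the account ID, and the Region are placeholders. The prefix in the trail configuration must match the prefix in the PutObject resource ARN.

import json

CLOUDTRAIL_BUCKET_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AWSCloudTrailAclCheck",
            "Effect": "Allow",
            "Principal": {"Service": "cloudtrail.amazonaws.com"},
            "Action": "s3:GetBucketAcl",
            "Resource": "arn:aws:s3:::central-trail-logs",   # placeholder bucket
        },
        {
            "Sid": "AWSCloudTrailWrite",
            "Effect": "Allow",
            "Principal": {"Service": "cloudtrail.amazonaws.com"},
            "Action": "s3:PutObject",
            # Prefix here ("finance") must match the prefix specified on the trail.
            "Resource": "arn:aws:s3:::central-trail-logs/finance/AWSLogs/111111111111/*",
            "Condition": {"StringEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}},
        },
    ],
}
print(json.dumps(CLOUDTRAIL_BUCKET_POLICY, indent=2))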

Add the <action> as kms:CreateGrant to the IAM user policy Add the condition as { "Bool": { "kms:GrantIsForAWSResource": true } } to the IAM user policy

The security team at a company has set up an IAM user with full permissions for the EC2 service, yet the user is unable to start an Amazon EC2 instance after it was stopped for maintenance purposes. The instance would change its state to "Pending" but would eventually switch back to "Stopped" with the error "client error on launch". Upon investigating the issue, it was discovered that the EC2 instance had attached Amazon EBS volumes that were encrypted using a Customer Master Key (CMK). Detaching the encrypted volumes from the EC2 instance resolved the issue and allowed the user to start the instance successfully. Following is a snippet of the existing IAM user policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [ <action> ],
      "Resource": "arn:aws:kms:<region>:<accountId>:key/kms-encryption-key-for-ebs",
      "Condition": <condition>
    }
  ]
}
You have been tasked to build a solution to fix this issue. What do you recommend? (Select two) Add the <action> as kms:CreateGrant to the IAM user policy Add the condition as { "Bool": { "kms:GrantIsForAWSResource": true } } to the IAM user policy Add the <action> as kms:DescribeKey to the IAM user policy Add the condition as { "Bool": { "kms:ViaService": "ec2.<region>.amazonaws.com" } } to the IAM user policy Add the <action> as kms:CreateKey to the IAM user policy
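A sketch of the corrected KMS statement, expressed as a Python dict: EC2 must create a grant on the CMK on the user's behalf to start instances with encrypted EBS volumes, and the condition restricts grant creation to AWS services. The <region>/<accountId> placeholders are kept exactly as in the question.

KMS_STATEMENT = {
    "Effect": "Allow",
    "Action": ["kms:CreateGrant"],
    "Resource": "arn:aws:kms:<region>:<accountId>:key/kms-encryption-key-for-ebs",
    # Grants may only be created for use by AWS services (such as EC2/EBS).
    "Condition": {"Bool": {"kms:GrantIsForAWSResource": True}},
}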

Configure AWS Web Application Firewall (AWS WAF) to feed logs to Amazon S3 bucket via Amazon Kinesis Data Firehose. Set up an AWS Glue crawler job and an Amazon Athena table to query for required data and create visualizations using Amazon QuickSight dashboards

The security team at a company needs to analyze the AWS Web Application Firewall (AWS WAF) logs quickly and it wants to build multiple dashboards using a serverless architecture. The logging process should be automated so that the log data for dashboards is available on a real-time or near-real-time basis. Which of the following represents the most optimal solution for this requirement? Configure AWS Web Application Firewall (AWS WAF) to send logs to CloudWatch Logs. Use CloudWatch Logs Insights to interactively search and analyze your log data. Publish logs data to Amazon QuickSight dashboards through direct CloudWatch integration with QuickSight Configure AWS Web Application Firewall (AWS WAF) to feed logs to Amazon S3 bucket via Amazon Kinesis Data Streams. Set up an AWS Glue crawler job and an Amazon Athena table to query for required data and create visualizations using Amazon QuickSight dashboards Configure AWS Web Application Firewall (AWS WAF) to feed logs to Amazon S3 bucket via Amazon Kinesis Data Firehose. Set up an AWS Glue crawler job and an Amazon Athena table to query for required data and create visualizations using Amazon QuickSight dashboards Configure AWS Web Application Firewall (AWS WAF) to send your web ACL traffic logs to Amazon S3 bucket. Use Amazon Redshift Spectrum to query data directly from files on Amazon S3. Using the existing integration of RedShift Spectrum with Amazon QuickSight, generate visualizations to be used in manager dashboards
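A minimal sketch of the logging step: the web ACL streams its traffic logs to an existing Kinesis Data Firehose delivery stream (whose name must start with "aws-waf-logs-") that delivers to S3. The web ACL ARN and delivery stream ARN are placeholders.

import boto3

wafv2 = boto3.client("wafv2")

wafv2.put_logging_configuration(
    LoggingConfiguration={
        "ResourceArn": "arn:aws:wafv2:us-east-1:111111111111:regional/webacl/app-acl/1111-2222",       # placeholder web ACL
        "LogDestinationConfigs": [
            "arn:aws:firehose:us-east-1:111111111111:deliverystream/aws-waf-logs-app",                 # placeholder stream
        ],
    }
)
# From the S3 bucket, a Glue crawler catalogs the log objects, Athena queries them,
# and QuickSight builds the dashboards on top of the Athena results.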

kms:GenerateDataKey

The security team at a company needs to implement a client-side encryption mechanism for objects that will be stored in a new Amazon S3 bucket. The team created a CMK that is stored in AWS Key Management Service (AWS KMS) for this purpose. The team created the following IAM policy and attached it to an IAM role:
{
  "Version": "2012-10-17",
  "Id": "key-policy-1",
  "Statement": [
    {
      "Sid": "GetPut",
      "Effect": "Allow",
      "Action": [ "s3:GetObject", "s3:PutObject" ],
      "Resource": "arn:aws:s3:::ExampleBucket/*"
    },
    {
      "Sid": "KMS",
      "Effect": "Allow",
      "Action": [ "kms:Decrypt", "kms:Encrypt" ],
      "Resource": "arn:aws:kms:us-west-1:111122223333:key/keyid-12345"
    }
  ]
}
The team was able to successfully get existing objects from the S3 bucket while testing. But any attempts to upload a new object resulted in an error. The error message stated that the action was forbidden. Which IAM policy action should be added to the IAM policy to resolve the error? kms:GetPublicKey kms:GetKeyPolicy kms:GenerateDataKey kms:GetDataKey
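A sketch of the corrected KMS statement, expressed as a Python dict: client-side encryption asks KMS for a data key before uploading, so the role also needs kms:GenerateDataKey on the CMK. The key ARN is the one from the question.

KMS_STATEMENT = {
    "Sid": "KMS",
    "Effect": "Allow",
    "Action": ["kms:Decrypt", "kms:Encrypt", "kms:GenerateDataKey"],
    "Resource": "arn:aws:kms:us-west-1:111122223333:key/keyid-12345",
}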

Rotate and delete all root and AWS Identity and Access Management (IAM) access keys Use AWS Git projects to scan for evidence of unauthorized use Check your AWS account bill to know the charged resources

The security team at a financial services company has received a notification that the resources in the company's AWS account might be compromised. What actions would you recommend to handle this issue? (Select three) Use Amazon Inspector to detect the compromised resources of your account Rotate and delete all root and AWS Identity and Access Management (IAM) access keys Use AWS Git projects to scan for evidence of unauthorized use Check your AWS account bill to know the charged resources Use AWS Trusted Advisor security check report to find out the details about the compromised AWS resources Use the health check report in AWS Systems Manager (formerly known as SSM) to find out the details about the compromised AWS resources

You can use an Internet Gateway ID as the custom source for the inbound rule

The security team at an IT company has recently migrated to AWS and they are configuring security groups for their two-tier application with public web servers and private database servers. The team wants to understand the allowed configuration options for an inbound rule for a security group. As an AWS Certified Security Specialist, which of the following would you identify as an INVALID option for setting up such a configuration? You can use an Internet Gateway ID as the custom source for the inbound rule You can use a range of IP addresses in CIDR block notation as the custom source for the inbound rule You can use an IP address as the custom source for the inbound rule You can use a security group as the custom source for the inbound rule
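For contrast with the invalid option, a minimal sketch of a valid inbound rule whose source is another security group (web tier allowed into the database tier); the group IDs and port are placeholders. Valid sources are CIDR ranges, single IPs, prefix lists, or another security group, never an internet gateway ID.

import boto3

ec2 = boto3.client("ec2")

ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",   # placeholder: database tier security group
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3306,
        "ToPort": 3306,
        # Web tier security group as the source of the rule.
        "UserIdGroupPairs": [{"GroupId": "sg-0abcdef1234567890"}],   # placeholder
    }],
)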

Leverage the Amazon EC2 console to enable encryption of new EBS volumes by default. Leverage the Auto Scaling group's instance refresh feature to replace existing instances with new instances

The security team at an e-commerce company has noticed that several Amazon Elastic Block Store (Amazon EBS) volumes are not encrypted. These unencrypted EBS volumes are attached to Amazon EC2 instances that are provisioned with an Auto Scaling group and a launch template. You have been hired as an AWS Certified Security Specialist to implement a solution that ensures all EBS volumes are encrypted both now and in the future. What would you recommend? Configure a new launch template from the existing launch template, such that the encrypted flag for all EBS volumes is set to true in the new launch template. Update the Auto Scaling group to use the new launch template. In due course of time, let the Auto Scaling group replace all the old instances that have unencrypted EBS volumes Leverage the Amazon EC2 console to enable encryption of new EBS volumes by default. Propagate this setting to the Auto Scaling group so it will automatically replace existing instances with new instances Modify the launch template by setting the encrypted flag for all EBS volumes to true. Leverage the Auto Scaling group's instance refresh feature to replace existing instances with new instances Leverage the Amazon EC2 console to enable encryption of new EBS volumes by default. Leverage the Auto Scaling group's instance refresh feature to replace existing instances with new instances
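A minimal sketch of the two steps; the Auto Scaling group name and the refresh preferences are placeholders.

import boto3

ec2 = boto3.client("ec2")
autoscaling = boto3.client("autoscaling")

# 1) Account/Region-level setting: every newly created EBS volume is encrypted
#    (with the default KMS key unless another key is specified). Run per Region.
ec2.enable_ebs_encryption_by_default()

# 2) Roll the Auto Scaling group so existing instances with unencrypted volumes
#    are replaced by new, encrypted ones.
autoscaling.start_instance_refresh(
    AutoScalingGroupName="web-asg",   # placeholder
    Strategy="Rolling",
    Preferences={"MinHealthyPercentage": 90, "InstanceWarmup": 300},
)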

Create an Amazon CloudWatch metric filter that processes CloudTrail logs having API call details and looks at any errors by factoring in all the error codes that need to be tracked. Create an alarm based on this metric's rate to send an SNS notification to the required team

While consolidating logs for the weekly reporting, a development team at a retail company realized that an unusually large number of unauthorized AWS API calls were made sometime during the week. Due to the off-season, there was no visible impact on the systems. However, this event led the management team to seek an automated solution that can trigger near-real-time warnings in case such an event recurs. Which of the following represents the best solution for the given scenario? Configure AWS CloudTrail to stream event data to Amazon Kinesis. Use Kinesis stream-level metrics in the CloudWatch to trigger an AWS Lambda function that will trigger an error workflow Create an Amazon CloudWatch metric filter that processes CloudTrail logs having API call details and looks at any errors by factoring in all the error codes that need to be tracked. Create an alarm based on this metric's rate to send an SNS notification to the required team Trusted Advisor publishes metrics about check results to CloudWatch. Create an alarm to track status changes for checks in the Service Limits category for the APIs. The alarm will then notify when the service quota is reached or exceeded Run Amazon Athena SQL queries against CloudTrail log files stored in Amazon S3
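A minimal sketch of the metric filter and alarm, assuming the trail already delivers to a CloudWatch Logs group; the log group name and SNS topic ARN are placeholders. The filter pattern matches the error codes CloudTrail records for unauthorized API calls.

import boto3

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

logs.put_metric_filter(
    logGroupName="CloudTrail/DefaultLogGroup",   # placeholder log group fed by the trail
    filterName="unauthorized-api-calls",
    filterPattern='{ ($.errorCode = "*UnauthorizedOperation") || ($.errorCode = "AccessDenied*") }',
    metricTransformations=[{
        "metricName": "UnauthorizedAPICalls",
        "metricNamespace": "CloudTrailMetrics",
        "metricValue": "1",
    }],
)

cloudwatch.put_metric_alarm(
    AlarmName="unauthorized-api-calls",
    Namespace="CloudTrailMetrics",
    MetricName="UnauthorizedAPICalls",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=["arn:aws:sns:us-east-1:111111111111:security-alerts"],  # placeholder topic
)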

