
Note: Amazon Web Services has retired the DBS-C01 exam.

DBS-C01 AWS Certified Database - Specialty Questions and Answers

Questions 4

A company maintains several databases using Amazon RDS for MySQL and PostgreSQL. Each RDS database generates log files with retention periods set to their default values. The company has now mandated that database logs be maintained for up to 90 days in a centralized repository to facilitate real-time and after-the-fact analyses.

What should a Database Specialist do to meet these requirements with minimal effort?

Options:

A.

Create an AWS Lambda function to pull logs from the RDS databases and consolidate the log files in an Amazon S3 bucket. Set a lifecycle policy to expire the objects after 90 days.

B.

Modify the RDS databases to publish logs to Amazon CloudWatch Logs. Change the log retention policy for each log group to expire the events after 90 days.

C.

Write a stored procedure in each RDS database to download the logs and consolidate the log files in an Amazon S3 bucket. Set a lifecycle policy to expire the objects after 90 days.

D.

Create an AWS Lambda function to download the logs from the RDS databases and publish the logs to Amazon CloudWatch Logs. Change the log retention policy for the log group to expire the events after 90 days.
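
For reference, option B's approach maps to two API calls. A minimal boto3 sketch follows; the instance identifier and log types are hypothetical placeholders, and the log group name follows the standard /aws/rds/instance/&lt;id&gt;/&lt;log&gt; pattern:

import boto3

rds = boto3.client("rds")
logs = boto3.client("logs")

# Publish the MySQL error and slow query logs to CloudWatch Logs.
rds.modify_db_instance(
    DBInstanceIdentifier="app-mysql-prod",  # hypothetical
    CloudwatchLogsExportConfiguration={"EnableLogTypes": ["error", "slowquery"]},
    ApplyImmediately=True,
)

# Expire events in the corresponding log group after 90 days.
logs.put_retention_policy(
    logGroupName="/aws/rds/instance/app-mysql-prod/error",
    retentionInDays=90,
)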

Questions 5

A company migrated one of its business-critical database workloads to an Amazon Aurora Multi-AZ DB cluster. The company requires a very low RTO and needs to improve the application recovery time after database failovers.

Which approach meets these requirements?

Options:

A.

Set the max_connections parameter to 16,000 in the instance-level parameter group.

B.

Modify the client connection timeout to 300 seconds.

C.

Create an Amazon RDS Proxy database proxy and update client connections to point to the proxy endpoint.

D.

Enable the query cache at the instance level.
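
For reference, option C's approach can be sketched with boto3 as shown below; every name, ARN, and subnet ID is a hypothetical placeholder. Once the proxy exists, clients connect to its endpoint, and the proxy pools connections and speeds up recovery after failovers:

import boto3

rds = boto3.client("rds")

# Create a proxy in front of the Aurora MySQL cluster.
rds.create_db_proxy(
    DBProxyName="app-proxy",
    EngineFamily="MYSQL",
    Auth=[{"AuthScheme": "SECRETS",
           "SecretArn": "arn:aws:secretsmanager:us-east-1:123456789012:secret:db-creds"}],
    RoleArn="arn:aws:iam::123456789012:role/rds-proxy-role",
    VpcSubnetIds=["subnet-0aaa", "subnet-0bbb"],
)

# Register the Aurora cluster as the proxy's target.
rds.register_db_proxy_targets(
    DBProxyName="app-proxy",
    DBClusterIdentifiers=["aurora-prod-cluster"],
)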

Questions 6

A company is running a mobile app that has a backend database in Amazon DynamoDB. The app experiences sudden increases and decreases in activity throughout the day. The company's operations team notices that DynamoDB read and write requests are being throttled at different times, resulting in a negative customer experience.

Which solution will solve the throttling issue without requiring changes to the app?

Options:

A.

Add a DynamoDB table in a secondary AWS Region. Populate the additional table by using DynamoDB Streams.

B.

Deploy an Amazon ElastiCache cluster in front of the DynamoDB table.

C.

Use on-demand capacity mode for the DynamoDB table.

D.

Use DynamoDB Accelerator (DAX).
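
For reference, switching a table to on-demand capacity (option C) is a single API call. A minimal boto3 sketch, with a hypothetical table name:

import boto3

dynamodb = boto3.client("dynamodb")

# On-demand mode bills per request and absorbs sudden traffic spikes
# without capacity planning.
dynamodb.update_table(
    TableName="mobile-app-data",  # hypothetical
    BillingMode="PAY_PER_REQUEST",
)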

Questions 7

An internet advertising firm stores its data in an Amazon DynamoDB table. Amazon DynamoDB Streams are enabled on the table, and one of the keys has a global secondary index. The table is encrypted using a customer-managed AWS Key Management Service (AWS KMS) key.

The firm has decided to expand worldwide and wants to replicate the database using DynamoDB global tables in a new AWS Region.

An administrator observes the following upon review:

  • No role with the dynamodb:CreateGlobalTable permission exists in the account.
  • An empty table with the same name exists in the new Region where replication is desired.
  • A global secondary index with the same partition key but a different sort key exists in the new Region where replication is desired.

Which settings will prevent you from creating a global table or replica in the new Region? (Select two.)

Options:

A.

A global secondary index with the same partition key but a different sort key exists in the new Region where replication is desired.

B.

An empty table with the same name exists in the Region where replication is desired.

C.

No role with the dynamodb:CreateGlobalTable permission exists in the account.

D.

DynamoDB Streams is enabled for the table.

E.

The table is encrypted using a KMS customer managed key.

Questions 8

In one AWS account, a business runs a two-tier ecommerce application. An Amazon RDS for MySQL Multi-AZ DB instance serves as the application's backend. A developer accidentally deleted the database instance in the production environment. Although the organization recovered the database, the incident resulted in hours of downtime and financial loss.

Which combination of adjustments would reduce the likelihood that this error will occur again in the future? (Select three.)

Options:

A.

Grant least privilege to groups, IAM users, and roles.

B.

Allow all users to restore a database from a backup.

C.

Enable deletion protection on existing production DB instances.

D.

Use an ACL policy to restrict users from DB instance deletion.

E.

Enable AWS CloudTrail logging and Enhanced Monitoring.
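
For reference, enabling deletion protection on an existing instance (option C) is a one-line modification. A minimal boto3 sketch with a hypothetical instance identifier:

import boto3

rds = boto3.client("rds")

# With DeletionProtection enabled, delete requests fail until the
# attribute is explicitly turned off again.
rds.modify_db_instance(
    DBInstanceIdentifier="ecommerce-prod-db",  # hypothetical
    DeletionProtection=True,
    ApplyImmediately=True,
)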

Questions 9

A database professional is developing an application that will respond to single-instance requests. The program will query large amounts of client data and provide end users with results.

These reports may include a variety of fields. The database specialist wants to enable users to query the database using any of the fields offered.

During peak periods, the database's traffic volume will be significant yet variable. However, the database will see little activity over the rest of the day.

Which approach will be the most cost-effective in meeting these requirements?

Options:

A.

Amazon DynamoDB with provisioned capacity mode and auto scaling

B.

Amazon DynamoDB with on-demand capacity mode

C.

Amazon Aurora with auto scaling enabled

D.

Amazon Aurora in a serverless mode

Questions 10

A database specialist is managing an application in the us-west-1 Region and wants to set up disaster recovery in the us-east-1 Region. The Amazon Aurora MySQL DB cluster needs an RPO of 1 minute and an RTO of 2 minutes.

Which approach meets these requirements with no negative performance impact?

Options:

A.

Enable synchronous replication.

B.

Enable asynchronous binlog replication.

C.

Create an Aurora Global Database.

D.

Copy Aurora incremental snapshots to the us-east-1 Region.

Questions 11

A company’s Security department established new requirements that state internal users must connect to an existing Amazon RDS for SQL Server DB instance using their corporate Active Directory (AD) credentials. A Database Specialist must make the modifications needed to fulfill this requirement.

Which combination of actions should the Database Specialist take? (Choose three.)

Options:

A.

Disable Transparent Data Encryption (TDE) on the RDS SQL Server DB instance.

B.

Modify the RDS SQL Server DB instance to use the directory for Windows authentication. Create appropriate new logins.

C.

Use the AWS Management Console to create an AWS Managed Microsoft AD. Create a trust relationship with the corporate AD.

D.

Stop the RDS SQL Server DB instance, modify it to use the directory for Windows authentication, and start it again. Create appropriate new logins.

E.

Use the AWS Management Console to create an AD Connector. Create a trust relationship with the corporate AD.

F.

Configure the AWS Managed Microsoft AD domain controller Security Group.

Questions 12

A database expert is responsible for building a highly available online transaction processing (OLTP) solution that makes use of Amazon RDS for MySQL production databases. Disaster recovery criteria include a cross-regional deployment and an RPO and RTO of 5 and 30 minutes, respectively.

What should the database professional do to ensure that the database meets the criteria for high availability and disaster recovery?

Options:

A.

Use a Multi-AZ deployment in each Region.

B.

Use read replica deployments in all Availability Zones of the secondary Region.

C.

Use Multi-AZ and read replica deployments within a Region.

D.

Use Multi-AZ and deploy a read replica in a secondary Region.

Questions 13

A company is using Amazon Redshift as its data warehouse solution. The Redshift cluster handles the following types of workloads:

  • Real-time inserts through Amazon Kinesis Data Firehose
  • Bulk inserts through COPY commands from Amazon S3
  • Analytics through SQL queries

Recently, the cluster has started to experience performance issues.

Which combination of actions should a database specialist take to improve the cluster's performance? (Choose three.)

Options:

A.

Modify the Kinesis Data Firehose delivery stream to stream the data to Amazon S3 with a high buffer size and to load the data into Amazon Redshift by using the COPY command.

B.

Stream real-time data into Redshift temporary tables before loading the data into permanent tables.

C.

For bulk inserts, split input files on Amazon S3 into multiple files to match the number of slices on Amazon Redshift. Then use the COPY command to load data into Amazon Redshift.

D.

For bulk inserts, use the parallel parameter in the COPY command to enable multi-threading.

E.

Optimize analytics SQL queries to use sort keys.

F.

Avoid using temporary tables in analytics SQL queries.

Questions 14

A retail company manages a web application that stores data in an Amazon DynamoDB table. The company is undergoing account consolidation efforts. A database engineer needs to migrate the DynamoDB table from the current AWS account to a new AWS account.

Which strategy meets these requirements with the LEAST amount of administrative work?

Options:

A.

Use AWS Glue to crawl the data in the DynamoDB table. Create a job using an available blueprint to export the data to Amazon S3. Import the data from the S3 file to a DynamoDB table in the new account.

B.

Create an AWS Lambda function to scan the items of the DynamoDB table in the current account and write to a file in Amazon S3. Create another Lambda function to read the S3 file and restore the items of a DynamoDB table in the new account.

C.

Use AWS Data Pipeline in the current account to export the data from the DynamoDB table to a file in Amazon S3. Use Data Pipeline to import the data from the S3 file to a DynamoDB table in the new account.

D.

Configure Amazon DynamoDB Streams for the DynamoDB table in the current account. Create an AWS Lambda function to read from the stream and write to a file in Amazon S3. Create another Lambda function

to read the S3 file and restore the items to a DynamoDB table in the new account.

Questions 15

A marketing company is developing an application to track responses to email message campaigns. The company needs a database storage solution that is optimized to work with highly connected data. The database needs to limit connections and programmatic access to the data by using IAM policies.

Which solution will meet these requirements?

Options:

A.

Amazon ElastiCache for Redis cluster

B.

Amazon Aurora MySQL DB cluster

C.

Amazon DynamoDB table

D.

Amazon Neptune DB cluster

Questions 16

A huge gaming firm is developing a centralized method for storing the status of various online games' user sessions. The workload requires low-latency key-value storage and will consist of an equal number of reads and writes. Across the games' geographically dispersed user base, data should be written to the AWS Region nearest to the user. The design should reduce the burden associated with managing data replication across Regions.

Which solution satisfies these criteria?

Options:

A.

Amazon RDS for MySQL with multi-Region read replicas

B.

Amazon Aurora global database

C.

Amazon RDS for Oracle with GoldenGate

D.

Amazon DynamoDB global tables

Questions 17

A company is building a new web platform where user requests trigger an AWS Lambda function that performs an insert into an Amazon Aurora MySQL DB cluster. Initial tests with fewer than 10 users on the new platform yielded successful execution and fast response times. However, upon more extensive tests with the actual target of 3,000 concurrent users, the Lambda functions are unable to connect to the DB cluster and receive "too many connections" errors.

Which of the following will resolve this issue?

Options:

A.

Edit the my.cnf file for the DB cluster to increase max_connections

B.

Increase the instance size of the DB cluster

C.

Change the DB cluster to Multi-AZ

D.

Increase the number of Aurora Replicas

Questions 18

A company has two separate AWS accounts: one for the business unit and another for corporate analytics. The company wants to replicate the business unit data stored in Amazon RDS for MySQL in us-east-1 to its corporate analytics Amazon Redshift environment in us-west-1. The company wants to use AWS DMS with Amazon RDS as the source endpoint and Amazon Redshift as the target endpoint.

Which action will allow AWS DMS to perform the replication?

Options:

A.

Configure the AWS DMS replication instance in the same account and Region as Amazon Redshift.

B.

Configure the AWS DMS replication instance in the same account as Amazon Redshift and in the same Region as Amazon RDS.

C.

Configure the AWS DMS replication instance in its own account and in the same Region as Amazon Redshift.

D.

Configure the AWS DMS replication instance in the same account and Region as Amazon RDS.

Questions 19

A Database Specialist modified an existing parameter group currently associated with a production Amazon RDS for SQL Server Multi-AZ DB instance. The change is associated with a static parameter type, which controls the number of user connections allowed on the most critical RDS SQL Server DB instance for the company. This change has been approved for a specific maintenance window to help minimize the impact on users.

How should the Database Specialist apply the parameter group change for the DB instance?

Options:

A.

Select the option to apply the change immediately

B.

Allow the preconfigured RDS maintenance window for the given DB instance to control when the change is applied

C.

Apply the change manually by rebooting the DB instance during the approved maintenance window

D.

Reboot the secondary Multi-AZ DB instance

Questions 20

A company is moving its fraud detection application from on premises to the AWS Cloud and is using Amazon Neptune for data storage. The company has set up a 1 Gbps AWS Direct Connect connection to migrate 25 TB of fraud detection data from the on-premises data center to a Neptune DB instance. The company already has an Amazon S3 bucket and an S3 VPC endpoint, and 80% of the company’s network bandwidth is available.

How should the company perform this data load?

Options:

A.

Use an AWS SDK with a multipart upload to transfer the data from on premises to the S3 bucket. Use the Copy command for Neptune to move the data in bulk from the S3 bucket to the Neptune DB instance.

B.

Use AWS Database Migration Service (AWS DMS) to transfer the data from on premises to the S3 bucket. Use the Loader command for Neptune to move the data in bulk from the S3 bucket to the Neptune DB instance.

C.

Use AWS DataSync to transfer the data from on premises to the S3 bucket. Use the Loader command for Neptune to move the data in bulk from the S3 bucket to the Neptune DB instance.

D.

Use the AWS CLI to transfer the data from on premises to the S3 bucket. Use the Copy command for Neptune to move the data in bulk from the S3 bucket to the Neptune DB instance.

Questions 21

A company with branch offices in Portland, New York, and Singapore has a three-tier web application that leverages a shared database. The database runs on Amazon RDS for MySQL and is hosted in the us-west-2 Region. The application has a distributed front end deployed in the us-west-2, ap-southeast-1, and us-east-2 Regions.

This front end is used as a dashboard for Sales Managers in each branch office to see current sales statistics. There are complaints that the dashboard performs more slowly in the Singapore location than it does in Portland or New York. A solution is needed to provide consistent performance for all users in each location.

Which set of actions will meet these requirements?

Options:

A.

Take a snapshot of the instance in the us-west-2 Region. Create a new instance from the snapshot in the ap-southeast-1 Region. Reconfigure the ap-southeast-1 front-end dashboard to access this instance.

B.

Create an RDS read replica in the ap-southeast-1 Region from the primary RDS DB instance in the us-west-2 Region. Reconfigure the ap-southeast-1 front-end dashboard to access this instance.

C.

Create a new RDS instance in the ap-southeast-1 Region. Use AWS DMS and change data capture (CDC) to update the new instance in the ap-southeast-1 Region. Reconfigure the ap-southeast-1 front-end dashboard to access this instance.

D.

Create an RDS read replica in the us-west-2 Region where the primary instance resides. Create a read replica in the ap-southeast-1 Region from the read replica located on the us-west-2 Region. Reconfigure the ap-southeast-1 front-end dashboard to access this instance.
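
For reference, options B and D both hinge on creating a cross-Region read replica. A minimal boto3 sketch follows; the replica is created by calling RDS in the destination Region and referencing the source by ARN (all identifiers are hypothetical):

import boto3

# Call RDS in the destination Region.
rds = boto3.client("rds", region_name="ap-southeast-1")

rds.create_db_instance_read_replica(
    DBInstanceIdentifier="sales-replica-apse1",
    SourceDBInstanceIdentifier="arn:aws:rds:us-west-2:123456789012:db:sales-primary",
    SourceRegion="us-west-2",
)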

Questions 22

A healthcare company is running an application on Amazon EC2 in a public subnet and using Amazon DocumentDB (with MongoDB compatibility) as the storage layer. An audit reveals that the traffic between the application and Amazon DocumentDB is not encrypted and that the DocumentDB cluster is not encrypted at rest. A database specialist must correct these issues and ensure that the data in transit and the data at rest are encrypted.

Which actions should the database specialist take to meet these requirements? (Select TWO.)

Options:

A.

Download the SSH RSA public key for Amazon DocumentDB. Update the application configuration to use the instance endpoint instead of the cluster endpoint and run queries over SSH.

B.

Download the SSL .pem public key for Amazon DocumentDB. Add the key to the application package and make sure the application is using the key while connecting to the cluster.

C.

Create a snapshot of the unencrypted cluster. Restore the unencrypted snapshot as a new cluster with the --storage-encrypted parameter set to true. Update the application to point to the new cluster.

D.

Create an Amazon DocumentDB VPC endpoint to prevent the traffic from going to the Amazon DocumentDB public endpoint. Set a VPC endpoint policy to allow only the application instance's security group to connect.

E.

Activate encryption at rest using the modify-db-cluster command with the --storage-encrypted parameter set to true. Set the security group of the cluster to allow only the application instance's security group to connect.
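
For reference, the client-side half of option B looks like a standard TLS connection from the application. A minimal pymongo sketch, assuming the Amazon-provided CA bundle has been downloaded and using a hypothetical endpoint and credentials:

from pymongo import MongoClient

client = MongoClient(
    "docdb-cluster.cluster-xxxx.us-east-1.docdb.amazonaws.com",  # hypothetical
    port=27017,
    tls=True,
    tlsCAFile="global-bundle.pem",  # downloaded CA certificate bundle
    username="appuser",
    password="<password>",
    retryWrites=False,  # Amazon DocumentDB does not support retryable writes
)
print(client.admin.command("ping"))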

Questions 23

A database specialist wants to ensure that an Amazon Aurora DB cluster is always automatically upgraded to the most recent minor version available. Noticing that there is a new minor version available, the database specialist has issued an AWS CLI command to enable automatic minor version updates. The command runs successfully, but checking the Aurora DB cluster indicates that no update to the Aurora version has been made.

What might account for this? (Choose two.)

Options:

A.

The new minor version has not yet been designated as preferred and requires a manual upgrade.

B.

Configuring automatic upgrades using the AWS CLI is not supported. This must be enabled expressly using the AWS Management Console.

C.

Applying minor version upgrades requires sufficient free space.

D.

The AWS CLI command did not include an apply-immediately parameter.

E.

Aurora has detected a breaking change in the new minor version and has automatically rejected the upgrade.

Questions 24

Recently, a financial institution created a portfolio management service. The application's backend is powered by Amazon Aurora MySQL.

The firm requires an RTO of five minutes and an RPO of five minutes. A database professional must create a disaster recovery solution that is both efficient and has low replication latency.

How should the database professional address these requirements?

Options:

A.

Configure AWS Database Migration Service (AWS DMS) and create a replica in a different AWS Region.

B.

Configure an Amazon Aurora global database and add a different AWS Region.

C.

Configure a binlog and create a replica in a different AWS Region.

D.

Configure a cross-Region read replica.
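
For reference, option B's Aurora global database can be sketched in two boto3 calls: promote the existing cluster into a global cluster, then add a secondary Region. All identifiers and Regions are hypothetical placeholders:

import boto3

# Wrap the existing cluster in a global database.
rds_primary = boto3.client("rds", region_name="us-east-1")
rds_primary.create_global_cluster(
    GlobalClusterIdentifier="portfolio-global",
    SourceDBClusterIdentifier="arn:aws:rds:us-east-1:123456789012:cluster:portfolio-db",
)

# Add a secondary Region; Aurora replicates with typically sub-second lag.
rds_secondary = boto3.client("rds", region_name="us-west-2")
rds_secondary.create_db_cluster(
    DBClusterIdentifier="portfolio-db-secondary",
    Engine="aurora-mysql",
    GlobalClusterIdentifier="portfolio-global",
)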

Questions 25

An ecommerce company uses a backend application that stores data in an Amazon DynamoDB table. The backend application runs in a private subnet in a VPC and must connect to this table.

The company must minimize any network latency that results from network connectivity issues, even during periods of heavy application usage. A database administrator also needs the ability to use a private connection to connect to the DynamoDB table from the application.

Which solution will meet these requirements?

Options:

A.

Use network ACLs to ensure that any outgoing or incoming connections to any port except DynamoDB are deactivated. Encrypt API calls by using TLS.

B.

Create a VPC endpoint for DynamoDB in the application's VPC. Use the VPC endpoint to access the table.

C.

Create an AWS Lambda function that has access to DynamoDB. Restrict outgoing access only to this Lambda function from the application.

D.

Use a VPN to route all communication to DynamoDB through the company's own corporate network infrastructure.
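
For reference, option B's gateway endpoint keeps DynamoDB traffic on the AWS network and is created with one EC2 API call. A minimal boto3 sketch with hypothetical VPC and route table IDs:

import boto3

ec2 = boto3.client("ec2")

# A gateway endpoint adds routes for DynamoDB to the selected route tables,
# so instances in private subnets reach the service without a NAT or IGW.
ec2.create_vpc_endpoint(
    VpcId="vpc-0abc1234",
    ServiceName="com.amazonaws.us-east-1.dynamodb",
    VpcEndpointType="Gateway",
    RouteTableIds=["rtb-0abc1234"],
)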

Questions 26

A company has an AWS CloudFormation template written in JSON that is used to launch new Amazon RDS for MySQL DB instances. The security team has asked a database specialist to ensure that the master password is automatically rotated every 30 days for all new DB instances that are launched using the template.

What is the MOST operationally efficient solution to meet these requirements?

Options:

A.

Save the password in an Amazon S3 object. Encrypt the S3 object with an AWS KMS key. Set the KMS key to be rotated every 30 days by setting the EnableKeyRotation property to true. Use a CloudFormation custom resource to read the S3 object to extract the password.

B.

Create an AWS Lambda function to rotate the secret. Modify the CloudFormation template to add an AWS::SecretsManager::RotationSchedule resource. Configure the RotationLambdaARN value and, for the RotationRules property, set the AutomaticallyAfterDays parameter to 30.

C.

Modify the CloudFormation template to use the AWS KMS key as the database password. Configure an Amazon EventBridge rule to invoke the KMS API to rotate the key every 30 days by setting the ScheduleExpression parameter to */30.

D.

Integrate the Amazon RDS for MySQL DB instances with AWS IAM and centrally manage the master database user password.
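
For reference, the rotation schedule that option B declares in CloudFormation corresponds to the RotateSecret API. A minimal boto3 sketch of the same configuration, with hypothetical ARNs:

import boto3

sm = boto3.client("secretsmanager")

# Attach a rotation Lambda to the secret and rotate every 30 days,
# mirroring AWS::SecretsManager::RotationSchedule in the template.
sm.rotate_secret(
    SecretId="arn:aws:secretsmanager:us-east-1:123456789012:secret:rds-master",
    RotationLambdaARN="arn:aws:lambda:us-east-1:123456789012:function:rotate-rds",
    RotationRules={"AutomaticallyAfterDays": 30},
)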

Questions 27

A Database Specialist is performing a proof of concept with Amazon Aurora using a small instance to confirm a simple database behavior. When loading a large dataset and creating the index, the Database Specialist encounters the following error message from Aurora:

ERROR: could not write block 7507718 of temporary file: No space left on device

What is the cause of this error and what should the Database Specialist do to resolve this issue?

Options:

A.

The scaling of Aurora storage cannot catch up with the data loading. The Database Specialist needs to modify the workload to load the data slowly.

B.

The scaling of Aurora storage cannot catch up with the data loading. The Database Specialist needs to enable Aurora storage scaling.

C.

The local storage used to store temporary tables is full. The Database Specialist needs to scale up the instance.

D.

The local storage used to store temporary tables is full. The Database Specialist needs to enable local storage scaling.

Questions 28

A business uses Amazon EC2 instances in VPC A to serve an internal file-sharing application. This application is supported by an Amazon ElastiCache cluster in VPC B that is peered with VPC A. The corporation migrates its application instances from VPC A to VPC B. The logs show that the file-sharing application is no longer able to connect to the ElastiCache cluster.

What is the best course of action for a database professional to take in order to remedy this issue?

Options:

A.

Create a second security group on the EC2 instances. Add an outbound rule to allow traffic from the ElastiCache cluster security group.

B.

Delete the ElastiCache security group. Add an interface VPC endpoint to enable the EC2 instances to connect to the ElastiCache cluster.

C.

Modify the ElastiCache security group by adding outbound rules that allow traffic to VPC CIDR blocks from the ElastiCache cluster.

D.

Modify the ElastiCache security group by adding an inbound rule that allows traffic from the EC2 instances security group to the ElastiCache cluster.
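
For reference, option D's inbound rule can be added with one EC2 API call. The sketch below assumes an ElastiCache for Redis cluster on its default port 6379; both security group IDs are hypothetical:

import boto3

ec2 = boto3.client("ec2")

# Allow the application instances' security group to reach the cluster.
ec2.authorize_security_group_ingress(
    GroupId="sg-0elasticache",
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 6379,
        "ToPort": 6379,
        "UserIdGroupPairs": [{"GroupId": "sg-0appinstances"}],
    }],
)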

Questions 29

A company plans to migrate a MySQL-based application from an on-premises environment to AWS. The application performs database joins across several tables and uses indexes for faster query response times. The company needs the database to be highly available with automatic failover.

Which solution on AWS will meet these requirements with the LEAST operational overhead?

Options:

A.

Deploy an Amazon RDS DB instance with a read replica.

B.

Deploy an Amazon RDS Multi-AZ DB instance.

C.

Deploy Amazon DynamoDB global tables.

D.

Deploy multiple Amazon RDS DB instances. Use Amazon Route 53 DNS with failover health checks configured.

Questions 30

A company's application development team wants to share an automated snapshot of its Amazon RDS database with another team. The database is encrypted with a custom AWS Key Management Service (AWS KMS) key under the "WeShare" AWS account. The application development team needs to share the DB snapshot under the "WeReceive" AWS account.

Which combination of actions must the application development team take to meet these requirements? (Choose two.)

Options:

A.

Add access from the "WeReceive" account to the custom AWS KMS key policy of the sharing team.

B.

Make a copy of the DB snapshot, and set the encryption option to disable.

C.

Share the DB snapshot by setting the DB snapshot visibility option to public.

D.

Make a copy of the DB snapshot, and set the encryption option to enable.

E.

Share the DB snapshot by using the default AWS KMS encryption key.
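
For reference, the mechanics behind the copy-and-share options look like the following boto3 sketch. Automated snapshots cannot be shared directly, so a manual copy is made first, and the custom KMS key policy must also grant the receiving account access. All identifiers are hypothetical:

import boto3

rds = boto3.client("rds")

# Copy the automated snapshot to a manual snapshot under the custom key.
rds.copy_db_snapshot(
    SourceDBSnapshotIdentifier="rds:weshare-db-2023-01-01-00-00",
    TargetDBSnapshotIdentifier="weshare-db-manual-copy",
    KmsKeyId="arn:aws:kms:us-east-1:111111111111:key/custom-key",
)

# Grant the "WeReceive" account restore access to the manual snapshot.
rds.modify_db_snapshot_attribute(
    DBSnapshotIdentifier="weshare-db-manual-copy",
    AttributeName="restore",
    ValuesToAdd=["222222222222"],  # hypothetical "WeReceive" account ID
)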

Questions 31

After restoring an Amazon RDS snapshot from 3 days ago, a company’s Development team cannot connect to the restored RDS DB instance.

What is the likely cause of this problem?

Options:

A.

The restored DB instance does not have Enhanced Monitoring enabled

B.

The production DB instance is using a custom parameter group

C.

The restored DB instance is using the default security group

D.

The production DB instance is using a custom option group

Questions 32

A software-as-a-service (SaaS) company is using an Amazon Aurora Serverless DB cluster for its production MySQL database. The DB cluster has general logs and slow query logs enabled. A database engineer must use the most operationally efficient solution with minimal resource utilization to retain the logs and facilitate interactive search and analysis.

Which solution meets these requirements?

Options:

A.

Use an AWS Lambda function to ship database logs to an Amazon S3 bucket. Use Amazon Athena and Amazon QuickSight to search and analyze the logs.

B.

Download the logs from the DB cluster and store them in Amazon S3 by using manual scripts. Use Amazon Athena and Amazon QuickSight to search and analyze the logs.

C.

Use an AWS Lambda function to ship database logs to an Amazon S3 bucket. Use Amazon Elasticsearch Service (Amazon ES) and Kibana to search and analyze the logs.

D.

Use Amazon CloudWatch Logs Insights to search and analyze the logs when the logs are automatically uploaded by the DB cluster.

Questions 33

A database specialist is constructing a stack using AWS CloudFormation. The specialist wants to prevent the stack's Amazon RDS ProductionDatabase resource from being accidentally deleted.

Which solution will satisfy this criterion?

Options:

A.

Create a stack policy to prevent updates. Include "Effect" : "ProductionDatabase" and "Resource" : "Deny" in the policy.

B.

Create an AWS CloudFormation stack in XML format. Set xAttribute as false.

C.

Create an RDS DB instance without the DeletionPolicy attribute. Disable termination protection.

D.

Create a stack policy to prevent updates. Include Effect, Deny, and Resource :ProductionDatabase in the policy.
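
For reference, a stack policy is a JSON document applied to the stack. A sketch of option D's arrangement using boto3 follows; the stack name is hypothetical and the policy wording is illustrative:

import json
import boto3

cfn = boto3.client("cloudformation")

policy = {
    "Statement": [
        # Allow all updates by default...
        {"Effect": "Allow", "Action": "Update:*", "Principal": "*", "Resource": "*"},
        # ...but deny destructive updates to the production database resource.
        {"Effect": "Deny",
         "Action": ["Update:Replace", "Update:Delete"],
         "Principal": "*",
         "Resource": "LogicalResourceId/ProductionDatabase"},
    ]
}

cfn.set_stack_policy(StackName="prod-stack", StackPolicyBody=json.dumps(policy))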

Questions 34

A company is using a Single-AZ Amazon RDS for MySQL DB instance for development. The DB instance is experiencing slow performance when queries are executed. Amazon CloudWatch metrics indicate that the instance requires more I/O capacity.

Which actions can a database specialist perform to resolve this issue? (Choose two.)

Options:

A.

Restart the application tool used to execute queries.

B.

Change to a database instance class with higher throughput.

C.

Convert from Single-AZ to Multi-AZ.

D.

Increase the I/O parameter in Amazon RDS Enhanced Monitoring.

E.

Convert from General Purpose to Provisioned IOPS (PIOPS).

Questions 35

A company has migrated a single MySQL database to Amazon Aurora. The production data is hosted in a DB cluster in VPC_PROD, and 12 testing environments are hosted in VPC_TEST using the same AWS account. Testing results in minimal changes to the test data. The Development team wants each environment refreshed nightly so each test database contains fresh production data every day.

Which migration approach will be the fastest and most cost-effective to implement?

Options:

A.

Run the master in Amazon Aurora MySQL. Create 12 clones in VPC_TEST, and script the clones to be deleted and re-created nightly.

B.

Run the master in Amazon Aurora MySQL. Take a nightly snapshot, and restore it into 12 databases in VPC_TEST using Aurora Serverless.

C.

Run the master in Amazon Aurora MySQL. Create 12 Aurora Replicas in VPC_TEST, and script the replicas to be deleted and re-created nightly.

D.

Run the master in Amazon Aurora MySQL using Aurora Serverless. Create 12 clones in VPC_TEST, and script the clones to be deleted and re-created nightly.

Questions 36

A development team at an international gaming company is experimenting with Amazon DynamoDB to store in-game events for three mobile games. The most popular game hosts a maximum of 500,000 concurrent users, and the least popular game hosts a maximum of 10,000 concurrent users. The average size of an event is 20 KB, and the average user session produces one event each second. Each event is tagged with a time in milliseconds and a globally unique identifier.

The lead developer created a single DynamoDB table for the events with the following schema:

  • Partition key: game name
  • Sort key: event identifier
  • Local secondary index: player identifier
  • Event time

The tests were successful in a small-scale development environment. However, when deployed to production, new events stopped being added to the table and the logs show DynamoDB failures with the ItemCollectionSizeLimitExceededException error code.

Which design change should a database specialist recommend to the development team?

Options:

A.

Use the player identifier as the partition key. Use the event time as the sort key. Add a global secondary index with the game name as the partition key and the event time as the sort key.

B.

Create two tables. Use the game name as the partition key in both tables. Use the event time as the sort key for the first table. Use the player identifier as the sort key for the second table.

C.

Replace the sort key with a compound value consisting of the player identifier collated with the event time, separated by a dash. Add a local secondary index with the player identifier as the sort key.

D.

Create one table for each game. Use the player identifier as the partition key. Use the event time as the sort key.

Questions 37

An AWS CloudFormation stack that included an Amazon RDS DB instance was mistakenly deleted, resulting in the loss of recent data. A Database Specialist must apply RDS settings to the CloudFormation template in order to minimize the possibility of future inadvertent instance data loss.

Which settings will satisfy this criterion? (Select three.)

Options:

A.

Set DeletionProtection to True

B.

Set MultiAZ to True

C.

Set TerminationProtection to True

D.

Set DeleteAutomatedBackups to False

E.

Set DeletionPolicy to Delete

F.

Set DeletionPolicy to Retain
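
For illustration, the settings named in options A, D, and F would appear in a JSON template roughly as in the fragment below, rendered here as a Python dict. The resource name is a hypothetical placeholder:

import json

resource_fragment = {
    "MyDBInstance": {
        "Type": "AWS::RDS::DBInstance",
        # Resource attribute: keep the instance (and its data) if the
        # stack is deleted.
        "DeletionPolicy": "Retain",
        "Properties": {
            "DeletionProtection": True,       # block delete-instance calls
            "DeleteAutomatedBackups": False,  # keep backups after deletion
            # ... remaining instance properties ...
        },
    }
}

print(json.dumps(resource_fragment, indent=2))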

Questions 38

A company is running a business-critical application on premises by using Microsoft SQL Server. A database specialist is planning to migrate the instance with several databases to the AWS Cloud. The database specialist will use SQL Server Standard edition hosted on Amazon EC2 Windows instances. The solution must provide high availability and must avoid a single point of failure in the SQL Server deployment architecture.

Which solution will meet these requirements?

Options:

A.

Create Amazon RDS for SQL Server Multi-AZ DB instances. Use Amazon S3 as a shared storage option to host the databases.

B.

Set up Always On Failover Cluster Instances as a single SQL Server instance. Use Multi-AZ Amazon FSx for Windows File Server as a shared storage option to host the databases.

C.

Set up Always On availability groups to group one or more user databases that fail over together across multiple SQL Server instances. Use Multi-AZ Amazon FSx for Windows File Server as a shared storage option to host the databases.

D.

Create an Application Load Balancer to distribute database traffic across multiple EC2 instances in multiple Availability Zones. Use Amazon S3 as a shared storage option to host the databases.

Questions 39

A company is using an Amazon ElastiCache for Redis cluster to host its online shopping website. Shoppers receive the following error when the website's application queries the cluster:

Which solutions will resolve this memory issue with the LEAST amount of effort? (Choose three.)

Options:

A.

Reduce the TTL value for keys on the node.

B.

Choose a larger node type.

C.

Test different values in the parameter group for the maxmemory-policy parameter to find the ideal value to use.

D.

Increase the number of nodes.

E.

Monitor the EngineCPUUtilization Amazon CloudWatch metric. Create an AWS Lambda function to delete keys on nodes when a threshold is reached.

F.

Increase the TTL value for keys on the node.

Questions 40

A company is migrating its on-premises database workloads to the AWS Cloud. A database specialist performing the move has chosen AWS DMS to migrate an Oracle database with a large table to Amazon RDS. The database specialist notices that AWS DMS is taking significant time to migrate the data.

Which actions would improve the data migration speed? (Choose three.)

Options:

A.

Create multiple AWS DMS tasks to migrate the large table.

B.

Configure the AWS DMS replication instance with Multi-AZ.

C.

Increase the capacity of the AWS DMS replication server.

D.

Establish an AWS Direct Connect connection between the on-premises data center and AWS.

E.

Enable an Amazon RDS Multi-AZ configuration.

F.

Enable full large binary object (LOB) mode to migrate all LOB data for all large tables.

Questions 41

A company uses Amazon Aurora for secure financial transactions. The data must always be encrypted at rest and in transit to meet compliance requirements.

Which combination of actions should a database specialist take to meet these requirements? (Choose two.)

Options:

A.

Create an Aurora Replica with encryption enabled using AWS Key Management Service (AWS KMS). Then promote the replica to master.

B.

Use SSL/TLS to secure the in-transit connection between the financial application and the Aurora DB cluster.

C.

Modify the existing Aurora DB cluster and enable encryption using an AWS Key Management Service (AWS KMS) encryption key. Apply the changes immediately.

D.

Take a snapshot of the Aurora DB cluster and encrypt the snapshot using an AWS Key Management Service (AWS KMS) encryption key. Restore the snapshot to a new DB cluster and update the financial application database endpoints.

E.

Use AWS Key Management Service (AWS KMS) to secure the in-transit connection between the financial application and the Aurora DB cluster.

Questions 42

A database specialist needs to review and optimize an Amazon DynamoDB table that is experiencing performance issues. A thorough investigation by the database specialist reveals that the partition key is causing hot partitions, so a new partition key is created. The database specialist must effectively apply this new partition key to all existing and new data.

How can this solution be implemented?

Options:

A.

Use Amazon EMR to export the data from the current DynamoDB table to Amazon S3. Then use Amazon EMR again to import the data from Amazon S3 into a new DynamoDB table with the new partition key.

B.

Use AWS DMS to copy the data from the current DynamoDB table to Amazon S3. Then import the DynamoDB table to create a new DynamoDB table with the new partition key.

C.

Use the AWS CLI to update the DynamoDB table and modify the partition key.

D.

Use the AWS CLI to back up the DynamoDB table. Then use the restore-table-from-backup command and modify the partition key.

Questions 43

A company is running a finance application on an Amazon RDS for MySQL DB instance. The application is governed by multiple financial regulatory agencies. The RDS DB instance is set up with security groups to allow access to certain Amazon EC2 servers only. AWS KMS is used for encryption at rest.

Which step will provide additional security?

Options:

A.

Set up NACLs that allow the entire EC2 subnet to access the DB instance

B.

Disable the master user account

C.

Set up a security group that blocks SSH to the DB instance

D.

Set up RDS to use SSL for data in transit

Questions 44

A social media company is using Amazon DynamoDB to store user profile data and user activity data. Developers are reading and writing the data, causing the size of the tables to grow significantly. Developers have started to face performance bottlenecks with the tables.

Which solution should a database specialist recommend to read items the FASTEST without consuming all the provisioned throughput for the tables?

Options:

A.

Use the Scan API operation in parallel with many workers to read all the items. Use the Query API operation to read multiple items that have a specific partition key and sort key. Use the GetItem API operation to read a single item.

B.

Use the Scan API operation with a filter expression that allows multiple items to be read. Use the Query API operation to read multiple items that have a specific partition key and sort key. Use the GetItem API operation to read a single item.

C.

Use the Scan API operation with a filter expression that allows multiple items to be read. Use the Query API operation to read a single item that has a specific primary key. Use the BatchGetItem API operation to read multiple items.

D.

Use the Scan API operation in parallel with many workers to read all the items. Use the Query API operation to read a single item that has a specific primary key Use the BatchGetItem API operation to read multiple items.

Questions 45

Recently, a gaming firm acquired an iOS game that is especially popular during the Christmas season. The business has opted to add a leaderboard to the game, which will be powered by Amazon DynamoDB. The application's load is likely to increase significantly throughout the Christmas season.

Which solution satisfies these criteria at the lowest possible cost?

Options:

A.

DynamoDB Streams

B.

DynamoDB with DynamoDB Accelerator

C.

DynamoDB with on-demand capacity mode

D.

DynamoDB with provisioned capacity mode with Auto Scaling

Questions 46

A Database Specialist is planning to create a read replica of an existing Amazon RDS for MySQL Multi-AZ DB instance. When using the AWS Management Console to conduct this task, the Database Specialist discovers that the source RDS DB instance does not appear in the read replica source selection box, so the read replica cannot be created.

What is the most likely reason for this?

Options:

A.

The source DB instance has to be converted to Single-AZ first to create a read replica from it.

B.

Enhanced Monitoring is not enabled on the source DB instance.

C.

The minor MySQL version in the source DB instance does not support read replicas.

D.

Automated backups are not enabled on the source DB instance.

Questions 47

A company is due to renew its database license. The company wants to migrate its 80 TB transactional database system from on-premises to the AWS Cloud. The migration should incur the least possible downtime on the downstream database applications. The company’s network infrastructure has limited network bandwidth that is shared with other applications.

Which solution should a database specialist use for a timely migration?

Options:

A.

Perform a full backup of the source database to AWS Snowball Edge appliances and ship them to be loaded to Amazon S3. Use AWS DMS to migrate change data capture (CDC) data from the source database to Amazon S3. Use a second AWS DMS task to migrate all the S3 data to the target database.

B.

Perform a full backup of the source database to AWS Snowball Edge appliances and ship them to be loaded to Amazon S3. Periodically perform incremental backups of the source database to be shipped in another Snowball Edge appliance to handle syncing change data capture (CDC) data from the source to the target database.

C.

Use AWS DMS to migrate the full load of the source database over a VPN tunnel using the internet for its primary connection. Allow AWS DMS to handle syncing change data capture (CDC) data from the source to the target database.

D.

Use the AWS Schema Conversion Tool (AWS SCT) to migrate the full load of the source database over a VPN tunnel using the internet for its primary connection. Allow AWS SCT to handle syncing change data capture (CDC) data from the source to the target database.

Questions 48

A bike rental company operates an application to track its bikes. The application receives location and condition data from bike sensors. The application also receives rental transaction data from the associated mobile app.

The application uses Amazon DynamoDB as its database layer. The company has configured DynamoDB with provisioned capacity set to 20% above the expected peak load of the application. On an average day, DynamoDB used 22 billion read capacity units (RCUs) and 60 billion write capacity units (WCUs). The application is running well. Usage changes smoothly over the course of the day and is generally shaped like a bell curve. The timing and magnitude of peaks vary based on the weather and season, but the general shape is consistent.

Which solution will provide the MOST cost optimization of the DynamoDB database layer?

Options:

A.

Change the DynamoDB tables to use on-demand capacity.

B.

Use AWS Auto Scaling and configure time-based scaling.

C.

Enable DynamoDB capacity-based auto scaling.

D.

Enable DynamoDB Accelerator (DAX).
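
For reference, option B's time-based scaling is configured through Application Auto Scaling scheduled actions. A minimal boto3 sketch for the write dimension follows; the table name, capacities, and cron schedule are hypothetical placeholders:

import boto3

aas = boto3.client("application-autoscaling")

# Register the table's write capacity as a scalable target.
aas.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId="table/bike-tracking",
    ScalableDimension="dynamodb:table:WriteCapacityUnits",
    MinCapacity=50000,
    MaxCapacity=900000,
)

# Raise the floor ahead of the known daily peak; a matching action would
# lower it again afterward.
aas.put_scheduled_action(
    ServiceNamespace="dynamodb",
    ScheduledActionName="scale-up-for-daily-peak",
    ResourceId="table/bike-tracking",
    ScalableDimension="dynamodb:table:WriteCapacityUnits",
    Schedule="cron(0 7 * * ? *)",  # 07:00 UTC every day
    ScalableTargetAction={"MinCapacity": 400000, "MaxCapacity": 900000},
)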

Questions 49

A marketing company is using Amazon DocumentDB and requires that database audit logs be enabled. A Database Specialist needs to configure monitoring so that all data definition language (DDL) statements performed are visible to the Administrator. The Database Specialist has set the audit_logs parameter to enabled in the cluster parameter group.

What should the Database Specialist do to automatically collect the database logs for the Administrator?

Options:

A.

Enable DocumentDB to export the logs to Amazon CloudWatch Logs

B.

Enable DocumentDB to export the logs to AWS CloudTrail

C.

Enable DocumentDB Events to export the logs to Amazon CloudWatch Logs

D.

Configure an AWS Lambda function to download the logs using the download-db-log-file-portion operation and store the logs in Amazon S3
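
For reference, once audit_logs is enabled in the cluster parameter group, option A's export is a single modification call. A minimal boto3 sketch with a hypothetical cluster identifier:

import boto3

docdb = boto3.client("docdb")

# Stream the audit log type to CloudWatch Logs for the Administrator.
docdb.modify_db_cluster(
    DBClusterIdentifier="marketing-docdb",  # hypothetical
    CloudwatchLogsExportConfiguration={"EnableLogTypes": ["audit"]},
    ApplyImmediately=True,
)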

Questions 50

A company's application team needs to select an AWS managed database service to store application and user data. The application team is familiar with MySQL but is open to new solutions. The application and user data are stored in 10 tables and are de-normalized. The application will access this data through an API layer using a unique ID in each table. The company expects the traffic to be light at first, but the traffic will increase to thousands of transactions each second within the first year. The database service must support active reads and writes in multiple AWS Regions at the same time. Query response times need to be less than 100 ms.

Which AWS database solution will meet these requirements?

Options:

A.

Deploy an Amazon RDS for MySQL environment in each Region and leverage AWS Database Migration Service (AWS DMS) to set up a multi-Region bidirectional replication

B.

Deploy an Amazon Aurora MySQL global database with write forwarding turned on

C.

Deploy an Amazon DynamoDB database with global tables

D.

Deploy an Amazon DocumentDB global cluster across multiple Regions.

Questions 51

A database professional is tasked with migrating 25 GB of data files from an on-premises storage system to an Amazon Neptune database.

Which method of data loading is the FASTEST?

Options:

A.

Upload the data to Amazon S3 and use the Loader command to load the data from Amazon S3 into the Neptune database.

B.

Write a utility to read the data from the on-premises storage and run INSERT statements in a loop to load the data into the Neptune database.

C.

Use the AWS CLI to load the data directly from the on-premises storage into the Neptune database.

D.

Use AWS DataSync to load the data directly from the on-premises storage into the Neptune database.
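
For reference, option A's second step uses the Neptune bulk loader, which is invoked over HTTP against the cluster's loader endpoint. A sketch using the requests library; the endpoint, bucket, and IAM role ARN are hypothetical placeholders:

import requests

resp = requests.post(
    "https://neptune-cluster.cluster-xxxx.us-east-1.neptune.amazonaws.com:8182/loader",
    json={
        "source": "s3://migration-bucket/graph-data/",
        "format": "csv",
        "iamRoleArn": "arn:aws:iam::123456789012:role/NeptuneLoadFromS3",
        "region": "us-east-1",
        "failOnError": "TRUE",
    },
)

# The response contains a loadId that can be polled for load status.
print(resp.json())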

Questions 52

A pharmaceutical company's drug search API is using an Amazon Neptune DB cluster. A bulk uploader process automatically updates the information in the database a few times each week. A few weeks ago during a bulk upload, a database specialist noticed that the database started to respond frequently with a ThrottlingException error. The problem also occurred with subsequent uploads.

The database specialist must create a solution to prevent ThrottlingException errors for the database. The solution must minimize the downtime of the cluster.

Which solution meets these requirements?

Options:

A.

Create a read replica that uses a larger instance size than the primary DB instance. Fail over the primary DB instance to the read replica.

B.

Add a read replica to each Availability Zone. Use an instance for the read replica that is the same size as the primary DB instance. Keep the traffic between the API and the database within the Availability Zone.

C.

Create a read replica that uses a larger instance size than the primary DB instance. Offload the reads from the primary DB instance.

D.

Take the latest backup, and restore it in a DB cluster of a larger size. Point the application to the newly created DB cluster.

Questions 53

A corporation wishes to move a 1 TB Oracle database from its current location to an Amazon Aurora PostgreSQL DB cluster. The company's database specialist noticed that the Oracle database stores 100 GB of large binary objects (LOBs) across many tables. The LOBs are up to 500 MB in size, with an average size of 350 MB. The database specialist has chosen AWS DMS with the largest replication instance to migrate the data.

How should the database specialist optimize the database migration with AWS DMS?

Options:

A.

Create a single task using full LOB mode with a LOB chunk size of 500 MB to migrate the data and LOBs together

B.

Create two tasks: task1 with LOB tables using full LOB mode with a LOB chunk size of 500 MB and task2 without LOBs

C.

Create two tasks: task1 with LOB tables using limited LOB mode with a maximum LOB size of 500 MB and task 2 without LOBs

D.

Create a single task using limited LOB mode with a maximum LOB size of 500 MB to migrate data and LOBs together

Questions 54

A business needs a data warehouse system that stores data consistently and in a highly organized fashion. The organization demands rapid response times for end-user queries on current-year data, and users must have access to the whole 15-year dataset when necessary. Additionally, this solution must be able to handle a variable volume of incoming queries. Costs associated with storing the 100 TB of data must be kept to a minimum.

Which solution satisfies these criteria?

Options:

A.

Leverage an Amazon Redshift data warehouse solution using a dense storage instance type while keeping all the data on local Amazon Redshift storage. Provision enough instances to support high demand.

B.

Leverage an Amazon Redshift data warehouse solution using a dense storage instance to store the most recent data. Keep historical data on Amazon S3 and access it using the Amazon Redshift Spectrum layer. Provision enough instances to support high demand.

C.

Leverage an Amazon Redshift data warehouse solution using a dense storage instance to store the most recent data. Keep historical data on Amazon S3 and access it using the Amazon Redshift Spectrum layer. Enable Amazon Redshift Concurrency Scaling.

D.

Leverage an Amazon Redshift data warehouse solution using a dense storage instance to store the most recent data. Keep historical data on Amazon S3 and access it using the Amazon Redshift Spectrum layer. Leverage Amazon Redshift elastic resize.

Questions 55

An online retail company is planning a multi-day flash sale that must support processing of up to 5,000 orders per second. The number of orders and exact schedule for the sale will vary each day. During the sale, approximately 10,000 concurrent users will look at the deals before buying items. Outside of the sale, the traffic volume is very low. The acceptable performance for read/write queries should be under 25 ms. Order items are about 2 KB in size and have a unique identifier. The company requires the most cost-effective solution that will automatically scale and is highly available.

Which solution meets these requirements?

Options:

A.

Amazon DynamoDB with on-demand capacity mode

B.

Amazon Aurora with one writer node and an Aurora Replica with the parallel query feature enabled

C.

Amazon DynamoDB with provisioned capacity mode with 5,000 write capacity units (WCUs) and 10,000 read capacity units (RCUs)

D.

Amazon Aurora with one writer node and two cross-Region Aurora Replicas

Questions 56

A company is running Amazon RDS for MySQL for its workloads. There is downtime when AWS operating system patches are applied during the Amazon RDS-specified maintenance window.

What is the MOST cost-effective action that should be taken to avoid downtime?

Options:

A.

Migrate the workloads from Amazon RDS for MySQL to Amazon DynamoDB

B.

Enable cross-Region read replicas and direct read traffic to them when Amazon RDS is down

C.

Enable a read replica and direct read traffic to it when Amazon RDS is down

D.

Enable an Amazon RDS for MySQL Multi-AZ configuration

Questions 57

A company has an ecommerce web application with an Amazon RDS for MySQL DB instance. The marketing team has noticed some unexpected updates to the product and pricing information on the website, which is impacting sales targets. The marketing team wants a database specialist to audit future database activity to help identify how and when the changes are being made.

What should the database specialist do to meet these requirements? (Choose two.)

Options:

A.

Create an RDS event subscription to the audit event type.

B.

Enable auditing of CONNECT and QUERY_DML events.

C.

SSH to the DB instance and review the database logs.

D.

Publish the database logs to Amazon CloudWatch Logs.

E.

Enable Enhanced Monitoring on the DB instance.

Questions 58

A database specialist at a large multi-national financial company is in charge of designing the disaster recovery strategy for a highly available application that is in development. The application uses an Amazon DynamoDB table as its data store. The application requires a recovery time objective (RTO) of 1 minute and a recovery point objective (RPO) of 2 minutes.

Which operationally efficient disaster recovery strategy should the database specialist recommend for the DynamoDB table?

Options:

A.

Create a DynamoDB stream that is processed by an AWS Lambda function that copies the data to a DynamoDB table in another Region.

B.

Use a DynamoDB global table replica in another Region. Enable point-in-time recovery for both tables.

C.

Use a DynamoDB Accelerator table in another Region. Enable point-in-time recovery for the table.

D.

Create an AWS Backup plan and assign the DynamoDB table as a resource.
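
For reference, option B combines two table-level settings. A minimal boto3 sketch that adds a replica Region (global tables version 2019.11.21) and enables point-in-time recovery, with a hypothetical table name:

import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Add a replica in a second Region.
dynamodb.update_table(
    TableName="app-data",
    ReplicaUpdates=[{"Create": {"RegionName": "us-west-2"}}],
)

# Enable point-in-time recovery on the table.
dynamodb.update_continuous_backups(
    TableName="app-data",
    PointInTimeRecoverySpecification={"PointInTimeRecoveryEnabled": True},
)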

Questions 59

A company is writing a new survey application to be used with a weekly televised game show. The application will be available for 2 hours each week. The company expects to receive over 500,000 entries every week, with each survey asking 2-3 multiple choice questions of each user. A Database Specialist needs to select a platform that is highly scalable for a large number of concurrent writes to handle the anticipated volume.

Which AWS services should the Database Specialist consider? (Choose two.)

Options:

A.

Amazon DynamoDB

B.

Amazon Redshift

C.

Amazon Neptune

D.

Amazon Elasticsearch Service

E.

Amazon ElastiCache

Questions 60

A company developed a new application that is deployed on Amazon EC2 instances behind an Application Load Balancer. The EC2 instances use the security group named sg-application-servers. The company needs a database to store the data from the application and decides to use an Amazon RDS for MySQL DB instance. The DB instance is deployed in private DB subnet.

What is the MOST restrictive configuration for the DB instance security group?

Options:

A.

Only allow incoming traffic from the sg-application-servers security group on port 3306.

B.

Only allow incoming traffic from the sg-application-servers security group on port 443.

C.

Only allow incoming traffic from the subnet of the application servers on port 3306.

D.

Only allow incoming traffic from the subnet of the application servers on port 443.

Questions 61

A database specialist is designing an enterprise application for a large company. The application uses Amazon DynamoDB with DynamoDB Accelerator (DAX).

The database specialist observes that most of the queries are not found in the DAX cache and that they still require DynamoDB table reads.

What should the database specialist review first to improve the utility of DAX?

Options:

A.

The DynamoDB ConsumedReadCapacityUnits metric

B.

The trust relationship to perform the DynamoDB API calls

C.

The DAX cluster's TTL setting

D.

The validity of customer-specified AWS Key Management Service (AWS KMS) keys for DAX encryption at rest

Questions 62

An information management services company is storing JSON documents on premises. The company is using a MongoDB 3.6 database but wants to migrate to AWS. The solution must be compatible, scalable, and fully managed. The solution also must result in as little downtime as possible during the migration.

Which solution meets these requirements?

Options:

A.

Create an AWS Database Migration Service (AWS DMS) replication instance, a source endpoint for MongoDB, and a target endpoint of Amazon DocumentDB (with MongoDB compatibility).

B.

Create an AWS Database Migration Service (AWS DMS) replication instance, a source endpoint for MongoDB, and a target endpoint of a MongoDB image that is hosted on Amazon EC2

C.

Use the mongodump and mongorestore tools to migrate the data from the source MongoDB deployment to Amazon DocumentDB (with MongoDB compatibility).

D.

Use the mongodump and mongorestore tools to migrate the data from the source MongoDB deployment to a MongoDB image that is hosted on Amazon EC2.

Questions 63

A financial services company is using AWS Database Migration Service (AWS DMS) to migrate its databases from on-premises to AWS. A database administrator is working on replicating a database to AWS from on-premises using full load and change data capture (CDC). During the CDC replication, the database administrator observed that the target latency was high and slowly increasing.

What could be the root causes for this high target latency? (Select TWO.)

Options:

A.

There was ongoing maintenance on the replication instance

B.

The source endpoint was changed by modifying the task

C.

Loopback changes had affected the source and target instances.

D.

There was no primary key or index in the target database.

E.

There were resource bottlenecks in the replication instance

Questions 64

An ecommerce company is running AWS Database Migration Service (AWS DMS) to replicate an on-premises Microsoft SQL Server database to Amazon RDS for SQL Server. The company has set up an AWS Direct Connect connection from its on-premises data center to AWS. During the migration, the company's security team receives an alarm that is related to the migration. The security team mandates that the DMS replication instance must not be accessible from public IP addresses.

What should a database specialist do to meet this requirement?

Options:

A.

Set up a VPN connection to encrypt the traffic over the Direct Connect connection.

B.

Modify the DMS replication instance by disabling the publicly accessible option.

C.

Delete the DMS replication instance. Recreate the DMS replication instance with the publicly accessible option disabled.

D.

Create a new replication VPC subnet group with private subnets. Modify the DMS replication instance by selecting the newly created VPC subnet group.
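
One point worth knowing here: the publicly accessible option for a DMS replication instance is set when the instance is created. A minimal boto3 sketch of creating a replication instance with that option disabled (the identifier, instance class, and subnet group name are hypothetical):

    import boto3

    dms = boto3.client("dms")

    # Create a replication instance with no public IP address.
    dms.create_replication_instance(
        ReplicationInstanceIdentifier="sqlserver-migration-private",
        ReplicationInstanceClass="dms.c5.large",
        AllocatedStorage=100,
        PubliclyAccessible=False,
        ReplicationSubnetGroupIdentifier="private-subnet-group",
    )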

Questions 65

A business is transferring its on-premises database workloads to the Amazon Web Services (AWS) Cloud. A database professional who is migrating an Oracle database with a very large table to Amazon RDS has chosen AWS DMS. The database professional observes that AWS DMS is taking considerable time to migrate the data.

Which activities would increase the pace of data migration? (Select three.)

Options:

A.

Create multiple AWS DMS tasks to migrate the large table.

B.

Configure the AWS DMS replication instance with Multi-AZ.

C.

Increase the capacity of the AWS DMS replication server.

D.

Establish an AWS Direct Connect connection between the on-premises data center and AWS.

E.

Enable an Amazon RDS Multi-AZ configuration.

F.

Enable full large binary object (LOB) mode to migrate all LOB data for all large tables.

Questions 66

A database specialist is responsible for an Amazon RDS for MySQL DB instance with one read replica. The DB instance and the read replica are assigned to the default parameter group. The database team currently runs test queries against a read replica. The database team wants to create additional tables in the read replica that will only be accessible from the read replica to benefit the tests.

What should the database specialist do to allow the database team to create the test tables?

Options:

A.

Contact AWS Support to disable read-only mode on the read replica. Reboot the read replica. Connect to the read replica and create the tables.

B.

Change the read_only parameter to false (read_only=0) in the default parameter group of the read replica. Perform a reboot without failover. Connect to the read replica and create the tables using the local_only MySQL option.

C.

Change the read_only parameter to false (read_only=0) in the default parameter group. Reboot the read replica. Connect to the read replica and create the tables.

D.

Create a new DB parameter group. Change the read_only parameter to false (read_only=0). Associate the read replica with the new group. Reboot the read replica. Connect to the read replica and create the tables.
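
As a sketch of the custom-parameter-group workflow that option D describes (the group name, family, and replica identifier are hypothetical), the boto3 sequence could be:

    import boto3

    rds = boto3.client("rds")

    # Create a custom parameter group for the replica.
    rds.create_db_parameter_group(
        DBParameterGroupName="replica-writable",
        DBParameterGroupFamily="mysql8.0",
        Description="Read replica group with read_only disabled",
    )

    # Disable read_only; the change takes effect at the next reboot.
    rds.modify_db_parameter_group(
        DBParameterGroupName="replica-writable",
        Parameters=[{
            "ParameterName": "read_only",
            "ParameterValue": "0",
            "ApplyMethod": "pending-reboot",
        }],
    )

    # Associate the replica with the new group, then reboot so it takes effect.
    rds.modify_db_instance(
        DBInstanceIdentifier="my-read-replica",
        DBParameterGroupName="replica-writable",
    )
    rds.reboot_db_instance(DBInstanceIdentifier="my-read-replica")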

Questions 67

A database specialist is launching a test graph database using Amazon Neptune for the first time. The database specialist needs to insert millions of rows of test observations from a .csv file that is stored in Amazon S3. The database specialist has been using a series of API calls to upload the data to the Neptune DB instance.

Which combination of steps would allow the database specialist to upload the data faster? (Choose three.)

Options:

A.

Ensure Amazon Cognito returns the proper AWS STS tokens to authenticate the Neptune DB instance to the S3 bucket hosting the CSV file.

B.

Ensure the vertices and edges are specified in different .csv files with proper header column formatting.

C.

Use AWS DMS to move data from Amazon S3 to the Neptune Loader.

D.

Curl the S3 URI while inside the Neptune DB instance and then run the addVertex or addEdge commands.

E.

Ensure an IAM role for the Neptune DB instance is configured with the appropriate permissions to allow access to the file in the S3 bucket.

F.

Create an S3 VPC endpoint and issue an HTTP POST to the database's loader endpoint.
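
For context, the Neptune bulk loader is invoked with an HTTP POST to the cluster's loader endpoint, pointing at the data in Amazon S3 and an IAM role that Neptune can assume. A minimal sketch using the requests library; the endpoint, bucket, and role ARN are hypothetical:

    import requests

    # Hypothetical cluster endpoint; the Neptune bulk loader listens on port 8182.
    loader_url = ("https://my-neptune.cluster-abc123.us-east-1"
                  ".neptune.amazonaws.com:8182/loader")

    payload = {
        "source": "s3://example-bucket/observations/",  # vertices/edges .csv files
        "format": "csv",                                # Gremlin CSV bulk-load format
        "iamRoleArn": "arn:aws:iam::123456789012:role/NeptuneLoadFromS3",
        "region": "us-east-1",
    }

    # Must be called from inside the VPC; Neptune reaches S3 via the S3 VPC endpoint.
    response = requests.post(loader_url, json=payload)
    print(response.json())  # returns a loadId that can be polled for status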

Questions 68

A Database Specialist migrated an existing production MySQL database from on premises to an Amazon RDS for MySQL DB instance. However, after the migration, the database needs to be encrypted at rest using AWS KMS. Due to the size of the database, reloading the data into an encrypted database would be too time-consuming, so it is not an option.

How should the Database Specialist satisfy this new requirement?

Options:

A.

Create a snapshot of the unencrypted RDS DB instance. Create an encrypted copy of the unencrypted snapshot. Restore the encrypted snapshot copy.

B.

Modify the RDS DB instance. Enable the AWS KMS encryption option that leverages the AWS CLI.

C.

Restore an unencrypted snapshot into a MySQL RDS DB instance that is encrypted.

D.

Create an encrypted read replica of the RDS DB instance. Promote it to be the master.
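
As an illustration of the snapshot-copy encryption pattern described in option A (instance and snapshot identifiers are hypothetical, and the KMS alias could also be a customer managed key), a boto3 sketch could be:

    import boto3

    rds = boto3.client("rds")
    waiter = rds.get_waiter("db_snapshot_available")

    # 1. Snapshot the unencrypted instance.
    rds.create_db_snapshot(
        DBInstanceIdentifier="prod-mysql",
        DBSnapshotIdentifier="prod-mysql-plain",
    )
    waiter.wait(DBSnapshotIdentifier="prod-mysql-plain")

    # 2. Copy the snapshot with a KMS key to produce an encrypted snapshot.
    rds.copy_db_snapshot(
        SourceDBSnapshotIdentifier="prod-mysql-plain",
        TargetDBSnapshotIdentifier="prod-mysql-encrypted",
        KmsKeyId="alias/aws/rds",
    )
    waiter.wait(DBSnapshotIdentifier="prod-mysql-encrypted")

    # 3. Restore a new, encrypted DB instance from the encrypted snapshot.
    rds.restore_db_instance_from_db_snapshot(
        DBInstanceIdentifier="prod-mysql-enc",
        DBSnapshotIdentifier="prod-mysql-encrypted",
    )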

Questions 69

A company uses an Amazon RDS for PostgreSQL database in the us-east-2 Region. The company wants to have a copy of the database available in the us-west-2 Region as part of a new disaster recovery strategy.

A database architect needs to create the new database. There can be little to no downtime to the source database. The database architect has decided to use AWS Database Migration Service (AWS DMS) to replicate the database across Regions. The database architect will use full load mode and then will switch to change data capture (CDC) mode.

Which parameters must the database architect configure to support CDC mode for the RDS for PostgreSQL database? (Choose three.)

Options:

A.

Set wal_level = logical.

B.

Set wal_level = replica.

C.

Set max_replication_slots to 1 or more, depending on the number of DMS tasks.

D.

Set max_replication_slots to 0 to support dynamic allocation of slots.

E.

Set wal_sender_timeout to 20,000 milliseconds.

F.

Set wal_sender_timeout to 5,000 milliseconds.
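
One practical note: on Amazon RDS for PostgreSQL, wal_level = logical is not set directly; it is enabled through the static rds.logical_replication parameter. A hedged boto3 sketch of configuring CDC-related parameters in a custom parameter group (the group name and the specific values are hypothetical):

    import boto3

    rds = boto3.client("rds")

    rds.modify_db_parameter_group(
        DBParameterGroupName="pg-cdc-params",
        Parameters=[
            # Static parameter; sets wal_level to logical after a reboot.
            {"ParameterName": "rds.logical_replication", "ParameterValue": "1",
             "ApplyMethod": "pending-reboot"},
            # At least one replication slot per DMS task.
            {"ParameterName": "max_replication_slots", "ParameterValue": "5",
             "ApplyMethod": "pending-reboot"},
            # Dynamic parameter; value shown here only as an example.
            {"ParameterName": "wal_sender_timeout", "ParameterValue": "20000",
             "ApplyMethod": "immediate"},
        ],
    )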

Questions 70

A financial company has allocated an Amazon RDS MariaDB DB instance with large storage capacity to accommodate migration efforts. Post-migration, the company purged unwanted data from the instance. The company now wants to downsize the storage to save money. The solution must have the least impact on production and near-zero downtime.

Which solution would meet these requirements?

Options:

A.

Create a snapshot of the old databases and restore the snapshot with the required storage

B.

Create a new RDS DB instance with the required storage and move the databases from the old instances to the new instance using AWS DMS

C.

Create a new database using native backup and restore

D.

Create a new read replica and make it the primary by terminating the existing primary

Questions 71

A company has a web-based survey application that uses Amazon DynamoDB. During peak usage, when survey responses are being collected, a Database Specialist sees the ProvisionedThroughputExceededException error.

What can the Database Specialist do to resolve this error? (Choose two.)

Options:

A.

Change the table to use Amazon DynamoDB Streams

B.

Purchase DynamoDB reserved capacity in the affected Region

C.

Increase the write capacity units for the specific table

D.

Change the table capacity mode to on-demand

E.

Change the table type to throughput optimized
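
For reference, options C and D each map to a single UpdateTable call. A minimal boto3 sketch; the table name and capacity values are hypothetical, and a real table would use only one of these two modes:

    import boto3

    ddb = boto3.client("dynamodb")

    # Option D's approach: switch the table to on-demand capacity mode.
    ddb.update_table(TableName="SurveyResponses", BillingMode="PAY_PER_REQUEST")

    # Option C's approach (alternative): raise provisioned write capacity instead.
    # ddb.update_table(
    #     TableName="SurveyResponses",
    #     ProvisionedThroughput={"ReadCapacityUnits": 100, "WriteCapacityUnits": 2000},
    # )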

Questions 72

A company has applications running on Amazon EC2 instances in a private subnet with no internet connectivity. The company deployed a new application that uses Amazon DynamoDB, but the application cannot connect to the DynamoDB tables. A developer already checked that all permissions are set correctly.

What should a database specialist do to resolve this issue while minimizing access to external resources?

Options:

A.

Add a route to an internet gateway in the subnet’s route table.

B.

Add a route to a NAT gateway in the subnet’s route table.

C.

Assign a new security group to the EC2 instances with an outbound rule to ports 80 and 443.

D.

Create a VPC endpoint for DynamoDB and add a route to the endpoint in the subnet’s route table.
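
As a sketch of option D (the VPC and route table IDs are hypothetical), DynamoDB is reached through a gateway VPC endpoint whose routes are added to the private subnet's route table:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # DynamoDB uses a gateway endpoint; routes are installed automatically
    # in the route tables listed here.
    ec2.create_vpc_endpoint(
        VpcId="vpc-0123456789abcdef0",
        ServiceName="com.amazonaws.us-east-1.dynamodb",
        VpcEndpointType="Gateway",
        RouteTableIds=["rtb-0123456789abcdef0"],  # route table of the private subnet
    )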

Questions 73

A gaming company is designing a mobile gaming app that will be accessed by many users across the globe. The company wants to have replication and full support for multi-master writes. The company also wants to ensure low latency and consistent performance for app users.

Which solution meets these requirements?

Options:

A.

Use Amazon DynamoDB global tables for storage and enable DynamoDB automatic scaling

B.

Use Amazon Aurora for storage and enable cross-Region Aurora Replicas

C.

Use Amazon Aurora for storage and cache the user content with Amazon ElastiCache

D.

Use Amazon Neptune for storage

Questions 74

A database professional maintains a fleet of Amazon RDS DB instances that are configured to use the default DB parameter group. The database professional must associate a custom parameter group with some of the DB instances.

When will the DB instances be associated with this new parameter group after the database specialist performs this change?

Options:

A.

Instantaneously after the change is made to the parameter group

B.

In the next scheduled maintenance window of the DB instances

C.

After the DB instances are manually rebooted

D.

Within 24 hours after the change is made to the parameter group

Questions 75

A Database Specialist is migrating a 2 TB Amazon RDS for Oracle DB instance to an RDS for PostgreSQL DB instance using AWS DMS. The source RDS for Oracle DB instance is in a VPC in the us-east-1 Region. The target RDS for PostgreSQL DB instance is in a VPC in the us-west-2 Region.

Where should the AWS DMS replication instance be placed for the MOST optimal performance?

Options:

A.

In the same Region and VPC of the source DB instance

B.

In the same Region and VPC as the target DB instance

C.

In the same VPC and Availability Zone as the target DB instance

D.

In the same VPC and Availability Zone as the source DB instance

Questions 76

A large company has a variety of Amazon DB clusters. Each of these clusters has various configurations that adhere to different requirements. Depending on the team and use case, these configurations can be organized into broader categories.

A database administrator wants to make the process of storing and modifying these parameters more systematic. The database administrator also wants to ensure that changes to individual categories of configurations are automatically applied to all instances when required.

Which AWS service or feature will help automate and achieve this objective?

Options:

A.

AWS Systems Manager Parameter Store

B.

DB parameter group

C.

AWS Config

D.

AWS Secrets Manager

Questions 77

A retail company uses Amazon Redshift Spectrum to run complex analytical queries on objects that are stored in an Amazon S3 bucket. The objects are joined with multiple dimension tables that are stored in an Amazon Redshift database. The company uses the database to create monthly and quarterly aggregated reports. Users who attempt to run queries are reporting the following error message:

    error: Spectrum Scan Error: Access throttled

Which solution will resolve this error?

Options:

A.

Check file sizes of fact tables in Amazon S3, and look for large files. Break up large files into smaller files of equal size between 100 MB and 1 GB

B.

Reduce the number of queries that users can run in parallel.

C.

Check file sizes of fact tables in Amazon S3, and look for small files. Merge the small files into larger files of at least 64 MB in size.

D.

Review and optimize queries that submit a large aggregation step to Redshift Spectrum.

Questions 78

A company is closing one of its remote data centers. This site runs a 100 TB on-premises data warehouse solution. The company plans to use the AWS Schema Conversion Tool (AWS SCT) and AWS DMS for the migration to AWS. The site network bandwidth is 500 Mbps. A Database Specialist wants to migrate the on-premises data using Amazon S3 as the data lake and Amazon Redshift as the data warehouse. This move must take place during a 2-week period when source systems are shut down for maintenance. The data should stay encrypted at rest and in transit.

Which approach has the least risk and the highest likelihood of a successful data transfer?

Options:

A.

Set up a VPN tunnel for encrypting data over the network from the data center to AWS. Leverage AWS SCT and apply the converted schema to Amazon Redshift. Once complete, start an AWS DMS task to move the data from the source to Amazon S3. Use AWS Glue to load the data from Amazon S3 to Amazon Redshift.

B.

Leverage AWS SCT and apply the converted schema to Amazon Redshift. Start an AWS DMS task with two AWS Snowball Edge devices to copy data from on-premises to Amazon S3 with AWS KMS encryption. Use AWS DMS to finish copying data to Amazon Redshift.

C.

Leverage AWS SCT and apply the converted schema to Amazon Redshift. Once complete, use a fleet of 10 TB dedicated encrypted drives using the AWS Import/Export feature to copy data from on-premises to Amazon S3 with AWS KMS encryption. Use AWS Glue to load the data to Amazon redshift.

D.

Set up a VPN tunnel for encrypting data over the network from the data center to AWS. Leverage a native database export feature to export the data and compress the files. Use the aws s3 cp command with multipart upload to upload these files to Amazon S3 with AWS KMS encryption. Once complete, load the data to Amazon Redshift using AWS Glue.

Questions 79

A company hosts a 2 TB Oracle database in its on-premises data center. A database specialist is migrating the database from on premises to an Amazon Aurora PostgreSQL database on AWS.

The database specialist identifies a problem that relates to compatibility: Oracle stores metadata in its data dictionary in uppercase, but PostgreSQL stores the metadata in lowercase. The database specialist must resolve this problem to complete the migration.

What is the MOST operationally efficient solution that meets these requirements?

Options:

A.

Override the default uppercase format of Oracle schema by encasing object names in quotation marks during creation.

B.

Use AWS Database Migration Service (AWS DMS) mapping rules with rule-action as convert-lowercase.

C.

Use the AWS Schema Conversion Tool conversion agent to convert the metadata from uppercase to lowercase.

D.

Use an AWS Glue job that is attached to an AWS Database Migration Service (AWS DMS) replication task to convert the metadata from uppercase to lowercase.
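 
For reference, a DMS table-mapping document containing convert-lowercase transformation rules, as referenced in option B, has roughly the shape below. This is a sketch; the object locators simply match every schema and table, and the document would be passed as the TableMappings argument when creating the replication task:

    import json

    table_mappings = {
        "rules": [
            {
                "rule-type": "transformation",
                "rule-id": "1",
                "rule-name": "lowercase-schemas",
                "rule-target": "schema",
                "object-locator": {"schema-name": "%"},
                "rule-action": "convert-lowercase",
            },
            {
                "rule-type": "transformation",
                "rule-id": "2",
                "rule-name": "lowercase-tables",
                "rule-target": "table",
                "object-locator": {"schema-name": "%", "table-name": "%"},
                "rule-action": "convert-lowercase",
            },
            {
                "rule-type": "selection",
                "rule-id": "3",
                "rule-name": "select-all",
                "object-locator": {"schema-name": "%", "table-name": "%"},
                "rule-action": "include",
            },
        ]
    }
    print(json.dumps(table_mappings))  # pass as TableMappings to create_replication_task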

Questions 80

A company has an on-premises Oracle Real Application Clusters (RAC) database. The company wants to migrate the database to AWS and reduce licensing costs. The company's application team wants to store JSON payloads that expire after 28 hours. The company has development capacity if code changes are required.

Which solution meets these requirements?

Options:

A.

Use Amazon DynamoDB and leverage the Time to Live (TTL) feature to automatically expire the data.

B.

Use Amazon RDS for Oracle with Multi-AZ. Create an AWS Lambda function to purge the expired data. Schedule the Lambda function to run daily using Amazon EventBridge.

C.

Use Amazon DocumentDB with a read replica in a different Availability Zone. Use DocumentDB change streams to expire the data.

D.

Use Amazon Aurora PostgreSQL with Multi-AZ and leverage the Time to Live (TTL) feature to automatically expire the data.
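
As an illustration of option A's TTL mechanics (the table and attribute names are hypothetical), TTL is enabled once on the table, and each item then carries an epoch-seconds expiry attribute:

    import time
    import boto3

    ddb = boto3.client("dynamodb")

    # Enable TTL on the table, keyed to the expires_at attribute.
    ddb.update_time_to_live(
        TableName="Payloads",
        TimeToLiveSpecification={"Enabled": True, "AttributeName": "expires_at"},
    )

    # Write an item that DynamoDB will delete roughly 28 hours from now.
    ddb.put_item(
        TableName="Payloads",
        Item={
            "payload_id": {"S": "abc-123"},
            "body": {"S": '{"example": "json payload"}'},
            "expires_at": {"N": str(int(time.time()) + 28 * 3600)},
        },
    )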

Questions 81

A company runs online transaction processing (OLTP) workloads on an Amazon RDS for PostgreSQL Multi-AZ DB instance. Tests were run on the database after work hours, which generated additional database logs. The free storage of the RDS DB instance is low due to these additional logs.

What should the company do to address this space constraint issue?

Options:

A.

Log in to the host and run the rm $PGDATA/pg_logs/* command

B.

Modify the rds.log_retention_period parameter to 1440 and wait up to 24 hours for database logs to be deleted

C.

Create a ticket with AWS Support to have the logs deleted

D.

Run the SELECT rds_rotate_error_log() stored procedure to rotate the logs
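
For context, rds.log_retention_period is expressed in minutes, so the value 1440 in option B keeps logs for 24 hours. A boto3 sketch against a hypothetical custom parameter group:

    import boto3

    rds = boto3.client("rds")

    # Shorten log retention to 24 hours (1440 minutes).
    rds.modify_db_parameter_group(
        DBParameterGroupName="pg-oltp-params",
        Parameters=[{
            "ParameterName": "rds.log_retention_period",
            "ParameterValue": "1440",
            "ApplyMethod": "immediate",
        }],
    )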

Questions 82

A manufacturing company stores its inventory details in an Amazon DynamoDB table in the us-east-2 Region. According to new compliance and regulatory policies, the company is required to back up all of its tables nightly and store these backups in the us-west-2 Region for 1 year for disaster recovery.

Which solution MOST cost-effectively meets these requirements?

Options:

A.

Convert the existing DynamoDB table into a global table and create a global table replica in the us-west-2 Region.

B.

Use AWS Backup to create a backup plan. Configure cross-Region replication in the plan and assign the DynamoDB table to this plan

C.

Create an on-demand backup of the DynamoDB table and restore this backup in the us-west-2 Region.

D.

Enable Amazon S3 Cross-Region Replication (CRR) on the S3 bucket where DynamoDB on-demand backups are stored.
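
As a sketch of option B's shape (the vault ARN, account ID, and schedule are hypothetical), an AWS Backup plan can pair a nightly rule with a cross-Region copy action and a one-year lifecycle:

    import boto3

    backup = boto3.client("backup", region_name="us-east-2")

    backup.create_backup_plan(BackupPlan={
        "BackupPlanName": "ddb-nightly-dr",
        "Rules": [{
            "RuleName": "nightly",
            "TargetBackupVaultName": "Default",
            "ScheduleExpression": "cron(0 5 * * ? *)",  # once per night
            "Lifecycle": {"DeleteAfterDays": 365},
            "CopyActions": [{
                "DestinationBackupVaultArn":
                    "arn:aws:backup:us-west-2:123456789012:backup-vault:Default",
                "Lifecycle": {"DeleteAfterDays": 365},
            }],
        }],
    })
    # A backup selection (create_backup_selection) would then assign the
    # DynamoDB table's ARN to this plan.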

Questions 83

A company is migrating a mission-critical 2-TB Oracle database from on premises to Amazon Aurora. The cost for the database migration must be kept to a minimum, and both the on-premises Oracle database and the Aurora DB cluster must remain open for write traffic until the company is ready to completely cut over to Aurora.

Which combination of actions should a database specialist take to accomplish this migration as quickly as possible? (Choose two.)

Options:

A.

Use the AWS Schema Conversion Tool (AWS SCT) to convert the source database schema. Then restore the converted schema to the target Aurora DB cluster.

B.

Use Oracle’s Data Pump tool to export a copy of the source database schema and manually edit the schema in a text editor to make it compatible with Aurora.

C.

Create an AWS DMS task to migrate data from the Oracle database to the Aurora DB cluster. Select the migration type to replicate ongoing changes to keep the source and target databases in sync until the company is ready to move all user traffic to the Aurora DB cluster.

D.

Create an AWS DMS task to migrate data from the Oracle database to the Aurora DB cluster. Once the initial load is complete, create an AWS Kinesis Data Firehose stream to perform change data capture (CDC) until the company is ready to move all user traffic to the Aurora DB cluster.

E.

Create an AWS Glue job and related resources to migrate data from the Oracle database to the Aurora DB cluster. Once the initial load is complete, create an AWS DMS task to perform change data capture (CDC) until the company is ready to move all user traffic to the Aurora DB cluster.

Questions 84

A company is using AWS CloudFormation to provision and manage infrastructure resources, including a production database. During a recent CloudFormation stack update, a database specialist observed that changes were made to a database resource that is named ProductionDatabase. The company wants to prevent changes to only ProductionDatabase during future stack updates.

Which stack policy will meet this requirement?

Options:

(The four answer choices, A through D, are stack policy JSON documents that appear only as images in the original exam; the policy text is not recoverable here.)
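
For orientation, a stack policy that blocks updates to a single resource generally has the shape below. This is a sketch, not one of the original answer choices:

    import json

    # Deny all Update actions for the resource with logical ID
    # ProductionDatabase while allowing updates to everything else.
    stack_policy = {
        "Statement": [
            {"Effect": "Deny", "Principal": "*", "Action": "Update:*",
             "Resource": "LogicalResourceId/ProductionDatabase"},
            {"Effect": "Allow", "Principal": "*", "Action": "Update:*",
             "Resource": "*"},
        ]
    }
    print(json.dumps(stack_policy, indent=2))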

Questions 85

A database professional is setting up a test graph database on Amazon Neptune for the first time. The database professional must load millions of rows of test observations from a .csv file that is stored in Amazon S3. The database professional has been uploading the data to the Neptune DB instance through a series of API calls.

Which sequence of actions enables the database professional to upload the data most quickly? (Select three.)

Options:

A.

Ensure Amazon Cognito returns the proper AWS STS tokens to authenticate the Neptune DB instance to the S3 bucket hosting the CSV file.

B.

Ensure the vertices and edges are specified in different .csv files with proper header column formatting.

C.

Use AWS DMS to move data from Amazon S3 to the Neptune Loader.

D.

Curl the S3 URI while inside the Neptune DB instance and then run the addVertex or addEdge commands.

E.

Ensure an IAM role for the Neptune DB instance is configured with the appropriate permissions to allow access to the file in the S3 bucket.

F.

Create an S3 VPC endpoint and issue an HTTP POST to the database's loader endpoint.

Questions 86

An ecommerce business is using Amazon Aurora MySQL for the migration of its main application database. The firm is now performing OLTP stress testing with concurrent database connections. During the first round of testing, a database professional detected slow performance for several specific write operations.

Examining the Amazon CloudWatch metrics for the Aurora DB cluster revealed a CPU utilization of 90%.

Which actions should the database professional take to most effectively determine the root cause of the high CPU usage and slow performance? (Select two.)

Options:

A.

Enable Enhanced Monitoring at less than 30 seconds of granularity to review the operating system metrics before the next round of tests.

B.

Review the VolumeBytesUsed metric in CloudWatch to see if there is a spike in write I/O.

C.

Review Amazon RDS Performance Insights to identify the top SQL statements and wait events.

D.

Review Amazon RDS API calls in AWS CloudTrail to identify long-running queries.

E.

Enable Advanced Auditing to log QUERY events in Amazon CloudWatch before the next round of tests.

Questions 87

An online shopping company has a large inflow of shopping requests daily. As a result, there is a consistent load on the company’s Amazon RDS database. A database specialist needs to ensure the database is up and running at all times. The database specialist wants an automatic notification system for issues that may cause database downtime or for configuration changes made to the database.

What should the database specialist do to achieve this? (Choose two.)

Options:

A.

Create an Amazon CloudWatch Events event to send a notification using Amazon SNS on every API call logged in AWS CloudTrail.

B.

Subscribe to an RDS event subscription and configure it to use an Amazon SNS topic to send notifications.

C.

Use Amazon SES to send notifications based on configured Amazon CloudWatch Events events.

D.

Configure Amazon CloudWatch alarms on various metrics, such as FreeStorageSpace for the RDS instance.

E.

Enable email notifications for AWS Trusted Advisor.
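
For reference, options B and D each correspond to a single API call. A hedged boto3 sketch; the SNS topic ARN, instance identifier, and alarm threshold are hypothetical:

    import boto3

    # RDS event subscription for failure, failover, and configuration changes.
    rds = boto3.client("rds")
    rds.create_event_subscription(
        SubscriptionName="rds-critical-events",
        SnsTopicArn="arn:aws:sns:us-east-1:123456789012:rds-alerts",
        SourceType="db-instance",
        EventCategories=["failure", "failover", "configuration change"],
        Enabled=True,
    )

    # CloudWatch alarm on free storage space.
    cloudwatch = boto3.client("cloudwatch")
    cloudwatch.put_metric_alarm(
        AlarmName="rds-free-storage-low",
        Namespace="AWS/RDS",
        MetricName="FreeStorageSpace",
        Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "prod-db"}],
        Statistic="Average",
        Period=300,
        EvaluationPeriods=1,
        Threshold=10 * 1024 ** 3,  # 10 GiB, expressed in bytes
        ComparisonOperator="LessThanThreshold",
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:rds-alerts"],
    )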

Questions 88

An online retailer uses Amazon DynamoDB for its product catalog and order data. Some popular items have led to frequently accessed keys in the data, and the company is using DynamoDB Accelerator (DAX) as the caching solution to cater to the frequently accessed keys. As the number of popular products is growing, the company realizes that more items need to be cached. The company observes a high cache miss rate and needs a solution to address this issue.

What should a database specialist do to accommodate the changing requirements for DAX?

Options:

A.

Increase the number of nodes in the existing DAX cluster.

B.

Create a new DAX cluster with more nodes. Change the DAX endpoint in the application to point to the new cluster.

C.

Create a new DAX cluster using a larger node type. Change the DAX endpoint in the application to point to the new cluster.

D.

Modify the node type in the existing DAX cluster.

Questions 89

A team of Database Specialists is currently investigating performance issues on an Amazon RDS for MySQL DB instance and is reviewing related metrics. The team wants to narrow the possibilities down to specific database wait events to better understand the situation.

How can the Database Specialists accomplish this?

Options:

A.

Enable the option to push all database logs to Amazon CloudWatch for advanced analysis

B.

Create appropriate Amazon CloudWatch dashboards to contain specific periods of time

C.

Enable Amazon RDS Performance Insights and review the appropriate dashboard

D.

Enable Enhanced Monitoring with the appropriate settings

Questions 90

A company just migrated to Amazon Aurora PostgreSQL from an on-premises Oracle database. After the migration, the company discovered that there is a period of time every day around 3:00 PM when the response time of the application is noticeably slower. The company has narrowed down the cause of this issue to the database and not the application.

Which set of steps should the Database Specialist take to most efficiently find the problematic PostgreSQL query?

Options:

A.

Create an Amazon CloudWatch dashboard to show the number of connections, CPU usage, and disk space consumption. Watch these dashboards during the next slow period.

B.

Launch an Amazon EC2 instance, and install and configure an open-source PostgreSQL monitoring tool that will run reports based on the output error logs.

C.

Modify the logging database parameter to log all the queries related to locking in the database and then check the logs after the next slow period for this information.

D.

Enable Amazon RDS Performance Insights on the PostgreSQL database. Use the metrics to identify any queries that are related to spikes in the graph during the next slow period.

Questions 91

An Amazon RDS EBS-optimized instance with Provisioned IOPS (PIOPS) storage is using less than half of its allocated IOPS over the course of several hours under constant load. The RDS instance exhibits multi-second read and write latency, and uses all of its maximum bandwidth for read throughput, yet the instance uses less than half of its CPU and RAM resources.

What should a Database Specialist do in this situation to increase performance and return latency to sub-second levels?

Options:

A.

Increase the size of the DB instance storage

B.

Change the underlying EBS storage type to General Purpose SSD (gp2)

C.

Disable EBS optimization on the DB instance

D.

Change the DB instance to an instance class with a higher maximum bandwidth

Questions 92

A stock market analysis firm maintains two locations: one in the us-east-1 Region and another in the eu-west-2 Region. The business wants to build an AWS database solution capable of providing rapid and accurate updates.

Dashboards with advanced analytical queries are used to present data in the eu-west-2 office. Because the corporation will use these dashboards to make purchasing decisions, the dashboards must obtain application data in less than a second.

Which solution satisfies these criteria and provides the MOST CURRENT dashboard?

Options:

A.

Deploy an Amazon RDS DB instance in us-east-1 with a read replica instance in eu-west-2. Create an Amazon ElastiCache cluster in eu-west-2 to cache data from the read replica to generate the dashboards.

B.

Use an Amazon DynamoDB global table in us-east-1 with replication into eu-west-2. Use multi-active replication to ensure that updates are quickly propagated to eu-west-2.

C.

Use an Amazon Aurora global database. Deploy the primary DB cluster in us-east-1. Deploy the secondary DB cluster in eu-west-2. Configure the dashboard application to read from the secondary cluster.

D.

Deploy an Amazon RDS for MySQL DB instance in us-east-1 with a read replica instance in eu-west-2. Configure the dashboard application to read from the read replica.

Questions 93

A financial company is running an Amazon Redshift cluster for one of its data warehouse solutions. The company needs to generate connection logs, user logs, and user activity logs. The company also must make these logs available for future analysis.

Which combination of steps should a database specialist take to meet these requirements? (Choose two.)

Options:

A.

Edit the database configuration of the cluster by enabling audit logging. Direct the logging to a specified log group in Amazon CloudWatch Logs.

B.

Edit the database configuration of the cluster by enabling audit logging. Direct the logging to a specified Amazon S3 bucket

C.

Modify the cluster by enabling continuous delivery of AWS CloudTrail logs to Amazon S3.

D.

Create a new parameter group with the enable_user_activity_logging parameter set to true. Configure the cluster to use the new parameter group.

E.

Modify the system table to enable logging for each user.
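
As an illustration of how options B and D are carried out (the cluster, bucket, and parameter group names are hypothetical; the parameter group must also be associated with the cluster, which then needs a reboot):

    import boto3

    redshift = boto3.client("redshift")

    # Deliver connection logs, user logs, and user activity logs to S3.
    redshift.enable_logging(
        ClusterIdentifier="dw-cluster",
        BucketName="redshift-audit-logs-example",
        S3KeyPrefix="audit/",
    )

    # User activity logging additionally requires this parameter to be true.
    redshift.modify_cluster_parameter_group(
        ParameterGroupName="dw-custom-params",
        Parameters=[{
            "ParameterName": "enable_user_activity_logging",
            "ParameterValue": "true",
        }],
    )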

Questions 94

A worldwide gaming company's development team is experimenting with Amazon DynamoDB to store in-game events for three mobile titles. The most popular game has a maximum of 500,000 concurrent users, while the least popular game has 10,000. The typical event is 20 KB in size, and the average user session generates one event each second. Each event is assigned a millisecond timestamp and a globally unique identifier.

The lead developer created a single DynamoDB table with the following structure for the events:

  • Partition key: game name
  • Sort key: event identifier
  • Local secondary index: player identifier
  • Event time

In a small-scale development environment, the tests were successful. When the application was deployed to production, however, new events were not being added to the table, and the logs showed DynamoDB failures with the ItemCollectionSizeLimitExceededException error code.

Which design modification should a database professional recommend to the development team?

Options:

A.

Use the player identifier as the partition key. Use the event time as the sort key. Add a global secondary index with the game name as the partition key and the event time as the sort key.

B.

Create two tables. Use the game name as the partition key in both tables. Use the event time as the sort key for the first table. Use the player identifier as the sort key for the second table.

C.

Replace the sort key with a compound value consisting of the player identifier collated with the event time, separated by a dash. Add a local secondary index with the player identifier as the sort key.

D.

Create one table for each game. Use the player identifier as the partition key. Use the event time as the sort key.
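
As an illustration of the shape of option A's design (all table, attribute, and index names are hypothetical), the table definition could look roughly like this:

    import boto3

    ddb = boto3.client("dynamodb")

    # Player identifier as the partition key, event time as the sort key,
    # plus a GSI keyed by game name and event time for per-game queries.
    ddb.create_table(
        TableName="GameEvents",
        AttributeDefinitions=[
            {"AttributeName": "player_id", "AttributeType": "S"},
            {"AttributeName": "event_time", "AttributeType": "N"},
            {"AttributeName": "game_name", "AttributeType": "S"},
        ],
        KeySchema=[
            {"AttributeName": "player_id", "KeyType": "HASH"},
            {"AttributeName": "event_time", "KeyType": "RANGE"},
        ],
        GlobalSecondaryIndexes=[{
            "IndexName": "game-events",
            "KeySchema": [
                {"AttributeName": "game_name", "KeyType": "HASH"},
                {"AttributeName": "event_time", "KeyType": "RANGE"},
            ],
            "Projection": {"ProjectionType": "ALL"},
        }],
        BillingMode="PAY_PER_REQUEST",
    )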

Questions 95

A company is running an Amazon RDS for MySQL Multi-AZ DB instance for a business-critical workload. RDS encryption for the DB instance is disabled. A recent security audit concluded that all business-critical applications must encrypt data at rest. The company has asked its database specialist to formulate a plan to accomplish this for the DB instance.

Which process should the database specialist recommend?

Options:

A.

Create an encrypted snapshot of the unencrypted DB instance. Copy the encrypted snapshot to Amazon S3. Restore the DB instance from the encrypted snapshot using Amazon S3.

B.

Create a new RDS for MySQL DB instance with encryption enabled. Restore the unencrypted snapshot to this DB instance.

C.

Create a snapshot of the unencrypted DB instance. Create an encrypted copy of the snapshot. Restore the DB instance from the encrypted snapshot.

D.

Temporarily shut down the unencrypted DB instance. Enable AWS KMS encryption in the AWS Management Console using an AWS managed CMK. Restart the DB instance in an encrypted state.

Questions 96

A gaming company wants to deploy a game in multiple Regions. The company plans to save local high scores in Amazon DynamoDB tables in each Region. A Database Specialist needs to design a solution to automate the deployment of the database with identical configurations in additional Regions, as needed. The solution should also automate configuration changes across all Regions.

Which solution would meet these requirements and deploy the DynamoDB tables?

Options:

A.

Create an AWS CLI command to deploy the DynamoDB table to all the Regions and save it for future deployments.

B.

Create an AWS CloudFormation template and deploy the template to all the Regions.

C.

Create an AWS CloudFormation template and use a stack set to deploy the template to all the Regions.

D.

Create DynamoDB tables using the AWS Management Console in all the Regions and create a step-by- step guide for future deployments.
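
For reference, option C's StackSets flow can be sketched in boto3 as follows; the stack set name, template URL, account ID, and Region list are hypothetical:

    import boto3

    cfn = boto3.client("cloudformation")

    # Define the stack set from a template that declares the DynamoDB table.
    cfn.create_stack_set(
        StackSetName="high-scores-tables",
        TemplateURL="https://s3.amazonaws.com/example-bucket/dynamodb-table.yaml",
    )

    # Deploy identical stacks into every target Region; rerunning with more
    # Regions (or updating the stack set) propagates configuration changes.
    cfn.create_stack_instances(
        StackSetName="high-scores-tables",
        Accounts=["123456789012"],
        Regions=["us-east-1", "eu-west-1", "ap-southeast-1"],
    )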

Exam Code: DBS-C01
Exam Name: AWS Certified Database - Specialty
Last Update: Dec 21, 2024
Questions: 324