Which of the following features, associated with Continuous Data Protection (CDP), require additional Snowflake-provided data storage? (Choose two.)
Tri-Secret Secure
Time Travel
Fail-safe
Data encryption
External stages
The features associated with Continuous Data Protection (CDP) that require additional Snowflake-provided data storage are Time Travel and Fail-safe. Time Travel allows users to access historical data within a defined retention period, while Fail-safe provides an additional seven days of data protection beyond the Time Travel period. Both retain historical micro-partitions and therefore consume additional Snowflake-managed storage. References: [COF-C02] SnowPro Core Certification Exam Study Guide
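As an illustrative sketch of Time Travel in action (the table name and statement ID are hypothetical placeholders), historical data can be queried or cloned within the retention period:

```sql
-- Query the table as it existed five minutes ago
SELECT * FROM orders AT(OFFSET => -60 * 5);

-- Clone the table as it was just before a specific statement ran
-- (the statement ID below is a hypothetical placeholder)
CREATE TABLE orders_restored CLONE orders
  BEFORE(STATEMENT => '01a2b3c4-0000-0000-0000-000000000000');
```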
Assume there is a table consisting of five micro-partitions with values ranging from A to Z.
Which diagram indicates a well-clustered table?
A well-clustered table in Snowflake is one whose micro-partitions have minimal overlap in their value ranges, so related values are co-located within the same micro-partitions. This optimizes query performance because the optimizer can prune micro-partitions that cannot contain matching rows, reducing the amount of data scanned. The diagram in option C indicates a well-clustered table, as it shows narrow, largely non-overlapping value ranges across the five micro-partitions.
References: Snowflake Micro-partitions & Table Clustering
Which statement is true about running tasks in Snowflake?
A task can be called using a CALL statement to run a set of predefined SQL commands.
A task allows a user to execute a single SQL statement/command using a predefined schedule.
A task allows a user to execute a set of SQL commands on a predefined schedule.
A task can be executed using a SELECT statement to run a predefined SQL command.
In Snowflake, a task allows a user to execute a single SQL statement/command using a predefined schedule (B). A task can run only one statement, although that statement may be a call to a stored procedure that contains multiple commands. Tasks are used to automate the execution of SQL at scheduled intervals.
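For example, a task that runs a single DELETE statement on a nightly schedule might look like this (the warehouse and table names are illustrative):

```sql
-- A task executes exactly one SQL statement on its schedule
CREATE TASK nightly_cleanup
  WAREHOUSE = my_wh
  SCHEDULE  = 'USING CRON 0 2 * * * UTC'
AS
  DELETE FROM staging_events
  WHERE load_ts < DATEADD(day, -7, CURRENT_TIMESTAMP());

-- Tasks are created suspended; resume to start the schedule
ALTER TASK nightly_cleanup RESUME;
```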
Which command should be used to download files from a Snowflake stage to a local folder on a client's machine?
PUT
GET
COPY
SELECT
The GET command is used to download files from a Snowflake stage to a local folder on a client’s machine.
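A minimal sketch (the stage name and paths are hypothetical); note that GET must be run from a client such as SnowSQL, since it is not supported in Snowsight worksheets:

```sql
-- Download all files under the stage path to a local directory
GET @my_stage/results/ file:///tmp/downloads/;
```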
What is the default file size when unloading data from Snowflake using the COPY command?
5 MB
8 GB
16 MB
32 MB
The default file size when unloading data from Snowflake using the COPY INTO <location> command is 16 MB, governed by the MAX_FILE_SIZE copy option, whose default value is 16,777,216 bytes. A different target size can be specified by setting MAX_FILE_SIZE explicitly.
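To illustrate, the 16 MB default can be overridden with the MAX_FILE_SIZE copy option when unloading (the stage and table names are hypothetical):

```sql
-- Unload to a stage, targeting ~100 MB files instead of the 16 MB default
COPY INTO @my_stage/unload/
  FROM my_table
  FILE_FORMAT = (TYPE = CSV COMPRESSION = GZIP)
  MAX_FILE_SIZE = 104857600;  -- value is in bytes
```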
Which Snowflake feature allows a user to substitute a randomly generated identifier for sensitive data, in order to prevent unauthorized users access to the data, before loading it into Snowflake?
External Tokenization
External Tables
Materialized Views
User-Defined Table Functions (UDTF)
The feature in Snowflake that allows a user to substitute a randomly generated identifier for sensitive data before loading it into Snowflake is known as External Tokenization. This process helps to secure sensitive data by ensuring that it is not exposed in its original form, thus preventing unauthorized access.
Which of the following objects can be directly restored using the UNDROP command? (Choose two.)
Schema
View
Internal stage
Table
User
Role
The UNDROP command in Snowflake can be used to directly restore Schemas and Tables (as well as databases). A dropped object is retained for its Time Travel data retention period, during which UNDROP can restore it; once that period expires, the object moves to Fail-safe and can no longer be restored by the user. Views, internal stages, users, and roles cannot be restored with UNDROP. References: [COF-C02] SnowPro Core Certification Exam Study Guide
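A quick sketch with hypothetical object names:

```sql
DROP TABLE orders;
UNDROP TABLE orders;    -- restores the most recently dropped version

DROP SCHEMA sales;
UNDROP SCHEMA sales;    -- works while the object is still within Time Travel
```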
A user created a new worksheet within the Snowsight UI and wants to share this with teammates.
How can this worksheet be shared?
Create a zero-copy clone of the worksheet and grant permissions to teammates
Create a private Data Exchange so that any teammate can use the worksheet
Share the worksheet with teammates within Snowsight
Create a database and grant all permissions to teammates
Worksheets in Snowsight can be shared directly with other Snowflake users within the same account. This feature allows for collaboration and sharing of SQL queries or Python code, as well as other data manipulation tasks.
Which of the following accurately describes shares?
Tables, secure views, and secure UDFs can be shared
Shares can be shared
Data consumers can clone a new table from a share
Access to a share cannot be revoked once granted
Shares in Snowflake are named objects that encapsulate all the information required to share databases, schemas, tables, secure views, and secure UDFs. Objects are added to a share by granting privileges on them to the share, either directly or via a database role.
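A provider-side sketch of building a share (the database, schema, table, and account names are hypothetical):

```sql
CREATE SHARE sales_share;

-- Grant privileges on the objects to be shared
GRANT USAGE  ON DATABASE sales_db               TO SHARE sales_share;
GRANT USAGE  ON SCHEMA   sales_db.public        TO SHARE sales_share;
GRANT SELECT ON TABLE    sales_db.public.orders TO SHARE sales_share;

-- Make the share visible to a consumer account
ALTER SHARE sales_share ADD ACCOUNTS = my_org.consumer_account;
```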
The Snowflake Search Optimization Service supports improved performance of which kind of query?
Queries against large tables where frequent DML occurs
Queries against tables larger than 1 TB
Selective point lookup queries
Queries against a subset of columns in a table
The Snowflake Search Optimization Service is designed to support improved performance for selective point lookup queries. These are queries that retrieve specific records from a database, often based on a unique identifier or a small set of criteria.
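For example (table and column names are hypothetical), enabling the service and the kind of point lookup it accelerates:

```sql
-- Enable search optimization on the table
ALTER TABLE events ADD SEARCH OPTIMIZATION;

-- A selective point lookup that can benefit from the service
SELECT * FROM events WHERE event_id = 'a1b2c3d4';
```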
Which tasks are performed in the Snowflake Cloud Services layer? (Choose two.)
Management of metadata
Computing the data
Maintaining Availability Zones
Infrastructure security
Parsing and optimizing queries
The Snowflake Cloud Services layer performs a variety of tasks, including the management of metadata and the parsing and optimization of queries. This layer is responsible for coordinating activities across Snowflake, including user session management, security, and query compilation.
Which of the following features are available with the Snowflake Enterprise edition? (Choose two.)
Database replication and failover
Automated index management
Customer managed keys (Tri-secret secure)
Extended time travel
Native support for geospatial data
The Snowflake Enterprise edition includes database replication and failover for business continuity and disaster recovery, as well as extended Time Travel capabilities for longer data retention periods.
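As a sketch of extended Time Travel (the table name is hypothetical), Enterprise edition allows the retention period to be raised beyond the 1-day default, up to 90 days:

```sql
-- Values above 1 require Enterprise edition or higher
ALTER TABLE orders SET DATA_RETENTION_TIME_IN_DAYS = 90;
```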
Why would a Snowflake user load JSON data into a VARIANT column instead of a string column?
A VARIANT column is more secure than a string column
A VARIANT column compresses data and a string column does not.
A variant column can be used to create a data hierarchy and a string column cannot
A VARIANT column will have a better query performance than a string column.
A VARIANT column in Snowflake is specifically designed to store semi-structured data, such as JSON, and allows for the creation of a data hierarchy. Unlike string columns, VARIANT columns can natively handle JSON data structures, enabling complex querying and manipulation of hierarchical data using functions designed for semi-structured data.
References:
Snowflake Documentation: VARIANT Data
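A brief sketch of loading JSON into a VARIANT column and traversing the hierarchy (the table and field names are illustrative):

```sql
CREATE TABLE raw_json (v VARIANT);

-- PARSE_JSON turns a JSON string into a VARIANT value
INSERT INTO raw_json
  SELECT PARSE_JSON('{"customer": {"name": "Ada", "orders": [10, 20]}}');

-- Path notation traverses the hierarchy; :: casts to a SQL type
SELECT v:customer.name::STRING   AS customer_name,
       v:customer.orders[0]::INT AS first_order
FROM raw_json;
```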
True or False: Snowpipe via REST API can only reference External Stages as source.
True
False
Snowpipe via REST API can reference both named internal stages within Snowflake and external stages, such as Amazon S3, Google Cloud Storage, or Microsoft Azure. This means that Snowpipe is not limited to only external stages as a source for data loading.
References: [COF-C02] SnowPro Core Certification Exam Study Guide, Snowflake Documentation
When loading data into Snowflake via Snowpipe, what is the compressed file size recommendation?
10-50 MB
100-250 MB
300-500 MB
1000-1500 MB
For loading data into Snowflake via Snowpipe, the recommended compressed file size is between 100-250 MB. This size range is optimal for balancing the performance of parallel processing and minimizing the overhead associated with handling many small files.
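For context, a minimal pipe definition (the stage, table, and pipe names are hypothetical); the 100-250 MB guidance applies to the compressed files landing in the stage:

```sql
CREATE PIPE events_pipe AUTO_INGEST = TRUE AS
  COPY INTO raw_events
  FROM @my_ext_stage/events/
  FILE_FORMAT = (TYPE = JSON);
```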
Which Snowflake objects will incur both storage and cloud compute charges? (Select TWO)
Materialized view
Sequence
Secure view
Transient table
Clustered table
In Snowflake, materialized views and clustered tables incur both storage charges and cloud compute charges. Both store data, and both are maintained automatically in the background using Snowflake-managed compute: materialized views are kept in sync with their base tables, and clustered tables are periodically reclustered. References: [COF-C02] SnowPro Core Certification Exam Study Guide
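For illustration (names hypothetical), these statements create objects that Snowflake maintains automatically with background compute, in addition to their storage:

```sql
-- Reclustering of a clustered table runs automatically in the background
ALTER TABLE big_table CLUSTER BY (event_date);

-- A materialized view is kept in sync with its base table automatically
CREATE MATERIALIZED VIEW daily_totals AS
  SELECT event_date, COUNT(*) AS event_count
  FROM big_table
  GROUP BY event_date;
```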
In a Snowflake role hierarchy, what is the top-level role?
SYSADMIN
ORGADMIN
ACCOUNTADMIN
SECURITYADMIN
In a Snowflake role hierarchy, the top-level role is ACCOUNTADMIN. This role has the highest level of privileges and is capable of performing all administrative functions within the Snowflake account
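As a sketch of the conventional hierarchy (the custom role name is hypothetical), custom roles are typically granted to SYSADMIN so that all privileges ultimately roll up to ACCOUNTADMIN:

```sql
CREATE ROLE analyst;
GRANT ROLE analyst TO ROLE sysadmin;  -- SYSADMIN is itself granted to ACCOUNTADMIN
```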
How does Snowflake define its approach to Discretionary Access Control (DAC)?
A defined level of access to an object
An entity in which access can be granted
Each object has an owner, who can in turn grant access to that object.
Access privileges are assigned to roles, which are in turn assigned to users.
Snowflake defines Discretionary Access Control (DAC) as a model in which each securable object has an owner: the role that created the object. The owning role can in turn grant access on that object to other roles (answer C). Snowflake combines DAC with Role-Based Access Control (RBAC), in which access privileges are assigned to roles, which are then assigned to users. References: Snowflake Documentation on Access Control
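A brief sketch of DAC in practice (the object and role names are hypothetical): the owning role grants access, and ownership itself can be transferred:

```sql
-- The role that owns the table can grant access on it
GRANT SELECT ON TABLE sales_db.public.orders TO ROLE analyst;

-- Ownership can be transferred to another role
GRANT OWNERSHIP ON TABLE sales_db.public.orders TO ROLE data_eng
  COPY CURRENT GRANTS;
```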
How are serverless features billed?
Per second multiplied by an automatic sizing for the job
Per minute multiplied by an automatic sizing for the job, with a minimum of one minute
Per second multiplied by the size, as determined by the SERVERLESS_FEATURES_SIZE account parameter
Serverless features are not billed, unless the total cost for the month exceeds 10% of the warehouse credits, on the account
Serverless features in Snowflake are billed per second of compute usage, multiplied by an automatic sizing that Snowflake determines for the job (answer A). Unlike virtual warehouses, which incur a 60-second minimum each time they start, serverless compute is charged only for the seconds actually consumed.
What step does Snowflake recommend when loading data from a stage?
Use PURGE when using the COPY INTO