What is a characteristic of Secure Socket Layer (SSL) and Transport Layer Security (TLS)?
SSL and TLS provide a generic channel security mechanism on top of Transmission Control Protocol (TCP).
SSL and TLS provide nonrepudiation by default.
SSL and TLS do not provide security for most routed protocols.
SSL and TLS provide header encapsulation over HyperText Transfer Protocol (HTTP).
SSL and TLS provide a generic channel security mechanism on top of TCP. They enable secure communication between two parties over a network, such as the internet, by combining encryption, authentication, and integrity protection. SSL/TLS operates above TCP (commonly placed at the session and presentation layers of the OSI model), relying on TCP's reliable, ordered delivery, and can secure many application-layer protocols, such as HTTP, SMTP, and FTP. SSL and TLS do not provide nonrepudiation by default; that service requires digital signatures and certificates applied to individual messages to prove their origin and content. They protect only traffic carried over TCP (DTLS exists for UDP-based traffic), so the claim about routed protocols misstates their scope. Finally, SSL and TLS do not encapsulate HTTP headers; HTTPS is simply HTTP carried inside an SSL/TLS channel.
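As a small illustration of TLS as a channel layered over TCP, the Python standard library's `ssl` module wraps an ordinary TCP socket in a TLS context. The sketch below only builds and inspects a client context (no network I/O), so the host name in the comment is purely illustrative:

```python
import ssl

# Build a client-side TLS context with secure defaults:
# certificate verification on and hostname checking enabled.
ctx = ssl.create_default_context()

# Refuse legacy protocol versions; TLS 1.2 is the usual floor today.
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True: certificate validation is on
print(ctx.check_hostname)                    # True: hostname verification is on

# In real use, the context wraps a TCP socket, e.g.:
#   with socket.create_connection(("example.org", 443)) as sock:
#       with ctx.wrap_socket(sock, server_hostname="example.org") as tls:
#           ...  # tls behaves like a socket, but the channel is encrypted
```

Note that the TLS machinery is entirely independent of the application protocol spoken over the wrapped socket, which is exactly the "generic channel" property described above.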
Which of the following is the PRIMARY concern when using an Internet browser to access a cloud-based service?
Insecure implementation of Application Programming Interfaces (API)
Improper use and storage of management keys
Misconfiguration of infrastructure allowing for unauthorized access
Vulnerabilities within protocols that can expose confidential data
The primary concern when using an Internet browser to access a cloud-based service is vulnerabilities within protocols that can expose confidential data. Protocols are the rules and formats that govern communication between systems, and flaws in them can be exploited to intercept, modify, or steal data in transit: a protocol may lack adequate encryption, authentication, or integrity protection, or rely on weak or outdated algorithms, keys, or certificates. Browser access to a cloud service typically traverses several protocols (HTTP, HTTPS, SSL/TLS), and a vulnerability in any of them can compromise sensitive or confidential data. It is therefore important to use current, secure protocol versions and to monitor for and patch protocol vulnerabilities. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6: Communication and Network Security, p. 338; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 4: Communication and Network Security, p. 456.
The application of which of the following standards would BEST reduce the potential for data breaches?
ISO 9000
ISO 20121
ISO 26000
ISO 27001
The standard that would best reduce the potential for data breaches is ISO 27001, an international standard specifying the requirements for establishing, implementing, maintaining, and continually improving an information security management system (ISMS). An ISMS is a systematic approach to managing an organization's information security that applies the plan-do-check-act (PDCA) cycle and best practices for risk assessment, risk treatment, security controls, monitoring, review, and improvement. By providing a framework for identifying, protecting against, detecting, responding to, and recovering from incidents that could compromise the confidentiality, integrity, or availability of information, ISO 27001 directly reduces the likelihood and impact of data breaches. The other options address different domains: ISO 9000 covers quality management, ISO 20121 sustainable event management, and ISO 26000 social responsibility. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, page 25; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, page 33
While inventorying storage equipment, it is found that there are unlabeled, disconnected, and powered off devices. Which of the following is the correct procedure for handling such equipment?
They should be recycled to save energy.
They should be recycled according to NIST SP 800-88.
They should be inspected and sanitized following the organizational policy.
They should be inspected and categorized properly to sell them for reuse.
The correct procedure for handling unlabeled, disconnected, and powered-off storage devices found during an inventory is to inspect and sanitize them following the organizational policy. Such devices (hard disks, flash drives, memory cards) cannot be assumed to be empty: they may hold data whose classification, sensitivity, and value are unknown. Organizational policy, typically informed by guidance such as NIST SP 800-88, dictates how each device must be examined and then cleaned or erased based on the data it may contain. Inspecting and sanitizing the devices prevents unauthorized access to, or disclosure of, residual data if the devices are later reused, recycled, sold, or stolen. Recycling or selling the devices without first sanitizing them under policy would risk a data breach.
References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7, page 198; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 7, page 355
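To make the sanitization idea concrete, here is a deliberately minimal single-file overwrite sketch. It is a toy only: overwriting individual files is not sufficient for flash media, journaling filesystems, or cloud storage, and real sanitization must follow organizational policy and guidance such as NIST SP 800-88.

```python
import os
import tempfile

def overwrite_and_delete(path: str, passes: int = 1) -> None:
    """Illustrative sanitization of one file: overwrite its contents
    in place with random bytes, flush to disk, then remove it.
    A toy sketch only -- NOT a policy-compliant sanitization method
    for SSDs, journaling filesystems, or remote/cloud storage."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))
            f.flush()
            os.fsync(f.fileno())
    os.remove(path)

# Demo on a throwaway temp file.
fd, p = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"sensitive data")
overwrite_and_delete(p)
print(os.path.exists(p))  # False
```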
What balance MUST be considered when web application developers determine how informative application error messages should be constructed?
Risk versus benefit
Availability versus auditability
Confidentiality versus integrity
Performance versus user satisfaction
The balance that must be considered when web application developers determine how informative application error messages should be is risk versus benefit. Informative error messages benefit users: they explain what went wrong (a login failure, a form validation error, a server fault) and help users recover and complete their tasks. But they also carry risk: an overly detailed message can reveal sensitive implementation details such as system architecture, database structure, stack traces, or software versions, which attackers can use to probe for vulnerabilities. Developers must therefore weigh how much information to include, what to withhold, and how to present it, so that the message is helpful to legitimate users without aiding attackers.
The other options do not capture this trade-off. Availability versus auditability concerns whether the application is accessible when needed and whether actions are traceable through logging, not what an error message discloses. Confidentiality versus integrity concerns protecting data from unauthorized disclosure versus unauthorized modification; both matter, but the error-message decision is not a trade-off between them. Performance versus user satisfaction concerns the efficiency of the application against the quality of the user experience, which likewise does not govern how informative an error message should be.
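A common pattern that resolves this trade-off is to log full diagnostic detail server-side while returning only a generic message (plus a correlation id) to the user. A minimal sketch, with purely illustrative message text:

```python
import logging
import uuid

logging.basicConfig(level=logging.ERROR)

def handle_error(exc: Exception) -> str:
    """Log full diagnostic detail internally; return only a generic
    message plus a correlation id that support staff can look up."""
    incident_id = uuid.uuid4().hex[:8]
    # Stack trace and exception detail stay server-side, in the log.
    logging.error("incident %s", incident_id, exc_info=exc)
    # The user-facing text reveals nothing about internals.
    return f"Something went wrong. Reference: {incident_id}"

try:
    1 / 0  # simulate an unexpected server-side fault
except ZeroDivisionError as e:
    msg = handle_error(e)

print(msg)  # e.g. "Something went wrong. Reference: 3fa1b2c4"
```

The benefit (the user can report the reference id) is kept while the risk (leaking a stack trace or schema details) is removed from the user-visible channel.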
Which of the following sets of controls should allow an investigation if an attack is not blocked by preventive controls or detected by monitoring?
Logging and audit trail controls to enable forensic analysis
Security incident response lessons learned procedures
Security event alert triage done by analysts using a Security Information and Event Management (SIEM) system
Transactional controls focused on fraud prevention
Logging and audit trail controls are designed to record and monitor the activities and events that occur on a system or network. They can provide valuable information for forensic analysis, such as the source, destination, time, and type of an event, the user or process involved, the data or resources accessed or modified, and the outcome or status of the event. Logging and audit trail controls can help identify the cause, scope, impact, and timeline of an attack, as well as the evidence and artifacts left by the attacker. They can also help determine the effectiveness and gaps of the preventive and detective controls, and support the incident response and recovery processes. Logging and audit trail controls should be configured, protected, and reviewed according to the organizational policies and standards, and comply with the legal and regulatory requirements.
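As a minimal sketch of such a control, the snippet below emits structured audit entries (who, what action, which resource, what outcome) using Python's standard `logging` module; the field names are illustrative rather than mandated by any standard:

```python
import json
import logging

# An audit logger that records who did what, to which resource, with
# what outcome, in a machine-parseable form for later forensic review.
audit = logging.getLogger("audit")
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("%(asctime)s %(message)s"))
audit.addHandler(handler)
audit.setLevel(logging.INFO)

def audit_event(user: str, action: str, resource: str, outcome: str) -> str:
    entry = json.dumps({
        "user": user, "action": action,
        "resource": resource, "outcome": outcome,
    })
    audit.info(entry)
    return entry

line = audit_event("alice", "read", "/payroll/2024.csv", "denied")
```

In production such entries would go to protected, append-only storage with synchronized timestamps, since the audit trail itself is a target for attackers covering their tracks.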
What is an advantage of Elliptic Curve Cryptography (ECC)?
Cryptographic approach that does not require a fixed-length key
Military-strength security that does not depend upon secrecy of the algorithm
Opportunity to use shorter keys for the same level of security
Ability to use much longer keys for greater security
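The advantage of ECC is the opportunity to use shorter keys for the same level of security. As a rough illustration, NIST SP 800-57 Part 1 lists approximate security-strength equivalences between RSA/DH modulus sizes and ECC key sizes; the snippet below encodes the commonly cited pairings:

```python
# Approximate comparable key sizes (bits) for equal security strength,
# as commonly cited from NIST SP 800-57 Part 1.
COMPARABLE = {
    # security strength: (RSA/DH modulus bits, ECC key bits)
    112: (2048, 224),
    128: (3072, 256),
    192: (7680, 384),
    256: (15360, 512),  # 512+ in the table; curve P-521 in practice
}

for strength, (rsa_bits, ecc_bits) in COMPARABLE.items():
    print(f"{strength}-bit strength: RSA {rsa_bits} vs ECC {ecc_bits}")
```

The gap widens as the target strength grows, which is why ECC is favored on constrained devices such as smart cards and embedded systems.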
Which of the following would BEST describe the role directly responsible for data within an organization?
Data custodian
Information owner
Database administrator
Quality control
According to CISSP For Dummies, the role directly responsible for data within an organization is the information owner. The information owner has authority over and accountability for the organization's data: defining its classification, value, and sensitivity; setting the security requirements, policies, and standards that apply to it; granting or revoking access rights and permissions; and monitoring and auditing compliance and the effectiveness of its security controls. The data custodian supports the owner by implementing and maintaining those controls and performing operational tasks such as backup, recovery, encryption, and disposal, but the custodian acts at the owner's direction and is not directly responsible for the data. The database administrator manages and administers the database systems that store and process the data (installation, configuration, optimization, troubleshooting), again a supporting technical role rather than the accountable one. Quality control is concerned with the accuracy and consistency of processes and outputs, not with accountability for data.
A Simple Power Analysis (SPA) attack against a device directly observes which of the following?
Static discharge
Consumption
Generation
Magnetism
A Simple Power Analysis (SPA) attack against a device directly observes its power consumption. SPA is a side-channel attack that exploits variations in the power drawn by a device, such as a smart card or cryptographic module, to infer information about the operations or data it processes; it can reveal the type, length, or sequence of instructions executed, or even the value of a secret key. The other options describe electrical phenomena that SPA does not observe: static discharge is a sudden flow of electricity between objects at different electric potentials, generation is the production of electric power from other energy sources, and magnetism is the attraction or repulsion between magnetic materials or fields. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 10, p. 525; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 3, p. 163.
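SPA belongs to the broader family of side-channel attacks, whose common lesson is that data-dependent behavior leaks secrets. A related, software-level illustration (timing rather than power): a naive comparison exits at the first mismatching byte, so its running time depends on how much of a guess is correct, while the standard library provides a constant-time alternative.

```python
import hmac

SECRET = b"s3cret-key-material"

def naive_equal(a: bytes, b: bytes) -> bool:
    # Leaky: returns at the FIRST mismatching byte, so execution
    # time (an observable side channel) depends on how many leading
    # bytes of the guess are correct.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def safe_equal(a: bytes, b: bytes) -> bool:
    # Constant-time comparison from the standard library: the time
    # taken does not depend on where the inputs differ.
    return hmac.compare_digest(a, b)

print(naive_equal(SECRET, SECRET), safe_equal(SECRET, SECRET))
print(naive_equal(SECRET, b"wrong"), safe_equal(SECRET, b"x" * len(SECRET)))
```

Hardware countermeasures against SPA proper (power filtering, noise injection, balanced logic) follow the same principle: decouple observable behavior from secret data.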
The PRIMARY characteristic of a Distributed Denial of Service (DDoS) attack is that it
exploits weak authentication to penetrate networks.
can be detected with signature analysis.
looks like normal network activity.
is commonly confused with viruses or worms.
The primary characteristic of a Distributed Denial of Service (DDoS) attack is that it looks like normal network activity. A DDoS attack aims to disrupt or degrade the availability or performance of a system or service by flooding it with traffic or requests from many distributed sources, typically compromised computers, devices, or networks coordinated by the attacker. Because each individual request often resembles a legitimate one, it is difficult to distinguish malicious traffic from authentic traffic, and equally difficult to block or filter the attack without also affecting legitimate users. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4, page 115; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 4, page 172
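One common building block for absorbing floods is per-client rate limiting. A minimal token-bucket sketch (the rate and capacity values are illustrative, and a real deployment would track one bucket per source):

```python
import time

class TokenBucket:
    """Per-client rate limiter: each request costs one token;
    tokens refill at `rate` per second up to `capacity`."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=1.0, capacity=3)
results = [bucket.allow() for _ in range(5)]
print(results)  # a burst beyond capacity is refused
```

Rate limiting alone cannot stop a large, well-distributed DDoS (each source may stay under the limit), which is why upstream scrubbing and anycast distribution are also used; it mainly blunts bursts from individual sources.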
Which of the following is needed to securely distribute symmetric cryptographic keys?
Officially approved Public-Key Infrastructure (PKI) Class 3 or Class 4 certificates
Officially approved and compliant key management technology and processes
An organizationally approved communication protection policy and key management plan
Hardware tokens that protect the user’s private key.
According to the CISSP All-in-One Exam Guide, securely distributing symmetric cryptographic keys requires officially approved and compliant key management technology and processes. Symmetric cryptography (e.g., AES, DES) uses the same secret key for encryption and decryption, so the security of the whole scheme rests on generating, distributing, storing, using, changing, and destroying those keys securely. Key management covers that entire lifecycle, along with the policies and procedures that govern it, and is implemented through technology and processes such as key distribution protocols, key servers, and key lifecycle models. These must be officially approved and compliant with applicable standards and regulations, such as NIST guidance or PCI DSS, to ensure the keys are protected throughout their lifecycle from unauthorized access, disclosure, modification, loss, or theft.
The other options fall short. Officially approved PKI Class 3 or Class 4 certificates bind identities to public keys under rigorous identity-verification requirements, but they serve asymmetric cryptography (e.g., RSA, ECC); certificates alone do not constitute a mechanism for distributing symmetric keys. An organizationally approved communication protection policy and key management plan documents objectives, roles, responsibilities, and procedures, and may guide the key management process, but a document does not by itself perform secure distribution. Hardware tokens (smart cards, USB tokens, hardware security modules) protect stored keys, including a user's private key, but they are storage and usage mechanisms rather than the approved technology and processes needed to generate and distribute symmetric keys.
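Two small pieces of the key management lifecycle can be sketched with the standard library: generating a key from a cryptographically secure random source, and computing a short key check value (KCV) that the receiving party can compare out-of-band to confirm the right key arrived without revealing it. The KCV construction here (truncated SHA-256) is illustrative; classic banking KCVs instead encrypt a zero block with the key.

```python
import hashlib
import secrets

def generate_key(bits: int = 256) -> bytes:
    """Generate a random symmetric key from a CSPRNG."""
    return secrets.token_bytes(bits // 8)

def key_check_value(key: bytes) -> str:
    """Short fingerprint of the key for out-of-band verification.
    (Illustrative SHA-256 truncation; traditional KCVs encrypt an
    all-zero block with the key and keep the first bytes.)"""
    return hashlib.sha256(key).hexdigest()[:6]

key = generate_key()
print(len(key))              # 32 bytes for a 256-bit key
print(key_check_value(key))  # e.g. 'a3f91c' -- deterministic per key
```

Everything else in the lifecycle (wrapping the key for transport, escrow, rotation, destruction) is what the approved key management technology and processes in the answer above are meant to govern.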
After a thorough analysis, it was discovered that a perpetrator compromised a network by gaining access to the network through a Secure Socket Layer (SSL) Virtual Private Network (VPN) gateway. The perpetrator guessed a username and brute forced the password to gain access. Which of the following BEST mitigates this issue?
Implement strong passwords authentication for VPN
Integrate the VPN with centralized credential stores
Implement an Internet Protocol Security (IPSec) client
Use two-factor authentication mechanisms
The best way to mitigate the issue of a perpetrator who compromised a network through an SSL VPN gateway by guessing a username and brute-forcing the password is to use two-factor authentication mechanisms. Two-factor authentication verifies identity by requiring two different types of factors: something the user knows (password, PIN), something the user has (token, smart card), or something the user is (fingerprint or other biometric). This makes it much harder for an attacker to impersonate a legitimate user: even with the correct username and password, the perpetrator cannot access the network without the second factor. Stronger passwords, centralized credential stores, or switching to an IPSec client would not stop an attacker who can guess or brute-force the single knowledge factor. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6: Communication and Network Security, p. 321; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 4: Communication and Network Security, p. 449.
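As an illustration of the "something you have" factor, time-based one-time passwords (RFC 6238) derive a short-lived code from a shared secret and the clock. The sketch below implements HOTP/TOTP with only the standard library and reproduces the published RFC test vector:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HOTP (RFC 4226): HMAC-SHA1 of the counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """TOTP (RFC 6238): HOTP over the number of elapsed 30-second steps."""
    return hotp(secret, unix_time // step, digits)

# RFC 6238 test secret; at T=59s the 6-digit SHA-1 code is 287082.
secret = b"12345678901234567890"
print(totp(secret, 59))  # 287082
```

Because the code changes every 30 seconds and never traverses the network as a reusable credential, a brute-forced password alone no longer grants VPN access.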
Which of the following media sanitization techniques is MOST likely to be effective for an organization using public cloud services?
Low-level formatting
Secure-grade overwrite erasure
Cryptographic erasure
Drive degaussing
Media sanitization is the process of rendering the data on a storage device inaccessible or unrecoverable by a given level of effort. For an organization using public cloud services, the most effective media sanitization technique is cryptographic erasure, which involves encrypting the data on the device with a strong key and then deleting the key, making the data unreadable. Cryptographic erasure is suitable for cloud environments because it does not require physical access to the device, it can be performed remotely and quickly, and it does not affect the performance or lifespan of the device. Low-level formatting, secure-grade overwrite erasure, and drive degaussing are media sanitization techniques that require physical access to the device, which may not be possible or feasible for cloud users. Additionally, these techniques may not be compatible with some cloud storage technologies, such as solid-state drives (SSDs) or flash memory, and they may reduce the performance or lifespan of the device.
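The idea behind cryptographic erasure can be sketched with a toy construction: data is stored only in encrypted form, and discarding the key renders the remaining ciphertext unreadable. The XOR keystream cipher below is a deliberately simplified stand-in for real authenticated encryption (e.g., AES-GCM), used only to keep the sketch dependency-free:

```python
import hashlib
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher: XOR data with a SHA-256 counter-mode
    keystream. NOT real cryptography -- it stands in for a proper
    cipher purely to keep this sketch free of dependencies."""
    out = bytearray()
    for block in range(0, len(data), 32):
        ks = hashlib.sha256(key + block.to_bytes(8, "big")).digest()
        chunk = data[block:block + 32]
        out.extend(b ^ k for b, k in zip(chunk, ks))
    return bytes(out)

key = secrets.token_bytes(32)                     # held by the tenant / KMS
stored = keystream_xor(key, b"customer records")  # what the provider holds

# Normal operation: key present, data recoverable.
print(keystream_xor(key, stored))  # b'customer records'

# Cryptographic erasure: destroy the key; the ciphertext left on the
# provider's media is unreadable, with no physical access required.
key = None
```

This is why crypto-erase suits the cloud: the sanitizing act (key destruction) happens entirely on the tenant's side, regardless of where or on what media the provider keeps the ciphertext.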
An application developer is deciding on the amount of idle session time that the application allows before a timeout. The BEST reason for determining the session timeout requirement is
organization policy.
industry best practices.
industry laws and regulations.
management feedback.
The session timeout requirement is the maximum amount of time a user can remain inactive on an application before the session is terminated and re-authentication is required. The best basis for determining it is organization policy, which reflects the organization's risk appetite, security objectives, and compliance obligations. The policy should specify appropriate timeout values for different types of applications and data, based on their sensitivity and criticality. Industry best practices, laws and regulations, and management feedback may all inform the policy, but the policy itself is what the developer must satisfy.
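The check itself is simple once the policy supplies the value; a minimal sketch (the 15-minute figure is illustrative, not a recommendation):

```python
import time
from typing import Optional

# Illustrative timeout -- in practice this value comes from
# organization policy, keyed to the data's sensitivity.
IDLE_TIMEOUT_SECONDS = 15 * 60

def session_expired(last_activity: float,
                    now: Optional[float] = None,
                    timeout: float = IDLE_TIMEOUT_SECONDS) -> bool:
    """True once the idle period since the last activity exceeds the
    policy-defined timeout, forcing the user to re-authenticate."""
    now = time.time() if now is None else now
    return (now - last_activity) > timeout

t0 = 1_000_000.0
print(session_expired(t0, now=t0 + 10 * 60))  # False: within 15 minutes
print(session_expired(t0, now=t0 + 20 * 60))  # True: idle too long
```

Keeping the timeout a named constant sourced from configuration makes it easy to align the application with policy changes without code edits.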
What operations role is responsible for protecting the enterprise from corrupt or contaminated media?
Information security practitioner
Information librarian
Computer operator
Network administrator
According to the CISSP CBK Official Study Guide, the information librarian (often called the media librarian) is the operations role responsible for protecting the enterprise from corrupt or contaminated media. The librarian manages, maintains, and protects the organization's media (hard drives, USB drives, CDs, tapes): cataloging, indexing, and classifying it; providing access and retrieval services to authorized users; performing backup, recovery, and disposal; and monitoring and auditing its use and security, which includes ensuring media are free from corruption or contamination so the enterprise's data integrity is preserved. An information security practitioner defines and enforces security policies and standards broadly, and may advise or train the other roles, but does not operate the media library. A computer operator runs and controls computer systems, loading and unloading media among other tasks, but is not accountable for media integrity. A network administrator manages network systems and devices (routers, switches, firewalls, wireless access points) rather than media. References: CISSP CBK Official Study Guide
What is the difference between media marking and media labeling?
Media marking refers to the use of human-readable security attributes, while media labeling refers to the use of security attributes in internal data structures.
Media labeling refers to the use of human-readable security attributes, while media marking refers to the use of security attributes in internal data structures.
Media labeling refers to security attributes required by public policy/law, while media marking refers to security required by internal organizational policy.
Media marking refers to security attributes required by public policy/law, while media labeling refers to security attributes required by internal organizational policy.
According to the CISSP CBK Official Study Guide1, the difference between media marking and media labeling is that media labeling refers to the use of human-readable security attributes, while media marking refers to the use of security attributes in internal data structures. Media marking and media labeling are two methods or techniques of applying security attributes to the media, which are the physical or tangible devices or materials that store or contain the data or information, such as the disks, tapes, or papers. Security attributes are the tags or markers that indicate the classification, sensitivity, or clearance of the media, data, or information, such as top secret, secret, or confidential. Security attributes help to protect the media, data, or information from unauthorized or unintended access, disclosure, modification, corruption, loss, or theft, as well as to support the access control and audit mechanisms. Media labeling is the method or technique of applying security attributes to the media in a human-readable form, such as the words, symbols, or colors that are printed, stamped, or affixed on the media. Media labeling helps to identify and distinguish the media, data, or information based on their security attributes, as well as to inform and instruct the users or handlers of the media, data, or information about the proper and secure handling and disposal of them. Media marking is the method or technique of applying security attributes to the media in an internal data structure form, such as the bits, bytes, or fields that are embedded, encoded, or encrypted in the media. Media marking helps to verify and validate the media, data, or information based on their security attributes, as well as to enforce and monitor the access control and audit mechanisms for them. 
The options framing the distinction as public policy/law versus internal organizational policy are incorrect because the difference between marking and labeling lies in the form of the security attributes, not in their source or authority. Either method may be driven by public policy or law — for example, Controlled Unclassified Information (CUI) or Personally Identifiable Information (PII) requirements — or by internal organizational policy, such as the handling of proprietary or confidential information.
Which of the following is the BEST example of weak management commitment to the protection of security assets and resources?
poor governance over security processes and procedures
immature security controls and procedures
variances against regulatory requirements
unanticipated increases in security incidents and threats
The best example of weak management commitment to the protection of security assets and resources is poor governance over security processes and procedures. Governance is the set of policies, roles, responsibilities, and processes that guide, direct, and control how an organization’s business divisions and IT teams cooperate to achieve business goals. Management commitment is essential for effective governance, as it demonstrates the leadership and support for security initiatives and activities. Poor governance indicates that management does not prioritize security, allocate sufficient resources, enforce accountability, or monitor performance. The other options are not examples of weak management commitment, but rather possible consequences or indicators of poor security practices. Immature security controls and procedures, variances against regulatory requirements, and unanticipated increases in security incidents and threats are all signs that security is not well-managed or implemented, but they do not necessarily reflect the level of management commitment. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, p. 19; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, p. 9.
Which of the following BEST describes a Protection Profile (PP)?
A document that expresses an implementation independent set of security requirements for an IT product that meets specific consumer needs.
A document that is used to develop an IT security product from its security requirements definition.
A document that expresses an implementation dependent set of security requirements which contains only the security functional requirements.
A document that represents evaluated products where there is a one-to-one correspondence between a PP and a Security Target (ST).
A Protection Profile (PP) is a document that expresses an implementation independent set of security requirements for an IT product that meets specific consumer needs. A PP is based on the Common Criteria (CC) framework, which is an international standard for evaluating the security of IT products and systems. A PP defines the security objectives, threats, assumptions, and functional and assurance requirements for a product or a category of products. The other options are not correct descriptions of a PP. Option B is a description of a Security Target (ST), which is a document that is used to develop an IT security product from its security requirements definition. Option C is a description of an implementation dependent set of security requirements, which is not a PP, but rather a part of an ST. Option D is a description of a certified product, which is a product that has been evaluated against a PP or an ST and has met the security requirements. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8, p. 414; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 3, p. 147.
The World Trade Organization's (WTO) agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS) requires authors of computer software to be given the
right to refuse or permit commercial rentals.
right to disguise the software's geographic origin.
ability to tailor security parameters based on location.
ability to confirm license authenticity of their works.
The World Trade Organization’s (WTO) agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS) requires authors of computer software to be given the right to refuse or permit commercial rentals. TRIPS is an international treaty that sets the minimum standards and rules for the protection and enforcement of intellectual property rights, such as patents, trademarks, or copyrights. TRIPS requires authors of computer software to be given the right to refuse or permit commercial rentals, which means that they can control whether their software can be rented or leased to others for profit. This right is intended to prevent the unauthorized copying or distribution of the software, and to ensure that the authors receive fair compensation for their work. The other options are not the rights that TRIPS requires authors of computer software to be given, but rather different or irrelevant concepts. The right to disguise the software’s geographic origin is not a right, but rather a violation, of TRIPS, as it can mislead or deceive the consumers or authorities about the source or quality of the software. The ability to tailor security parameters based on location is not a right, but rather a feature, of some software, such as encryption or authentication software, that can adjust the security settings or functions according to the location or jurisdiction of the user or device. The ability to confirm license authenticity of their works is not a right, but rather a benefit, of some software, such as digital rights management or anti-piracy software, that can verify or validate the license or ownership of the software. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, p. 40; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5, p. 302.
Which of the following is the BEST approach to take in order to effectively incorporate the concepts of business continuity into the organization?
Ensure end users are aware of the planning activities
Validate all regulatory requirements are known and fully documented
Develop training and awareness programs that involve all stakeholders
Ensure plans do not violate the organization's cultural objectives and goals
Incorporating business continuity concepts effectively into an organization requires developing training and awareness programs that involve all stakeholders. This ensures that everyone understands their roles, responsibilities, and actions required during a disruption or crisis. References: CISSP Official (ISC)2 Practice Tests, Chapter 9, page 249; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 9, page 440
Which of the following questions can be answered using user and group entitlement reporting?
When a particular file was last accessed by a user
Change control activities for a particular group of users
The number of failed login attempts for a particular user
Where does a particular user have access within the network
User and group entitlement reporting is a process of collecting and analyzing the access rights and permissions of users and groups across the network. It can help answer questions such as where does a particular user have access within the network, what resources are accessible by a particular group, and who has access to a particular resource. User and group entitlement reporting can also help identify and remediate excessive or inappropriate access rights, enforce the principle of least privilege, and comply with security policies and regulations.
References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 3, page 138; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 3, page 114
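As an illustrative sketch (the users, groups, and resources here are hypothetical, not from any cited source), an entitlement report answers "where does this user have access" by combining direct grants with permissions inherited through group membership:

```python
# Minimal entitlement-reporting sketch: resolve where a user has access
# by combining direct grants with permissions inherited via groups.
# All names below are hypothetical illustration data.

group_members = {
    "finance": {"alice", "bob"},
    "it_ops": {"carol"},
}
group_access = {
    "finance": {"payroll_db", "erp_app"},
    "it_ops": {"erp_app", "backup_server"},
}
direct_access = {
    "alice": {"hr_portal"},
}

def entitlements(user: str) -> set[str]:
    """Return every network resource the user can reach."""
    resources = set(direct_access.get(user, set()))
    for group, members in group_members.items():
        if user in members:
            resources |= group_access.get(group, set())
    return resources

print(sorted(entitlements("alice")))  # ['erp_app', 'hr_portal', 'payroll_db']
```

Reviewing such a report against job roles is how excessive or inappropriate access rights are found and least privilege is enforced.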
Which type of security testing is being performed when an ethical hacker has no knowledge about the target system but the testing target is notified before the test?
Reversal
Gray box
Blind
White box
According to the CISSP CBK Official Study Guide, the type of security testing being performed when an ethical hacker has no knowledge about the target system but the testing target is notified before the test is blind testing. Security testing assesses the security of a system or network through techniques such as scanning, analysis, and penetration, and it can be classified along two dimensions: how much knowledge the tester has about the target, and how much notice the target has of the test.
Blind testing matches the scenario exactly: the tester has no knowledge of the target, while the target has full notification of the test. Reversal testing is the inverse arrangement: the tester has full knowledge of the target, but the target has no notification. Gray box testing sits in between, with the tester holding partial knowledge of the target and the target receiving partial notification.
White box testing also does not fit: the tester has full knowledge of the target and the target has full notification, whereas the scenario specifies a tester with no knowledge at all. References: CISSP CBK Official Study Guide.
Which of the following is the BEST method to assess the effectiveness of an organization's vulnerability management program?
Review automated patch deployment reports
Periodic third party vulnerability assessment
Automated vulnerability scanning
Perform vulnerability scan by security team
A third-party vulnerability assessment provides an unbiased evaluation of the organization’s security posture, identifying existing vulnerabilities and offering recommendations for mitigation. It is more comprehensive and objective compared to internal reviews or automated scans. References: CISSP Official (ISC)2 Practice Tests, Chapter 5, page 137
Sensitive customer data is going to be added to a database. What is the MOST effective implementation for ensuring data privacy?
Discretionary Access Control (DAC) procedures
Mandatory Access Control (MAC) procedures
Data link encryption
Segregation of duties
The most effective implementation for ensuring data privacy when sensitive customer data is added to a database is data link encryption. Data link encryption encrypts data as it is transmitted at the data link layer (layer 2) of the OSI model — the layer that provides reliable, error-free transmission between directly connected nodes such as switches, bridges, and wireless access points. By encrypting the traffic on each link, it preserves the confidentiality, integrity, and authenticity of the data in transit and prevents third parties who capture or monitor link traffic from reading, modifying, or intercepting it.
For sensitive customer data being loaded into a database, data link encryption therefore protects the data on its way to the database, reducing the risk that an attacker monitoring the network can access, disclose, modify, or intercept it before it is stored.
References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, page 146; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5, page 211
Which of the following analyses is performed to protect information assets?
Business impact analysis
Feasibility analysis
Cost benefit analysis
Data analysis
The analysis that is performed to protect information assets is the cost benefit analysis, which is a method of comparing the costs and benefits of different security solutions or alternatives. The cost benefit analysis helps to justify the investment in security controls and measures by evaluating the trade-offs between the security costs and the security benefits. The security costs include the direct and indirect expenses of acquiring, implementing, operating, and maintaining the security controls and measures. The security benefits include the reduction of risks, losses, and liabilities, as well as the enhancement of productivity, performance, and reputation. The other options are not the analysis that is performed to protect information assets, but rather different types of analyses. A business impact analysis is a method of identifying and quantifying the potential impacts of disruptive events on the organization’s critical business functions and processes. A feasibility analysis is a method of assessing the technical, operational, and economic viability of a proposed project or solution. A data analysis is a method of processing, transforming, and modeling data to extract useful information and insights. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, p. 28; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, p. 21; CISSP practice exam questions and answers, Question 10.
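The quantitative form of this trade-off is commonly expressed with the Annualized Loss Expectancy (ALE). A short sketch, using made-up figures:

```python
# Cost-benefit sketch using the standard quantitative risk formula:
#   ALE = SLE * ARO   (Annualized Loss Expectancy =
#                      Single Loss Expectancy * Annualized Rate of Occurrence)
# A safeguard is cost-justified when the ALE reduction exceeds its annual cost.

def ale(sle: float, aro: float) -> float:
    return sle * aro

# Hypothetical numbers for illustration only.
ale_before = ale(sle=50_000, aro=0.4)   # 20,000 per year without the control
ale_after = ale(sle=50_000, aro=0.1)    # 5,000 per year with the control
annual_safeguard_cost = 8_000

safeguard_value = ale_before - ale_after - annual_safeguard_cost
print(safeguard_value)  # 7000.0 -> positive, so the control is cost-justified
```

A negative result would mean the control costs more per year than the risk it removes, and the investment would not be justified on these numbers alone.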
The goal of a Business Impact Analysis (BIA) is to determine which of the following?
Cost effectiveness of business recovery
Cost effectiveness of installing software security patches
Resource priorities for recovery and Maximum Tolerable Downtime (MTD)
Which security measures should be implemented
According to CISSP For Dummies, the goal of a Business Impact Analysis (BIA) is to determine the resource priorities for recovery and the Maximum Tolerable Downtime (MTD) for each business process and function. This means that the BIA should identify the criticality and dependencies of each business process and function, and the maximum amount of time each can be disrupted without causing unacceptable consequences to the organization. The BIA should also determine the Recovery Point Objectives (RPOs) and Recovery Time Objectives (RTOs) for each business process and function, which are the acceptable levels of data loss and downtime respectively. The BIA should not focus on the cost effectiveness of business recovery or of installing software security patches, as these are not its primary objectives. Nor should the BIA determine which security measures should be implemented, as this is the role of the risk assessment and risk management processes. References: CISSP For Dummies.
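The BIA's output can be pictured as a ranking: the shorter a process's MTD, the earlier it must be restored. A toy sketch with hypothetical processes and times:

```python
# Toy BIA sketch (hypothetical data): rank business processes for
# recovery priority by Maximum Tolerable Downtime (MTD) -- the shorter
# the MTD, the sooner the process must be restored.
processes = [
    {"name": "payroll", "mtd_hours": 72, "rto_hours": 48},
    {"name": "order entry", "mtd_hours": 4, "rto_hours": 2},
    {"name": "email", "mtd_hours": 24, "rto_hours": 12},
]

recovery_order = sorted(processes, key=lambda p: p["mtd_hours"])

for rank, p in enumerate(recovery_order, start=1):
    # Sanity check: the planned RTO must not exceed the MTD,
    # or the recovery plan cannot satisfy the BIA.
    assert p["rto_hours"] <= p["mtd_hours"]
    print(rank, p["name"], p["mtd_hours"])
```

Here "order entry" would be restored first, then "email", then "payroll"; the embedded check catches any plan whose RTO is longer than the business can tolerate.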
What type of wireless network attack BEST describes an Electromagnetic Pulse (EMP) attack?
Radio Frequency (RF) attack
Denial of Service (DoS) attack
Data modification attack
Application-layer attack
A Denial of Service (DoS) attack is a type of wireless network attack that aims to prevent legitimate users from accessing or using a wireless network or service. An Electromagnetic Pulse (EMP) attack is a specific form of DoS attack that involves generating a powerful burst of electromagnetic energy that can damage or disrupt electronic devices and systems, including wireless networks. An EMP attack can cause permanent or temporary loss of wireless network availability, functionality, or performance. A Radio Frequency (RF) attack is a type of wireless network attack that involves interfering with or jamming the radio signals used by wireless devices and networks, but it does not necessarily involve an EMP. A data modification attack is a type of wireless network attack that involves altering or tampering with the data transmitted or received over a wireless network, but it does not necessarily cause a DoS. An application-layer attack is a type of wireless network attack that targets the applications or services running on a wireless network, such as web servers or email servers, but it does not necessarily involve an EMP.
The BEST example of the concept of "something that a user has" when providing an authorized user access to a computing system is
the user's hand geometry.
a credential stored in a token.
a passphrase.
the user's face.
A credential stored in a token is the best example of "something that a user has." Authentication factors are commonly grouped as something you know (such as a passphrase), something you have (such as a hardware or software token holding a credential), and something you are (such as a biometric). The user's hand geometry and the user's face are biometrics ("something you are"), and a passphrase is "something you know."
Which of the following is an essential step before performing Structured Query Language (SQL) penetration tests on a production system?
Verify countermeasures have been deactivated.
Ensure firewall logging has been activated.
Validate target systems have been backed up.
Confirm warm site is ready to accept connections.
An essential step before performing SQL penetration tests on a production system is to validate that the target systems have been backed up. SQL penetration tests are a type of security testing that involves injecting malicious SQL commands or queries into a database or application to exploit vulnerabilities or gain unauthorized access. Performing SQL penetration tests on a production system can cause data loss, corruption, or modification, as well as system downtime or instability. Therefore, it is important to ensure that the target systems have been backed up before conducting the tests, so that the data and system can be restored in case of any damage or disruption. The other options are not essential steps, but rather optional or irrelevant steps. Verifying countermeasures have been deactivated is not an essential step, but rather a risky and unethical step, as it can expose the system to other attacks or compromise the validity of the test results. Ensuring firewall logging has been activated is not an essential step, but rather a good practice, as it can help to monitor and record the test activities and outcomes. Confirming warm site is ready to accept connections is not an essential step, but rather a contingency plan, as it can provide an alternative site for continuing the system operations in case of a major failure or disaster. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 9, p. 471; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 7, p. 417.
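As a minimal illustration of the kind of query manipulation an SQL penetration test probes for (an in-memory toy table, not a testing methodology), string concatenation lets attacker-supplied input change the query's meaning, while a parameterized query does not:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

payload = "nobody' OR '1'='1"  # classic injection probe

# Vulnerable: the payload is spliced directly into the SQL text.
vulnerable = conn.execute(
    "SELECT secret FROM users WHERE name = '" + payload + "'"
).fetchall()

# Safe: the payload is passed as a bound parameter.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (payload,)
).fetchall()

print(vulnerable)  # [('s3cret',)] -- the OR clause matched every row
print(safe)        # []            -- treated as a literal, matches nothing
```

Because real injection payloads can also modify or delete data (e.g. stacked `DROP TABLE` statements), the pre-test backup is what makes running such probes against production recoverable.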
Regarding asset security and appropriate retention, which of the following INITIAL top three areas are important to focus on?
Security control baselines, access controls, employee awareness and training
Human resources, asset management, production management
Supply chain lead-time, inventory control, and encryption
Polygraphs, crime statistics, forensics
Regarding asset security and appropriate retention, the initial top three areas to focus on are security control baselines, access controls, and employee awareness and training. Asset security and appropriate retention are the processes of identifying, classifying, protecting, and disposing of an organization's assets, such as data, systems, devices, or facilities. They help prevent or reduce the loss, theft, damage, or misuse of assets, and support compliance with legal and regulatory requirements. Security control baselines establish the minimum protections required for each class of asset; access controls restrict who can use or handle the assets; and employee awareness and training ensure that staff handle and retain assets correctly.
References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 2: Asset Security, pp. 61-62; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 2: Asset Security, pp. 163-164.
Which of the following is the MOST important output from a mobile application threat modeling exercise according to Open Web Application Security Project (OWASP)?
Application interface entry and endpoints
The likelihood and impact of a vulnerability
Countermeasures and mitigations for vulnerabilities
A data flow diagram for the application and attack surface analysis
The most important output from a mobile application threat modeling exercise according to OWASP is a data flow diagram for the application and attack surface analysis. A data flow diagram is a graphical representation of the data flows and processes within the application, as well as the external entities and boundaries that interact with the application. An attack surface analysis is a systematic evaluation of the potential vulnerabilities and threats that can affect the application, based on the data flow diagram and other sources of information. These two outputs can help identify and prioritize the security risks and requirements for the mobile application, as well as the countermeasures and mitigations for the vulnerabilities.
References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8, page 487; Official (ISC)2 CISSP CBK Reference, Fifth Edition.
Match the objectives to the assessment questions in the governance domain of Software Assurance Maturity Model (SAMM).
The correct matches are as follows:
Comprehensive Explanation: These matches are based on the definitions and objectives of the four governance domain practices in the Software Assurance Maturity Model (SAMM). SAMM is a framework to help organizations assess and improve their software security posture. The governance domain covers the organizational aspects of software security, such as policies, metrics, and roles.
References: SAMM Governance Domain; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 8, page 452
What is the MOST critical factor to achieve the goals of a security program?
Capabilities of security resources
Executive management support
Effectiveness of security management
Budget approved for security resources
The most critical factor in achieving the goals of a security program is executive management support. Executive management is the highest level of decision-making authority in the organization, such as the board of directors, the chief executive officer, or the chief information officer. Its support — endorsement, sponsorship, and direct involvement in security planning, implementation, monitoring, and auditing — provides the vision, direction, and strategy for the program and aligns it with business needs and requirements. Executive support also secures the resources, budget, and authority the program requires; fosters a security culture, awareness, and governance across the organization; and demonstrates commitment and accountability to stakeholders, customers, and regulators. The capabilities of security resources, the effectiveness of security management, and the approved security budget all matter, but each depends on executive management support to exist and be sustained. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, Security and Risk Management, page 33. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, Security and Risk Management, page 48.
Refer to the information below to answer the question.
An organization experiencing a negative financial impact is forced to reduce budgets and the number of Information Technology (IT) operations staff performing basic logical access security administration functions. Security processes have been tightly integrated into normal IT operations and are not separate and distinct roles.
Which of the following will MOST likely allow the organization to keep risk at an acceptable level?
Increasing the amount of audits performed by third parties
Removing privileged accounts from operational staff
Assigning privileged functions to appropriate staff
Separating the security function into distinct roles
The action most likely to keep risk at an acceptable level is separating the security function into distinct roles. This means creating dedicated roles for security activities — planning, implementation, monitoring, and auditing — and keeping them separate from normal IT operations. Doing so improves security performance and accountability by giving the security roles clear authority, resources, and guidance, and it restores the separation of duties and the principle of least privilege that are lost when security is folded into general IT operations. Increasing the number of third-party audits, removing privileged accounts from operational staff, and assigning privileged functions to appropriate staff address the evaluation, restriction, or allocation of privileged access, but none of them re-establishes distinct security roles. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, Security and Risk Management, page 32. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, Security and Risk Management, page 47.
According to best practice, which of the following groups is the MOST effective in performing an information security compliance audit?
In-house security administrators
In-house Network Team
Disaster Recovery (DR) Team
External consultants
According to best practice, the group most effective at performing an information security compliance audit is external consultants. External consultants are independent, objective third parties who can provide an unbiased and impartial assessment of the organization's compliance with security policies, standards, and regulations. They also bring expertise, experience, and best practices from other organizations and industries, and can offer recommendations for improvement. The other options are less effective: in-house security administrators and the in-house network team have a conflict of interest or lack independence (A and B), and the Disaster Recovery (DR) team does not have the primary role or responsibility of conducting compliance audits (C). References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, page 240; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5, page 302.
Refer to the information below to answer the question.
In a Multilevel Security (MLS) system, the following sensitivity labels are used in increasing levels of sensitivity: restricted, confidential, secret, top secret. Table A lists the clearance levels for four users, while Table B lists the security classes of four different files.
Which of the following is true according to the star property (*property)?
User D can write to File 1
User B can write to File 1
User A can write to File 1
User C can write to File 1
According to the star property (*property) of the Bell-LaPadula model, a subject with a given security clearance may write data to an object if and only if the object’s security level is greater than or equal to the subject’s security level. In other words, a subject can write data to an object with the same or higher sensitivity label, but not to an object with a lower sensitivity label. This rule is also known as the no write-down rule, as it prevents the leakage of information from a higher level to a lower level. In this question, User A has a Restricted clearance, and File 1 has a Restricted security class. Therefore, User A can write to File 1, as they have the same security level. User B, User C, and User D cannot write to File 1, as they have higher clearances than the security class of File 1, and they would violate the star property by writing down information to a lower level. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4, Communication and Network Security, page 498. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 4, Communication and Network Security, page 514.
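The star property check from the scenario can be sketched directly (a toy model; the level ordering is taken from the question's label hierarchy, and the specific user/file assignments are assumed from the answer):

```python
# Bell-LaPadula star property (*-property) sketch: "no write down".
# A subject may write to an object only if the object's security level
# is greater than or equal to the subject's clearance level.
LEVELS = {"restricted": 0, "confidential": 1, "secret": 2, "top secret": 3}

def can_write(subject_clearance: str, object_class: str) -> bool:
    return LEVELS[object_class] >= LEVELS[subject_clearance]

# User A (Restricted) writing to File 1 (Restricted): same level, allowed.
print(can_write("restricted", "restricted"))  # True
# A Secret-cleared user writing to File 1 (Restricted): write down, denied.
print(can_write("secret", "restricted"))      # False
```

Note the simple-security property ("no read up") is the mirror-image check with the inequality reversed.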
Which of the following is the MOST difficult to enforce when using cloud computing?
Data access
Data backup
Data recovery
Data disposal
The most difficult thing to enforce when using cloud computing is data disposal. Data disposal is the process of permanently deleting or destroying the data that is no longer needed or authorized, in a secure and compliant manner. Data disposal is challenging when using cloud computing, because the data may be stored or replicated in multiple locations, devices, or servers, and the cloud provider may not have the same policies, procedures, or standards as the cloud customer. Data disposal may also be affected by the legal or regulatory requirements of different jurisdictions, or the contractual obligations of the cloud service agreement. Data access, data backup, and data recovery are not the most difficult things to enforce when using cloud computing, as they can be achieved by using encryption, authentication, authorization, replication, or restoration techniques, and by specifying the service level agreements and the roles and responsibilities of the cloud provider and the cloud customer. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 3, Security Architecture and Engineering, page 337. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 3, Security Architecture and Engineering, page 353.
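One widely used workaround for the cloud disposal problem is crypto-shredding: store only encrypted data in the cloud and, at disposal time, destroy the key, which renders every replica unreadable at once. A toy sketch of the idea (the XOR keystream here is for illustration only; a real system would use a vetted cipher such as AES):

```python
import hashlib
import secrets

def keystream_xor(data: bytes, key: bytes) -> bytes:
    # Illustration-only stream cipher: expand the key into a keystream
    # with SHA-256 over an incrementing counter, then XOR with the data.
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

key = secrets.token_bytes(32)                        # kept on-premises only
ciphertext = keystream_xor(b"customer record", key)  # replicated in the cloud

# Round trip works while the key exists...
assert keystream_xor(ciphertext, key) == b"customer record"

# "Disposal": destroy the key. The ciphertext copies scattered across the
# provider's storage are now computationally unrecoverable.
key = None
```

This sidesteps the need to locate and wipe every replica, at the cost of moving the disposal problem into key management.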
Which of the following methods provides the MOST protection for user credentials?
Forms-based authentication
Digest authentication
Basic authentication
Self-registration
The method that provides the most protection for user credentials is digest authentication. Digest authentication is a type of authentication that verifies the identity of a user or a device by using a cryptographic hash function to transform the user credentials, such as username and password, into a digest or hash value before sending them over a network, such as the internet. Digest authentication provides more protection for user credentials than basic authentication, which sends the credentials in plain text (merely Base64-encoded), or forms-based authentication, which relies entirely on the security of the web server or the web application to protect the submitted password. Because only the hash is transmitted, digest authentication prevents the password itself from being disclosed to an eavesdropper, and it mitigates replay attacks by incorporating a nonce, a server-generated random value, into each hash. Self-registration is not a method of authentication, but a process of creating a user account or a profile by providing some personal information, such as name, email, or phone number. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, Identity and Access Management, page 685. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5, Identity and Access Management, page 701.
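The digest computation can be shown concretely using the worked example from RFC 2617 (user "Mufasa", realm "testrealm@host.com", qop="auth"). Note that the RFC specifies MD5, which is considered weak by modern standards; the point of the sketch is that only the hash, never the password, crosses the wire:

```python
# RFC 2617 qop="auth" response computation, using the RFC's own worked example.
from hashlib import md5

def h(s: str) -> str:
    return md5(s.encode()).hexdigest()

def digest_response(user, realm, password, method, uri, nonce, nc, cnonce, qop):
    ha1 = h(f"{user}:{realm}:{password}")          # secret-derived hash
    ha2 = h(f"{method}:{uri}")                     # request-derived hash
    # The nonce and client nonce bind the response to this exchange,
    # which is what defeats simple replay of a captured response.
    return h(f"{ha1}:{nonce}:{nc}:{cnonce}:{qop}:{ha2}")

resp = digest_response(
    "Mufasa", "testrealm@host.com", "Circle Of Life",
    "GET", "/dir/index.html",
    "dcd98b7102dd2f0e8b11d0f600bfb0c093", "00000001", "0a4f113b", "auth",
)
```

An eavesdropper who captures `resp` learns neither the password nor a value that can be replayed once the server issues a fresh nonce.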
When using third-party software developers, which of the following is the MOST effective method of providing software development Quality Assurance (QA)?
Retain intellectual property rights through contractual wording.
Perform overlapping code reviews by both parties.
Verify that the contractors attend development planning meetings.
Create a separate contractor development environment.
When using third-party software developers, the most effective method of providing software development Quality Assurance (QA) is to perform overlapping code reviews by both parties. Code reviews are the process of examining the source code of an application for quality, functionality, security, and compliance. Overlapping code reviews by both parties means that the code is reviewed by both the third-party developers and the contracting organization, and that the reviews cover the same or similar aspects of the code. This can ensure that the code meets the requirements and specifications, that the code is free of defects or vulnerabilities, and that the code is consistent and compatible with the existing system or environment. Retaining intellectual property rights through contractual wording, verifying that the contractors attend development planning meetings, and creating a separate contractor development environment are all possible methods of providing software development QA, but they are not the most effective method of doing so. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8, Software Development Security, page 1026. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 8, Software Development Security, page 1050.
Refer to the information below to answer the question.
A security practitioner detects client-based attacks on the organization’s network. A plan will be necessary to address these concerns.
What is the BEST reason for the organization to pursue a plan to mitigate client-based attacks?
Client privilege administration is inherently weaker than server privilege administration.
Client hardening and management is easier on clients than on servers.
Client-based attacks are more common and easier to exploit than server and network based attacks.
Client-based attacks have higher financial impact.
The best reason for the organization to pursue a plan to mitigate client-based attacks is that client-based attacks are more common and easier to exploit than server and network based attacks. Client-based attacks are the attacks that target the client applications or systems, such as web browsers, email clients, or media players, and that can exploit the vulnerabilities or weaknesses of the client software or configuration, or the user behavior or interaction. Client-based attacks are more common and easier to exploit than server and network based attacks, because the client applications or systems are more exposed and accessible to the attackers, the client software or configuration is more diverse and complex to secure, and the user behavior or interaction is more unpredictable and prone to errors or mistakes. Therefore, the organization needs to pursue a plan to mitigate client-based attacks, as they pose a significant security threat or risk to the organization’s data, systems, or network. Client privilege administration is inherently weaker than server privilege administration, client hardening and management is easier on clients than on servers, and client-based attacks have higher financial impact are not the best reasons for the organization to pursue a plan to mitigate client-based attacks, as they are not supported by the facts or evidence, or they are not relevant or specific to the client-side security. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8, Software Development Security, page 1050. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 8, Software Development Security, page 1066.
Refer to the information below to answer the question.
A large organization uses unique identifiers and requires them at the start of every system session. Application access is based on job classification. The organization is subject to periodic independent reviews of access controls and violations. The organization uses wired and wireless networks and remote access. The organization also uses secure connections to branch offices and secure backup and recovery strategies for selected information and processes.
In addition to authentication at the start of the user session, best practice would require re-authentication
periodically during a session.
for each business process.
at system sign-off.
after a period of inactivity.
The best practice would require re-authentication after a period of inactivity, in addition to authentication at the start of the user session. Authentication is a process of verifying the identity or the credentials of a user or a device that requests access to a system or a resource. Re-authentication is a process of repeating the authentication after a certain condition or event, such as a change of location, a change of role, a change of privilege, or a period of inactivity. Re-authentication can enhance the security and the accountability of access control, as it can prevent or detect unauthorized or malicious use of credentials and confirm that the user or device is still active and valid. Re-authenticating after a period of inactivity specifically prevents unauthorized access by someone who gains physical access to an unattended session, such as a co-worker, a visitor, or a thief. Re-authenticating periodically during a session, for each business process, or at system sign-off are not the best practices, as they may not be necessary or effective for the security or the accountability of the access control, and they may cause unnecessary inconvenience or frustration to the user. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, Identity and Access Management, page 685. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5, Identity and Access Management, page 701.
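The inactivity rule above amounts to a simple check on each request. This is a minimal sketch with invented names (the 15-minute timeout and the injectable clock are assumptions for illustration):

```python
# Minimal sketch of an inactivity timeout forcing re-authentication.
# Names and the 15-minute value are illustrative assumptions.
import time

IDLE_TIMEOUT = 15 * 60  # seconds of allowed inactivity

class Session:
    def __init__(self, user, now=time.time):
        self.user = user
        self.now = now                  # injectable clock, useful for testing
        self.last_activity = now()
        self.authenticated = True

    def touch(self):
        """Call on each request; expire the session if it sat idle too long."""
        if self.now() - self.last_activity > IDLE_TIMEOUT:
            self.authenticated = False  # caller must prompt for credentials again
        else:
            self.last_activity = self.now()
        return self.authenticated
```

Injecting the clock keeps the timeout logic testable without actually waiting fifteen minutes.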
Host-Based Intrusion Protection (HIPS) systems are often deployed in monitoring or learning mode during their initial implementation. What is the objective of starting in this mode?
Automatically create exceptions for specific actions or files
Determine which files are unsafe to access and blacklist them
Automatically whitelist actions or files known to the system
Build a baseline of normal or safe system events for review
A Host-Based Intrusion Protection (HIPS) system is software that monitors and blocks malicious activities on a single host, such as a computer or a server. A HIPS system can also prevent unauthorized changes to the system configuration, files, or registry.
During the initial implementation, a HIPS system is often deployed in monitoring or learning mode, in which it observes the normal behavior of the system and the applications running on it without blocking or alerting on any events. The objective of starting in this mode is to build a baseline of normal or safe system events that administrators can review, so that legitimate activity is not later mistaken for an attack once enforcement begins.
With a reviewed baseline in place, the HIPS system generates fewer false positives and operates more accurately and efficiently. However, the monitoring or learning mode should not last too long, as it leaves the system exposed to attacks that are not yet detected or prevented. Once a sufficient baseline of normal behavior has been established and reviewed, the HIPS system should be switched to a more proactive mode, such as alerting or blocking mode, which can actively respond to suspicious or malicious events.
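The learning-then-enforcing lifecycle can be sketched in a few lines. This is a toy model (the event representation and method names are invented for illustration, not taken from any real HIPS product):

```python
# Toy model of HIPS learning mode vs. enforcement mode.
# Event shape and names are illustrative assumptions.
class Hips:
    def __init__(self):
        self.baseline = set()   # events observed during learning, for review
        self.learning = True

    def observe(self, process: str, action: str) -> str:
        event = (process, action)
        if self.learning:
            self.baseline.add(event)   # record what "normal" looks like
            return "allowed"
        # Enforcement: anything not seen during learning is suspicious.
        return "allowed" if event in self.baseline else "blocked"

hips = Hips()
hips.observe("backup.exe", "read:/var/data")   # learned as normal activity
hips.learning = False                          # switch to enforcement mode
verdict = hips.observe("evil.exe", "write:/etc/passwd")
```

The crucial step a real deployment adds between the two modes is human review of the baseline, so that any attacker activity captured during learning is not silently whitelisted.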
When dealing with compliance with the Payment Card Industry-Data Security Standard (PCI-DSS), an organization that shares card holder information with a service provider MUST do which of the following?
Perform a service provider PCI-DSS assessment on a yearly basis.
Validate the service provider's PCI-DSS compliance status on a regular basis.
Validate that the service providers security policies are in alignment with those of the organization.
Ensure that the service provider updates and tests its Disaster Recovery Plan (DRP) on a yearly basis.
The action that an organization that shares card holder information with a service provider must do when dealing with compliance with the Payment Card Industry-Data Security Standard (PCI-DSS) is to validate the service provider’s PCI-DSS compliance status on a regular basis. PCI-DSS is a set of security standards that applies to any organization that stores, processes, or transmits card holder data, such as credit or debit card information. PCI-DSS aims to protect the card holder data from unauthorized access, use, disclosure, or theft, and to ensure the security and integrity of the payment transactions. If an organization shares card holder data with a service provider, such as a payment processor, a hosting provider, or a cloud provider, the organization is still responsible for the security and compliance of the card holder data, and must ensure that the service provider also meets the PCI-DSS requirements. The organization must validate the service provider’s PCI-DSS compliance status on a regular basis, by obtaining and reviewing the service provider’s PCI-DSS assessment reports, such as the Self-Assessment Questionnaire (SAQ), the Report on Compliance (ROC), or the Attestation of Compliance (AOC). Performing a service provider PCI-DSS assessment on a yearly basis, validating that the service provider’s security policies are in alignment with those of the organization, and ensuring that the service provider updates and tests its Disaster Recovery Plan (DRP) on a yearly basis are not the actions that an organization that shares card holder information with a service provider must do when dealing with compliance with PCI-DSS, as they are not sufficient or relevant to verify the service provider’s PCI-DSS compliance status or to protect the card holder data. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, Security and Risk Management, page 49. 
Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, Security and Risk Management, page 64.
Refer to the information below to answer the question.
A large, multinational organization has decided to outsource a portion of their Information Technology (IT) organization to a third-party provider’s facility. This provider will be responsible for the design, development, testing, and support of several critical, customer-based applications used by the organization.
The organization should ensure that the third party's physical security controls are in place so that they
are more rigorous than the original controls.
are able to limit access to sensitive information.
allow access by the organization staff at any time.
cannot be accessed by subcontractors of the third party.
The organization should ensure that the third party’s physical security controls are in place so that they are able to limit access to sensitive information. Physical security controls are the measures or the mechanisms that protect the physical assets, such as the hardware, the software, the media, or the personnel, from the unauthorized or the malicious access, damage, or theft. Physical security controls can include locks, fences, guards, cameras, alarms, or biometrics. The organization should ensure that the third party’s physical security controls are able to limit access to sensitive information, as it can prevent or reduce the risk of the data breach, the data loss, or the data corruption, and it can ensure the confidentiality, the integrity, and the availability of the information. The organization should also ensure that the third party’s physical security controls are compliant with the organization’s policies, standards, and regulations, and that they are audited and monitored regularly. The organization should not ensure that the third party’s physical security controls are more rigorous than the original controls, allow access by the organization staff at any time, or cannot be accessed by subcontractors of the third party, as they are related to the level, the scope, or the restriction of the physical security controls, not the ability to limit access to sensitive information. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7, Security Operations, page 849. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 7, Security Operations, page 865.
From a security perspective, which of the following is a best practice to configure a Domain Name Service (DNS) system?
Configure secondary servers to use the primary server as a zone forwarder.
Block all Transmission Control Protocol (TCP) connections.
Disable all recursive queries on the name servers.
Limit zone transfers to authorized devices.
From a security perspective, the best practice to configure a DNS system is to limit zone transfers to authorized devices. Zone transfers are the processes of replicating the DNS data from one server to another, usually from a primary server to a secondary server. Zone transfers can expose sensitive information about the network topology, hosts, and services to attackers, who can use this information to launch further attacks. Therefore, zone transfers should be restricted to only the devices that need them, and authenticated and encrypted to prevent unauthorized access or modification. The other options are not as good as limiting zone transfers, as they either do not provide sufficient security for the DNS system (A and B), or do not address the zone transfer issue (C). References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4, page 156; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 4, page 166.
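In BIND, for example, this restriction is expressed with an `allow-transfer` statement. The fragment below is an illustrative sketch (the zone name, file path, and addresses are examples from documentation ranges, not a recommended production configuration):

```
# Illustrative BIND named.conf fragment; addresses are example values.
acl "secondaries" { 192.0.2.53; 198.51.100.53; };   # known secondary servers

options {
    allow-recursion { none; };        # authoritative-only server: refuse recursion
};

zone "example.com" {
    type master;
    file "zones/example.com.db";
    allow-transfer { secondaries; };  # zone transfers only to listed hosts
};
```

In practice the transfers would additionally be authenticated with TSIG keys, since source addresses alone can be spoofed.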
What is the BEST method to detect the most common improper initialization problems in programming languages?
Use and specify a strong character encoding.
Use automated static analysis tools that target this type of weakness.
Perform input validation on any numeric inputs by assuring that they are within the expected range.
Use data flow analysis to minimize the number of false positives.
The best method to detect the most common improper initialization problems in programming languages is to use automated static analysis tools that target this type of weakness. Improper initialization is a type of programming error that occurs when a variable or a data structure is not assigned a valid initial value before it is used. This can lead to undefined behavior, memory corruption, or security vulnerabilities. Automated static analysis tools are software tools that can scan, analyze, and test the source code of a program for errors, flaws, or vulnerabilities, without executing the program. By using automated static analysis tools that target improper initialization problems, the programmer can identify and fix the potential issues before they cause any harm or damage. Use and specify a strong character encoding, perform input validation on any numeric inputs by assuring that they are within the expected range, and use data flow analysis to minimize the number of false positives are not the best methods to detect the most common improper initialization problems in programming languages, as they do not directly address the root cause of the problem or provide the same level of coverage and accuracy as automated static analysis tools. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8, Software Development Security, page 1018. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 8, Software Development Security, page 1040.
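The pattern such tools most commonly flag is a variable assigned on only some control-flow paths. A minimal example (function names and values invented for illustration; pylint reports this as `possibly-used-before-assignment`):

```python
# Classic improper-initialization bug that static analyzers flag:
# "rate" is only assigned on one branch of the conditional.
def discount_buggy(order_total: float, is_member: bool) -> float:
    if is_member:
        rate = 0.25
    # No else branch: "rate" is uninitialized on the non-member path.
    return order_total * rate   # raises UnboundLocalError when is_member is False

def discount_fixed(order_total: float, is_member: bool) -> float:
    rate = 0.0                  # initialize before any branch executes
    if is_member:
        rate = 0.25
    return order_total * rate
```

In Python the error surfaces as a runtime exception; in C or C++ the same pattern silently reads indeterminate memory, which is why static detection before execution matters even more there.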
What is the MOST effective method for gaining unauthorized access to a file protected with a long complex password?
Brute force attack
Frequency analysis
Social engineering
Dictionary attack
The most effective method for gaining unauthorized access to a file protected with a long complex password is social engineering. Social engineering is a type of attack that exploits the human factor or the psychological weaknesses of the target, such as trust, curiosity, greed, or fear, to manipulate them into revealing sensitive information, such as passwords, or performing malicious actions, such as opening malicious attachments or clicking malicious links. Social engineering can bypass the technical security controls, such as encryption or authentication, and can be more efficient and successful than other methods that rely on brute force or guesswork. Brute force attack, frequency analysis, and dictionary attack are not the most effective methods for gaining unauthorized access to a file protected with a long complex password, as they require a lot of time, resources, and computing power, and they can be thwarted by the use of strong passwords, password policies, or password managers. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6, Security Assessment and Testing, page 813. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 6, Security Assessment and Testing, page 829.
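The arithmetic behind the "brute force takes too long" claim is worth seeing once. Assuming a 12-character password drawn from the 95 printable ASCII characters and an attacker rate of ten billion guesses per second (a generous assumption for the attacker):

```python
# Back-of-the-envelope cost of brute-forcing a long complex password.
# The guess rate is an illustrative assumption.
keyspace = 95 ** 12                    # possible 12-char printable-ASCII passwords
guesses_per_second = 10_000_000_000    # assumed attacker throughput
seconds = keyspace / guesses_per_second
years = seconds / (365 * 24 * 3600)    # full-search time; expected time is half
```

Even a full search runs to over a million years at that rate, which is why attackers route around the mathematics entirely and target the human instead.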
Refer to the information below to answer the question.
An organization experiencing a negative financial impact is forced to reduce budgets and the number of Information Technology (IT) operations staff performing basic logical access security administration functions. Security processes have been tightly integrated into normal IT operations and are not separate and distinct roles.
When determining appropriate resource allocation, which of the following is MOST important to monitor?
Number of system compromises
Number of audit findings
Number of staff reductions
Number of additional assets
The most important factor to monitor when determining appropriate resource allocation is the number of system compromises. The number of system compromises is the count or the frequency of the security incidents or breaches that affect the confidentiality, the integrity, or the availability of the system data or functionality, and that are caused by the unauthorized or the malicious access or activity. The number of system compromises can help to determine appropriate resource allocation, as it can indicate the level of security risk or threat that the system faces, and the level of security protection or improvement that the system needs. The number of system compromises can also help to evaluate the effectiveness or the efficiency of the current resource allocation, and to identify the areas or the domains that require more or less resources. Number of audit findings, number of staff reductions, and number of additional assets are not the most important factors to monitor when determining appropriate resource allocation, as they are related to the results or the outcomes of the audit process, the changes or the impacts of the staff size, or the additions or the expansions of the system resources, not the security incidents or breaches that affect the system data or functionality. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7, Security Operations, page 863. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 7, Security Operations, page 879.
Without proper signal protection, embedded systems may be prone to which type of attack?
Brute force
Tampering
Information disclosure
Denial of Service (DoS)
The type of attack that embedded systems may be prone to without proper signal protection is information disclosure. Information disclosure is a type of attack that exposes or reveals sensitive or confidential information to unauthorized parties, such as attackers, competitors, or the public. Information disclosure can occur through various means, such as interception, leakage, or theft of the information. Embedded systems are systems that are integrated into other devices or machines, such as cars, medical devices, or industrial controllers, and perform specific functions or tasks. Embedded systems may communicate with other systems or devices through signals, such as radio frequency, infrared, or sound waves. Without proper signal protection, such as encryption, authentication, or shielding, embedded systems may be vulnerable to information disclosure, as the signals may be captured, analyzed, or modified by attackers, and the information contained in the signals may be compromised. Brute force, tampering, and denial of service are not the types of attack that embedded systems may be prone to without proper signal protection, as they are related to the guessing, alteration, or prevention of the access or functionality of the systems, not the exposure or revelation of the information. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 3, Security Architecture and Engineering, page 311. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 3, Security Architecture and Engineering, page 327.
Which of the following describes the concept of a Single Sign -On (SSO) system?
Users are authenticated to one system at a time.
Users are identified to multiple systems with several credentials.
Users are authenticated to multiple systems with one login.
Only one user is using the system at a time.
Single Sign-On (SSO) is a technology that allows users to securely access multiple applications and services using just one set of credentials, such as a username and a password.
With SSO, users do not have to remember and enter multiple passwords for different applications and services, which can improve their convenience and productivity. SSO also enhances security, as users can use stronger passwords, avoid reusing passwords, and comply with password policies more easily. Moreover, SSO reduces the risk of phishing, credential theft, and password fatigue.
SSO is based on the concept of federated identity, which means that the identity of a user is shared and trusted across different systems that have established a trust relationship. SSO uses various protocols and standards, such as SAML, OAuth, OIDC, and Kerberos, to enable the exchange of identity information and authentication tokens between the systems.
Refer to the information below to answer the question.
A new employee is given a laptop computer with full administrator access. This employee does not have a personal computer at home and has a child that uses the computer to send and receive e-mail, search the web, and use instant messaging. The organization’s Information Technology (IT) department discovers that a peer-to-peer program has been installed on the computer using the employee's access.
Which of the following documents explains the proper use of the organization's assets?
Human resources policy
Acceptable use policy
Code of ethics
Access control policy
The document that explains the proper use of the organization’s assets is the acceptable use policy. An acceptable use policy is a document that defines the rules and guidelines for the appropriate and responsible use of the organization’s information systems and resources, such as computers, networks, or devices. An acceptable use policy can help to prevent or reduce the misuse, abuse, or damage of the organization’s assets, and to protect the security, privacy, and reputation of the organization and its users. An acceptable use policy can also specify the consequences or penalties for violating the policy, such as disciplinary actions, termination, or legal actions. A human resources policy, a code of ethics, and an access control policy are not the documents that explain the proper use of the organization’s assets, as they are related to the management, values, or authorization of the organization’s employees or users, not the usage or responsibility of the organization’s information systems or resources. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, Security and Risk Management, page 47. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, Security and Risk Management, page 62.
An organization's data policy MUST include a data retention period which is based on
application dismissal.
business procedures.
digital certificates expiration.
regulatory compliance.
An organization’s data policy must include a data retention period that is based on regulatory compliance. Regulatory compliance is the adherence to the laws, regulations, and standards that apply to the organization’s industry, sector, or jurisdiction. Regulatory compliance may dictate how long the organization must retain certain types of data, such as financial records, health records, or tax records, and how the data must be stored, protected, and disposed of. The organization must follow the regulatory compliance requirements for data retention to avoid legal liabilities, fines, or sanctions. The other options are not the basis for data retention period, as they either do not relate to the data policy (A and C), or do not have the same level of authority or obligation (B). References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 2, page 68; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 2, page 74.
Refer to the information below to answer the question.
Desktop computers in an organization were sanitized for re-use in an equivalent security environment. The data was destroyed in accordance with organizational policy and all marking and other external indications of the sensitivity of the data that was formerly stored on the magnetic drives were removed.
Organizational policy requires the deletion of user data from Personal Digital Assistant (PDA) devices before disposal. It may not be possible to delete the user data if the device is malfunctioning. Which destruction method below provides the BEST assurance that the data has been removed?
Knurling
Grinding
Shredding
Degaussing
The best destruction method that provides the assurance that the data has been removed from a malfunctioning PDA device is shredding. Shredding is a method of physically destroying the media, such as flash memory cards, by cutting or tearing them into small pieces that make the data unrecoverable. Shredding can be effective in removing the data from a PDA device that cannot be deleted by software or firmware methods, as it does not depend on the functionality of the device or the media. Shredding can also prevent the reuse or the recycling of the media or the device, as it renders them unusable. Knurling and grinding alter the surface or shape of the media without guaranteeing that all data-bearing material is destroyed, and degaussing is ineffective here because it removes data by disrupting magnetic fields, whereas PDA devices typically store user data on flash memory, which is not magnetic. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7, Security Operations, page 889. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 7, Security Operations, page 905.
Refer to the information below to answer the question.
An organization has hired an information security officer to lead their security department. The officer has adequate people resources but is lacking the other necessary components to have an effective security program. There are numerous initiatives requiring security involvement.
The security program can be considered effective when
vulnerabilities are proactively identified.
audits are regularly performed and reviewed.
backups are regularly performed and validated.
risk is lowered to an acceptable level.
The security program can be considered effective when the risk is lowered to an acceptable level. The risk is the possibility or the likelihood of a threat exploiting a vulnerability, and causing a negative impact or a consequence to the organization’s assets, operations, or objectives. The security program is a set of activities and initiatives that aim to protect the organization’s information systems and resources from the security threats and risks, and to support the organization’s business needs and requirements. The security program can be considered effective when it achieves its goals and objectives, and when it reduces the risk to a level that is acceptable or tolerable by the organization, based on its risk appetite or tolerance. Vulnerabilities are proactively identified, audits are regularly performed and reviewed, and backups are regularly performed and validated are not the criteria to measure the effectiveness of the security program, as they are related to the methods or the processes of the security program, not the outcomes or the results of the security program. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, Security and Risk Management, page 24. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, Security and Risk Management, page 39.
With data labeling, which of the following MUST be the key decision maker?
Information security
Departmental management
Data custodian
Data owner
With data labeling, the data owner must be the key decision maker. The data owner is the person or entity that has the authority and responsibility for the data, including its classification, protection, and usage. The data owner must decide how to label the data according to its sensitivity, criticality, and value, and communicate the labeling scheme to the data custodians and users. The data owner must also review and update the data labels as needed. The other options are not the key decision makers for data labeling: information security advises on and enforces the labeling scheme, departmental management may use the data but does not necessarily own it, and the data custodian implements controls and handles the data on behalf of the data owner rather than deciding its classification. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 2, page 63; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 2, page 69.
Which of the following is the MOST effective attack against cryptographic hardware modules?
Plaintext
Brute force
Power analysis
Man-in-the-middle (MITM)
The most effective attack against cryptographic hardware modules is power analysis. Power analysis is a type of side-channel attack that exploits the physical characteristics or behavior of a cryptographic device, such as a smart card, a hardware security module, or a cryptographic processor, to extract secret information, such as keys, passwords, or algorithms. Power analysis measures the power consumption or the electromagnetic radiation of the device, and analyzes the variations or patterns that correspond to the cryptographic operations or the data being processed. Power analysis can reveal the internal state or the logic of the device, and can bypass the security mechanisms or the tamper resistance of the device. Power analysis can be performed with low-cost and widely available equipment, and can be very difficult to detect or prevent. Plaintext, brute force, and man-in-the-middle (MITM) are not the most effective attacks against cryptographic hardware modules, as they are related to the encryption or transmission of the data, not the physical properties or behavior of the device. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, Cryptography and Symmetric Key Algorithms, page 628. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5, Cryptography and Symmetric Key Algorithms, page 644.
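The principle behind power analysis can be demonstrated with a small simulation (an illustration of the statistical idea only, not an attack on real hardware): model each "power sample" as the Hamming weight of an intermediate value that depends on a secret key byte, add measurement noise, and recover the key byte by correlating each guess's predicted leakage against the traces. The SHA-256 lookup below is an assumed stand-in for a real nonlinear step such as the AES S-box:

```python
# Simulated correlation power analysis on a toy leakage model.
# The SHA-256-derived table is a stand-in for a real cipher's S-box.
import hashlib
import random

def hw(v: int) -> int:
    """Hamming weight: number of set bits, a common power-consumption model."""
    return bin(v).count("1")

# Precomputed leakage (in "power units") of the nonlinear intermediate.
LEAK_HW = [hw(hashlib.sha256(bytes([v])).digest()[0]) for v in range(256)]

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

rng = random.Random(1)
SECRET = 0x3C
plaintexts = [rng.randrange(256) for _ in range(2000)]
# Each "measured" sample: leakage of f(p XOR key) plus Gaussian noise.
traces = [LEAK_HW[p ^ SECRET] + rng.gauss(0, 0.5) for p in plaintexts]

def best_guess():
    # The key guess whose predicted leakage best matches the measurements.
    def score(guess):
        return abs(pearson([LEAK_HW[p ^ guess] for p in plaintexts], traces))
    return max(range(256), key=score)
```

Only the correct guess produces predictions that track the traces; wrong guesses decorrelate because the nonlinear step scrambles them, which is why the attack succeeds despite substantial measurement noise.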
Refer to the information below to answer the question.
A new employee is given a laptop computer with full administrator access. This employee does not have a personal computer at home and has a child that uses the computer to send and receive e-mail, search the web, and use instant messaging. The organization’s Information Technology (IT) department discovers that a peer-to-peer program has been installed on the computer using the employee's access.
Which of the following solutions would have MOST likely detected the use of peer-to-peer programs when the computer was connected to the office network?
Anti-virus software
Intrusion Prevention System (IPS)
Anti-spyware software
Integrity checking software
The best solution to detect the use of P2P programs when the computer was connected to the office network is an Intrusion Prevention System (IPS). An IPS is a device or a software that monitors, analyzes, and blocks the network traffic based on the predefined rules or policies, and that can prevent or stop any unauthorized or malicious access or activity on the network, such as P2P programs. An IPS can detect the use of P2P programs by inspecting the network packets, identifying the P2P protocols or signatures, and blocking or dropping the P2P traffic. Anti-virus software, anti-spyware software, and integrity checking software are not the best solutions to detect the use of P2P programs when the computer was connected to the office network, as they are related to the protection, removal, or verification of the software or files on the computer, not the monitoring, analysis, or blocking of the network traffic. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4, Communication and Network Security, page 512. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 4, Communication and Network Security, page 528.
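As a rough sketch of how an IPS identifies P2P traffic, consider signature matching on packet payloads. Real IPS engines (e.g. Snort or Suricata) use rule languages, protocol decoding, and stateful inspection; the snippet below only illustrates the signature idea, using the real BitTorrent handshake prefix and Gnutella connect banner as example patterns.

```python
# Byte patterns that mark known P2P protocols. The BitTorrent handshake
# begins with a length byte 0x13 followed by "BitTorrent protocol";
# Gnutella sessions open with a "GNUTELLA CONNECT" banner.
P2P_SIGNATURES = {
    b"\x13BitTorrent protocol": "BitTorrent handshake",
    b"GNUTELLA CONNECT": "Gnutella connect",
}

def inspect_payload(payload: bytes) -> list[str]:
    """Return the names of any known P2P signatures found in a payload."""
    return [name for sig, name in P2P_SIGNATURES.items() if sig in payload]

alerts = inspect_payload(b"\x13BitTorrent protocol" + bytes(8))
# An inline IPS would drop the packet when alerts is non-empty;
# an IDS would merely log it.
```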
Which of the following is the MAIN goal of a data retention policy?
Ensure that data is destroyed properly.
Ensure that data recovery can be done on the data.
Ensure the integrity and availability of data for a predetermined amount of time.
Ensure the integrity and confidentiality of data for a predetermined amount of time.
A data retention policy is a document that specifies the rules and guidelines for how long and under what conditions an organization should retain its data. The main goal of a data retention policy is to ensure the integrity and confidentiality of data for a predetermined amount of time. Integrity means that the data is accurate, complete, and consistent, and that it has not been modified or corrupted by unauthorized parties. Confidentiality means that the data is protected from unauthorized access or disclosure, and that it respects the privacy and security of the data owners or subjects. A data retention policy can help an organization to comply with the legal or regulatory requirements, to support the business or operational needs, and to reduce the storage or management costs of the data. Ensuring that data is destroyed properly, ensuring that data recovery can be done on the data, and ensuring the availability of data for a predetermined amount of time are not the main goals of a data retention policy, as they are related to the disposal, restoration, or accessibility of the data, not the integrity or confidentiality of the data. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, Security and Risk Management, page 49. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, Security and Risk Management, page 64.
Which of the following problems is not addressed by using OAuth (Open Authorization) 2.0 to integrate a third-party identity provider for a service?
Resource Servers are required to use passwords to authenticate end users.
Revocation of access of some users of the third party instead of all the users from the third party.
Compromise of the third party means compromise of all the users in the service.
Guest users need to authenticate with the third party identity provider.
The problem that is not addressed by using OAuth 2.0 to integrate a third-party identity provider for a service is that resource servers are required to use passwords to authenticate end users. OAuth 2.0 is a framework that enables a third-party application to obtain limited access to a protected resource on behalf of a resource owner, without exposing the resource owner’s credentials to the third-party application. OAuth 2.0 relies on an authorization server that acts as an identity provider and issues access tokens to the third-party application, based on the resource owner’s consent and the scope of the access request. OAuth 2.0 does not address the authentication of the resource owner or the end user by the resource server, which is the server that hosts the protected resource. The resource server may still require the resource owner or the end user to use passwords or other methods to authenticate themselves, before granting access to the protected resource. Revocation of access of some users of the third party instead of all the users from the third party, compromise of the third party means compromise of all the users in the service, and guest users need to authenticate with the third party identity provider are problems that are addressed by using OAuth 2.0 to integrate a third-party identity provider for a service, as they are related to the delegation, revocation, or granularity of the access control or the identity management. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, Identity and Access Management, page 692. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5, Identity and Access Management, page 708.
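To make the delegation concrete, the sketch below builds an OAuth 2.0 authorization-code request (RFC 6749, section 4.1.1). The endpoint and client ID are hypothetical placeholders. Note that no resource-owner password ever appears in the request: the user authenticates at the identity provider, and the client only receives an authorization code to exchange for an access token.

```python
from urllib.parse import urlencode

# Hypothetical identity-provider endpoint, for illustration only.
AUTHZ_ENDPOINT = "https://idp.example.com/authorize"

def build_authorization_url(client_id, redirect_uri, scope, state):
    """Build the front-channel authorization request of the code flow."""
    params = {
        "response_type": "code",   # ask for an authorization code
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": scope,            # limits what the resulting token allows
        "state": state,            # opaque value to prevent CSRF
    }
    return f"{AUTHZ_ENDPOINT}?{urlencode(params)}"

url = build_authorization_url("demo-app", "https://app.example.com/cb",
                              "profile email", "xyz123")
```

The `scope` parameter is what enables per-user, per-permission revocation at the authorization server, one of the problems the question says OAuth does address.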
Which of the following violates identity and access management best practices?
User accounts
System accounts
Generic accounts
Privileged accounts
The type of accounts that violates identity and access management best practices is generic accounts. Generic accounts are accounts that are shared by multiple users or devices, and do not have a specific or unique identity associated with them. Generic accounts are often used for convenience, compatibility, or legacy reasons, but they pose a serious security risk, as they can compromise the accountability, traceability, and auditability of the actions and activities performed by the users or devices. Generic accounts can also enable unauthorized or malicious access, as they may have weak or default passwords, or may not have proper access control or monitoring mechanisms. User accounts, system accounts, and privileged accounts are not the types of accounts that violate identity and access management best practices, as they are accounts that have a specific or unique identity associated with them, and can be subject to proper authentication, authorization, and auditing measures. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, Identity and Access Management, page 660. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5, Identity and Access Management, page 676.
Which item below is a federated identity standard?
802.11i
Kerberos
Lightweight Directory Access Protocol (LDAP)
Security Assertion Markup Language (SAML)
A federated identity standard is Security Assertion Markup Language (SAML). SAML is a standard that enables the exchange of authentication and authorization information between different parties, such as service providers and identity providers, using XML-based messages called assertions. SAML can facilitate the single sign-on (SSO) process, which allows a user to access multiple services or applications with a single login session, without having to provide their credentials multiple times. SAML can also support the federated identity management, which allows a user to use their identity or credentials from one domain or organization to access the services or applications from another domain or organization, without having to create or maintain separate accounts. 802.11i, Kerberos, and LDAP are not federated identity standards, as they are related to the wireless network security, the network authentication protocol, or the directory service protocol, not the exchange of authentication and authorization information between different parties. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, Identity and Access Management, page 692. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5, Identity and Access Management, page 708.
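The assertion exchange can be pictured with a stripped-down example. The XML below is a drastically simplified, unsigned stand-in for a real SAML 2.0 assertion (which would also carry an XML signature, conditions, and audience restrictions); parsing out the Issuer and NameID shows the core information a service provider relies on.

```python
import xml.etree.ElementTree as ET

# Drastically simplified, unsigned assertion for illustration only.
ASSERTION = """\
<saml:Assertion xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion">
  <saml:Issuer>https://idp.example.org</saml:Issuer>
  <saml:Subject>
    <saml:NameID>alice@example.org</saml:NameID>
  </saml:Subject>
</saml:Assertion>
"""

NS = {"saml": "urn:oasis:names:tc:SAML:2.0:assertion"}

def extract_subject(assertion_xml):
    """Return (issuer, name_id): who vouches for the user, and who the user is."""
    root = ET.fromstring(assertion_xml)
    issuer = root.find("saml:Issuer", NS).text
    name_id = root.find("saml:Subject/saml:NameID", NS).text
    return issuer, name_id

issuer, name_id = extract_subject(ASSERTION)
```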
Which of the following MUST system and database administrators be aware of and apply when configuring systems used for storing personal employee data?
Secondary use of the data by business users
The organization's security policies and standards
The business purpose for which the data is to be used
The overall protection of corporate resources and data
When configuring systems used for storing personal employee data, system and database administrators must be aware of and apply the organization’s security policies and standards. Security policies and standards are the documents that define the rules, guidelines, and procedures governing the security of the organization’s information systems and data; they help ensure the confidentiality, integrity, and availability of that data and compliance with legal and regulatory requirements. Administrators must apply them because they are responsible for implementing and maintaining the security controls that protect the personal employee data from unauthorized access, use, disclosure, or theft. Secondary use of the data by business users, the business purpose for which the data is to be used, and the overall protection of corporate resources and data concern the usage, purpose, or scope of the data, not its secure configuration. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, Security and Risk Management, page 35. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, Security and Risk Management, page 48.
Which of the following is the BEST countermeasure to brute force login attacks?
Changing all canonical passwords
Decreasing the number of concurrent user sessions
Restricting initial password delivery only in person
Introducing a delay after failed system access attempts
The best countermeasure to brute force login attacks is to introduce a delay after failed system access attempts. A brute force login attack is a type of attack that tries to guess the username and password of a system or account by using a large number of possible combinations, usually with the help of automated tools or scripts. A delay after failed system access attempts is a security mechanism that imposes a waiting time or a penalty before allowing another login attempt, after a certain number of unsuccessful attempts. This can slow down or discourage the brute force login attack, as it increases the time and effort required to find the correct credentials. Changing all canonical passwords, decreasing the number of concurrent user sessions, and restricting initial password delivery only in person are not the best countermeasures to brute force login attacks, as they do not directly address the frequency or speed of the login attempts or the use of automated tools or scripts. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, Identity and Access Management, page 685. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5, Identity and Access Management, page 701.
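One way to implement such a delay is exponential backoff keyed on the account, as in the in-memory sketch below. This is illustrative only: a real system would persist the counters, also throttle per source address, and eventually lock the account.

```python
failed_attempts = {}        # username -> consecutive failures (in-memory sketch)
FREE_ATTEMPTS = 3           # failures allowed before delays kick in
BASE_DELAY_SECONDS = 1.0

def delay_before_next_attempt(username):
    """0s for the first few failures, then 1s, 2s, 4s, 8s, ..."""
    failures = failed_attempts.get(username, 0)
    if failures < FREE_ATTEMPTS:
        return 0.0
    return BASE_DELAY_SECONDS * 2 ** (failures - FREE_ATTEMPTS)

def record_attempt(username, success):
    if success:
        failed_attempts.pop(username, None)   # reset on successful login
    else:
        failed_attempts[username] = failed_attempts.get(username, 0) + 1

for _ in range(5):
    record_attempt("alice", success=False)
print(delay_before_next_attempt("alice"))  # 4.0 after five straight failures
```

Doubling the wait on each failure makes an automated guessing run exponentially slower while barely inconveniencing a legitimate user who mistypes a password once or twice.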
Refer to the information below to answer the question.
An organization has hired an information security officer to lead their security department. The officer has adequate people resources but is lacking the other necessary components to have an effective security program. There are numerous initiatives requiring security involvement.
The effectiveness of the security program can PRIMARILY be measured through
audit findings.
risk elimination.
audit requirements.
customer satisfaction.
The primary way to measure the effectiveness of the security program is through the audit findings. Audit findings are the results of the audit process, a systematic and independent examination of the security activities and initiatives that determines whether they comply with the security policies and standards and whether they achieve the security objectives and goals. Audit findings identify the strengths and weaknesses, successes and failures, and gaps and risks of the security program, and provide recommendations and feedback for its improvement. Risk elimination, audit requirements, and customer satisfaction are not primary measures of effectiveness: risk can be reduced but never eliminated, audit requirements are inputs to the audit rather than its results, and customer satisfaction reflects service quality rather than security posture. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, Security and Risk Management, page 39. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, Security and Risk Management, page 54.
Which of the following is the MOST crucial for a successful audit plan?
Defining the scope of the audit to be performed
Identifying the security controls to be implemented
Working with the system owner on new controls
Acquiring evidence of systems that are not compliant
An audit is an independent and objective examination of an organization’s activities, systems, processes, or controls to evaluate their adequacy, effectiveness, efficiency, and compliance with applicable standards, policies, laws, or regulations. An audit plan is a document that outlines the objectives, scope, methodology, criteria, schedule, and resources of an audit. The most crucial element of a successful audit plan is defining the scope of the audit to be performed, which is the extent and boundaries of the audit, such as the subject matter, the time period, the locations, the departments, the functions, the systems, or the processes to be audited. The scope of the audit determines what will be included or excluded from the audit, and it helps to ensure that the audit objectives are met and the audit resources are used efficiently and effectively. Identifying the security controls to be implemented, working with the system owner on new controls, and acquiring evidence of systems that are not compliant are all important tasks in an audit, but they are not the most crucial for a successful audit plan, as they depend on the scope of the audit to be defined first. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, Security and Risk Management, page 54. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, Security and Risk Management, page 69.
Refer to the information below to answer the question.
A security practitioner detects client-based attacks on the organization’s network. A plan will be necessary to address these concerns.
What MUST the plan include in order to reduce client-side exploitation?
Approved web browsers
Network firewall procedures
Proxy configuration
Employee education
The plan must include employee education in order to reduce client-side exploitation. Employee education gives employees the knowledge, skills, and awareness to follow security policies and procedures and to avoid common threats such as client-side exploitation. Client-side exploitation targets vulnerabilities in client applications or systems, such as web browsers, email clients, or media players; it can compromise client data or functionality, or give the attacker a foothold on the network or the server. Employee education reduces client-side exploitation by teaching employees how to recognize and avoid malicious or suspicious links, attachments, and downloads; how to update and patch their client applications; how to use security tools such as antivirus and firewall software; and how to report and respond to security incidents. Approved web browsers, network firewall procedures, and proxy configuration are technical or administrative controls; they do not address the human and behavioral factors that drive client-side exploitation. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, Security and Risk Management, page 47. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, Security and Risk Management, page 62.
Refer to the information below to answer the question.
A security practitioner detects client-based attacks on the organization’s network. A plan will be necessary to address these concerns.
In the plan, what is the BEST approach to mitigate future internal client-based attacks?
Block all client side web exploits at the perimeter.
Remove all non-essential client-side web services from the network.
Screen for harmful exploits of client-side services before implementation.
Harden the client image before deployment.
The best approach to mitigate future internal client-based attacks is to harden the client image before deployment. Hardening the client image means to apply the security configurations and measures to the client operating system and applications, such as disabling unnecessary services, installing patches and updates, enforcing strong passwords, and enabling encryption and firewall. Hardening the client image can help to reduce the attack surface and the vulnerabilities of the client, and to prevent or resist the client-based attacks, such as web exploits, malware, or phishing. Blocking all client side web exploits at the perimeter, removing all non-essential client-side web services from the network, and screening for harmful exploits of client-side services before implementation are not the best approaches to mitigate future internal client-based attacks, as they are related to the network or the server level, not the client level, and they may not address all the possible types or sources of the client-based attacks. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 3, Security Architecture and Engineering, page 295. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 3, Security Architecture and Engineering, page 311.
Which of the following types of technologies would be the MOST cost-effective method to provide a reactive control for protecting personnel in public areas?
Install mantraps at the building entrances
Enclose the personnel entry area with polycarbonate plastic
Supply a duress alarm for personnel exposed to the public
Hire a guard to protect the public area
Supplying a duress alarm for personnel exposed to the public is the most cost-effective method to provide a reactive control for protecting personnel in public areas. A duress alarm is a device that allows a person to signal for help in case of an emergency, such as an attack, a robbery, or a medical condition. A duress alarm can be activated by pressing a button, pulling a cord, or speaking a code word. A duress alarm can alert security personnel, law enforcement, or other responders to the location and nature of the emergency, and initiate appropriate actions. A duress alarm is a reactive control because it responds to an incident after it has occurred, rather than preventing it from happening.
The other options are not as cost-effective as supplying a duress alarm, as they involve more expensive or complex technologies or resources. Installing mantraps at the building entrances is a preventive control that restricts the access of unauthorized persons to the facility, but it also requires more space, maintenance, and supervision. Enclosing the personnel entry area with polycarbonate plastic is a preventive control that protects the personnel from physical attacks, but it also reduces the visibility and ventilation of the area. Hiring a guard to protect the public area is a deterrent control that discourages potential attackers, but it also involves paying wages, benefits, and training costs.
All of the following items should be included in a Business Impact Analysis (BIA) questionnaire EXCEPT questions that
determine the risk of a business interruption occurring
determine the technological dependence of the business processes
identify the operational impacts of a business interruption
identify the financial impacts of a business interruption
A Business Impact Analysis (BIA) is a process that identifies and evaluates the potential effects of natural and man-made disasters on business operations. The BIA questionnaire is a tool that collects information from business process owners and stakeholders about the criticality, dependencies, recovery objectives, and resources of their processes. The BIA questionnaire should include questions that determine the technological dependence of the business processes, identify the operational impacts of a business interruption, and identify the financial impacts of a business interruption.
The BIA questionnaire should not include questions that determine the risk of a business interruption occurring, as this is part of the risk assessment process, which is a separate activity from the BIA. The risk assessment process identifies and analyzes the threats and vulnerabilities that could cause a business interruption, and estimates the likelihood and impact of such events. The risk assessment process also evaluates the existing controls and mitigation strategies, and recommends additional measures to reduce the risk to an acceptable level.
Which of the following represents the GREATEST risk to data confidentiality?
Network redundancies are not implemented
Security awareness training is not completed
Backup tapes are generated unencrypted
Users have administrative privileges
Generating backup tapes unencrypted represents the greatest risk to data confidentiality, as it exposes the data to unauthorized access or disclosure if the tapes are lost, stolen, or intercepted. Backup tapes are often stored off-site or transported to remote locations, which increases the chances of them falling into the wrong hands. If the backup tapes are unencrypted, anyone who obtains them can read the data without any difficulty. Therefore, backup tapes should always be encrypted using strong algorithms and keys, and the keys should be protected and managed separately from the tapes.
The other options do not pose as much risk to data confidentiality as generating backup tapes unencrypted. If network redundancies are not implemented, the availability and reliability of the network suffer, but not necessarily the confidentiality of the data. If security awareness training is not completed, human errors or negligence that could compromise the data become more likely, but the exposure is less direct. If users have administrative privileges, they gain broad access and control over the system and the data, but the exposure is still narrower than that of unencrypted backup tapes leaving the organization’s control.
An important principle of defense in depth is that achieving information security requires a balanced focus on which PRIMARY elements?
Development, testing, and deployment
Prevention, detection, and remediation
People, technology, and operations
Certification, accreditation, and monitoring
An important principle of defense in depth is that achieving information security requires a balanced focus on the primary elements of people, technology, and operations. People are the users, administrators, managers, and other stakeholders who are involved in the security process. They need to be aware, trained, motivated, and accountable for their security roles and responsibilities. Technology is the hardware, software, network, and other tools that are used to implement the security controls and measures. They need to be selected, configured, updated, and monitored according to the security standards and best practices. Operations are the policies, procedures, processes, and activities that are performed to achieve the security objectives and requirements. They need to be documented, reviewed, audited, and improved continuously to ensure their effectiveness and efficiency.
The other options are not the primary elements of defense in depth, but rather the phases, functions, or outcomes of the security process. Development, testing, and deployment are the phases of the security life cycle, which describes how security is integrated into the system development process. Prevention, detection, and remediation are the functions of the security management, which describes how security is maintained and improved over time. Certification, accreditation, and monitoring are the outcomes of the security evaluation, which describes how security is assessed and verified against the criteria and standards.
When assessing an organization’s security policy according to standards established by the International Organization for Standardization (ISO) 27001 and 27002, when can management responsibilities be defined?
Only when assets are clearly defined
Only when standards are defined
Only when controls are put in place
Only when procedures are defined
When assessing an organization’s security policy according to standards established by the ISO 27001 and 27002, management responsibilities can be defined only when standards are defined. Standards are the specific rules, guidelines, or procedures that support the implementation of the security policy. Standards define the minimum level of security that must be achieved by the organization, and provide the basis for measuring compliance and performance. Standards also assign roles and responsibilities to different levels of management and staff, and specify the reporting and escalation procedures.
Management responsibilities are the duties and obligations that managers have to ensure the effective and efficient execution of the security policy and standards. Management responsibilities include providing leadership, direction, support, and resources for the security program, establishing and communicating the security objectives and expectations, ensuring compliance with the legal and regulatory requirements, monitoring and reviewing the security performance and incidents, and initiating corrective and preventive actions when needed.
Management responsibilities cannot be defined without standards, as standards provide the framework and criteria for defining what managers need to do and how they need to do it. Management responsibilities also depend on the scope and complexity of the security policy and standards, which may vary depending on the size, nature, and context of the organization. Therefore, standards must be defined before management responsibilities can be defined.
The other options are not correct, as they are not prerequisites for defining management responsibilities. Assets are the resources that need to be protected by the security policy and standards, but they do not determine the management responsibilities. Controls are the measures that are implemented to reduce the security risks and achieve the security objectives, but they do not determine the management responsibilities. Procedures are the detailed instructions that describe how to perform the security tasks and activities, but they do not determine the management responsibilities.
A company whose Information Technology (IT) services are being delivered from a Tier 4 data center, is preparing a companywide Business Continuity Planning (BCP). Which of the following failures should the IT manager be concerned with?
Application
Storage
Power
Network
A company whose IT services are being delivered from a Tier 4 data center should be most concerned with application failures when preparing a companywide BCP. A BCP is a document that describes how an organization will continue its critical business functions in the event of a disruption or disaster. A BCP should include a risk assessment, a business impact analysis, a recovery strategy, and a testing and maintenance plan.
A Tier 4 data center is the highest level of data center classification, according to the Uptime Institute. A Tier 4 data center has the highest level of availability, reliability, and fault tolerance, as it has multiple and independent paths for power and cooling, and redundant and backup components for all systems. A Tier 4 data center has an uptime rating of 99.995%, which means it can only experience 0.4 hours of downtime per year. Therefore, the likelihood of a power, storage, or network failure in a Tier 4 data center is very low, and the impact of such a failure would be minimal, as the data center can quickly switch to alternative sources or routes.
However, a Tier 4 data center cannot prevent or mitigate application failures, which are caused by software bugs, configuration errors, or malicious attacks. Application failures can affect the functionality, performance, or security of the IT services, and cause data loss, corruption, or breach. Therefore, the IT manager should be most concerned with application failures when preparing a BCP, and ensure that the applications are properly designed, tested, updated, and monitored.
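The downtime figures quoted above follow directly from the availability percentages. The short calculation below uses the commonly cited Uptime Institute availability values per tier and an 8,760-hour year:

```python
HOURS_PER_YEAR = 365 * 24  # 8760

def annual_downtime_hours(availability_percent):
    """Downtime budget implied by an availability percentage."""
    return HOURS_PER_YEAR * (1 - availability_percent / 100)

# Commonly cited availability figures for the four tiers:
tiers = {"Tier I": 99.671, "Tier II": 99.741,
         "Tier III": 99.982, "Tier IV": 99.995}
for tier, availability in tiers.items():
    print(f"{tier}: {annual_downtime_hours(availability):.2f} hours of downtime/year")
# Tier IV works out to about 0.44 hours (roughly 26 minutes) per year.
```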
Which of the following actions will reduce risk to a laptop before traveling to a high risk area?
Examine the device for physical tampering
Implement more stringent baseline configurations
Purge or re-image the hard disk drive
Change access codes
Purging or re-imaging the hard disk drive of a laptop before traveling to a high risk area will reduce the risk of data compromise or theft in case the laptop is lost, stolen, or seized by unauthorized parties. Purging or re-imaging the hard disk drive will erase all the data and applications on the laptop, leaving only the operating system and the essential software. This will minimize the exposure of sensitive or confidential information that could be accessed by malicious actors. Purging or re-imaging the hard disk drive should be done using secure methods that prevent data recovery, such as overwriting, degaussing, or physical destruction.
The other options will not reduce the risk to the laptop as effectively as purging or re-imaging the hard disk drive. Examining the device for physical tampering will only detect if the laptop has been compromised after the fact, but will not prevent it from happening. Implementing more stringent baseline configurations will improve the security settings and policies of the laptop, but will not protect the data if the laptop is bypassed or breached. Changing access codes will make it harder for unauthorized users to log in to the laptop, but will not prevent them from accessing the data if they use other methods, such as booting from a removable media or removing the hard disk drive.
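For the overwriting approach, a minimal sketch in Python is shown below. This is illustrative only: on SSDs and journaling or copy-on-write filesystems, overwriting in place does not reliably destroy every copy of the data, so full-disk encryption with cryptographic erase, degaussing, or physical destruction remains the dependable route.

```python
import os
import tempfile

def overwrite_and_delete(path, passes=3):
    """Overwrite a file with random bytes several times, then unlink it."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))   # replace contents with random data
            f.flush()
            os.fsync(f.fileno())        # push the write to the device
    os.remove(path)

# Demonstration on a throwaway temp file:
fd, path = tempfile.mkstemp()
os.write(fd, b"confidential travel notes")
os.close(fd)
overwrite_and_delete(path)
print(os.path.exists(path))  # False
```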
Intellectual property rights are PRIMARILY concerned with which of the following?
Owner’s ability to realize financial gain
Owner’s ability to maintain copyright
Right of the owner to enjoy their creation
Right of the owner to control delivery method
Intellectual property rights are primarily concerned with the owner’s ability to realize financial gain from their creation. Intellectual property is a category of intangible assets that are the result of human creativity and innovation, such as inventions, designs, artworks, literature, music, software, etc. Intellectual property rights are the legal rights that grant the owner the exclusive control over the use, reproduction, distribution, and modification of their intellectual property. Intellectual property rights aim to protect the owner’s interests and incentives, and to reward them for their contribution to the society and economy.
The other options are not the primary concern of intellectual property rights, but rather the secondary or incidental benefits or aspects of them. The owner’s ability to maintain copyright is a means of enforcing intellectual property rights, but not the end goal of them. The right of the owner to enjoy their creation is a personal or moral right, but not a legal or economic one. The right of the owner to control the delivery method is a specific or technical aspect of intellectual property rights, but not a general or fundamental one.
What is the MOST important consideration from a data security perspective when an organization plans to relocate?
Ensure the fire prevention and detection systems are sufficient to protect personnel
Review the architectural plans to determine how many emergency exits are present
Conduct a gap analysis of the new facilities against existing security requirements
Revise the Disaster Recovery and Business Continuity (DR/BC) plan
When an organization plans to relocate, the most important consideration from a data security perspective is to conduct a gap analysis of the new facilities against the existing security requirements. A gap analysis is a process that identifies and evaluates the differences between the current state and the desired state of a system or a process. In this case, the gap analysis would compare the security controls and measures implemented in the old and new locations, and identify any gaps or weaknesses that need to be addressed. The gap analysis would also help to determine the costs and resources needed to implement the necessary security improvements in the new facilities.
The other options are not as important as conducting a gap analysis, as they do not directly address the data security risks associated with relocation. Ensuring the fire prevention and detection systems are sufficient to protect personnel is a safety issue, not a data security issue. Reviewing the architectural plans to determine how many emergency exits are present is also a safety issue, not a data security issue. Revising the Disaster Recovery and Business Continuity (DR/BC) plan is a good practice, but it is not a preventive measure, rather a reactive one. A DR/BC plan is a document that outlines how an organization will recover from a disaster and resume its normal operations. A DR/BC plan should be updated regularly, not only when relocating.
Which security service is served by the process of encrypting plaintext with the sender’s private key and decrypting ciphertext with the sender’s public key?
Confidentiality
Integrity
Identification
Availability
The security service that is served by the process of encrypting plaintext with the sender’s private key and decrypting ciphertext with the sender’s public key is identification. Identification is the process of verifying the identity of a person or entity that claims to be who or what it is. Identification can be achieved by using public key cryptography and digital signatures, which are based on the process of encrypting plaintext with the sender’s private key and decrypting ciphertext with the sender’s public key. In this process, the sender encrypts (in effect, signs) the message or its hash with the sender’s private key, and the receiver decrypts (verifies) the result with the sender’s public key; because only the holder of the private key could have produced that ciphertext, a successful decryption identifies the sender.
The process of encrypting plaintext with the sender’s private key and decrypting ciphertext with the sender’s public key serves identification because it ensures that only the sender can produce a valid ciphertext that can be decrypted by the receiver, and that the receiver can verify the sender’s identity by using the sender’s public key. This process also provides non-repudiation, which means that the sender cannot deny sending the message or the receiver cannot deny receiving the message, as the ciphertext serves as a proof of origin and delivery.
The other options are not the security services that are served by the process of encrypting plaintext with the sender’s private key and decrypting ciphertext with the sender’s public key. Confidentiality is the process of ensuring that the message is only readable by the intended parties, and it is achieved by encrypting plaintext with the receiver’s public key and decrypting ciphertext with the receiver’s private key. Integrity is the process of ensuring that the message is not modified or corrupted during transmission, and it is achieved by using hash functions and message authentication codes. Availability is the process of ensuring that the message is accessible and usable by the authorized parties, and it is achieved by using redundancy, backup, and recovery mechanisms.
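The sign-with-private-key, verify-with-public-key process described above can be sketched in a few lines of Python. This is a toy illustration using textbook RSA with tiny, insecure parameters (the classic p=61, q=53 small-number example, not values from any real system); real signatures use 2048-bit-plus keys and padding schemes such as RSASSA-PSS.

```python
import hashlib

# Toy textbook-RSA parameters (p=61, q=53) -- for illustration only.
n, e, d = 3233, 17, 2753   # public modulus, public exponent, private exponent

def sign(message: bytes) -> int:
    # Hash the message, reduce it into the modulus, then "encrypt" the
    # digest with the PRIVATE key -- only the key holder can do this.
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(digest, d, n)

def verify(message: bytes, signature: int) -> bool:
    # "Decrypt" the signature with the PUBLIC key and compare digests.
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == digest

sig = sign(b"wire $100 to Bob")
print(verify(b"wire $100 to Bob", sig))   # True -> sender identified
```

A tampered message would produce a different digest and so fail verification; with a real-size modulus this failure is guaranteed in practice, which is what gives the scheme its identification and non-repudiation properties.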
What is the second phase of Public Key Infrastructure (PKI) key/certificate life-cycle management?
Implementation Phase
Initialization Phase
Cancellation Phase
Issued Phase
The second phase of Public Key Infrastructure (PKI) key/certificate life-cycle management is the initialization phase. PKI is a system that uses public key cryptography and digital certificates to provide authentication, confidentiality, integrity, and non-repudiation for electronic transactions. PKI key/certificate life-cycle management is the process of managing the creation, distribution, usage, storage, revocation, and expiration of keys and certificates in a PKI system. The key/certificate life-cycle management consists of six phases: pre-certification, initialization, certification, operational, suspension, and termination. The initialization phase is the second phase, where the key pair and the certificate request are generated by the end entity or the registration authority (RA). The initialization phase involves registering the end entity, generating the key pair, and creating and submitting the certificate request.
The other options are not the second phase of PKI key/certificate life-cycle management, but rather other phases. The implementation phase is not a phase of PKI key/certificate life-cycle management, but rather a phase of PKI system deployment, where the PKI components and policies are installed and configured. The cancellation phase is not a phase of PKI key/certificate life-cycle management, but rather a possible outcome of the termination phase, where the key pair and the certificate are permanently revoked and deleted. The issued phase is not a phase of PKI key/certificate life-cycle management, but rather a possible outcome of the certification phase, where the CA verifies and approves the certificate request and issues the certificate to the end entity or the RA.
Which of the following mobile code security models relies only on trust?
Code signing
Class authentication
Sandboxing
Type safety
Code signing is the mobile code security model that relies only on trust. Mobile code is a type of software that can be transferred from one system to another and executed without installation or compilation. Mobile code can be used for various purposes, such as web applications, applets, scripts, macros, etc. Mobile code can also pose various security risks, such as malicious code, unauthorized access, data leakage, etc. Mobile code security models are the techniques that are used to protect the systems and users from the threats of mobile code. Code signing is a mobile code security model that relies only on trust, which means that the security of the mobile code depends on the reputation and credibility of the code provider. Code signing works by having the code provider digitally sign the mobile code with its private key, and having the code consumer verify the signature with the provider’s public key certificate before deciding whether to trust and run the code.
Code signing relies only on trust because it does not enforce any security restrictions or controls on the mobile code, but rather leaves the decision to the code consumer. Code signing also does not guarantee the quality or functionality of the mobile code, but rather the authenticity and integrity of the code provider. Code signing can be effective if the code consumer knows and trusts the code provider, and if the code provider follows the security standards and best practices. However, code signing can also be ineffective if the code consumer is unaware or careless of the code provider, or if the code provider is compromised or malicious.
The other options are not mobile code security models that rely only on trust, but rather on other techniques that limit or isolate the mobile code. Class authentication is a mobile code security model that verifies the permissions and capabilities of the mobile code based on its class or type, and allows or denies the execution of the mobile code accordingly. Sandboxing is a mobile code security model that executes the mobile code in a separate and restricted environment, and prevents the mobile code from accessing or affecting the system resources or data. Type safety is a mobile code security model that checks the validity and consistency of the mobile code, and prevents the mobile code from performing illegal or unsafe operations.
Which component of the Security Content Automation Protocol (SCAP) specification contains the data required to estimate the severity of vulnerabilities identified by automated vulnerability assessments?
Common Vulnerabilities and Exposures (CVE)
Common Vulnerability Scoring System (CVSS)
Asset Reporting Format (ARF)
Open Vulnerability and Assessment Language (OVAL)
The component of the Security Content Automation Protocol (SCAP) specification that contains the data required to estimate the severity of vulnerabilities identified by automated vulnerability assessments is the Common Vulnerability Scoring System (CVSS). CVSS is a framework that provides a standardized and objective way to measure and communicate the characteristics and impacts of vulnerabilities. CVSS consists of three metric groups: base, temporal, and environmental. The base metric group captures the intrinsic and fundamental properties of a vulnerability that are constant over time and across user environments. The temporal metric group captures the characteristics of a vulnerability that change over time, such as the availability and effectiveness of exploits, patches, and workarounds. The environmental metric group captures the characteristics of a vulnerability that are relevant and unique to a user’s environment, such as the configuration and importance of the affected system. Each metric group has a set of metrics that are assigned values based on the vulnerability’s attributes. The values are then combined using a formula to produce a numerical score that ranges from 0 to 10, where 0 means no impact and 10 means critical impact. The score can also be translated into a qualitative rating that ranges from none to low, medium, high, and critical. CVSS provides a consistent and comprehensive way to estimate the severity of vulnerabilities and prioritize their remediation.
The other options are not components of the SCAP specification that contain the data required to estimate the severity of vulnerabilities identified by automated vulnerability assessments, but rather components that serve other purposes. Common Vulnerabilities and Exposures (CVE) is a component that provides a standardized and unique identifier and description for each publicly known vulnerability. CVE facilitates the sharing and comparison of vulnerability information across different sources and tools. Asset Reporting Format (ARF) is a component that provides a standardized and extensible format for expressing the information about the assets and their characteristics, such as configuration, vulnerabilities, and compliance. ARF enables the aggregation and correlation of asset information from different sources and tools. Open Vulnerability and Assessment Language (OVAL) is a component that provides a standardized and expressive language for defining and testing the state of a system for the presence of vulnerabilities, configuration issues, patches, and other aspects. OVAL enables the automation and interoperability of vulnerability assessment and management.
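The base-score arithmetic described above can be computed directly from the published metric weights. The sketch below implements the CVSS v3.1 base equation for the Scope: Unchanged case only, using the weights and Roundup function from the v3.1 specification; the example vector is a hypothetical network-exploitable flaw, not a specific CVE.

```python
import math

# CVSS v3.1 metric weights (Scope: Unchanged case only).
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2}   # Attack Vector
AC = {"L": 0.77, "H": 0.44}                         # Attack Complexity
PR = {"N": 0.85, "L": 0.62, "H": 0.27}              # Privileges Required
UI = {"N": 0.85, "R": 0.62}                         # User Interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}              # C/I/A impact

def roundup(x: float) -> float:
    """Round up to one decimal place, as defined in the CVSS v3.1 spec."""
    i = round(x * 100000)
    return i / 100000.0 if i % 10000 == 0 else (i // 10000 + 1) / 10.0

def base_score(av, ac, pr, ui, c, i, a):
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))

# A network-exploitable, low-complexity flaw with high C/I/A impact:
print(base_score("N", "L", "N", "N", "H", "H", "H"))  # 9.8 (Critical)
```

The 0-to-10 score maps to the qualitative ratings mentioned above (9.0 and up is Critical), which is what vulnerability scanners report when prioritizing remediation.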
Which technique can be used to make an encryption scheme more resistant to a known plaintext attack?
Hashing the data before encryption
Hashing the data after encryption
Compressing the data after encryption
Compressing the data before encryption
Compressing the data before encryption is a technique that can be used to make an encryption scheme more resistant to a known plaintext attack. A known plaintext attack is a type of cryptanalysis where the attacker has access to some pairs of plaintext and ciphertext encrypted with the same key, and tries to recover the key or decrypt other ciphertexts. A known plaintext attack can exploit the statistical properties or patterns of the plaintext or the ciphertext to reduce the search space or guess the key. Compressing the data before encryption can reduce the redundancy and increase the entropy of the plaintext, making it harder for the attacker to find any correlations or similarities between the plaintext and the ciphertext. Compressing the data before encryption can also reduce the size of the plaintext, making it more difficult for the attacker to obtain enough plaintext-ciphertext pairs for a successful attack.
The other options are not techniques that can be used to make an encryption scheme more resistant to a known plaintext attack, but rather techniques that can introduce other security issues or inefficiencies. Hashing the data before encryption is not a useful technique, as hashing is a one-way function that cannot be reversed, and the encrypted hash cannot be decrypted to recover the original data. Hashing the data after encryption is also not a useful technique, as hashing does not add any security to the encryption, and the hash can be easily computed by anyone who has access to the ciphertext. Compressing the data after encryption is not a recommended technique, as compression algorithms usually work better on uncompressed data, and compressing the ciphertext can introduce errors or vulnerabilities that can compromise the encryption.
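The effect of compressing before encrypting can be demonstrated by measuring Shannon entropy. The standard-library sketch below shows that zlib compression both shrinks a redundant plaintext and raises its per-byte entropy, leaving less statistical structure for a known-plaintext attacker to exploit; the sample message is made up for illustration.

```python
import math
import zlib
from collections import Counter

def entropy_bits_per_byte(data: bytes) -> float:
    """Shannon entropy of the byte distribution, in bits per byte."""
    counts = Counter(data)
    total = len(data)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

plaintext = b"ATTACK AT DAWN. " * 200          # highly redundant message
compressed = zlib.compress(plaintext, level=9)

print(len(plaintext), len(compressed))          # compression shrinks the input
print(entropy_bits_per_byte(plaintext))         # low: repetitive symbols
print(entropy_bits_per_byte(compressed))        # higher: closer to random
```

Encrypting the compressed output instead of the raw plaintext therefore gives the attacker fewer predictable plaintext patterns and fewer bytes to correlate against captured ciphertext.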
Who in the organization is accountable for classification of data information assets?
Data owner
Data architect
Chief Information Security Officer (CISO)
Chief Information Officer (CIO)
The person in the organization who is accountable for the classification of data information assets is the data owner. The data owner is the person or entity that has the authority and responsibility for the creation, collection, processing, and disposal of a set of data. The data owner is also responsible for defining the purpose, value, and classification of the data, as well as the security requirements and controls for the data. The data owner should be able to determine the impact of the data on the mission of the organization, which means assessing the potential consequences of losing, compromising, or disclosing the data. The impact of the data on the mission of the organization is one of the main criteria for data classification, which helps to establish the appropriate level of protection and handling for the data. The data owner should also ensure that the data is properly labeled, stored, accessed, shared, and destroyed according to the data classification policy and procedures.
The other options are not the persons in the organization who are accountable for the classification of data information assets, but rather persons who have other roles or functions related to data management. The data architect is the person or entity that designs and models the structure, format, and relationships of the data, as well as the data standards, specifications, and lifecycle. The data architect supports the data owner by providing technical guidance and expertise on the data architecture and quality. The Chief Information Security Officer (CISO) is the person or entity that oversees the security strategy, policies, and programs of the organization, as well as the security performance and incidents. The CISO supports the data owner by providing security leadership and governance, as well as ensuring the compliance and alignment of the data security with the organizational objectives and regulations. The Chief Information Officer (CIO) is the person or entity that manages the information technology (IT) resources and services of the organization, as well as the IT strategy and innovation. The CIO supports the data owner by providing IT management and direction, as well as ensuring the availability, reliability, and scalability of the IT infrastructure and applications.
The use of private and public encryption keys is fundamental in the implementation of which of the following?
Diffie-Hellman algorithm
Secure Sockets Layer (SSL)
Advanced Encryption Standard (AES)
Message Digest 5 (MD5)
The use of private and public encryption keys is fundamental in the implementation of Secure Sockets Layer (SSL). SSL is a protocol that provides secure communication over the Internet by using public key cryptography and digital certificates. During the SSL handshake, the client and server authenticate each other by exchanging and verifying digital certificates, use public key cryptography to negotiate a shared symmetric session key, and then use that session key to encrypt and integrity-protect the application data.
The use of private and public encryption keys is fundamental in the implementation of SSL because it enables the authentication of the parties, the establishment of the shared secret key, and the protection of the data from eavesdropping, tampering, and replay attacks.
The other options are not protocols or algorithms that use private and public encryption keys in their implementation. The Diffie-Hellman algorithm is a method for generating a shared secret key between two parties, but it uses public and private values to derive the shared secret rather than key pairs used directly for encryption and decryption. Advanced Encryption Standard (AES) is a symmetric encryption algorithm that uses the same key for encryption and decryption, so it does not use private and public encryption keys, but rather a single secret key. Message Digest 5 (MD5) is a hash function that produces a fixed-length output from a variable-length input, and it does not use private and public encryption keys, but rather a one-way mathematical function.
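The distinction drawn above for Diffie-Hellman can be illustrated with a toy key agreement: each side publishes g^x mod p while keeping x private, and both derive the same shared secret without ever encrypting anything with a key pair. The prime below is far too small for real use and is chosen only so the sketch runs quickly; production DH uses vetted 2048-bit-plus groups.

```python
import secrets

# Toy Diffie-Hellman parameters -- NOT secure, illustration only.
p = 2**127 - 1          # a Mersenne prime, used here only as a toy modulus
g = 5

a = secrets.randbelow(p - 2) + 2      # Alice's private value
b = secrets.randbelow(p - 2) + 2      # Bob's private value

A = pow(g, a, p)                      # Alice's public value, sent to Bob
B = pow(g, b, p)                      # Bob's public value, sent to Alice

alice_secret = pow(B, a, p)           # Alice computes (g^b)^a mod p
bob_secret = pow(A, b, p)             # Bob computes (g^a)^b mod p
print(alice_secret == bob_secret)     # True: both derive the same secret
```

Note that nothing here is encrypted or decrypted with a public/private key pair; the private values never leave their owners, which is why Diffie-Hellman is classed as key agreement rather than public key encryption.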
A Java program is being developed to read a file from computer A and write it to computer B, using a third computer C. The program is not working as expected. What is the MOST probable security feature of Java preventing the program from operating as intended?
Least privilege
Privilege escalation
Defense in depth
Privilege bracketing
The most probable security feature of Java preventing the program from operating as intended is least privilege. Least privilege is a principle that states that a subject (such as a user, a process, or a program) should only have the minimum amount of access or permissions that are necessary to perform its function or task. Least privilege can help to reduce the attack surface and the potential damage of a system or network, by limiting the exposure and impact of a subject in case of a compromise or misuse.
Java implements the principle of least privilege through its security model, which consists of several components, such as the class loader, the bytecode verifier, the security manager, and the policy files that grant code specific permissions based on its source or signer.
In this question, the Java program is being developed to read a file from computer A and write it to computer B, using a third computer C. This means that the Java program needs to have the permissions to perform the file I/O and the network communication operations, which are considered as sensitive or risky actions by the Java security model. However, if the Java program is running on computer C with the default or the minimal security permissions, such as in the Java Security Sandbox, then it will not be able to perform these operations, and the program will not work as expected. Therefore, the most probable security feature of Java preventing the program from operating as intended is least privilege, which limits the access or permissions of the Java program based on its source, signer, or policy.
The other options are not the security features of Java preventing the program from operating as intended, but rather concepts or techniques that are related to security in general or in other contexts. Privilege escalation is a technique that allows a subject to gain higher or unauthorized access or permissions than what it is supposed to have, by exploiting a vulnerability or a flaw in a system or network. Privilege escalation can help an attacker to perform malicious actions or to access sensitive resources or data, by bypassing the security controls or restrictions. Defense in depth is a concept that states that a system or network should have multiple layers or levels of security, to provide redundancy and resilience in case of a breach or an attack. Defense in depth can help to protect a system or network from various threats and risks, by using different types of security measures and controls, such as the physical, the technical, or the administrative ones. Privilege bracketing is a technique that allows a subject to temporarily elevate or lower its access or permissions, to perform a specific function or task, and then return to its original or normal level. Privilege bracketing can help to reduce the exposure and impact of a subject, by minimizing the time and scope of its higher or lower access or permissions.
The configuration management and control task of the certification and accreditation process is incorporated in which phase of the System Development Life Cycle (SDLC)?
System acquisition and development
System operations and maintenance
System initiation
System implementation
The configuration management and control task of the certification and accreditation process is incorporated in the system acquisition and development phase of the System Development Life Cycle (SDLC). The SDLC is a process that involves planning, designing, developing, testing, deploying, operating, and maintaining a system, using various models and methodologies, such as waterfall, spiral, agile, or DevSecOps. The SDLC can be divided into several phases, each with its own objectives and activities, such as:
The certification and accreditation process is a process that involves assessing and verifying the security and compliance of a system, and authorizing and approving the system operation and maintenance, using various standards and frameworks, such as NIST SP 800-37 or ISO/IEC 27001. The certification and accreditation process can be divided into several tasks, each with its own objectives and activities, such as security categorization, security planning, configuration management and control, security assessment, security authorization, and security monitoring.
The configuration management and control task of the certification and accreditation process is incorporated in the system acquisition and development phase of the SDLC, because it can ensure that the system design and development are consistent and compliant with the security objectives and requirements, and that the system changes are controlled and documented. Configuration management and control is a process that involves establishing and maintaining the baseline and the inventory of the system components and resources, such as hardware, software, data, or documentation, and tracking and recording any modifications or updates to the system components and resources, using various techniques and tools, such as version control, change control, or configuration audits. Configuration management and control can provide several benefits, such as ensuring that system changes are traceable and reversible, preventing unauthorized or undocumented modifications, and preserving the integrity of the certified security baseline.
The other options are not the phases of the SDLC that incorporate the configuration management and control task of the certification and accreditation process, but rather phases that involve other tasks of the certification and accreditation process. System operations and maintenance is a phase of the SDLC that incorporates the security monitoring task of the certification and accreditation process, because it can ensure that the system operation and maintenance are consistent and compliant with the security objectives and requirements, and that the system security is updated and improved. System initiation is a phase of the SDLC that incorporates the security categorization and security planning tasks of the certification and accreditation process, because it can ensure that the system scope and objectives are defined and aligned with the security objectives and requirements, and that the security plan and policy are developed and documented. System implementation is a phase of the SDLC that incorporates the security assessment and security authorization tasks of the certification and accreditation process, because it can ensure that the system deployment and installation are evaluated and verified for the security effectiveness and compliance, and that the system operation and maintenance are authorized and approved based on the risk and impact analysis and the security objectives and requirements.
When in the Software Development Life Cycle (SDLC) MUST software security functional requirements be defined?
After the system preliminary design has been developed and the data security categorization has been performed
After the vulnerability analysis has been performed and before the system detailed design begins
After the system preliminary design has been developed and before the data security categorization begins
After the business functional analysis and the data security categorization have been performed
Software security functional requirements must be defined after the business functional analysis and the data security categorization have been performed in the Software Development Life Cycle (SDLC). The SDLC is a process that involves planning, designing, developing, testing, deploying, operating, and maintaining a system, using various models and methodologies, such as waterfall, spiral, agile, or DevSecOps. The SDLC can be divided into several phases, each with its own objectives and activities, such as system initiation, system acquisition and development, system implementation, system operations and maintenance, and system disposal.
Software security functional requirements are the specific and measurable security features and capabilities that the system must provide to meet the security objectives and requirements. Software security functional requirements are derived from the business functional analysis and the data security categorization, which are two tasks that are performed in the system initiation phase of the SDLC. The business functional analysis is the process of identifying and documenting the business functions and processes that the system must support and enable, such as the inputs, outputs, workflows, and tasks. The data security categorization is the process of determining the security level and impact of the system and its data, based on the confidentiality, integrity, and availability criteria, and applying the appropriate security controls and measures. Software security functional requirements must be defined after the business functional analysis and the data security categorization have been performed, because they can ensure that the system design and development are consistent and compliant with the security objectives and requirements, and that the system security is aligned and integrated with the business functions and processes.
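The data security categorization step described above is commonly performed with the FIPS 199 "high water mark": the overall impact level of a system is the highest of its confidentiality, integrity, and availability impact levels. A minimal sketch, assuming the three standard impact levels:

```python
# FIPS 199 impact levels, in ascending order of severity.
LEVELS = ["low", "moderate", "high"]

def categorize(confidentiality: str, integrity: str, availability: str) -> str:
    """Overall system impact = highest of the three CIA impact levels."""
    return max(confidentiality, integrity, availability, key=LEVELS.index)

# A system whose availability loss would be severe is categorized high
# overall, even if confidentiality and integrity impacts are lower:
print(categorize("moderate", "low", "high"))   # high
```

The resulting category then drives which baseline of security controls (and hence which security functional requirements) the system must satisfy before design begins.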
The other options are not the phases of the SDLC when the software security functional requirements must be defined, but rather phases that involve other tasks or activities related to the system design and development. After the system preliminary design has been developed and the data security categorization has been performed is not the phase when the software security functional requirements must be defined, but rather the phase when the system architecture and components are designed, based on the system scope and objectives, and the data security categorization is verified and validated. After the vulnerability analysis has been performed and before the system detailed design begins is not the phase when the software security functional requirements must be defined, but rather the phase when the system design and components are evaluated and tested for the security effectiveness and compliance, and the system detailed design is developed, based on the system architecture and components. After the system preliminary design has been developed and before the data security categorization begins is not the phase when the software security functional requirements must be defined, but rather the phase when the system architecture and components are designed, based on the system scope and objectives, and the data security categorization is initiated and planned.
Which of the following is the BEST method to prevent malware from being introduced into a production environment?
Purchase software from a limited list of retailers
Verify the hash key or certificate key of all updates
Do not permit programs, patches, or updates from the Internet
Test all new software in a segregated environment
Testing all new software in a segregated environment is the best method to prevent malware from being introduced into a production environment. Malware is any malicious software that can harm or compromise the security, availability, integrity, or confidentiality of a system or data. Malware can be introduced into a production environment through various sources, such as software downloads, updates, patches, or installations. Testing all new software in a segregated environment involves verifying and validating the functionality and security of the software before deploying it to the production environment, using a separate system or network that is isolated and protected from the production environment. Testing all new software in a segregated environment can provide several benefits, such as detecting malware or defects before the software reaches production, containing any malicious code within the isolated environment, and allowing the software to be analyzed and rejected without affecting production systems or data.
The other options are not the best methods to prevent malware from being introduced into a production environment, but rather methods that can reduce or mitigate the risk of malware, but not eliminate it. Purchasing software from a limited list of retailers is a method that can reduce the risk of malware from being introduced into a production environment, but not prevent it. This method involves obtaining software only from trusted and reputable sources, such as official vendors or distributors, that can provide some assurance of the quality and security of the software. However, this method does not guarantee that the software is free of malware, as it may still contain hidden or embedded malware, or it may be tampered with or compromised during the delivery or installation process. Verifying the hash key or certificate key of all updates is a method that can reduce the risk of malware from being introduced into a production environment, but not prevent it. This method involves checking the authenticity and integrity of the software updates, patches, or installations, by comparing the hash key or certificate key of the software with the expected or published value, using cryptographic techniques and tools. However, this method does not guarantee that the software is free of malware, as it may still contain malware that is not detected or altered by the hash key or certificate key, or it may be subject to a man-in-the-middle attack or a replay attack that can intercept or modify the software or the key. Not permitting programs, patches, or updates from the Internet is a method that can reduce the risk of malware from being introduced into a production environment, but not prevent it. This method involves restricting or blocking the access or download of software from the Internet, which is a common and convenient source of malware, by applying and enforcing the appropriate security policies and controls, such as firewall rules, antivirus software, or web filters. 
However, this method does not guarantee that the software is free of malware, as it may still be obtained or infected from other sources, such as removable media, email attachments, or network shares.
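The hash-verification method discussed above can be sketched as follows. The update bytes and the "published" digest are fabricated for illustration; in practice the expected digest would come from the vendor over an authenticated channel.

```python
import hashlib
import hmac

def digest_matches(update: bytes, published_hex: str) -> bool:
    """Compare the SHA-256 digest of a received update to the vendor's."""
    actual = hashlib.sha256(update).hexdigest()
    # constant-time comparison avoids leaking how many leading chars match
    return hmac.compare_digest(actual, published_hex)

update = b"patch-v1.2.3 contents"
published = hashlib.sha256(update).hexdigest()   # what the vendor would post

print(digest_matches(update, published))             # True: safe to install
print(digest_matches(update + b"x", published))      # False: tampered update
```

As the answer notes, this check proves the update matches the published digest but not that the published digest itself is trustworthy, which is why it reduces rather than eliminates the malware risk.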
What is the BEST approach to addressing security issues in legacy web applications?
Debug the security issues
Migrate to newer, supported applications where possible
Conduct a security assessment
Protect the legacy application with a web application firewall
Migrating to newer, supported applications where possible is the best approach to addressing security issues in legacy web applications. Legacy web applications are web applications that are outdated, unsupported, or incompatible with the current technologies and standards. Legacy web applications may have various security issues, such as unpatched or unpatchable vulnerabilities, outdated or weak encryption and authentication mechanisms, incompatibility with modern security controls, and lack of vendor support.
Migrating to newer, supported applications where possible is the best approach to addressing security issues in legacy web applications, because it can provide several benefits, such as eliminating the vulnerabilities inherited from the legacy code, restoring vendor support and regular security patches, and enabling the use of current security technologies and standards.
The other options can mitigate or remediate the security issues in legacy web applications, but they do not eliminate or prevent them. Debugging the security issues involves identifying and fixing errors or defects in the application's code or logic, which may be difficult or impossible for legacy web applications that are outdated or unsupported. Conducting a security assessment involves evaluating and testing the security effectiveness and compliance of the web applications using techniques such as audits, reviews, scans, or penetration tests, and reporting any weaknesses or gaps; an assessment identifies problems but does not fix them. Protecting the legacy application with a web application firewall involves deploying a security device or software that monitors and filters the web traffic between the application and its clients, blocking or allowing requests based on predefined rules; a web application firewall can shield a legacy application from some attacks but cannot correct weak or outdated encryption or authentication mechanisms within the application itself.
Which of the following is a web application control that should be put into place to prevent exploitation of Operating System (OS) bugs?
Check arguments in function calls
Test for the security patch level of the environment
Include logging functions
Digitally sign each application module
Testing for the security patch level of the environment is the web application control that should be put into place to prevent exploitation of Operating System (OS) bugs. OS bugs are errors or defects in the code or logic of the OS that can cause it to malfunction or behave unexpectedly, and attackers can exploit them to gain unauthorized access, disrupt business operations, or steal or leak sensitive data. By testing the security patch level of the environment, the web application can verify that known OS vulnerabilities have been patched before relying on the underlying platform, closing the window in which those bugs could be exploited.
The other options are web application controls that prevent or mitigate other types of attacks, not exploitation of OS bugs. Checking arguments in function calls helps prevent buffer overflow attacks, which exploit code that does not properly check the size or length of input data passed to a function or variable and overwrite adjacent memory locations with malicious code or data. Including logging functions helps detect and respond to unauthorized access or modification by recording who accessed or changed the web application's data and functionality. Digitally signing each application module helps prevent code injection or tampering by allowing the application to verify that each module is authentic and unmodified before executing it.
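The argument-checking control can be illustrated with a minimal Python sketch. The limit, function name, and rules here are invented for the example; the same idea in C would be bounds-checking a buffer before copying into it:

```python
MAX_USERNAME_LEN = 64  # illustrative limit, not from any standard

def validate_username(name):
    """Check an argument before it is used: reject non-strings,
    oversized input, and control characters instead of passing
    them on to code that assumes well-formed data."""
    if not isinstance(name, str):
        raise TypeError("username must be a string")
    if len(name) > MAX_USERNAME_LEN:
        raise ValueError("username exceeds %d characters" % MAX_USERNAME_LEN)
    if any(ord(c) < 32 for c in name):
        raise ValueError("username contains control characters")
    return name
```

Rejecting bad input at the function boundary, rather than trusting callers, is the general principle behind this control regardless of language.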
Which of the following is the PRIMARY risk with using open source software in a commercial software construction?
Lack of software documentation
License agreements requiring release of modified code
Expiration of the license agreement
Costs associated with support of the software
The primary risk with using open source software in a commercial software construction is license agreements requiring release of modified code. Open source software is software that uses publicly available source code, which can be seen, modified, and distributed by anyone. Open source software has some advantages, such as being affordable and flexible, but it also has some disadvantages, such as being potentially insecure or unsupported.
One of the main disadvantages of using open source software in a commercial software construction is the license agreements that govern the use and distribution of the open source software. License agreements are legal contracts that specify the rights and obligations of the parties involved in the software, such as the original authors, the developers, and the users. License agreements can vary in terms of their terms and conditions, such as the scope, the duration, or the fees of the software.
Some of the common types of license agreements for open source software are permissive licenses, such as the MIT, BSD, or Apache licenses, which allow the software to be used, modified, and distributed with minimal obligations, and copyleft licenses, such as the GNU General Public License (GPL), which require that any modified or derivative versions be released under the same or a compatible license.
The primary risk with using open source software in a commercial software construction is license agreements requiring release of modified code, which are usually associated with copyleft licenses. This means that if a commercial software construction uses or incorporates open source software that is licensed under a copyleft license, then it must also release its own source code and any modifications or derivatives of it, under the same or compatible copyleft license. This can pose a significant risk for the commercial software construction, as it may lose its competitive advantage, intellectual property, or revenue, by disclosing its source code and allowing others to use, modify, or distribute it.
The other options are not the primary risks with using open source software in a commercial software construction, but rather secondary or minor risks that may or may not apply. Lack of software documentation is a secondary risk, as it may affect the quality, usability, or maintainability of the open source software, but it does not affect the rights or obligations of the commercial software construction. Expiration of the license agreement is a minor risk, as it may affect the availability or continuity of the open source software, but it is unlikely to occur, as most open source software licenses are perpetual or indefinite. Costs associated with support of the software are a secondary risk, as they may affect the reliability, security, or performance of the open source software, but they can be mitigated or avoided by choosing open source software with adequate or alternative support options.
Which of the following is a PRIMARY benefit of using a formalized security testing report format and structure?
Executive audiences will understand the outcomes of testing and most appropriate next steps for corrective actions to be taken
Technical teams will understand the testing objectives, testing strategies applied, and business risk associated with each vulnerability
Management teams will understand the testing objectives and reputational risk to the organization
Technical and management teams will better understand the testing objectives, results of each test phase, and potential impact levels
Technical and management teams will better understand the testing objectives, results of each test phase, and potential impact levels is the primary benefit of using a formalized security testing report format and structure. Security testing is a process that involves evaluating and verifying the security posture, vulnerabilities, and threats of a system or a network, using various methods and techniques, such as vulnerability assessment, penetration testing, code review, and compliance checks. Security testing can provide several benefits, such as identifying and prioritizing vulnerabilities before they are exploited, verifying that security controls operate as intended, and supporting compliance with security requirements and standards.
A security testing report is a document that summarizes and communicates the findings and recommendations of the security testing process to the relevant stakeholders, such as the technical and management teams. A security testing report can have various formats and structures, depending on the scope, purpose, and audience of the report. However, a formalized security testing report format and structure is one that follows a standard and consistent template, such as the one proposed by the National Institute of Standards and Technology (NIST) in Special Publication 800-115, Technical Guide to Information Security Testing and Assessment. A formalized security testing report format and structure can have several components, such as an executive summary, an introduction, the testing scope and methodology, the results of each test phase, the risk ratings and potential impact levels of the findings, and conclusions and recommendations.
Technical and management teams will better understand the testing objectives, results of each test phase, and potential impact levels is the primary benefit of using a formalized security testing report format and structure, because it can ensure that the security testing report is clear, comprehensive, and consistent, and that it provides the relevant and useful information for the technical and management teams to make informed and effective decisions and actions regarding the system or network security.
The other options are not the primary benefits of using a formalized security testing report format and structure, but rather secondary or specific benefits for different audiences or purposes. Executive audiences will understand the outcomes of testing and most appropriate next steps for corrective actions to be taken is a benefit of using a formalized security testing report format and structure, but it is not the primary benefit, because it is more relevant for the executive summary component of the report, which is a brief and high-level overview of the report, rather than the entire report. Technical teams will understand the testing objectives, testing strategies applied, and business risk associated with each vulnerability is a benefit of using a formalized security testing report format and structure, but it is not the primary benefit, because it is more relevant for the methodology and results components of the report, which are more technical and detailed parts of the report, rather than the entire report. Management teams will understand the testing objectives and reputational risk to the organization is a benefit of using a formalized security testing report format and structure, but it is not the primary benefit, because it is more relevant for the introduction and conclusion components of the report, which are more contextual and strategic parts of the report, rather than the entire report.
Which of the following is of GREATEST assistance to auditors when reviewing system configurations?
Change management processes
User administration procedures
Operating System (OS) baselines
System backup documentation
Operating System (OS) baselines are of greatest assistance to auditors when reviewing system configurations. OS baselines are standard or reference configurations that define the desired and secure state of an OS, including the settings, parameters, patches, and updates. OS baselines can provide several benefits, such as a consistent and repeatable reference for configuring and hardening the OS, easier detection of deviations or unauthorized changes, and support for compliance with security policies and standards.
OS baselines are of greatest assistance to auditors when reviewing system configurations, because they can enable the auditors to evaluate and verify the current and actual state of the OS against the desired and secure state of the OS. OS baselines can also help the auditors to identify and report any gaps, issues, or risks in the OS configurations, and to recommend or implement any corrective or preventive actions.
The other options are not of greatest assistance to auditors when reviewing system configurations, but rather of assistance for other purposes or aspects. Change management processes are processes that ensure that any changes to the system configurations are planned, approved, implemented, and documented in a controlled and consistent manner. Change management processes can improve the security and reliability of the system configurations by preventing or reducing the errors, conflicts, or disruptions that might occur due to the changes. However, change management processes are not of greatest assistance to auditors when reviewing system configurations, because they do not define the desired and secure state of the system configurations, but rather the procedures and controls for managing the changes. User administration procedures are procedures that define the roles, responsibilities, and activities for creating, modifying, deleting, and managing the user accounts and access rights. User administration procedures can enhance the security and accountability of the user accounts and access rights by enforcing the principles of least privilege, separation of duties, and need to know. However, user administration procedures are not of greatest assistance to auditors when reviewing system configurations, because they do not define the desired and secure state of the system configurations, but rather the rules and tasks for administering the users. System backup documentation is documentation that records the information and details about the system backup processes, such as the backup frequency, type, location, retention, and recovery. System backup documentation can increase the availability and resilience of the system by ensuring that the system data and configurations can be restored in case of a loss or damage. 
However, system backup documentation is not of greatest assistance to auditors when reviewing system configurations, because it does not define the desired and secure state of the system configurations, but rather the backup and recovery of the system configurations.
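The comparison an auditor performs against an OS baseline can be sketched as a simple diff between expected and observed settings. This is a minimal illustration: the setting names and values are invented for the example, and in practice observed values would come from configuration scanners or system queries rather than a hand-built dictionary:

```python
def audit_against_baseline(baseline, observed):
    """Compare observed OS settings against the documented baseline.
    Returns a dict mapping each deviating setting to a tuple of
    (expected, actual); settings absent from the observed state are
    reported with the placeholder "<missing>"."""
    deviations = {}
    for setting, expected in baseline.items():
        actual = observed.get(setting, "<missing>")
        if actual != expected:
            deviations[setting] = (expected, actual)
    return deviations
```

An empty result means the system matches its baseline; each entry in a non-empty result is a finding the auditor can report, which is exactly the gap-identification role described above.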
Which of the following could cause a Denial of Service (DoS) against an authentication system?
Encryption of audit logs
No archiving of audit logs
Hashing of audit logs
Remote access audit logs
Remote access audit logs could cause a Denial of Service (DoS) against an authentication system. A DoS attack is a type of attack that aims to disrupt or degrade the availability or performance of a system or a network by overwhelming it with excessive or malicious traffic or requests. An authentication system is a system that verifies the identity and credentials of the users or entities that want to access the system or network resources or services. An authentication system can use various methods or factors to authenticate the users or entities, such as passwords, tokens, certificates, biometrics, or behavioral patterns.
Remote access audit logs are records that capture and store the information about the events and activities that occur when the users or entities access the system or network remotely, such as via the internet, VPN, or dial-up. Remote access audit logs can provide a reactive and detective layer of security by enabling the monitoring and analysis of the remote access behavior, and facilitating the investigation and response of the incidents.
Remote access audit logs could cause a DoS against an authentication system, because they could consume a large amount of disk space, memory, or bandwidth on the authentication system, especially if the remote access is frequent, intensive, or malicious. This could affect the performance or functionality of the authentication system, and prevent or delay the legitimate users or entities from accessing the system or network resources or services. For example, an attacker could launch a DoS attack against an authentication system by sending a large number of fake or invalid remote access requests, and generating a large amount of remote access audit logs that fill up the disk space or memory of the authentication system, and cause it to crash or slow down.
The other options are not the factors that could cause a DoS against an authentication system, but rather the factors that could improve or protect the authentication system. Encryption of audit logs is a technique that involves using a cryptographic algorithm and a key to transform the audit logs into an unreadable or unintelligible format, that can only be reversed or decrypted by authorized parties. Encryption of audit logs can enhance the security and confidentiality of the audit logs by preventing unauthorized access or disclosure of the sensitive information in the audit logs. However, encryption of audit logs could not cause a DoS against an authentication system, because it does not affect the availability or performance of the authentication system, but rather the integrity or privacy of the audit logs. No archiving of audit logs is a practice that involves not storing or transferring the audit logs to a separate or external storage device or location, such as a tape, disk, or cloud. No archiving of audit logs can reduce the security and availability of the audit logs by increasing the risk of loss or damage of the audit logs, and limiting the access or retrieval of the audit logs. However, no archiving of audit logs could not cause a DoS against an authentication system, because it does not affect the availability or performance of the authentication system, but rather the availability or preservation of the audit logs. Hashing of audit logs is a technique that involves using a hash function, such as MD5 or SHA, to generate a fixed-length and unique value, called a hash or a digest, that represents the audit logs. Hashing of audit logs can improve the security and integrity of the audit logs by verifying the authenticity or consistency of the audit logs, and detecting any modification or tampering of the audit logs. 
However, hashing of audit logs could not cause a DoS against an authentication system, because it does not affect the availability or performance of the authentication system, but rather the integrity or verification of the audit logs.
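The integrity role of hashing described above can be extended into a simple hash chain over audit log entries. This is an illustrative sketch, not a standard logging API; it assumes each entry's digest covers the previous digest, so tampering with any earlier entry changes every digest that follows it:

```python
import hashlib

def chain_digests(lines, seed=b"\x00" * 32):
    """Return one chained SHA-256 digest (hex) per log line.
    Each digest is computed over the previous digest plus the
    current line, so altering any earlier line invalidates all
    later digests, not just its own."""
    digests = []
    prev = seed
    for line in lines:
        prev = hashlib.sha256(prev + line.encode("utf-8")).digest()
        digests.append(prev.hex())
    return digests
```

Note this supports the point in the answer: hashing protects the integrity of the logs, but computing a few digests is cheap and does not burden the authentication system's availability.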
In which of the following programs is it MOST important to include the collection of security process data?
Quarterly access reviews
Security continuous monitoring
Business continuity testing
Annual security training
Security continuous monitoring is the program in which it is most important to include the collection of security process data. Security process data is the data that reflects the performance, effectiveness, and compliance of the security processes, such as the security policies, standards, procedures, and guidelines. Security process data can include metrics, indicators, logs, reports, and assessments. Security process data can provide several benefits, such as measuring the performance and effectiveness of the security controls and processes, identifying trends or anomalies, and demonstrating compliance with security requirements and standards.
Security continuous monitoring is the program in which it is most important to include the collection of security process data, because it is the program that involves maintaining the ongoing awareness of the security status, events, and activities of the system. Security continuous monitoring can enable the system to detect and respond to any security issues or incidents in a timely and effective manner, and to adjust and improve the security controls and processes accordingly. Security continuous monitoring can also help the system to comply with the security requirements and standards from the internal or external authorities or frameworks.
The other options are not the programs in which it is most important to include the collection of security process data, but rather programs that have other objectives or scopes. Quarterly access reviews are programs that involve reviewing and verifying the user accounts and access rights on a quarterly basis. Quarterly access reviews can ensure that the user accounts and access rights are valid, authorized, and up to date, and that any inactive, expired, or unauthorized accounts or rights are removed or revoked. However, quarterly access reviews are not the programs in which it is most important to include the collection of security process data, because they are not focused on the security status, events, and activities of the system, but rather on the user accounts and access rights. Business continuity testing is a program that involves testing and validating the business continuity plan (BCP) and the disaster recovery plan (DRP) of the system. Business continuity testing can ensure that the system can continue or resume its critical functions and operations in case of a disruption or disaster, and that the system can meet the recovery objectives and requirements. However, business continuity testing is not the program in which it is most important to include the collection of security process data, because it is not focused on the security status, events, and activities of the system, but rather on the continuity and recovery of the system. Annual security training is a program that involves providing and updating the security knowledge and skills of the system users and staff on an annual basis. Annual security training can increase the security awareness and competence of the system users and staff, and reduce the human errors or risks that might compromise the system security. 
However, annual security training is not the program in which it is most important to include the collection of security process data, because it is not focused on the security status, events, and activities of the system, but rather on the security education and training of the system users and staff.
A Virtual Machine (VM) environment has five guest Operating Systems (OS) and provides strong isolation. What MUST an administrator review to audit a user’s access to data files?
Host VM monitor audit logs
Guest OS access controls
Host VM access controls
Guest OS audit logs
Guest OS audit logs are what an administrator must review to audit a user’s access to data files in a VM environment that has five guest OS and provides strong isolation. A VM environment is a system that allows multiple virtual machines (VMs) to run on a single physical machine, each with its own OS and applications. A VM environment can provide several benefits, such as improved utilization of hardware resources, strong isolation between workloads, and flexibility in provisioning and managing systems.
A guest OS is the OS that runs on a VM, which is different from the host OS that runs on the physical machine. A guest OS can have its own security controls and mechanisms, such as access controls, encryption, authentication, and audit logs. Audit logs are records that capture and store the information about the events and activities that occur within a system or a network, such as the access and usage of the data files. Audit logs can provide a reactive and detective layer of security by enabling the monitoring and analysis of the system or network behavior, and facilitating the investigation and response of the incidents.
Guest OS audit logs are what an administrator must review to audit a user’s access to data files in a VM environment that has five guest OS and provides strong isolation, because they can provide the most accurate and relevant information about the user’s actions and interactions with the data files on the VM. Guest OS audit logs can also help the administrator to identify and report any unauthorized or suspicious access or disclosure of the data files, and to recommend or implement any corrective or preventive actions.
The other options are not what an administrator must review to audit a user’s access to data files in a VM environment that has five guest OS and provides strong isolation, but rather what an administrator might review for other purposes or aspects. Host VM monitor audit logs are records that capture and store the information about the events and activities that occur on the host VM monitor, which is the software or hardware component that manages and controls the VMs on the physical machine. Host VM monitor audit logs can provide information about the performance, status, and configuration of the VMs, but they cannot provide information about the user’s access to data files on the VMs. Guest OS access controls are rules and mechanisms that regulate and restrict the access and permissions of the users and processes to the resources and services on the guest OS. Guest OS access controls can provide a proactive and preventive layer of security by enforcing the principles of least privilege, separation of duties, and need to know. However, guest OS access controls are not what an administrator must review to audit a user’s access to data files, but rather what an administrator must configure and implement to protect the data files. Host VM access controls are rules and mechanisms that regulate and restrict the access and permissions of the users and processes to the VMs on the physical machine. Host VM access controls can provide a granular and dynamic layer of security by defining and assigning the roles and permissions according to the organizational structure and policies. However, host VM access controls are not what an administrator must review to audit a user’s access to data files, but rather what an administrator must configure and implement to protect the VMs.
Which of the following is a PRIMARY advantage of using a third-party identity service?
Consolidation of multiple providers
Directory synchronization
Web based logon
Automated account management
Consolidation of multiple providers is the primary advantage of using a third-party identity service. A third-party identity service is a service that provides identity and access management (IAM) functions, such as authentication, authorization, and federation, for multiple applications or systems, using a single identity provider (IdP). A third-party identity service can offer various benefits, such as single sign-on (SSO) across multiple applications, centralized management of identities and access policies, and reduced administrative overhead.
Consolidation of multiple providers is the primary advantage of using a third-party identity service, because it can simplify and streamline the IAM architecture and processes, by reducing the number of IdPs and IAM systems that are involved in managing the identities and access for multiple applications or systems. Consolidation of multiple providers can also help to avoid the issues or risks that might arise from having multiple IdPs and IAM systems, such as the inconsistency, redundancy, or conflict of the IAM policies and controls, or the inefficiency, vulnerability, or disruption of the IAM functions.
The other options are not the primary advantages of using a third-party identity service, but rather secondary or specific advantages for different aspects or scenarios of using a third-party identity service. Directory synchronization is an advantage of using a third-party identity service, but it is more relevant for the scenario where the organization has an existing directory service, such as LDAP or Active Directory, that stores and manages the user accounts and attributes, and wants to synchronize them with the third-party identity service, to enable the SSO or federation for the users. Web based logon is an advantage of using a third-party identity service, but it is more relevant for the aspect where the third-party identity service uses a web-based protocol, such as SAML or OAuth, to facilitate the SSO or federation for the users, by redirecting them to a web-based logon page, where they can enter their credentials or consent. Automated account management is an advantage of using a third-party identity service, but it is more relevant for the aspect where the third-party identity service provides the IAM functions, such as provisioning, deprovisioning, or updating, for the user accounts and access rights, using an automated or self-service mechanism, such as SCIM or JIT.
A Business Continuity Plan/Disaster Recovery Plan (BCP/DRP) will provide which of the following?
Guaranteed recovery of all business functions
Minimization of the need for decision making during a crisis
Insurance against litigation following a disaster
Protection from loss of organization resources
Minimization of the need for decision making during a crisis is the main benefit that a Business Continuity Plan/Disaster Recovery Plan (BCP/DRP) will provide. A BCP/DRP is a set of policies, procedures, and resources that enable an organization to continue or resume its critical functions and operations in the event of a disruption or disaster. A BCP/DRP can provide several benefits, such as documented procedures and priorities for responding to a disruption or disaster, faster recovery of critical business functions, and reduced impact and losses.
Minimization of the need for decision making during a crisis is the main benefit that a BCP/DRP will provide, because it can ensure that the organization and its staff have a clear and consistent guidance and direction on how to respond and act during a disruption or disaster, and avoid any confusion, uncertainty, or inconsistency that might worsen the situation or impact. A BCP/DRP can also help to reduce the stress and pressure on the organization and its staff during a crisis, and increase their confidence and competence in executing the plans.
The other options are not the benefits that a BCP/DRP will provide, but rather unrealistic or incorrect expectations or outcomes of a BCP/DRP. Guaranteed recovery of all business functions is not a benefit that a BCP/DRP will provide, because it is not possible or feasible to recover all business functions after a disruption or disaster, especially if the disruption or disaster is severe or prolonged. A BCP/DRP can only prioritize and recover the most critical or essential business functions, and may have to suspend or terminate the less critical or non-essential business functions. Insurance against litigation following a disaster is not a benefit that a BCP/DRP will provide, because it is not a guarantee or protection that the organization will not face any legal or regulatory consequences or liabilities after a disruption or disaster, especially if the disruption or disaster is caused by the organization’s negligence or misconduct. A BCP/DRP can only help to mitigate or reduce the legal or regulatory risks, and may have to comply with or report to the relevant authorities or parties. Protection from loss of organization resources is not a benefit that a BCP/DRP will provide, because it is not a prevention or avoidance of any damage or destruction of the organization’s assets or resources during a disruption or disaster, especially if the disruption or disaster is physical or natural. A BCP/DRP can only help to restore or replace the lost or damaged assets or resources, and may have to incur some costs or losses.
What should be the FIRST action to protect the chain of evidence when a desktop computer is involved?
Take the computer to a forensic lab
Make a copy of the hard drive
Start documenting
Turn off the computer
Making a copy of the hard drive should be the first action to protect the chain of evidence when a desktop computer is involved. A chain of evidence, also known as a chain of custody, is a process that documents and preserves the integrity and authenticity of the evidence collected from a crime scene, such as a desktop computer. A chain of evidence should include information such as who collected and handled the evidence, when and where it was collected, how it was stored and transported, and who accessed it at each stage of the process.
Making a copy of the hard drive should be the first action to protect the chain of evidence when a desktop computer is involved, because it can ensure that the original hard drive is not altered, damaged, or destroyed during the forensic analysis, and that the copy can be used as a reliable and admissible source of evidence. Making a copy of the hard drive should also involve using a write blocker, which is a device or a software that prevents any modification or deletion of the data on the hard drive, and generating a hash value, which is a unique and fixed identifier that can verify the integrity and consistency of the data on the hard drive.
The other options are not the first actions to protect the chain of evidence when a desktop computer is involved, but rather actions that should be done after or along with making a copy of the hard drive. Taking the computer to a forensic lab is an action that should be done after making a copy of the hard drive, because it can ensure that the computer is transported and stored in a secure and controlled environment, and that the forensic analysis is conducted by qualified and authorized personnel. Starting documenting is an action that should be done along with making a copy of the hard drive, because it can ensure that the chain of evidence is maintained and recorded throughout the forensic process, and that the evidence can be traced and verified. Turning off the computer is an action that should be done after making a copy of the hard drive, because it can ensure that the computer is powered down and disconnected from any network or device, and that the computer is protected from any further damage or tampering.
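The copy-and-verify step can be sketched in a few lines of Python using SHA-256. This is a simplified illustration rather than a forensic procedure (real imaging uses write blockers and dedicated imaging tools), but it shows how matching hash values demonstrate that the copy is bit-for-bit identical to the original:

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Hash a file in fixed-size chunks, so even a large disk image
    can be verified without loading it all into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def copy_matches_original(original_path, image_path):
    """True only if the forensic copy has the same SHA-256 digest as
    the source, i.e. the two are bit-for-bit identical."""
    return sha256_of(original_path) == sha256_of(image_path)
```

Recording both digests in the case documentation lets anyone later re-verify that the analyzed image still matches what was collected.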
When is a Business Continuity Plan (BCP) considered to be valid?
When it has been validated by the Business Continuity (BC) manager
When it has been validated by the board of directors
When it has been validated by all threat scenarios
When it has been validated by realistic exercises
A Business Continuity Plan (BCP) is considered to be valid when it has been validated by realistic exercises. A BCP is a part of a BCP/DRP that focuses on ensuring the continuous operation of the organization’s critical business functions and processes during and after a disruption or disaster. A BCP should include various components, such as a business impact analysis, recovery strategies, the BCP document itself, testing, training, and exercises, and ongoing maintenance and review.
A BCP is considered to be valid when it has been validated by realistic exercises, because it can ensure that the BCP is practical and applicable, and that it can achieve the desired outcomes and objectives in a real-life scenario. Realistic exercises are a type of testing, training, and exercises that involve performing and practicing the BCP with the relevant stakeholders, using simulated or hypothetical scenarios, such as a fire drill, a power outage, or a cyberattack. Realistic exercises can provide several benefits, such as revealing gaps or weaknesses in the plan, familiarizing the stakeholders with their roles and responsibilities, and verifying that the recovery objectives can actually be met.
The other options are not criteria for considering a BCP to be valid, but rather steps or parties involved in developing or approving a BCP.

Validation by the Business Continuity (BC) manager is a step in developing a BCP. The BC manager is the person responsible for overseeing and coordinating the BCP activities and processes, such as the business impact analysis, the recovery strategies, the BCP document, the testing, training, and exercises, and the maintenance and review. The BC manager can validate the BCP by reviewing and verifying its components and outcomes, and ensuring that they meet the BCP standards and objectives. However, validation by the BC manager alone is not enough, as it does not test or demonstrate the BCP in a realistic scenario.

Validation by the board of directors is part of approving a BCP. The board of directors is the group of people elected by the shareholders to represent their interests and to oversee the strategic direction and governance of the organization. The board can approve the BCP by endorsing and supporting its components and outcomes, and allocating the necessary resources and funds. However, approval by the board alone is likewise not enough, as it does not test or demonstrate the BCP in a realistic scenario.

Validation by all threat scenarios is not a criterion either, but rather an unrealistic or impossible expectation for validating a BCP.
A threat scenario is a description or a simulation of a possible or potential disruption or disaster that might affect the organization’s critical business functions and processes, such as a natural hazard, a human error, or a technical failure. A threat scenario can be used to test and validate the BCP by measuring and evaluating the BCP’s performance and effectiveness in responding and recovering from the disruption or disaster. However, it is not possible or feasible to validate the BCP by all threat scenarios, as there are too many or unknown threat scenarios that might occur, and some threat scenarios might be too severe or complex to simulate or test. Therefore, the BCP should be validated by the most likely or relevant threat scenarios, and not by all threat scenarios.
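One simple way to picture what a realistic exercise validates: measure the recovery time of each critical function during a drill and compare it against that function's Recovery Time Objective (RTO). This is only an illustrative sketch; the function names and RTO values are hypothetical.

```python
def exercise_passed(results, rto_hours):
    """Judge a BCP exercise: every critical function must recover
    within its Recovery Time Objective (all values hypothetical)."""
    return all(measured <= rto_hours[function]
               for function, measured in results.items())

# RTOs set in the plan, and recovery times measured during the drill:
rto_hours = {"payments": 4, "email": 24}
results = {"payments": 3.5, "email": 12}
print(exercise_passed(results, rto_hours))  # True: the drill met every RTO
```

A real exercise report would of course capture far more than a single number per function, but the pass/fail comparison against stated objectives is the core of what makes the validation meaningful.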
Which of the following is the FIRST step in the incident response process?
Determine the cause of the incident
Disconnect the system involved from the network
Isolate and contain the system involved
Investigate all symptoms to confirm the incident
Investigating all symptoms to confirm the incident is the first step in the incident response process. An incident is an event that violates or threatens the security, availability, integrity, or confidentiality of the IT systems or data. An incident response is a process that involves detecting, analyzing, containing, eradicating, recovering, and learning from an incident, using various methods and tools. An incident response can provide several benefits, such as minimizing the damage and impact of the incident, restoring normal operations as quickly as possible, and preventing similar incidents from occurring in the future.
Investigating all symptoms to confirm the incident is the first step in the incident response process, because it can ensure that the incident is verified and validated, and that the incident response is initiated and escalated. A symptom is a sign or an indication that an incident may have occurred or is occurring, such as an alert, a log, or a report. Investigating all symptoms to confirm the incident involves collecting and analyzing the relevant data and information from various sources, such as the IT systems, the network, the users, or the external parties, and determining whether an incident has actually happened or is happening, and how serious or urgent it is. Investigating all symptoms to confirm the incident can also help to avoid wasting time and resources on false alarms, and to determine the appropriate scope, severity, and priority of the incident response.
The other options are not the first steps in the incident response process, but rather steps that should be done after or along with investigating all symptoms to confirm the incident. Determining the cause of the incident is a step that should be done after investigating all symptoms to confirm the incident, because it can ensure that the root cause and source of the incident are identified and analyzed, and that the incident response is directed and focused. Determining the cause of the incident involves examining and testing the affected IT systems and data, and tracing and tracking the origin and path of the incident, using various techniques and tools, such as forensics, malware analysis, or reverse engineering. Determining the cause of the incident can also help to select the appropriate containment and eradication measures, and to prevent the incident from recurring.
Disconnecting the system involved from the network is a step that should be done along with investigating all symptoms to confirm the incident, because it can ensure that the system is isolated and protected from any external or internal influences or interferences, and that the incident response is conducted in a safe and controlled environment. Disconnecting the system involved from the network can also help to prevent the incident from spreading to other systems, and to preserve the state and evidence of the system for forensic analysis.
Isolating and containing the system involved is a step that should be done after investigating all symptoms to confirm the incident, because it can ensure that the incident is confined and restricted, and that the incident response is continued and maintained. Isolating and containing the system involved involves applying and enforcing the appropriate security measures and controls to limit or stop the activity and impact of the incident on the IT systems and data, such as firewall rules, access policies, or encryption keys. Isolating and containing the system involved can also help to reduce the damage and impact of the incident, and to prepare for the eradication and recovery phases of the incident response.
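The confirmation step can be pictured as a simple triage rule: treat an incident as confirmed only when significant symptoms are corroborated by more than one independent source. This is a rough sketch, not a real detection engine; the severity scale and the two-source threshold are assumptions.

```python
def confirm_incident(symptoms) -> bool:
    """First-step triage sketch: an incident is confirmed only when
    significant symptoms (severity >= 2, an assumed scale) come from
    at least two independent sources, reducing false alarms."""
    significant = {s["source"] for s in symptoms if s["severity"] >= 2}
    return len(significant) >= 2

# Hypothetical symptoms gathered from different sources:
symptoms = [{"source": "ids_alert", "severity": 3},
            {"source": "user_report", "severity": 2},
            {"source": "log_anomaly", "severity": 1}]
print(confirm_incident(symptoms))   # True: two sources corroborate
```

The design choice here mirrors the text: a single unverified alert is a symptom, not yet an incident, so the response is only escalated once the evidence is corroborated.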
What would be the MOST cost effective solution for a Disaster Recovery (DR) site given that the organization’s systems cannot be unavailable for more than 24 hours?
Warm site
Hot site
Mirror site
Cold site
A warm site is the most cost effective solution for a disaster recovery (DR) site given that the organization’s systems cannot be unavailable for more than 24 hours. A DR site is a backup facility that can be used to restore the normal operation of the organization’s IT systems and infrastructure after a disruption or disaster. A DR site can have different levels of readiness and functionality, depending on the organization’s recovery objectives and budget. The main types of DR sites are hot sites, which are fully equipped and operational and can take over within minutes or hours; warm sites, which are partially equipped with hardware and connectivity and can be operational within hours or days; cold sites, which provide only basic space and utilities and can take days or weeks to activate; and mirror sites, which fully duplicate the primary site and remain online and synchronized at all times.
A warm site is the most cost effective solution for a disaster recovery (DR) site given that the organization’s systems cannot be unavailable for more than 24 hours, because it can provide a balance between the recovery time and the recovery cost. A warm site can enable the organization to resume its critical functions and operations within a reasonable time frame, without spending too much on the DR site maintenance and operation. A warm site can also provide some flexibility and scalability for the organization to adjust its recovery strategies and resources according to its needs and priorities.
The other options are not the most cost effective solutions for this scenario, but rather solutions that are either too costly or too slow for the organization’s recovery objectives and budget. A hot site is too costly, because it requires the organization to invest heavily in DR site equipment, software, and services, and to pay ongoing operational and maintenance costs. A hot site is more suitable for systems that cannot be unavailable for more than a few hours or minutes, or that have very high availability and performance requirements. A mirror site is also too costly, because it requires the organization to duplicate its entire primary site, with the same hardware, software, data, and applications, and to keep them online and synchronized at all times. A mirror site is more suitable for systems that cannot afford any downtime or data loss, or that have very strict compliance and regulatory requirements. A cold site is too slow, because it requires the organization to spend considerable time and effort on DR site installation, configuration, and restoration, and to rely on other sources of backup data and applications. A cold site is more suitable for systems that can be unavailable for several days or weeks, or that have very low criticality and priority.
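The trade-off described above can be summarized as a lookup from maximum tolerable downtime to the cheapest site type that can meet it. The thresholds below are illustrative assumptions, not fixed industry figures.

```python
def choose_dr_site(max_outage_hours: float) -> str:
    """Pick the cheapest DR site type that can meet a maximum
    tolerable downtime. Thresholds are illustrative assumptions."""
    if max_outage_hours < 1:
        return "mirror site"   # near-zero downtime tolerated
    if max_outage_hours < 8:
        return "hot site"      # recovery within a few hours
    if max_outage_hours <= 72:
        return "warm site"     # recovery within hours to days
    return "cold site"         # days or weeks are acceptable

# The scenario in the question: systems unavailable for at most 24 hours.
print(choose_dr_site(24))   # warm site
```

The point of the sketch is the ordering, not the exact numbers: as the tolerable downtime shrinks, the required readiness, and therefore the cost, of the DR site rises.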
What is the PRIMARY reason for implementing change management?
Certify and approve releases to the environment
Provide version rollbacks for system changes
Ensure that all applications are approved
Ensure accountability for changes to the environment
Ensuring accountability for changes to the environment is the primary reason for implementing change management. Change management is a process that ensures that any changes to the system or network environment, such as the hardware, software, configuration, or documentation, are planned, approved, implemented, and documented in a controlled and consistent manner. Change management can provide several benefits, such as reducing the risk of unintended disruptions or errors, maintaining an audit trail of all changes, and supporting compliance and quality requirements.
Ensuring accountability for changes to the environment is the primary reason for implementing change management, because it can ensure that the changes are authorized, justified, and traceable, and that the parties involved in the changes are responsible and accountable for their actions and results. Accountability can also help to deter or detect any unauthorized or malicious changes that might compromise the system or network environment.
The other options are not the primary reasons for implementing change management, but rather secondary or specific reasons for different aspects or phases of change management. Certifying and approving releases to the environment is a reason for implementing change management, but it is more relevant for the approval phase of change management, which is the phase that involves reviewing and validating the changes and their impacts, and granting or denying the permission to proceed with the changes. Providing version rollbacks for system changes is a reason for implementing change management, but it is more relevant for the implementation phase of change management, which is the phase that involves executing and monitoring the changes and their effects, and providing the backup and recovery options for the changes. Ensuring that all applications are approved is a reason for implementing change management, but it is more relevant for the application changes, which are the changes that affect the software components or services that provide the functionality or logic of the system or network environment.
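Accountability in change management ultimately comes down to recording who requested a change, who approved it, and when each action happened. A minimal sketch of such a record (the field names and workflow are assumptions, not any particular tool's schema):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ChangeRecord:
    """Minimal change-management record: every change is tied to a
    requester and an approver, giving accountability and an audit trail."""
    change_id: str
    description: str
    requested_by: str
    approved_by: str = ""
    status: str = "requested"
    history: list = field(default_factory=list)

    def log(self, actor: str, action: str):
        # Timestamped audit-trail entry for each action on the change.
        self.history.append((datetime.now(timezone.utc).isoformat(), actor, action))

    def approve(self, approver: str):
        self.approved_by = approver
        self.status = "approved"
        self.log(approver, "approved")

# Hypothetical change: alice requests it, bob approves it.
change = ChangeRecord("CHG-1001", "Open port 443 on web firewall", "alice")
change.approve("bob")
print(change.status, change.approved_by)   # approved bob
```

Because every action appends a timestamped entry naming the actor, unauthorized or undocumented changes become traceable, which is exactly the accountability the text describes.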
What is the purpose of an Internet Protocol (IP) spoofing attack?
To send excessive amounts of data to a process, making it unpredictable
To intercept network traffic without authorization
To disguise the destination address from a target’s IP filtering devices
To convince a system that it is communicating with a known entity
The purpose of an Internet Protocol (IP) spoofing attack is to convince a system that it is communicating with a known entity. IP spoofing is a technique that involves creating and sending IP packets with a forged source IP address, which is usually the IP address of a trusted or authorized host. IP spoofing can be used for various malicious purposes, such as launching denial-of-service (DoS) or distributed denial-of-service (DDoS) attacks, hijacking or intercepting TCP sessions, or bypassing IP-based authentication and filtering.
The purpose of IP spoofing is to convince a system that it is communicating with a known entity, because it allows the attacker to evade detection, avoid responsibility, and exploit trust relationships.
The other options are not the main purposes of IP spoofing, but rather the possible consequences or methods of IP spoofing. To send excessive amounts of data to a process, making it unpredictable is a possible consequence of IP spoofing, as it can cause a DoS or DDoS attack. To intercept network traffic without authorization is a possible method of IP spoofing, as it can be used to hijack or intercept a TCP session. To disguise the destination address from a target’s IP filtering devices is not a valid option, as IP spoofing involves forging the source address, not the destination address.
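To make the forged-source idea concrete, here is a sketch that builds a minimal IPv4 header with the source field set to a trusted address (the addresses are hypothetical). Actually transmitting such a packet would require raw-socket privileges; the point is only that the source address field is entirely attacker-controlled.

```python
import struct
import socket

def ipv4_checksum(header: bytes) -> int:
    """RFC 791 ones'-complement checksum over the header bytes."""
    total = 0
    for i in range(0, len(header), 2):
        total += (header[i] << 8) | header[i + 1]
    while total >> 16:                       # fold carries back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def build_ipv4_header(src: str, dst: str, payload_len: int = 0) -> bytes:
    """Build a minimal 20-byte IPv4 header. In a spoofing attack the
    src field is simply set to the address of a trusted host."""
    ver_ihl = (4 << 4) | 5                   # version 4, 5 x 32-bit words
    header = struct.pack("!BBHHHBBH4s4s",
                         ver_ihl, 0, 20 + payload_len,
                         0, 0,               # identification, flags/fragment
                         64, socket.IPPROTO_TCP,
                         0,                  # checksum placeholder
                         socket.inet_aton(src),
                         socket.inet_aton(dst))
    csum = ipv4_checksum(header)
    return header[:10] + struct.pack("!H", csum) + header[12:]

# The attacker claims to be the trusted host 10.0.0.5 (hypothetical):
spoofed = build_ipv4_header("10.0.0.5", "192.0.2.1")
print(socket.inet_ntoa(spoofed[12:16]))   # 10.0.0.5
```

Nothing in the header authenticates the source field, which is why the receiving system can be convinced it is talking to a known entity, and why defenses such as ingress filtering have to happen elsewhere in the network.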
An external attacker has compromised an organization’s network security perimeter and installed a sniffer onto an inside computer. Which of the following is the MOST effective layer of security the organization could have implemented to mitigate the attacker’s ability to gain further information?
Implement packet filtering on the network firewalls
Install Host Based Intrusion Detection Systems (HIDS)
Require strong authentication for administrators
Implement logical network segmentation at the switches
Implementing logical network segmentation at the switches is the most effective layer of security the organization could have implemented to mitigate the attacker’s ability to gain further information. Logical network segmentation is the process of dividing a network into smaller subnetworks or segments based on criteria such as function, location, or security level. Logical network segmentation can be implemented at the switches, which are devices that operate at the data link layer of the OSI model and forward data packets based on the MAC addresses. Logical network segmentation can provide several benefits, such as reducing broadcast traffic and congestion, improving network performance and manageability, and limiting the spread of attacks by isolating sensitive or critical resources.
Logical network segmentation can mitigate the attacker’s ability to gain further information by limiting the visibility and access of the sniffer to the segment where it is installed. A sniffer is a tool that captures and analyzes the data packets that are transmitted over a network. A sniffer can be used for legitimate purposes, such as troubleshooting, testing, or monitoring the network, or for malicious purposes, such as eavesdropping, stealing, or modifying the data. A sniffer can only capture the data packets that are within its broadcast domain, which is the set of devices that can communicate with each other without a router. By implementing logical network segmentation at the switches, the organization can create multiple broadcast domains and isolate the sensitive or critical data from the compromised segment. This way, the attacker can only see the data packets that belong to the same segment as the sniffer, and not the data packets that belong to other segments. This can prevent the attacker from gaining further information or accessing other resources on the network.
The other options are not the most effective layers of security the organization could have implemented to mitigate the attacker’s ability to gain further information, but rather layers that have other limitations or drawbacks. Implementing packet filtering on the network firewalls is not the most effective layer of security, because packet filtering only examines the network layer header of the data packets, such as the source and destination IP addresses, and does not inspect the payload or the content of the data. Packet filtering can also be bypassed by using techniques such as IP spoofing or fragmentation. Installing Host Based Intrusion Detection Systems (HIDS) is not the most effective layer of security, because HIDS only monitors and detects the activities and events on a single host, and does not prevent or respond to the attacks. HIDS can also be disabled or evaded by the attacker if the host is compromised. Requiring strong authentication for administrators is not the most effective layer of security, because authentication only verifies the identity of the users or processes, and does not protect the data in transit or at rest. Authentication can also be defeated by using techniques such as phishing, keylogging, or credential theft.
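The broadcast-domain point can be illustrated with the standard library's `ipaddress` module: a sniffer on the compromised host can only capture traffic within its own segment. The addresses and subnet below are hypothetical.

```python
import ipaddress

def same_segment(ip_a: str, ip_b: str, network: str) -> bool:
    """Return True when both hosts sit in the given subnet, meaning a
    sniffer on one could see local traffic involving the other."""
    net = ipaddress.ip_network(network)
    return (ipaddress.ip_address(ip_a) in net and
            ipaddress.ip_address(ip_b) in net)

# Hypothetical addressing: the compromised host sits on 10.1.1.0/24.
print(same_segment("10.1.1.20", "10.1.1.99", "10.1.1.0/24"))  # True
print(same_segment("10.1.1.20", "10.1.2.99", "10.1.1.0/24"))  # False: other segment
```

After segmentation, hosts in other subnets only exchange traffic with the compromised segment through a routing device, so the sniffer never sees their local conversations.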
In a Transmission Control Protocol/Internet Protocol (TCP/IP) stack, which layer is responsible for negotiating and establishing a connection with another node?
Transport layer
Application layer
Network layer
Session layer
The transport layer of the Transmission Control Protocol/Internet Protocol (TCP/IP) stack is responsible for negotiating and establishing a connection with another node. The TCP/IP stack is a simplified version of the OSI model, and it consists of four layers: application, transport, internet, and link. The transport layer is the third layer of the TCP/IP stack, and it is responsible for providing reliable and efficient end-to-end data transfer between two nodes on a network. The transport layer uses protocols, such as Transmission Control Protocol (TCP) or User Datagram Protocol (UDP), to segment, sequence, acknowledge, and reassemble the data packets, and to handle error detection and correction, flow control, and congestion control. The transport layer also provides connection-oriented or connectionless services, depending on the protocol used.
TCP is a connection-oriented protocol, which means that it establishes a logical connection between two nodes before exchanging data, and it maintains the connection until the data transfer is complete. TCP uses a three-way handshake to negotiate and establish a connection with another node. The three-way handshake works as follows: first, the initiating node sends a SYN (synchronize) packet containing its initial sequence number; second, the receiving node replies with a SYN/ACK packet that acknowledges the initiator’s sequence number and provides its own; third, the initiating node sends an ACK packet that acknowledges the receiver’s sequence number, completing the connection.
UDP is a connectionless protocol, which means that it does not establish or maintain a connection between two nodes, but rather sends data packets independently and without any guarantee of delivery, order, or integrity. UDP does not use a handshake or any other mechanism to negotiate and establish a connection with another node, but rather relies on the application layer to handle any connection-related issues.
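The TCP handshake itself is performed by the operating system's network stack; in application code it is hidden inside `connect()` and `accept()`. A minimal loopback sketch, where the handshake happens at the marked line:

```python
import socket
import threading

def serve_once(server: socket.socket):
    conn, _ = server.accept()     # returns once the handshake completes
    conn.sendall(b"connected")
    conn.close()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))     # any free port on loopback
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=serve_once, args=(server,)).start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))   # SYN -> SYN/ACK -> ACK happens here
reply = client.recv(1024)
client.close()
server.close()
print(reply)   # b'connected'
```

A UDP socket, by contrast, would simply call `sendto()` with no prior exchange, which is exactly the connectionless behavior described above.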
Which of the following operates at the Network Layer of the Open System Interconnection (OSI) model?
Packet filtering
Port services filtering
Content filtering
Application access control
Packet filtering operates at the network layer of the Open System Interconnection (OSI) model. The OSI model is a conceptual framework that describes how data is transmitted and processed across different layers of a network. The OSI model consists of seven layers: application, presentation, session, transport, network, data link, and physical. The network layer is the third layer from the bottom of the OSI model, and it is responsible for routing and forwarding data packets between different networks or subnets. The network layer uses logical addresses, such as IP addresses, to identify the source and destination of the data packets, and it uses protocols, such as IP, ICMP, or ARP, to perform the routing and forwarding functions.
Packet filtering is a technique that controls the access to a network or a host by inspecting the incoming and outgoing data packets and applying a set of rules or policies to allow or deny them. Packet filtering can be performed by devices, such as routers, firewalls, or proxies, that operate at the network layer of the OSI model. Packet filtering typically examines the network layer header of the data packets, such as the source and destination IP addresses, the protocol type, or the fragmentation flags, and compares them with the predefined rules or policies. Packet filtering can also examine the transport layer header of the data packets, such as the source and destination port numbers, the TCP flags, or the sequence numbers, and compare them with the rules or policies. Packet filtering can provide a basic level of security and performance for a network or a host, but it also has some limitations, such as the inability to inspect the payload or the content of the data packets, the vulnerability to spoofing or fragmentation attacks, or the complexity and maintenance of the rules or policies.
The other options are not techniques that operate at the network layer of the OSI model, but rather at other layers. Port services filtering is a technique that controls the access to a network or a host by inspecting the transport layer header of the data packets and applying a set of rules or policies to allow or deny them based on the port numbers or the services. Port services filtering operates at the transport layer of the OSI model, which is the fourth layer from the bottom. Content filtering is a technique that controls the access to a network or a host by inspecting the application layer payload or the content of the data packets and applying a set of rules or policies to allow or deny them based on the keywords, URLs, file types, or other criteria. Content filtering operates at the application layer of the OSI model, which is the seventh and the topmost layer. Application access control is a technique that controls the access to a network or a host by inspecting the application layer identity or the credentials of the users or the processes and applying a set of rules or policies to allow or deny them based on the roles, permissions, or other attributes. Application access control operates at the application layer of the OSI model, which is the seventh and the topmost layer.
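The first-match rule evaluation that packet filtering performs can be sketched as follows. The rule set, addresses, and field names are hypothetical; real devices match on many more header fields.

```python
from dataclasses import dataclass
from typing import List, Optional
import ipaddress

@dataclass
class Rule:
    action: str                     # "allow" or "deny"
    src_net: str                    # source network in CIDR notation
    dst_port: Optional[int] = None  # None matches any destination port

def filter_packet(rules: List[Rule], src_ip: str, dst_port: int,
                  default: str = "deny") -> str:
    """Evaluate rules in order; the first matching rule wins, as in a
    typical packet-filtering router or firewall."""
    for rule in rules:
        in_net = ipaddress.ip_address(src_ip) in ipaddress.ip_network(rule.src_net)
        port_ok = rule.dst_port is None or rule.dst_port == dst_port
        if in_net and port_ok:
            return rule.action
    return default

# Hypothetical rule set: allow HTTPS from the internal LAN, deny all else.
rules = [Rule("allow", "192.168.1.0/24", 443),
         Rule("deny", "0.0.0.0/0")]
print(filter_packet(rules, "192.168.1.10", 443))  # allow
print(filter_packet(rules, "203.0.113.7", 443))   # deny
```

Note that the decision uses only header fields (source address, destination port), which is precisely the limitation the text mentions: the payload is never inspected.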
An input validation and exception handling vulnerability has been discovered on a critical web-based system. Which of the following is MOST suited to quickly implement a control?
Add a new rule to the application layer firewall
Block access to the service
Install an Intrusion Detection System (IDS)
Patch the application source code
Adding a new rule to the application layer firewall is the most suited to quickly implement a control for an input validation and exception handling vulnerability on a critical web-based system. An input validation and exception handling vulnerability is a type of vulnerability that occurs when a web-based system does not properly check, filter, or sanitize the input data that is received from the users or other sources, or does not properly handle the errors or exceptions that are generated by the system. An input validation and exception handling vulnerability can lead to various attacks, such as SQL injection, cross-site scripting (XSS), buffer overflow, or denial-of-service attacks.
An application layer firewall is a device or software that operates at the application layer of the OSI model and inspects the application layer payload or the content of the data packets. An application layer firewall can provide various functions, such as inspecting and filtering requests and responses based on their content, blocking known attack patterns or signatures, and enforcing input validation rules on behalf of the protected application.
Adding a new rule to the application layer firewall is the most suited to quickly implement a control for an input validation and exception handling vulnerability on a critical web-based system, because it can prevent or reduce the impact of the attacks by filtering or blocking the malicious or invalid input data that exploit the vulnerability. For example, a new rule can be added to the application layer firewall to reject requests that contain known malicious patterns, such as SQL injection or script injection strings, or to limit the length, type, and format of the input fields.
Adding a new rule to the application layer firewall can be done quickly and easily, without requiring any changes or patches to the web-based system, which can be time-consuming and risky, especially for a critical system. Adding a new rule to the application layer firewall can also be done remotely and centrally, without requiring any physical access or installation on the web-based system, which can be inconvenient and costly, especially for a distributed system.
The other options are not the most suited to quickly implement a control for an input validation and exception handling vulnerability on a critical web-based system, but rather options that have other limitations or drawbacks. Blocking access to the service is not the most suited option, because it can cause disruption and unavailability of the service, which can affect the business operations and customer satisfaction, especially for a critical system. Blocking access to the service can also be a temporary and incomplete solution, as it does not address the root cause of the vulnerability or prevent the attacks from occurring again. Installing an Intrusion Detection System (IDS) is not the most suited option, because IDS only monitors and detects the attacks, and does not prevent or respond to them. IDS can also generate false positives or false negatives, which can affect the accuracy and reliability of the detection. IDS can also be overwhelmed or evaded by the attacks, which can affect the effectiveness and efficiency of the detection. Patching the application source code is not the most suited option, because it can take a long time and require a lot of resources and testing to identify, fix, and deploy the patch, especially for a complex and critical system. Patching the application source code can also introduce new errors or vulnerabilities, which can affect the functionality and security of the system. Patching the application source code can also be difficult or impossible, if the system is proprietary or legacy, which can affect the feasibility and compatibility of the patch.
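What such a firewall rule might check can be sketched as a pattern blocklist plus a length limit. The patterns below are illustrative examples of SQL- and script-injection fragments, not a production rule set.

```python
import re

# Illustrative patterns a new application-firewall rule might block:
BLOCK_PATTERNS = [
    re.compile(r"('|\")\s*(or|and)\s+\d+\s*=\s*\d+", re.IGNORECASE),  # ' OR 1=1
    re.compile(r"<\s*script", re.IGNORECASE),                          # <script> tags
    re.compile(r";\s*drop\s+table", re.IGNORECASE),                    # stacked query
]
MAX_FIELD_LENGTH = 256   # assumed per-field limit

def inspect_input(value: str) -> str:
    """Return 'block' for input the rule matches, 'pass' otherwise."""
    if len(value) > MAX_FIELD_LENGTH:
        return "block"                  # oversized input field
    for pattern in BLOCK_PATTERNS:
        if pattern.search(value):
            return "block"
    return "pass"

print(inspect_input("alice@example.com"))          # pass
print(inspect_input("x' OR 1=1 --"))               # block
print(inspect_input("<script>alert(1)</script>"))  # block
```

Deploying a check like this at the firewall buys time while the application source code is properly patched; it is a mitigation in front of the vulnerability, not a fix of it.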
At what level of the Open System Interconnection (OSI) model is data at rest on a Storage Area Network (SAN) located?
Link layer
Physical layer
Session layer
Application layer
Data at rest on a Storage Area Network (SAN) is located at the physical layer of the Open System Interconnection (OSI) model. The OSI model is a conceptual framework that describes how data is transmitted and processed across different layers of a network. The OSI model consists of seven layers: application, presentation, session, transport, network, data link, and physical. The physical layer is the lowest layer of the OSI model, and it is responsible for the transmission and reception of raw bits over a physical medium, such as cables, wires, or optical fibers. The physical layer defines the physical characteristics of the medium, such as voltage, frequency, modulation, connectors, etc. The physical layer also deals with the physical topology of the network, such as bus, ring, star, mesh, etc.
A Storage Area Network (SAN) is a dedicated network that provides access to consolidated and block-level data storage. A SAN consists of storage devices, such as disks, tapes, or arrays, that are connected to servers or clients via a network infrastructure, such as switches, routers, or hubs. A SAN allows multiple servers or clients to share the same storage devices, and it provides high performance, availability, scalability, and security for data storage. Data at rest on a SAN is located at the physical layer of the OSI model, because it is stored as raw bits on the physical medium of the storage devices, and it is accessed by the servers or clients through the physical medium of the network infrastructure.
Which of the following is the BEST network defense against unknown types of attacks or stealth attacks in progress?
Intrusion Prevention Systems (IPS)
Intrusion Detection Systems (IDS)
Stateful firewalls
Network Behavior Analysis (NBA) tools
Network Behavior Analysis (NBA) tools are the best network defense against unknown types of attacks or stealth attacks in progress. NBA tools are devices or software that monitor and analyze the network traffic and activities, and detect any anomalies or deviations from the normal or expected behavior. NBA tools use various techniques, such as statistical analysis, machine learning, artificial intelligence, or heuristics, to establish a baseline of the network behavior, and to identify any outliers or indicators of compromise. NBA tools can provide several benefits, such as detecting unknown or zero-day attacks that signature-based tools miss, identifying stealthy attacks that are already in progress, and providing visibility into the normal and abnormal traffic patterns of the network.
The other options are not the best network defense against unknown types of attacks or stealth attacks in progress, but rather network defenses that have other limitations or drawbacks. Intrusion Prevention Systems (IPS) are devices or software that monitor and block the network traffic and activities that match the predefined signatures or rules of known attacks. IPS can provide a proactive and preventive layer of security, but they cannot detect or stop unknown types of attacks or stealth attacks that do not match any signatures or rules, or that can evade or disable the IPS. Intrusion Detection Systems (IDS) are devices or software that monitor and alert the network traffic and activities that match the predefined signatures or rules of known attacks. IDS can provide a reactive and detective layer of security, but they cannot detect or alert unknown types of attacks or stealth attacks that do not match any signatures or rules, or that can evade or disable the IDS. Stateful firewalls are devices or software that filter and control the network traffic and activities based on the state and context of the network sessions, such as the source and destination IP addresses, port numbers, protocol types, and sequence numbers. Stateful firewalls can provide a granular and dynamic layer of security, but they cannot filter or control unknown types of attacks or stealth attacks that use valid or spoofed network sessions, or that can exploit or bypass the firewall rules.
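The statistical-baseline idea can be sketched with a simple z-score check: flag any observation that deviates too far from the baseline mean. Real NBA tools model many metrics with far more sophistication; this is only a toy stand-in, and the metric and numbers below are hypothetical.

```python
from statistics import mean, stdev

def anomalies(baseline, observed, threshold=3.0):
    """Flag observations more than `threshold` standard deviations
    from the baseline mean -- a crude stand-in for the statistical
    baselining an NBA tool performs on traffic metrics."""
    mu, sigma = mean(baseline), stdev(baseline)
    return [x for x in observed if abs(x - mu) > threshold * sigma]

# Hypothetical metric: outbound connections per minute from one host.
baseline = [12, 15, 11, 14, 13, 12, 16, 14, 13, 15]
observed = [14, 13, 250, 12]        # 250 is a sudden spike
print(anomalies(baseline, observed))   # [250]
```

Because the check compares against learned behavior rather than known attack signatures, it can flag a never-before-seen attack, which is exactly the advantage over signature-based IPS and IDS described above.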
Which of the following factors contributes to the weakness of Wired Equivalent Privacy (WEP) protocol?
WEP uses a small range Initialization Vector (IV)
WEP uses Message Digest 5 (MD5)
WEP uses Diffie-Hellman
WEP does not use any Initialization Vector (IV)
WEP uses a small range Initialization Vector (IV) is the factor that contributes to the weakness of Wired Equivalent Privacy (WEP) protocol. WEP is a security protocol that provides encryption and authentication for wireless networks, such as Wi-Fi. WEP uses the RC4 stream cipher to encrypt the data packets, and the CRC-32 checksum to verify the data integrity. WEP also uses a shared secret key, which is concatenated with a 24-bit Initialization Vector (IV), to generate the keystream for the RC4 encryption. WEP has several weaknesses and vulnerabilities, such as the small 24-bit IV space, which causes the same keystream to be reused after a relatively small number of frames; weaknesses in the RC4 key scheduling, which allow attackers to recover the secret key from captured packets; and the linear CRC-32 checksum, which allows packets to be modified without detection.
WEP has been deprecated and replaced by more secure protocols, such as Wi-Fi Protected Access (WPA) or Wi-Fi Protected Access II (WPA2), which use stronger encryption and authentication methods, such as the Temporal Key Integrity Protocol (TKIP), the Advanced Encryption Standard (AES), or the Extensible Authentication Protocol (EAP).
The other options are not factors that contribute to the weakness of WEP, but rather factors that are irrelevant or incorrect. WEP does not use Message Digest 5 (MD5), which is a hash function that produces a 128-bit output from a variable-length input. WEP does not use Diffie-Hellman, which is a method for generating a shared secret key between two parties. WEP does use an Initialization Vector (IV), which is a 24-bit value that is concatenated with the secret key.
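The small-IV weakness is essentially the birthday problem over a space of 2^24 values. A quick estimate of the probability that at least one IV repeats after a given number of frames:

```python
import math

def iv_collision_probability(frames: int, iv_bits: int = 24) -> float:
    """Birthday-problem estimate of the chance that at least one
    IV value repeats after `frames` transmitted frames."""
    space = 2 ** iv_bits
    return 1.0 - math.exp(-frames * (frames - 1) / (2.0 * space))

# With only 2^24 (~16.7 million) possible IVs, repeats come fast:
print(round(iv_collision_probability(5_000), 2))    # ~0.53
print(round(iv_collision_probability(12_000), 2))   # ~0.99
```

A busy access point can send thousands of frames per second, so keystream reuse is nearly certain within minutes, and any reused keystream lets an attacker start recovering plaintext.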
Which of the following is used by the Point-to-Point Protocol (PPP) to determine packet formats?
Layer 2 Tunneling Protocol (L2TP)
Link Control Protocol (LCP)
Challenge Handshake Authentication Protocol (CHAP)
Packet Transfer Protocol (PTP)
Link Control Protocol (LCP) is used by the Point-to-Point Protocol (PPP) to determine packet formats. PPP is a data link layer protocol that provides a standard method for transporting network layer packets over point-to-point links, such as serial lines, modems, or dial-up connections. PPP supports various network layer protocols, such as IP, IPX, or AppleTalk, and it can encapsulate them in a common frame format. PPP also provides features such as authentication, compression, error detection, and multilink aggregation. LCP is a subprotocol of PPP that is responsible for establishing, configuring, maintaining, and terminating the point-to-point connection. LCP negotiates and agrees on various options and parameters for the PPP link, such as the maximum transmission unit (MTU), the authentication method, the compression method, the error detection method, and the packet format. LCP uses a series of messages, such as configure-request, configure-ack, configure-nak, configure-reject, terminate-request, terminate-ack, code-reject, protocol-reject, echo-request, echo-reply, and discard-request, to communicate and exchange information between the PPP peers.
The other options are not used by PPP to determine packet formats, but rather for other purposes. Layer 2 Tunneling Protocol (L2TP) is a tunneling protocol that allows the creation of virtual private networks (VPNs) over public networks, such as the Internet. L2TP encapsulates PPP frames in IP datagrams and sends them across the tunnel between two L2TP endpoints. L2TP does not determine the packet format of PPP, but rather uses it as a payload. Challenge Handshake Authentication Protocol (CHAP) is an authentication protocol that is used by PPP to verify the identity of the remote peer before allowing access to the network. CHAP uses a challenge-response mechanism that involves a random number (nonce) and a hash function to prevent replay attacks. CHAP does not determine the packet format of PPP, but rather uses it as a transport. Packet Transfer Protocol (PTP) is not a valid option, as there is no such protocol with this name. There is a Point-to-Point Protocol over Ethernet (PPPoE), which is a protocol that encapsulates PPP frames in Ethernet frames and allows the use of PPP over Ethernet networks. PPPoE does not determine the packet format of PPP, but rather uses it as a payload.
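CHAP's challenge-response mechanism, mentioned above, is simple enough to sketch directly from RFC 1994: the peer never sends the shared secret, only an MD5 digest over the CHAP identifier, the secret, and the random challenge. The function names below are illustrative, not from any PPP library.

```python
import hashlib
import os

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    """RFC 1994 response: MD5(Identifier || secret || challenge)."""
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

# Authenticator side: issue a fresh random challenge per session,
# so a captured response cannot be replayed later.
identifier = 1
challenge = os.urandom(16)
secret = b"shared-secret"

# Peer computes the response; the authenticator recomputes and compares.
response = chap_response(identifier, secret, challenge)
assert response == chap_response(identifier, secret, challenge)
print("CHAP response verified:", response.hex())
```

Because the challenge changes every time, the same secret yields a different response per session, which is the replay protection the explanation refers to.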
Which one of the following would cause an immediate review and possible change to the security policies of an organization?
Change in technology
Change in senior management
Change to organization processes
Change to organization goals
A security policy is a document that defines the security vision, mission, principles, objectives, and requirements of an organization, as well as the roles, responsibilities, and expectations of the organization and the stakeholders regarding the security of the organization’s assets, processes, and activities. A security policy is the foundation and framework of the organization’s security program, and it guides and influences the security decisions and actions of the organization and the stakeholders. A security policy should be reviewed and updated regularly to ensure that it reflects the current and future security needs and challenges of the organization and the stakeholders. One of the factors that would cause an immediate review and possible change to the security policy of an organization is a change to organization goals. The organization goals are the desired outcomes or results that the organization wants to achieve in terms of its vision, mission, values, and strategies. The organization goals can affect the security policy of the organization, as they can determine the scope, direction, and priority of the security policy, as well as the security risks and opportunities that the organization faces. A change to organization goals can have a significant impact on the security policy of the organization, as it can create a gap or a misalignment between the security policy and the organization goals, and expose the organization to new or increased security threats or vulnerabilities. Therefore, a change to organization goals would require an immediate review and possible change to the security policy of the organization, to ensure that the security policy is consistent and compatible with the organization goals, and that the security policy supports and enables the organization to achieve its goals. 
Change in technology, change in senior management, or change to organization processes are not the factors that would cause an immediate review and possible change to the security policy of an organization, as they are more related to the operational, personnel, or procedural aspects of the organization. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1: Security Governance Through Principles and Policies, page 26; CISSP Official (ISC)2 Practice Tests, Third Edition, Domain 1: Security and Risk Management, Question 1.10, page 55.
Which testing method requires very limited or no information about the network infrastructure?
White box
Static
Black box
Stress
The testing method that requires very limited or no information about the network infrastructure is black box. Black box is a type of testing method that treats the system or network as a black box, meaning that the tester has no or minimal knowledge of the internal structure, design, or configuration of the system or network. Black box testing focuses on the functionality and behavior of the system or network, rather than the implementation or logic of the system or network. Black box testing can help to simulate the perspective and actions of an external attacker, who may not have access to the detailed information about the system or network, and who may exploit the vulnerabilities or weaknesses of the system or network based on the inputs and outputs of the system or network. Black box testing can also help to identify the unexpected or unintended results or errors of the system or network, as well as to measure the performance and usability of the system or network. White box, static, or stress are not the testing methods that require very limited or no information about the network infrastructure, as they are either more comprehensive, analytical, or intensive testing methods that involve the knowledge or examination of the internal structure, design, or configuration of the system or network. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 21: Software Development Security, page 1163; CISSP Official (ISC)2 Practice Tests, Third Edition, Domain 8: Software Development Security, Question 8.9, page 304.
A client has reviewed a vulnerability assessment report and has stated it is Inaccurate. The client states that the vulnerabilities listed are not valid because the host’s Operating System (OS) was not properly detected.
Where in the vulnerability assessment process did the error MOST likely occur?
Detection
Enumeration
Reporting
Discovery
Detection is the stage in the vulnerability assessment process where the error most likely occurred. Detection is the process of identifying the vulnerabilities that exist on the target system or network, using various tools and techniques, such as scanners, sniffers, or exploit frameworks. Detection relies on the accurate identification of the host’s operating system, as different operating systems may have different vulnerabilities. If the host’s operating system was not properly detected, the detection process may produce false positives or false negatives, resulting in an inaccurate vulnerability assessment report. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6: Security Assessment and Testing, page 284; [Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 6: Security Assessment and Testing, page 410]
A subscription service which provides power, climate control, raised flooring, and telephone wiring but NOT the computer and peripheral equipment is BEST described as a:
warm site.
reciprocal site.
cold site.
hot site.
A cold site is a type of backup site that provides the basic infrastructure, such as power, climate control, raised flooring, and telephone wiring, but not the computer and peripheral equipment. A cold site requires a longer recovery time than a warm site or a hot site, as the equipment and data need to be transported and installed at the site. A warm site is a backup site that has some of the equipment and data ready, but not fully operational. A hot site is a backup site that has all the equipment and data ready, and can be switched on immediately. A reciprocal site is an agreement between two organizations to use each other’s facilities in case of a disaster. References: [CISSP CBK Reference, 5th Edition, Chapter 7, page 365]; [CISSP All-in-One Exam Guide, 8th Edition, Chapter 7, page 345]
A software engineer uses automated tools to review application code and search for application flaws, back doors, or other malicious code. Which of the following is the
FIRST Software Development Life Cycle (SDLC) phase where this takes place?
Design
Test
Development
Deployment
The development phase is the first Software Development Life Cycle (SDLC) phase where a software engineer uses automated tools to review application code and search for application flaws, back doors, or other malicious code. The development phase is the phase where the software engineer writes, compiles, and tests the application code, based on the design specifications and requirements. The development phase is also the phase where the software engineer performs code review and analysis, using automated tools, such as static or dynamic analysis tools, to identify and eliminate any errors, vulnerabilities, or malicious code in the application code. Code review and analysis is an important security activity in the development phase, as it can help to improve the quality, functionality, and security of the application, and to prevent or mitigate any potential attacks or exploits on the application12. References: CISSP CBK, Fifth Edition, Chapter 3, page 217; CISSP Practice Exam – FREE 20 Questions and Answers, Question 11.
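The kind of automated static review described above can be illustrated with a toy analyzer. The sketch below uses Python's standard `ast` module to flag calls to functions commonly considered dangerous; real static analysis tools do this at far greater depth, and the flagged-function list here is purely illustrative.

```python
import ast

# Functions whose presence a reviewer would want flagged (illustrative list).
SUSPICIOUS_CALLS = {"eval", "exec", "system", "popen"}

def find_suspicious_calls(source: str):
    """Return (line, name) for each call to a flagged function."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            func = node.func
            # Handle both bare calls (eval) and attribute calls (os.system).
            name = getattr(func, "id", getattr(func, "attr", None))
            if name in SUSPICIOUS_CALLS:
                findings.append((node.lineno, name))
    return findings

sample = "import os\nos.system('ls')\nresult = eval(user_input)\n"
for line, name in find_suspicious_calls(sample):
    print(f"line {line}: call to {name}()")
```

Commercial tools add data-flow analysis, taint tracking, and far larger rule sets, but the principle of mechanically walking the parsed code for risky patterns is the same.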
Which of the following is the MAIN difference between a network-based firewall and a host-based firewall?
A network-based firewall is stateful, while a host-based firewall is stateless.
A network-based firewall controls traffic passing through the device, while a host-based firewall controls traffic destined for the device.
A network-based firewall verifies network traffic, while a host-based firewall verifies processes and applications.
A network-based firewall blocks network intrusions, while a host-based firewall blocks malware.
The main difference between a network-based firewall and a host-based firewall is that a network-based firewall controls traffic passing through the device, while a host-based firewall controls traffic destined for the device. A firewall is a device or a software that filters and regulates the network traffic based on a set of rules or policies, and that blocks or allows the traffic based on the source, destination, protocol, port, or content of the traffic. A network-based firewall is a type of firewall that is deployed at the network perimeter or the network segment, and that controls the traffic that passes through the device, such as the traffic that enters or exits the network, or the traffic that moves between different network zones or subnets. A host-based firewall is a type of firewall that is installed on a specific host or system, such as a server, a workstation, or a mobile device, and that controls the traffic that is destined for the device, such as the traffic that originates from or terminates at the device, or the traffic that is related to the applications or processes running on the device. The other options are not the main difference between a network-based firewall and a host-based firewall, as they either do not describe the characteristics or functions of the firewalls, or do not apply to both types of firewalls. References: CISSP - Certified Information Systems Security Professional, Domain 4. Communication and Network Security, 4.2 Secure network components, 4.2.2 Implement secure communication channels, 4.2.2.1 Network and host-based firewalls; CISSP Exam Outline, Domain 4. Communication and Network Security, 4.2 Secure network components, 4.2.2 Implement secure communication channels, 4.2.2.1 Network and host-based firewalls
Which of the following would an information security professional use to recognize changes to content, particularly unauthorized changes?
File Integrity Checker
Security information and event management (SIEM) system
Audit Logs
Intrusion detection system (IDS)
The tool that an information security professional would use to recognize changes to content, particularly unauthorized changes, is a File Integrity Checker. A File Integrity Checker is a type of security tool that monitors and verifies the integrity and authenticity of the files or content, by comparing the current state or version of the files or content with a known or trusted baseline or reference, using various methods, such as checksums, hashes, or signatures. A File Integrity Checker can recognize changes to content, particularly unauthorized changes, by detecting and reporting any discrepancies or anomalies between the current state or version and the baseline or reference, such as the addition, deletion, modification, or corruption of the files or content. A File Integrity Checker can help to prevent or mitigate the unauthorized changes to content, by alerting the information security professional, and by restoring the files or content to the original or desired state or version. References: [CISSP CBK, Fifth Edition, Chapter 3, page 245]; [100 CISSP Questions, Answers and Explanations, Question 18].
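A minimal file integrity checker can be sketched with standard-library hashing. Real tools such as Tripwire or AIDE add protected baselines and richer metadata; the function names here are illustrative only.

```python
import hashlib
from pathlib import Path

def hash_file(path: Path) -> str:
    """SHA-256 digest of a file's contents, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def build_baseline(paths):
    """Record a trusted digest for each monitored file."""
    return {str(p): hash_file(p) for p in paths}

def check_integrity(baseline):
    """Return files whose current digest no longer matches the baseline."""
    changed = []
    for path, expected in baseline.items():
        p = Path(path)
        current = hash_file(p) if p.exists() else None
        if current != expected:
            changed.append(path)
    return changed
```

In practice the baseline itself must be stored out of the attacker's reach (signed or offline), otherwise a change to both a file and its recorded digest would go unnoticed.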
A security professional should consider the protection of which of the following elements FIRST when developing a defense-in-depth strategy for a mobile workforce?
Network perimeters
Demilitarized Zones (DM2)
Databases and back-end servers
End-user devices
Defense-in-depth is a security strategy that employs multiple layers of security controls and mechanisms to protect the system or network from various types of attacks. A mobile workforce is a group of employees or users who work remotely or outside the organization’s physical premises, using mobile devices such as laptops, tablets, or smartphones. A security professional should consider the protection of the end-user devices first when developing a defense-in-depth strategy for a mobile workforce, as these devices are the most vulnerable and exposed to various threats, such as theft, loss, malware, phishing, or unauthorized access. The end-user devices should be protected by security controls and mechanisms such as encryption, authentication, authorization, antivirus, firewall, VPN, device management, backup, and recovery. Network perimeters, demilitarized zones, and databases and back-end servers are also important elements to protect in a defense-in-depth strategy, but they are not the first priority for a mobile workforce, as they are more related to the organization’s internal network and infrastructure. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6: Secure Network Architecture and Securing Network Components, page 343; CISSP Official (ISC)2 Practice Tests, Third Edition, Domain 4: Communication and Network Security, Question 4.5, page 184.
When dealing with shared, privileged accounts, especially those for emergencies, what is the BEST way to assure non-repudiation of logs?
Regularly change the passwords.
Implement a password vaulting solution.
Lock passwords in tamperproof envelopes in a safe.
Implement a strict access control policy.
The best way to assure non-repudiation of logs when dealing with shared, privileged accounts, especially those for emergencies, is to implement a password vaulting solution. A password vaulting solution is a system that securely stores and manages the passwords for shared or privileged accounts, such as administrator, root, or emergency accounts. A password vaulting solution can provide the following benefits: it can enforce strong password policies, such as complexity, length, and expiration; it can generate random and unique passwords for each account; it can encrypt and protect the passwords from unauthorized access; it can automate the password rotation and synchronization; it can grant or revoke the access to the passwords based on roles, rules, or workflows; it can audit and log the password usage and activities; and it can provide accountability and traceability for the shared or privileged accounts. A password vaulting solution can help to prevent the misuse or compromise of the shared or privileged accounts, and ensure the non-repudiation of logs. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5: Identity and Access Management, page 248; [Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5: Identity and Access Management, page 337]
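The core ideas behind a password vault, namely unique random credentials, rotation on check-in, and an audit trail tying each use of the shared account to an individual, can be sketched as follows. This is an in-memory illustration only, with invented class and method names, not a substitute for a hardened privileged access management product.

```python
import secrets
import string
from datetime import datetime, timezone

ALPHABET = string.ascii_letters + string.digits + string.punctuation

class PasswordVault:
    """Toy vault: each checkout is logged, each check-in rotates the password."""

    def __init__(self):
        self._passwords = {}   # account -> current password
        self.audit_log = []    # (timestamp, user, account, action)

    def enroll(self, account: str) -> None:
        self._passwords[account] = self._generate()

    def checkout(self, account: str, user: str) -> str:
        self.audit_log.append((datetime.now(timezone.utc), user, account, "checkout"))
        return self._passwords[account]

    def checkin(self, account: str, user: str) -> None:
        # Rotating on check-in means the password a user saw is never valid
        # again, so later activity under the shared account maps to exactly
        # one checkout in the audit log -- the basis for non-repudiation.
        self._passwords[account] = self._generate()
        self.audit_log.append((datetime.now(timezone.utc), user, account, "checkin"))

    @staticmethod
    def _generate(length: int = 24) -> str:
        return "".join(secrets.choice(ALPHABET) for _ in range(length))
```

The audit log, combined with one-time validity of each checked-out password, is what lets an investigator attribute shared-account activity to a specific person.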
Assume that a computer was powered off when an information security professional
arrived at a crime scene. Which of the following actions should be performed after
the crime scene is isolated?
Turn the computer on and collect volatile data.
Turn the computer on and collect network information.
Leave the computer off and prepare the computer for transportation to the laboratory
Remove the hard drive, prepare it for transportation, and leave the hardware at the scene.
A crime scene is a location where a security incident or breach has occurred and where potential evidence can be found. A computer is a device that can store, process, or transmit digital data that can be used as evidence in a security investigation. When an information security professional arrives at a crime scene where a computer was powered off, the best action to perform after the crime scene is isolated is to leave the computer off and prepare the computer for transportation to the laboratory. Leaving the computer off can help to preserve the integrity and authenticity of the data on the computer, as well as to prevent any further damage or tampering. Preparing the computer for transportation can help to protect the computer from physical harm or environmental factors during the movement. Transporting the computer to the laboratory can help to perform a proper forensic analysis of the data on the computer in a controlled and secure environment. Turning the computer on and collecting volatile data or network information, or removing the hard drive and leaving the hardware at the scene are not the best actions to perform, as they can compromise the evidence, violate the chain of custody, or destroy the original state of the computer. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 11: Security Operations, page 711; CISSP Official (ISC)2 Practice Tests, Third Edition, Domain 7: Security Operations, Question 7.13, page 276.
What is the BEST approach for maintaining ethics when a security professional is
unfamiliar with the culture of a country and is asked to perform a questionable task?
Exercise due diligence when deciding to circumvent host government requests.
Become familiar with the means in which the code of ethics is applied and considered.
Complete the assignment based on the customer's wishes.
Execute according to the professional's comfort level with the code of ethics.
A security professional should adhere to a code of ethics that guides their conduct and decisions in their profession. However, different countries may have different cultures, laws, regulations, or norms that affect the interpretation and application of the code of ethics. Therefore, the best approach for maintaining ethics when a security professional is unfamiliar with the culture of a country and is asked to perform a questionable task is to become familiar with the means in which the code of ethics is applied and considered in that country. This may involve researching the local context, consulting with experts, seeking guidance from peers, or following established standards or frameworks. By doing so, the security professional can avoid ethical dilemmas, conflicts, or violations, and perform their task in a responsible and respectful manner. Exercising due diligence, completing the assignment based on the customer’s wishes, or executing according to the professional’s comfort level are not the best approaches, as they may ignore, overlook, or contradict the code of ethics or the local culture. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1: Security and Risk Management, page 29; CISSP Official (ISC)2 Practice Tests, Third Edition, Domain 1: Security and Risk Management, Question 1.13, page 50.
What is the PRIMARY benefit of relying on Security Content Automation Protocol (SCAP)?
Save security costs for the organization.
Improve vulnerability assessment capabilities.
Standardize specifications between software security products.
Achieve organizational compliance with international standards.
The primary benefit of relying on Security Content Automation Protocol (SCAP) is to standardize specifications between software security products. SCAP is a suite of specifications that enable the automated and interoperable assessment, measurement, and reporting of the security posture and compliance of systems and networks. SCAP consists of six components: Common Platform Enumeration (CPE), Common Configuration Enumeration (CCE), Common Vulnerabilities and Exposures (CVE), Common Vulnerability Scoring System (CVSS), Extensible Configuration Checklist Description Format (XCCDF), and Open Vulnerability and Assessment Language (OVAL). SCAP enables different software security products, such as scanners, analyzers, or auditors, to use a common language and format to describe and exchange information about the security configuration, vulnerabilities, and risks of systems and networks. This can improve the accuracy, consistency, and efficiency of the security assessment and remediation processes, and reduce the complexity and cost of managing multiple security products. Saving security costs for the organization, improving vulnerability assessment capabilities, and achieving organizational compliance with international standards are also benefits of relying on SCAP, but they are not the primary benefit. Saving security costs for the organization is a benefit of relying on SCAP, as it can reduce the need for manual and labor-intensive security tasks, and increase the reuse and integration of security data and tools. Improving vulnerability assessment capabilities is a benefit of relying on SCAP, as it can provide more comprehensive, timely, and reliable information about the security weaknesses and exposures of systems and networks, and enable more effective and proactive mitigation and response actions. 
Achieving organizational compliance with international standards is a benefit of relying on SCAP, as it can help to demonstrate and verify the alignment of the security policies and practices of the organization with the established benchmarks and baselines, such as the National Institute of Standards and Technology (NIST) Special Publication 800-53 or the International Organization for Standardization (ISO) 27001.
A company developed a web application which is sold as a Software as a Service (SaaS) solution to the customer. The application is hosted by a web server running on a specific operating system (OS) on a virtual machine (VM). During the transition phase of the service, it is determined that the support team will need access to the application logs. Which of the following privileges would be the MOST suitable?
Administrative privileges on the OS
Administrative privileges on the web server
Administrative privileges on the hypervisor
Administrative privileges on the application folders
The most suitable privileges for the support team to access the application logs of a web application that is hosted by a web server running on a specific OS on a VM are administrative privileges on the web server. A web application is a type of software application that runs on a web server, and that can be accessed and used by the users or the customers through a web browser, over the internet or a network. A web application can generate and store various types of logs, such as access logs, error logs, or security logs, that record and provide information about the activities, events, or issues that occur on the web application. A web server is a type of software or hardware that hosts and delivers the web application and its content to the web browser, using the Hypertext Transfer Protocol (HTTP) or other protocols. A web server can run on a specific OS, such as Windows, Linux, or MacOS, and it can run on a physical or a virtual machine. A VM is a type of software that emulates a physical machine, and that can run multiple OSs and applications on the same physical machine, using a software layer called a hypervisor. Administrative privileges are a type of access rights or permissions that grant the user or the role the ability to perform various actions or tasks on a system or a service, such as installing, configuring, or managing the system or the service. Administrative privileges on the web server are the most suitable privileges for the support team to access the application logs of a web application that is hosted by a web server running on a specific OS on a VM, as they can provide the support team with the necessary and sufficient access rights or permissions to view, analyze, or troubleshoot the application logs, without compromising the security or the functionality of the OS, the VM, or the hypervisor56. References: CISSP CBK, Fifth Edition, Chapter 3, page 241; CISSP Practice Exam – FREE 20 Questions and Answers, Question 16.
What Is the FIRST step for a digital investigator to perform when using best practices to collect digital evidence from a potential crime scene?
Consult the lead investigator to learn the details of the case and required evidence.
Assure that grounding procedures have been followed to reduce the loss of digital data due to static electricity discharge.
Update the Basic Input Output System (BIOS) and Operating System (OS) of any tools used to assure evidence admissibility.
Confirm that the appropriate warrants were issued to the subject of the investigation to eliminate illegal search claims.
The first step for a digital investigator to perform when using best practices to collect digital evidence from a potential crime scene is to confirm that the appropriate warrants were issued to the subject of the investigation to eliminate illegal search claims. A warrant is a legal document that authorizes the investigator to search, seize, or examine a person, place, or thing for evidence of a crime. A warrant is required to ensure that the investigation complies with the law and respects the rights and privacy of the subject. A warrant also prevents the evidence from being challenged or excluded in court due to illegal search or seizure. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6: Security Assessment and Testing, page 316; [Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 6: Security Assessment and Testing, page 442]
An organization that has achieved a Capability Maturity Model Integration (CMMI) level of 4 has done which of the following?
Addressed continuous innovative process improvement
Addressed the causes of common process variance
Achieved optimized process performance
Achieved predictable process performance
An organization that has achieved a Capability Maturity Model Integration (CMMI) level of 4 has done the following: achieved predictable process performance. CMMI is a framework that provides a set of best practices and guidelines for improving the capability and maturity of the processes of an organization, such as software development, service delivery, or project management. CMMI consists of five levels, each of which represents a different stage or degree of process improvement, from initial to optimized. The five levels of CMMI are: Level 1 (Initial), where processes are unpredictable, poorly controlled, and reactive; Level 2 (Managed), where processes are planned and managed at the project level; Level 3 (Defined), where processes are standardized and documented across the organization; Level 4 (Quantitatively Managed), where processes are measured and controlled using quantitative objectives and metrics; and Level 5 (Optimizing), where the organization focuses on continuous process improvement and innovation.
An organization that has achieved a CMMI level of 4 has achieved predictable process performance, meaning that the organization has established quantitative objectives and metrics for the processes, and has used statistical and analytical techniques to monitor and control the variation and performance of the processes, and to ensure that the processes meet the expected or desired outcomes. An organization that has achieved a CMMI level of 4 has not addressed continuous innovative process improvement, addressed the causes of common process variance, or achieved optimized process performance, as these are the characteristics or achievements of a CMMI level of 5, which is the highest and most mature level of CMMI.
Which of the following frameworks provides vulnerability metrics and characteristics to support the National Vulnerability Database (NVD)?
Center for Internet Security (CIS)
Common Vulnerabilities and Exposures (CVE)
Open Web Application Security Project (OWASP)
Common Vulnerability Scoring System (CVSS)
The framework that provides vulnerability metrics and characteristics to support the National Vulnerability Database (NVD) is the Common Vulnerability Scoring System (CVSS). CVSS is a framework that provides a standardized and consistent way to measure and communicate the severity and the impact of the vulnerabilities or weaknesses that may affect the security or the functionality of the systems or the components. CVSS provides vulnerability metrics and characteristics, such as the base score, the temporal score, and the environmental score, that are based on the various factors or attributes of the vulnerabilities, such as the exploitability, the scope, the impact, the remediation, or the confidence. CVSS supports the NVD, which is a repository or a database that collects and maintains the information or the data about the publicly known or reported vulnerabilities or weaknesses that are identified by the Common Vulnerabilities and Exposures (CVE) identifiers. CVSS supports the NVD, because it can: assign a standardized severity score to each CVE entry in the NVD; help organizations prioritize remediation based on the severity and the impact of each vulnerability; and enable consistent comparison of vulnerabilities across different products, vendors, and environments.
The other options are not the frameworks that provide vulnerability metrics and characteristics to support the NVD. Center for Internet Security (CIS) is an organization that provides the best practices and the guidelines for securing the systems or the components, such as the CIS Controls and the CIS Benchmarks, that are based on the consensus and the collaboration of the experts or the professionals in the field of cybersecurity. CIS does not provide vulnerability metrics and characteristics to support the NVD, but rather provides security recommendations and configurations to prevent or mitigate the vulnerabilities or weaknesses that are included in the NVD. Common Vulnerabilities and Exposures (CVE) is a system that provides the identifiers or the names for the publicly known or reported vulnerabilities or weaknesses that affect the security or the functionality of the systems or the components, and that are used for referencing and tracking the vulnerabilities or weaknesses. CVE does not provide vulnerability metrics and characteristics to support the NVD, but rather provides vulnerability identification and classification to populate and maintain the NVD. Open Web Application Security Project (OWASP) is an organization that provides the resources and the tools for improving the security of the web applications or the websites, such as the OWASP Top 10 and the OWASP Testing Guide, that are based on the research and the analysis of the experts or the professionals in the field of web application security. OWASP does not provide vulnerability metrics and characteristics to support the NVD, but rather provides vulnerability awareness and education to prevent or mitigate the vulnerabilities or weaknesses that are included in the NVD. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4: Security Architecture and Engineering, page 450. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 4: Security Architecture and Engineering, page 451.
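As a concrete illustration of the base metrics mentioned above, the sketch below computes a CVSS v3.1 base score for an unchanged-scope vulnerability, using the metric weights published in the v3.1 specification; only the common metric values needed for the example are included, and the function name is illustrative.

```python
import math

# CVSS v3.1 metric weights (subset; scope unchanged).
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}   # Attack Vector
AC = {"L": 0.77, "H": 0.44}                          # Attack Complexity
PR = {"N": 0.85, "L": 0.62, "H": 0.27}               # Privileges Required (scope unchanged)
UI = {"N": 0.85, "R": 0.62}                          # User Interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}               # Confidentiality/Integrity/Availability

def base_score(av, ac, pr, ui, c, i, a):
    """CVSS v3.1 base score, scope unchanged."""
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    if impact <= 0:
        return 0.0
    # The spec's Roundup() rounds up to one decimal place.
    return math.ceil(min(impact + exploitability, 10) * 10) / 10

# AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H -> 9.8 (Critical)
print(base_score("N", "L", "N", "N", "H", "H", "H"))
```

This is the calculation NVD performs for each CVE entry, which is what makes scores comparable across the database.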
Which of the following methods MOST efficiently manages user accounts when using a third-party cloud-based application and directory solution?
Cloud directory
Directory synchronization
Assurance framework
Lightweight Directory Access Protocol (LDAP)
Directory synchronization is a method of managing user accounts when using a third-party cloud-based application and directory solution. Directory synchronization allows the user accounts in the local directory, such as Active Directory, to be automatically synchronized with the user accounts in the cloud directory, such as Azure Active Directory. This way, the users can use the same credentials to access both the local and the cloud resources, and the administrators can manage the user accounts from a single point. Option A, cloud directory, is not a method, but a type of directory service that is hosted in the cloud. Option C, assurance framework, is not related to user account management, but to the evaluation and verification of security controls. Option D, Lightweight Directory Access Protocol (LDAP), is a protocol for accessing and querying directory services, not a method for managing user accounts. References: CISSP Testking ISC Exam Questions - CISSP Certification with CISSP Answers, CISSP Practice Exam | Boson
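The synchronization behavior described above can be sketched as a single pass in which the local directory is authoritative and the cloud directory is brought into line with it. All account names and attributes below are hypothetical.

```python
# Minimal sketch of one directory-synchronization cycle: accounts in the
# local directory (e.g. Active Directory) are treated as authoritative
# and pushed to the cloud directory. Names and data are illustrative.

def sync_directories(local: dict, cloud: dict) -> dict:
    """Return the cloud directory after one synchronization pass."""
    synced = dict(cloud)
    for user, attrs in local.items():
        synced[user] = attrs              # create or update from local
    for user in list(synced):
        if user not in local:
            synced.pop(user)              # remove accounts deleted locally
    return synced

local_ad = {"alice": {"dept": "HR"}, "bob": {"dept": "IT"}}
cloud_dir = {"bob": {"dept": "Sales"}, "carol": {"dept": "Legal"}}  # stale

print(sync_directories(local_ad, cloud_dir))
# alice is created, bob is updated, carol (deleted locally) is removed
```

Real synchronization tools work incrementally from change logs rather than full passes, but the net effect, one authoritative source and automatic propagation, is the reason administrators can manage accounts from a single point.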
What is the PRIMARY purpose for an organization to conduct a security audit?
To ensure the organization is adhering to a well-defined standard
To ensure the organization is applying security controls to mitigate identified risks
To ensure the organization is configuring information systems efficiently
To ensure the organization is documenting findings
The primary purpose for an organization to conduct a security audit is to ensure that the organization is applying security controls to mitigate identified risks. A security audit is a systematic and independent examination of the security posture and performance of an organization, system, or network, against a set of predefined criteria, standards, or regulations. A security audit can help to ensure that the organization is applying security controls to mitigate identified risks, by evaluating the effectiveness and adequacy of the security controls and measures that are implemented on the organization, system, or network, and by identifying and resolving any security issues or gaps that may expose the organization, system, or network to various security threats and risks. A security audit can also help to provide recommendations and solutions to improve the security posture and performance of the organization, system, or network, as well as to demonstrate the compliance and accountability of the organization with the relevant security policies and regulations. To ensure the organization is adhering to a well-defined standard, to ensure the organization is configuring information systems efficiently, or to ensure the organization is documenting findings are not the primary purposes for an organization to conduct a security audit, as they are more related to the quality, optimization, or reporting aspects of security. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 18: Security Assessment and Testing, page 1001; CISSP Official (ISC)2 Practice Tests, Third Edition, Domain 6: Security Assessment and Testing, Question 6.1, page 243.
In setting expectations when reviewing the results of a security test, which of the following statements is MOST important to convey to reviewers?
The target’s security posture cannot be further compromised.
The results of the tests represent a point-in-time assessment of the target(s).
The accuracy of testing results can be greatly improved if the target(s) are properly hardened.
The deficiencies identified can be corrected immediately.
The most important statement to convey to reviewers when setting expectations for reviewing the results of a security test is that the results of the tests represent a point-in-time assessment of the target(s). A security test is a process of evaluating and measuring the security posture and performance of an information system or a network, by using various tools, techniques, and methods, such as vulnerability scanning, penetration testing, or security auditing. The results of a security test reflect the security state of the target(s) at the time of the test, and they may not be valid or accurate for a different time period, as the security environment and conditions may change due to various factors, such as new threats, patches, updates, or configurations. Therefore, reviewers should understand that the results of a security test are not definitive or permanent, but rather indicative or temporary, and that they should be interpreted and used accordingly. The statement that the target’s security posture cannot be further compromised is not true, as a security test does not guarantee or ensure the security of the target(s), but rather identifies and reports the security issues or weaknesses that may exist. The statement that the accuracy of testing results can be greatly improved if the target(s) are properly hardened is not relevant, as a security test is not meant to improve the accuracy of the results, but rather to assess the security of the target(s), and hardening the target(s) before the test may not reflect the actual or realistic security posture of the target(s). The statement that the deficiencies identified can be corrected immediately is not realistic, as a security test may identify various types of deficiencies that may require different levels of effort, time, and resources to correct, and some deficiencies may not be correctable at all, due to technical, operational, or financial constraints.
While reviewing the financial reporting risks of a third-party application, which of the following Service Organization Control (SOC) reports will be the MOST useful?
SOC 1
SOC 2
SOC 3
SOC for cybersecurity
SOC 1 is the most useful Service Organization Control (SOC) report for reviewing the financial reporting risks of a third-party application, because it focuses on the internal controls over financial reporting (ICFR) of the service organization. SOC 1 reports are based on the Statement on Standards for Attestation Engagements (SSAE) No. 18, and can be either Type 1 or Type 2, depending on whether they provide a point-in-time or a period-of-time evaluation of the controls. SOC 2, SOC 3, and SOC for cybersecurity reports are based on the Trust Services Criteria, and cover different aspects of the service organization’s security, availability, confidentiality, processing integrity, and privacy. They are not specifically designed for financial reporting risks. References: CISSP Official Study Guide, 9th Edition, page 1016; CISSP All-in-One Exam Guide, 8th Edition, page 1095
What is the MAIN purpose of a security assessment plan?
Provide guidance on security requirements, to ensure the identified security risks are properly addressed based on the recommendation
Provide the objectives for the security and privacy control assessments and a detailed roadmap of how to conduct such assessments.
Provide technical information to executives to help them understand information security postures and secure funding.
Provide education to employees on security and privacy, to ensure their awareness of policies and procedures
The main purpose of a security assessment plan is to provide the objectives for the security and privacy control assessments and a detailed roadmap of how to conduct such assessments. A security assessment plan defines the scope, criteria, methods, roles, and responsibilities of the security assessment process, which is the process of evaluating and testing the effectiveness and compliance of the security and privacy controls implemented in an information system. A security assessment plan helps to ensure that the security assessment process is consistent, systematic, and comprehensive. A security assessment plan does not provide guidance on security requirements, as this is the role of a security requirements analysis or a security architecture design. A security assessment plan does not provide technical information to executives, as this is the role of a security report or a security briefing. A security assessment plan does not provide education to employees, as this is the role of a security awareness or a security training program.
Which of the following is the BEST identity-as-a-service (IDaaS) solution for validating users?
Lightweight Directory Access Protocol (LDAP)
Security Assertion Markup Language (SAML)
Single Sign-on (SSO)
Open Authorization (OAuth)
The best identity-as-a-service (IDaaS) solution for validating users is Security Assertion Markup Language (SAML). IDaaS is a cloud-based service that provides identity and access management (IAM) functions, such as authentication, authorization, and provisioning, to the users of an organization. SAML is an XML-based standard that enables the exchange of authentication and authorization information between different parties, such as an identity provider (IdP) and a service provider (SP). SAML can be used to implement single sign-on (SSO) and federated identity management (FIM) scenarios, where a user can access multiple services or applications with a single login and credential. SAML can provide a secure, scalable, and interoperable solution for validating users across different domains and platforms. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5: Identity and Access Management, page 231; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5: Identity and Access Management, page 320
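To make the assertion exchange concrete, here is a toy sketch of a service provider validating a SAML-style assertion: it checks the issuer and the validity window, then extracts the asserted user. Real assertions are digitally signed and the signature check is the critical step; it is omitted here, and the issuer URL and subject are illustrative only.

```python
import xml.etree.ElementTree as ET
from datetime import datetime, timezone
from typing import Optional

# A minimal SAML-style assertion (real assertions carry an XML signature,
# which this sketch does not verify). Names and URLs are illustrative.
ASSERTION = """
<Assertion xmlns="urn:oasis:names:tc:SAML:2.0:assertion">
  <Issuer>https://idp.example.com</Issuer>
  <Subject><NameID>alice@example.com</NameID></Subject>
  <Conditions NotOnOrAfter="2099-01-01T00:00:00Z"/>
</Assertion>
"""

NS = {"s": "urn:oasis:names:tc:SAML:2.0:assertion"}

def validate(xml_text: str, trusted_issuer: str) -> Optional[str]:
    """Return the asserted user if issuer and validity window check out."""
    root = ET.fromstring(xml_text)
    issuer = root.findtext("s:Issuer", namespaces=NS)
    expires = root.find("s:Conditions", NS).get("NotOnOrAfter")
    exp = datetime.fromisoformat(expires.replace("Z", "+00:00"))
    if issuer != trusted_issuer or datetime.now(timezone.utc) >= exp:
        return None                       # untrusted IdP or expired assertion
    return root.findtext("s:Subject/s:NameID", namespaces=NS)

print(validate(ASSERTION, "https://idp.example.com"))  # alice@example.com
```

The point of the sketch is the trust model: the SP never sees the user's password; it only decides whether to trust a statement made by a known IdP.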
Which of the following four iterative steps are conducted on third-party vendors on an ongoing basis?
Investigate, Evaluate, Respond, Monitor
Frame, Assess, Respond, Monitor
Frame, Assess, Remediate, Monitor
Investigate, Assess, Remediate, Monitor
Third-party vendors are external entities that provide products or services to an organization, such as suppliers, contractors, consultants, or partners. Third-party vendors can pose various risks to the organization, such as security breaches, compliance violations, service disruptions, or reputational damage. Therefore, the organization should conduct a third-party risk management (TPRM) process to identify, assess, mitigate, and monitor the risks associated with third-party vendors. The TPRM process consists of four iterative steps that are conducted on third-party vendors on an ongoing basis. The steps are:
Frame: This step involves defining the scope, objectives, and governance of the TPRM process, as well as establishing the criteria and thresholds for risk assessment and acceptance.
Assess: This step involves collecting and analyzing information about the third-party vendors, such as their security policies, controls, practices, certifications, and performance, to evaluate their risk profile and compliance status.
Respond: This step involves developing and implementing strategies and actions to address the risks identified in the assessment step, such as negotiating contracts, enforcing service level agreements, applying controls, conducting audits, or terminating relationships.
Monitor: This step involves tracking and reviewing the performance and risk posture of the third-party vendors on a regular basis, as well as updating the TPRM process as needed to reflect changes in the business environment, regulatory requirements, or risk appetite.
Therefore, the correct answer is B. The other options are incorrect because they do not include all the steps of the TPRM process or use different terms that are not consistent with the TPRM framework. References: Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1: Security and Risk Management, Section: Third-Party Risk Management; CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1: Security Governance Through Principles and Policies, Section: Third-Party Governance.
A large corporation is looking for a solution to automate access based on where a request is coming from, who the user is, what device they are connecting with, and what time of day they are attempting this access. What type of solution would suit their needs?
Discretionary Access Control (DAC)
Role Based Access Control (RBAC)
Mandatory Access Control (MAC)
Network Access Control (NAC)
The type of solution that would suit the needs of a large corporation that wants to automate access based on where the request is coming from, who the user is, what device they are connecting with, and what time of day they are attempting this access is Network Access Control (NAC). NAC is a solution that enables the enforcement of security policies and rules on the network level, by controlling the access of devices and users to the network resources. NAC can automate access based on various factors, such as the location, identity, role, device type, device health, or time of the request. NAC can also perform functions such as authentication, authorization, auditing, remediation, or quarantine of the devices and users that attempt to access the network. Discretionary Access Control (DAC), Role Based Access Control (RBAC), and Mandatory Access Control (MAC) are not types of solutions, but types of access control models that define how the access rights or permissions are granted or denied to the subjects or objects. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6: Communication and Network Security, page 737; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 4: Communication and Network Security, page 517.
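A NAC policy decision of the kind described above, combining source network, user role, device posture, and time of day, can be sketched as a simple rule evaluation. The networks, roles, and hours below are invented for illustration; real NAC products express these rules in policy engines, not application code.

```python
from dataclasses import dataclass
from datetime import time

# Hypothetical sketch of a NAC-style decision combining location, user,
# device posture, and time of day. All policy values are illustrative.

@dataclass
class AccessRequest:
    source_network: str     # e.g. "corporate", "guest", "vpn"
    user_role: str
    device_compliant: bool  # device health / posture check result
    request_time: time

def nac_decision(req: AccessRequest) -> str:
    if not req.device_compliant:
        return "quarantine"                       # send to remediation VLAN
    if req.source_network == "guest":
        return "deny"
    business_hours = time(8, 0) <= req.request_time <= time(18, 0)
    if req.user_role == "admin":                  # admins: any hour, trusted nets
        return "allow"
    if req.user_role == "employee" and business_hours:
        return "allow"
    return "deny"

print(nac_decision(AccessRequest("corporate", "employee", True, time(10, 30))))
# allow
```

Note how the posture check runs first: a non-compliant device is quarantined for remediation regardless of who the user is, which is the NAC behavior the explanation above describes.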
If virus infection is suspected, which of the following is the FIRST step for the user to take?
Unplug the computer from the network.
Save the opened files and shutdown the computer.
Report the incident to service desk.
Update the antivirus to the latest version.
The first step for the user to take if virus infection is suspected is to report the incident to service desk. This will help to contain the infection, prevent further damage, and initiate the recovery process. The service desk can also provide guidance on how to handle the infected computer and what actions to take next. Unplugging the computer from the network, saving the opened files and shutting down the computer, or updating the antivirus to the latest version are possible subsequent steps, but they should not be done before reporting the incident. References:
CISSP Official (ISC)2 Practice Tests, 3rd Edition, Domain 7: Security Operations, Question 7.1.2
CISSP CBK, 5th Edition, Chapter 7: Security Operations, Section: Incident Management
What is the HIGHEST priority in agile development?
Selecting appropriate coding language
Managing costs of product delivery
Early and continuous delivery of software
Maximizing the amount of code delivered
The highest priority in agile development is early and continuous delivery of software. Agile development is a type of software development methodology that is based on the principles of the Agile Manifesto, which values individuals and interactions, working software, customer collaboration, and responding to change. Agile development aims to deliver software products or services that meet the changing needs and expectations of the customers and stakeholders, by using an iterative, incremental, and collaborative approach. Agile development involves various methods or frameworks, such as Scrum, Kanban, or Extreme Programming. The highest priority in agile development is early and continuous delivery of software, as stated in the first principle of the Agile Manifesto: "Our highest priority is to satisfy the customer through early and continuous delivery of valuable software." Early and continuous delivery of software means that the software products or services are delivered to the customers or stakeholders in short and frequent cycles, rather than in long and infrequent cycles. Early and continuous delivery of software can help to improve the quality and value of the software products or services, by enabling faster feedback, validation, and verification of the software products or services, as well as by allowing more flexibility and adaptability to the changing requirements and preferences of the customers or stakeholders. Selecting appropriate coding language, managing costs of product delivery, or maximizing the amount of code delivered are not the highest priorities in agile development, as they are either more related to the technical, financial, or quantitative aspects of software development, rather than the customer-oriented or value-driven aspects of software development. 
References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 21: Software Development Security, page 1155; CISSP Official (ISC)2 Practice Tests, Third Edition, Domain 8: Software Development Security, Question 8.11, page 305.
Which of the following BEST describes an access control method utilizing cryptographic keys derived from a smart card private key that is embedded within mobile devices?
Derived credential
Temporary security credential
Mobile device credentialing service
Digest authentication
Derived credential is the best description of an access control method utilizing cryptographic keys derived from a smart card private key that is embedded within mobile devices. A smart card is a device that contains a microchip that stores a private key and a digital certificate that are used for authentication and encryption. A smart card is typically inserted into a reader that is attached to a computer or a terminal, and the user enters a personal identification number (PIN) to unlock the smart card and access the private key and the certificate. A smart card can provide a high level of security and convenience for the user, as it implements a two-factor authentication method that combines something the user has (the smart card) and something the user knows (the PIN).
However, a smart card may not be compatible or convenient for mobile devices, such as smartphones or tablets, that do not have a smart card reader or a USB port. To address this issue, a derived credential is a solution that allows the user to use a mobile device as an alternative to a smart card for authentication and encryption. A derived credential is a cryptographic key and a certificate that are derived from the smart card private key and certificate, and that are stored on the mobile device. In outline, the user first authenticates to a credential management system with the smart card, a new key pair is then generated on or provisioned to the mobile device, and a derived certificate bound to that key pair is issued and stored in the device’s secure key store, where it is typically unlocked with a PIN or a biometric.
A derived credential can provide a secure and convenient way to use a mobile device as an alternative to a smart card for authentication and encryption, as it implements a two-factor authentication method that combines something the user has (the mobile device) and something the user is (the biometric feature). A derived credential can also comply with the standards and policies for the use of smart cards, such as the Personal Identity Verification (PIV) or the Common Access Card (CAC) programs.
The other options are not the best descriptions of an access control method utilizing cryptographic keys derived from a smart card private key that is embedded within mobile devices, but rather descriptions of other methods or concepts. Temporary security credential is a method that involves issuing a short-lived credential, such as a token or a password, that can be used for a limited time or a specific purpose. Temporary security credential can provide a flexible and dynamic way to grant access to the users or entities, but it does not involve deriving a cryptographic key from a smart card private key. Mobile device credentialing service is a concept that involves providing a service that can issue, manage, or revoke credentials for mobile devices, such as certificates, tokens, or passwords. Mobile device credentialing service can provide a centralized and standardized way to control the access of mobile devices, but it does not involve deriving a cryptographic key from a smart card private key. Digest authentication is a method that involves using a hash function, such as MD5, to generate a digest or a fingerprint of the user’s credentials, such as the username and password, and sending it to the server for verification. Digest authentication can provide a more secure way to authenticate the user than the basic authentication, which sends the credentials in plain text, but it does not involve deriving a cryptographic key from a smart card private key.
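Since digest authentication is mentioned above as a contrasting method, here is the classic HTTP Digest computation (RFC 2617, without the qop extension): the password never crosses the wire, only an MD5 digest bound to the server's nonce. The username, realm, and nonce values are illustrative.

```python
import hashlib

# HTTP Digest Access Authentication response (RFC 2617, no qop):
#   HA1 = MD5(username:realm:password)
#   HA2 = MD5(method:uri)
#   response = MD5(HA1:nonce:HA2)

def md5_hex(s: str) -> str:
    return hashlib.md5(s.encode()).hexdigest()

def digest_response(user, realm, password, method, uri, nonce):
    ha1 = md5_hex(f"{user}:{realm}:{password}")   # secret-derived hash
    ha2 = md5_hex(f"{method}:{uri}")              # request-derived hash
    return md5_hex(f"{ha1}:{nonce}:{ha2}")        # what the client sends

resp = digest_response("alice", "example", "s3cret", "GET", "/index", "abc123")
print(resp)  # the server recomputes the same value to verify the client
```

The server-supplied nonce makes each response request-specific, which is an improvement over basic authentication, but, as the explanation notes, no key is derived from a smart card here, which is what distinguishes digest authentication from a derived credential.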
What is the BEST approach for controlling access to highly sensitive information when employees have the same level of security clearance?
Audit logs
Role-Based Access Control (RBAC)
Two-factor authentication
Application of least privilege
Applying the principle of least privilege is the best approach for controlling access to highly sensitive information when employees have the same level of security clearance. The principle of least privilege is a security concept that states that every user or process should have the minimum amount of access rights and permissions that are necessary to perform their tasks or functions, and nothing more. The principle of least privilege can provide several benefits, such as reducing the attack surface, limiting the damage that a compromised or misused account can cause, and simplifying the monitoring and auditing of access.
Applying the principle of least privilege is the best approach for controlling access to highly sensitive information when employees have the same level of security clearance, because it can ensure that the employees can only access the information that is relevant and necessary for their tasks or functions, and that they cannot access or manipulate the information that is beyond their scope or authority. For example, if the highly sensitive information is related to a specific project or department, then only the employees who are involved in that project or department should have access to that information, and not the employees who have the same level of security clearance but are not involved in that project or department.
The other options are not the best approaches for controlling access to highly sensitive information when employees have the same level of security clearance, but rather approaches that have other purposes or effects. Audit logs are records that capture and store the information about the events and activities that occur within a system or a network, such as the access and usage of the sensitive data. Audit logs can provide a reactive and detective layer of security by enabling the monitoring and analysis of the system or network behavior, and facilitating the investigation and response of the incidents. However, audit logs cannot prevent or reduce the access or disclosure of the sensitive information, but rather provide evidence or clues after the fact. Role-Based Access Control (RBAC) is a method that enforces the access rights and permissions of the users based on their roles or functions within the organization, rather than their identities or attributes. RBAC can provide a granular and dynamic layer of security by defining and assigning the roles and permissions according to the organizational structure and policies. However, RBAC cannot control the access to highly sensitive information when employees have the same level of security clearance and the same role or function within the organization, but rather rely on other criteria or mechanisms. Two-factor authentication is a technique that verifies the identity of the users by requiring them to provide two pieces of evidence or factors, such as something they know (e.g., password, PIN), something they have (e.g., token, smart card), or something they are (e.g., fingerprint, face). Two-factor authentication can provide a strong and preventive layer of security by preventing unauthorized access to the system or network by the users who do not have both factors. 
However, two-factor authentication cannot control the access to highly sensitive information when employees have the same level of security clearance and the same two factors, but rather rely on other criteria or mechanisms.
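The least-privilege arrangement described above, same clearance, different need-to-know, reduces to a deny-by-default grant table. The employees and resource names below are hypothetical.

```python
# Minimal sketch of least privilege: each employee is granted only the
# specific resources their current tasks require, even though all hold
# the same clearance level. Names are hypothetical.

GRANTS = {
    "alice": {"project_x_docs"},     # works on Project X only
    "bob": {"project_y_docs"},       # same clearance, different project
}

def can_access(user: str, resource: str) -> bool:
    """Deny by default; allow only explicitly granted resources."""
    return resource in GRANTS.get(user, set())

print(can_access("alice", "project_x_docs"))  # True
print(can_access("bob", "project_x_docs"))    # False: need-to-know not met
```

The key design choice is the default: an unknown user or ungranted resource yields False, so access must be earned by an explicit grant rather than lost by an explicit denial.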
Users require access rights that allow them to view the average salary of groups of employees. Which control would prevent the users from obtaining an individual employee’s salary?
Limit access to predefined queries
Segregate the database into a small number of partitions each with a separate security level
Implement Role Based Access Control (RBAC)
Reduce the number of people who have access to the system for statistical purposes
Limiting access to predefined queries is the control that would prevent the users from obtaining an individual employee’s salary, if they only require access rights that allow them to view the average salary of groups of employees. A query is a request for information from a database, which can be expressed in a structured query language (SQL) or a graphical user interface (GUI). A query can specify the criteria, conditions, and operations for selecting, filtering, sorting, grouping, and aggregating the data from the database. A predefined query is a query that has been created and stored in advance by the database administrator or the data owner, and that can be executed by the authorized users without any modification. A predefined query can provide several benefits, such as enforcing consistent and authorized access to the data, preventing ad hoc queries against sensitive columns, and simplifying the validation and auditing of what users can retrieve.
Limiting access to predefined queries is the control that would prevent the users from obtaining an individual employee’s salary, if they only require access rights that allow them to view the average salary of groups of employees, because it can ensure that the users can only access the data that is relevant and necessary for their tasks, and that they cannot access or manipulate the data that is beyond their scope or authority. For example, a predefined query can be created and stored that calculates and displays the average salary of groups of employees based on certain criteria, such as department, position, or experience. The users who need to view this information can execute this predefined query, but they cannot modify it or create their own queries that might reveal the individual employee’s salary or other sensitive data.
The other options are not the controls that would prevent the users from obtaining an individual employee’s salary, if they only require access rights that allow them to view the average salary of groups of employees, but rather controls that have other purposes or effects. Segregating the database into a small number of partitions each with a separate security level is a control that would improve the performance and security of the database by dividing it into smaller and manageable segments that can be accessed and processed independently and concurrently. However, this control would not prevent the users from obtaining an individual employee’s salary, if they have access to the partition that contains the salary data, and if they can create or modify their own queries. Implementing Role Based Access Control (RBAC) is a control that would enforce the access rights and permissions of the users based on their roles or functions within the organization, rather than their identities or attributes. However, this control would not prevent the users from obtaining an individual employee’s salary, if their roles or functions require them to access the salary data, and if they can create or modify their own queries. Reducing the number of people who have access to the system for statistical purposes is a control that would reduce the risk and impact of unauthorized access or disclosure of the sensitive data by minimizing the exposure and distribution of the data. However, this control would not prevent the users from obtaining an individual employee’s salary, if they are among the people who have access to the system, and if they can create or modify their own queries.
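The predefined-query control above can be sketched with an in-memory SQLite database: statistical users may run only queries from an approved list, so the aggregate is available but no ad hoc SQL can reach an individual salary. The table, column, and salary values are invented for illustration.

```python
import sqlite3

# Sketch of the "predefined queries only" control: users may execute a
# stored aggregate query but never ad hoc SQL, so individual salaries
# stay hidden. Table and data are illustrative.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, dept TEXT, salary REAL)")
conn.executemany("INSERT INTO employees VALUES (?, ?, ?)", [
    ("alice", "IT", 80000), ("bob", "IT", 90000), ("carol", "HR", 70000),
])

# The only statements exposed to statistical users:
PREDEFINED = {
    "avg_salary_by_dept":
        "SELECT dept, AVG(salary) FROM employees GROUP BY dept ORDER BY dept",
}

def run_predefined(name: str):
    if name not in PREDEFINED:            # no ad hoc queries allowed
        raise PermissionError("query not in the approved list")
    return conn.execute(PREDEFINED[name]).fetchall()

print(run_predefined("avg_salary_by_dept"))
# [('HR', 70000.0), ('IT', 85000.0)]
```

Because the users never supply SQL text, they cannot add a `WHERE name = ...` clause or drop the `GROUP BY`, which is exactly the inference path the control is meant to close.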
A manufacturing organization wants to establish a Federated Identity Management (FIM) system with its 20 different supplier companies. Which of the following is the BEST solution for the manufacturing organization?
Trusted third-party certification
Lightweight Directory Access Protocol (LDAP)
Security Assertion Markup language (SAML)
Cross-certification
Security Assertion Markup Language (SAML) is the best solution for the manufacturing organization that wants to establish a Federated Identity Management (FIM) system with its 20 different supplier companies. FIM is a process that allows the sharing and recognition of identities across different organizations that have a trust relationship. FIM enables the users of one organization to access the resources or services of another organization without having to create or maintain multiple accounts or credentials. FIM can provide several benefits, such as reduced administrative overhead, an improved user experience through single sign-on, and consistent enforcement of access policies across the participating organizations.
SAML is a standard protocol that supports FIM by allowing the exchange of authentication and authorization information between different parties. SAML uses XML-based messages, called assertions, to convey the identity, attributes, and entitlements of a user to a service provider. SAML defines three roles for the parties involved in FIM: the principal (the user who requests access), the identity provider (IdP, which authenticates the user and issues assertions), and the service provider (SP, which consumes assertions and grants or denies access). In a typical flow, the user requests a resource from the SP, the SP redirects the user to the IdP for authentication, the IdP authenticates the user and returns a digitally signed assertion, and the SP validates the assertion and grants access.
SAML is the best solution for the manufacturing organization that wants to establish a FIM system with its 20 different supplier companies, because it can enable the seamless and secure access to the resources or services across the different organizations, without requiring the users to create or maintain multiple accounts or credentials. SAML can also provide interoperability and compatibility between different platforms and technologies, as it is based on a standard and open protocol.
The other options are not the best solutions for the manufacturing organization that wants to establish a FIM system with its 20 different supplier companies, but rather solutions that have other limitations or drawbacks. Trusted third-party certification is a process that involves a third party, such as a certificate authority (CA), that issues and verifies digital certificates that contain the public key and identity information of a user or an entity. Trusted third-party certification can provide authentication and encryption for the communication between different parties, but it does not provide authorization or entitlement information for the access to the resources or services. Lightweight Directory Access Protocol (LDAP) is a protocol that allows the access and management of directory services, such as Active Directory, that store the identity and attribute information of users and entities. LDAP can provide a centralized and standardized way to store and retrieve identity and attribute information, but it does not provide a mechanism to exchange or federate the information across different organizations. Cross-certification is a process that involves two or more CAs that establish a trust relationship and recognize each other’s certificates. Cross-certification can extend the trust and validity of the certificates across different domains or organizations, but it does not provide a mechanism to exchange or federate the identity, attribute, or entitlement information.
Which of the following is an effective control in preventing electronic cloning of Radio Frequency Identification (RFID) based access cards?
Personal Identity Verification (PIV)
Cardholder Unique Identifier (CHUID) authentication
Physical Access Control System (PACS) repeated attempt detection
Asymmetric Card Authentication Key (CAK) challenge-response
Asymmetric Card Authentication Key (CAK) challenge-response is an effective control in preventing electronic cloning of RFID based access cards. RFID based access cards are contactless cards that use radio frequency identification (RFID) technology to communicate with a reader and grant access to a physical or logical resource. RFID based access cards are vulnerable to electronic cloning, which is the process of copying the data and identity of a legitimate card to a counterfeit card, and using it to impersonate the original cardholder and gain unauthorized access. Asymmetric CAK challenge-response is a cryptographic technique that prevents electronic cloning by using public key cryptography and digital signatures to verify the authenticity and integrity of the card and the reader. In outline, the reader sends a fresh random challenge to the card; the card signs the challenge with its private Card Authentication Key and returns the signature along with its certificate; the reader validates the certificate and verifies the signature with the card’s public key; and the reader can in turn sign a challenge from the card so that the card can verify the reader.
Asymmetric CAK challenge-response prevents electronic cloning because the private keys of the card and the reader are never transmitted or exposed, and the signatures are unique and non-reusable for each transaction. Therefore, a cloned card cannot produce a valid signature without knowing the private key of the original card, and a rogue reader cannot impersonate a legitimate reader without knowing its private key.
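The challenge-response flow can be sketched with a toy asymmetric signature. This is a minimal illustration only, using deliberately tiny textbook RSA with no padding; real cards use standardized algorithms such as RSA-2048 or ECDSA, and the function names here are hypothetical.

```python
import hashlib
import secrets

# Toy RSA key pair for the card (deliberately insecure, illustrative only).
p, q = 61, 53                        # tiny primes
n = p * q                            # public modulus (on the card's certificate)
e = 17                               # public exponent (on the card's certificate)
d = pow(e, -1, (p - 1) * (q - 1))    # private CAK: never leaves the card

def card_sign(challenge: bytes) -> int:
    """Card signs the reader's challenge with its private CAK."""
    digest = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n
    return pow(digest, d, n)

def reader_verify(challenge: bytes, signature: int) -> bool:
    """Reader checks the signature with the card's public key."""
    digest = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n
    return pow(signature, e, n) == digest

challenge = secrets.token_bytes(16)        # fresh nonce for every transaction
signature = card_sign(challenge)
assert reader_verify(challenge, signature)                 # genuine card accepted
assert not reader_verify(challenge, (signature + 1) % n)   # tampered signature rejected
```

Because each challenge is a fresh nonce, a recorded signature is useless for replay, and a cloned card that lacks the private exponent cannot produce a valid signature at all.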
The other options are not as effective as asymmetric CAK challenge-response in preventing electronic cloning of RFID-based access cards. Personal Identity Verification (PIV) is a standard for federal employees and contractors to use smart cards for physical and logical access, but it does not by itself specify the cryptographic technique for RFID-based access cards. Cardholder Unique Identifier (CHUID) authentication is a technique that uses a unique number and a digital certificate to identify the card and the cardholder, but it does not prevent replay attacks or verify the reader’s identity. Physical Access Control System (PACS) repeated attempt detection is a technique that monitors and alerts on multiple failed or suspicious attempts to access a resource, but it does not prevent the cloning of the card or the impersonation of the reader.
Which of the following is MOST important when assigning ownership of an asset to a department?
The department should report to the business owner
Ownership of the asset should be periodically reviewed
Individual accountability should be ensured
All members should be trained on their responsibilities
When assigning ownership of an asset to a department, the most important factor is to ensure individual accountability for the asset. Individual accountability means that each person who has access to or uses the asset is responsible for its protection and proper handling. Individual accountability also implies that each person who causes or contributes to a security breach or incident involving the asset can be identified and held liable. Individual accountability can be achieved by implementing security controls such as authentication, authorization, auditing, and logging.
The other options are not as important as ensuring individual accountability, as they do not directly address the security risks associated with the asset. The department should report to the business owner is a management issue, not a security issue. Ownership of the asset should be periodically reviewed is a good practice, but it does not prevent misuse or abuse of the asset. All members should be trained on their responsibilities is a preventive measure, but it does not guarantee compliance or enforcement of the responsibilities.
Which one of the following affects the classification of data?
Assigned security label
Multilevel Security (MLS) architecture
Minimum query size
Passage of time
The passage of time is one of the factors that affects the classification of data. Data classification is the process of assigning a level of sensitivity or criticality to data based on its value, impact, and legal requirements. Data classification helps to determine the appropriate security controls and handling procedures for the data. However, data classification is not static, but dynamic, meaning that it can change over time depending on various factors. One of these factors is the passage of time, which can affect the relevance, usefulness, or sensitivity of the data. For example, data that is classified as confidential or secret at one point in time may become obsolete, outdated, or declassified at a later point in time, and thus require a lower level of protection. Conversely, data that is classified as public or unclassified at one point in time may become more valuable, sensitive, or regulated at a later point in time, and thus require a higher level of protection. Therefore, data classification should be reviewed and updated periodically to reflect the changes in the data over time.
The other options are not factors that affect the classification of data, but rather the outcomes or components of data classification. Assigned security label is the result of data classification, which indicates the level of sensitivity or criticality of the data. Multilevel Security (MLS) architecture is a system that supports data classification, which allows different levels of access to data based on the clearance and need-to-know of the users. Minimum query size is an inference-control parameter that helps enforce data classification in statistical databases: it requires that an aggregate query match at least a minimum number of records, so that individual values cannot be inferred from very small result sets.
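The minimum query size control can be illustrated with a small sketch (hypothetical names; assumes a statistical database that refuses aggregate queries matching too few records):

```python
# Inference control via minimum query size: aggregate queries that match
# fewer than MIN_QUERY_SIZE records are refused, because very small
# result sets can reveal facts about identifiable individuals.
MIN_QUERY_SIZE = 5

def average_salary(records, predicate):
    matched = [r["salary"] for r in records if predicate(r)]
    if len(matched) < MIN_QUERY_SIZE:
        raise PermissionError("query matches too few records")
    return sum(matched) / len(matched)

staff = [{"dept": "eng", "salary": 100 + i} for i in range(6)] + \
        [{"dept": "hr", "salary": 90}]

print(average_salary(staff, lambda r: r["dept"] == "eng"))  # 6 records: allowed, prints 102.5
# average_salary(staff, lambda r: r["dept"] == "hr")  would raise PermissionError (1 record)
```

A query scoped to the single HR employee is refused, since its result would expose that one person's salary directly.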
An organization has doubled in size due to a rapid market share increase. The size of the Information Technology (IT) staff has maintained pace with this growth. The organization hires several contractors whose onsite time is limited. The IT department has pushed its limits building servers and rolling out workstations and has a backlog of account management requests.
Which contract is BEST in offloading the task from the IT staff?
Platform as a Service (PaaS)
Identity as a Service (IDaaS)
Desktop as a Service (DaaS)
Software as a Service (SaaS)
Identity as a Service (IDaaS) is the best contract in offloading the task of account management from the IT staff. IDaaS is a cloud-based service that provides identity and access management (IAM) functions, such as user authentication, authorization, provisioning, deprovisioning, password management, single sign-on (SSO), and multifactor authentication (MFA). IDaaS can help the organization to streamline and automate the account management process, reduce the workload and costs of the IT staff, and improve the security and compliance of the user accounts. IDaaS can also support the contractors who have limited onsite time, as they can access the organization’s resources remotely and securely through the IDaaS provider.
The other options are not as effective as IDaaS in offloading the task of account management from the IT staff, as they do not provide IAM functions. Platform as a Service (PaaS) is a cloud-based service that provides a platform for developing, testing, and deploying applications, but it does not manage the user accounts for the applications. Desktop as a Service (DaaS) is a cloud-based service that provides virtual desktops for users to access applications and data, but it does not manage the user accounts for the virtual desktops. Software as a Service (SaaS) is a cloud-based service that provides software applications for users to use, but it does not manage the user accounts for the software applications.
In a data classification scheme, the data is owned by the
system security managers
business managers
Information Technology (IT) managers
end users
In a data classification scheme, the data is owned by the business managers. Business managers are the persons or entities that have the authority and accountability for the creation, collection, processing, and disposal of a set of data. Business managers are also responsible for defining the purpose, value, and classification of the data, as well as the security requirements and controls for the data. Business managers should be able to determine the impact the information has on the mission of the organization, which means assessing the potential consequences of losing, compromising, or disclosing the data. The impact of the information on the mission of the organization is one of the main criteria for data classification, which helps to establish the appropriate level of protection and handling for the data.
The other options are not the data owners in a data classification scheme, but rather the other roles or functions related to data management. System security managers are the persons or entities that oversee the security of the information systems and networks that store, process, and transmit the data. They are responsible for implementing and maintaining the technical and physical security of the data, as well as monitoring and auditing the security performance and incidents. Information Technology (IT) managers are the persons or entities that manage the IT resources and services that support the business processes and functions that use the data. They are responsible for ensuring the availability, reliability, and scalability of the IT infrastructure and applications, as well as providing technical support and guidance to the users and stakeholders. End users are the persons or entities that access and use the data for their legitimate purposes and needs. They are responsible for complying with the security policies and procedures for the data, as well as reporting any security issues or violations.
Which of the following is an initial consideration when developing an information security management system?
Identify the contractual security obligations that apply to the organizations
Understand the value of the information assets
Identify the level of residual risk that is tolerable to management
Identify relevant legislative and regulatory compliance requirements
When developing an information security management system (ISMS), an initial consideration is to understand the value of the information assets that the organization owns or processes. An information asset is any data, information, or knowledge that has value to the organization and supports its mission, objectives, and operations. Understanding the value of the information assets helps to determine the appropriate level of protection and investment for them, as well as the potential impact and consequences of losing, compromising, or disclosing them. Understanding the value of the information assets also helps to identify the stakeholders, owners, and custodians of the information assets, and their roles and responsibilities in the ISMS.
The other options are not initial considerations, but rather subsequent or concurrent considerations when developing an ISMS. Identifying the contractual security obligations that apply to the organizations is a consideration that depends on the nature, scope, and context of the information assets, as well as the relationships and agreements with the external parties. Identifying the level of residual risk that is tolerable to management is a consideration that depends on the risk appetite and tolerance of the organization, as well as the risk assessment and analysis of the information assets. Identifying relevant legislative and regulatory compliance requirements is a consideration that depends on the legal and ethical obligations and expectations of the organization, as well as the jurisdiction and industry of the information assets.
When implementing a data classification program, why is it important to avoid too much granularity?
The process will require too many resources
It will be difficult to apply to both hardware and software
It will be difficult to assign ownership to the data
The process will be perceived as having value
When implementing a data classification program, it is important to avoid too much granularity, because the process will require too many resources. Data classification is the process of assigning a level of sensitivity or criticality to data based on its value, impact, and legal requirements. Data classification helps to determine the appropriate security controls and handling procedures for the data. However, data classification is not a simple or straightforward process, as it involves many factors, such as the nature, context, and scope of the data, the stakeholders, the regulations, and the standards. If the data classification program has too many levels or categories of data, it will increase the complexity, cost, and time of the process, and reduce the efficiency and effectiveness of the data protection. Therefore, data classification should be done with a balance between granularity and simplicity, and follow the principle of proportionality, which means that the level of protection should be proportional to the level of risk.
The other options are not the main reasons to avoid too much granularity in data classification, but rather the potential challenges or benefits of data classification. It will be difficult to apply to both hardware and software is a challenge of data classification, as it requires consistent and compatible methods and tools for labeling and protecting data across different types of media and devices. It will be difficult to assign ownership to the data is a challenge of data classification, as it requires clear and accountable roles and responsibilities for the creation, collection, processing, and disposal of data. The process will be perceived as having value is a benefit of data classification, as it demonstrates the commitment and awareness of the organization to protect its data assets and comply with its obligations.
Which of the following BEST describes the responsibilities of a data owner?
Ensuring quality and validation through periodic audits for ongoing data integrity
Maintaining fundamental data availability, including data storage and archiving
Ensuring accessibility to appropriate users, maintaining appropriate levels of data security
Determining the impact the information has on the mission of the organization
The best description of the responsibilities of a data owner is determining the impact the information has on the mission of the organization. A data owner is a person or entity that has the authority and accountability for the creation, collection, processing, and disposal of a set of data. A data owner is also responsible for defining the purpose, value, and classification of the data, as well as the security requirements and controls for the data. A data owner should be able to determine the impact the information has on the mission of the organization, which means assessing the potential consequences of losing, compromising, or disclosing the data. The impact of the information on the mission of the organization is one of the main criteria for data classification, which helps to establish the appropriate level of protection and handling for the data.
The other options are not the best descriptions of the responsibilities of a data owner, but rather the responsibilities of other roles or functions related to data management. Ensuring quality and validation through periodic audits for ongoing data integrity is a responsibility of a data steward, who is a person or entity that oversees the quality, consistency, and usability of the data. Maintaining fundamental data availability, including data storage and archiving is a responsibility of a data custodian, who is a person or entity that implements and maintains the technical and physical security of the data. Ensuring accessibility to appropriate users, maintaining appropriate levels of data security is a responsibility of a data controller, who is a person or entity that determines the purposes and means of processing the data.
Which factors MUST be considered when classifying information and supporting assets for risk management, legal discovery, and compliance?
System owner roles and responsibilities, data handling standards, storage and secure development lifecycle requirements
Data stewardship roles, data handling and storage standards, data lifecycle requirements
Compliance office roles and responsibilities, classified material handling standards, storage system lifecycle requirements
System authorization roles and responsibilities, cloud computing standards, lifecycle requirements
The factors that must be considered when classifying information and supporting assets for risk management, legal discovery, and compliance are data stewardship roles, data handling and storage standards, and data lifecycle requirements. Data stewardship roles are the roles and responsibilities of those accountable for the creation, maintenance, protection, and disposal of the information and supporting assets; they include data owners, data custodians, data users, and data stewards. Data handling and storage standards are the policies, procedures, and guidelines that define how the information and supporting assets should be handled and stored based on their classification level, sensitivity, and value; they cover data labeling, encryption, backup, retention, and disposal. Data lifecycle requirements specify the stages and processes the information should go through from creation to destruction, including collection, processing, analysis, sharing, archiving, and deletion.
The other options mix in related concepts that are not the classification factors in question. System owner roles and responsibilities concern the operation, performance, and security of the system that hosts or processes the information, such as system authorization, configuration, monitoring, and maintenance; data handling standards alone omit storage; and storage and secure development lifecycle requirements apply to the storage and development systems rather than to the data itself. Compliance office roles and responsibilities concern ensuring that the organization complies with applicable laws, regulations, standards, and policies; classified material handling standards are a government- and military-specific subset of data handling and storage standards; and storage system lifecycle requirements again apply to the storage system rather than to the data.
System authorization roles and responsibilities concern granting or denying access to the hosting system, through identification, authentication, authorization, and auditing. Cloud computing standards define requirements, specifications, and best practices for delivering computing services over the internet, such as infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS), including service level agreements (SLAs), interoperability, portability, and security. Neither addresses how the information itself is classified, and "lifecycle requirements" alone is simply the data lifecycle factor without the other two.
Which of the following is a responsibility of a data steward?
Ensure alignment of the data governance effort to the organization.
Conduct data governance interviews with the organization.
Document data governance requirements.
Ensure that data decisions and impacts are communicated to the organization.
A responsibility of a data steward is to ensure that data decisions and impacts are communicated to the organization. A data steward is a role or a function that is responsible for managing and maintaining the quality and the usability of the data within a specific data domain or business area, such as finance, marketing, or human resources. A data steward can provide benefits for data governance, which is the process of establishing and enforcing the policies and standards for the collection, use, storage, and protection of data, such as enhancing the accuracy and reliability of the data, preventing or detecting errors or inconsistencies, and supporting audit and compliance activities. A data steward can perform various tasks or duties, such as defining and enforcing data quality rules, maintaining data definitions and metadata, resolving data issues or inconsistencies, and communicating data decisions and their impacts to the organization.
Ensuring that data decisions and impacts are communicated to the organization is a responsibility of a data steward, as it can help to ensure the transparency and the accountability of the data governance process, as well as to facilitate the coordination and the cooperation of the data governance stakeholders, such as the data owners, the data custodians, the data users, and the data governance team. Ensuring alignment of the data governance effort to the organization, conducting data governance interviews with the organization, and documenting data governance requirements are not responsibilities of a data steward, although they may be related or possible tasks or duties. Ensuring alignment of the data governance effort to the organization is a responsibility of the data governance team, which is a group of experts or advisors who are responsible for defining and implementing the data governance policies and standards, as well as for overseeing and evaluating the data governance process and performance. Conducting data governance interviews with the organization is a task or a technique that can be used by the data governance team, the data steward, or the data auditor, to collect and analyze the information and the feedback about the data governance process and performance, from the data governance stakeholders, such as the data owners, the data custodians, the data users, or the data consumers. Documenting data governance requirements is a task or a technique that can be used by the data governance team, the data owner, or the data user, to specify and describe the needs and the expectations of the data governance process and performance, such as the data quality, the data security, or the data compliance.
Proven application security principles include which of the following?
Minimizing attack surface area
Hardening the network perimeter
Accepting infrastructure security controls
Developing independent modules
Minimizing attack surface area is a proven application security principle that aims to reduce the exposure or vulnerability of an application to potential attacks, by limiting or eliminating unnecessary or unused features, functions, or services of the application, as well as the access or interaction of the application with other applications, systems, or networks. Minimizing attack surface area can provide security benefits such as reducing the number of entry points and code paths an attacker can probe, preventing or mitigating some types of attacks or vulnerabilities, and simplifying security review and testing. Hardening the network perimeter, accepting infrastructure security controls, and developing independent modules are not proven application security principles, although they may be related or useful concepts or techniques. Hardening the network perimeter is a network security technique that aims to protect the network from external or unauthorized attacks by strengthening the security controls or mechanisms at the boundary or edge of the network, such as firewalls, routers, or gateways. Hardening the network perimeter can block or mitigate some attacks before they reach the application, and it supports audit and compliance activities. However, hardening the network perimeter is not an application security principle, as it is not specific to the application layer, and it does not address the internal or inherent security of the application. Accepting infrastructure security controls is a risk management technique that involves accepting the residual risk of an application after applying the security controls or mechanisms provided by the underlying infrastructure, such as the hardware, the software, the network, or the cloud.
Accepting infrastructure security controls can provide some benefits for security, such as reducing the cost and the complexity of the security implementation, leveraging the expertise and the resources of the infrastructure providers, and supporting the audit and the compliance activities. However, accepting infrastructure security controls is not an application security principle, as it is not a proactive or a preventive measure to enhance the security of the application, and it may introduce or increase the dependency or the vulnerability of the application on the infrastructure. Developing independent modules is a software engineering concept or technique that involves designing or creating the application as a collection or a composition of discrete or separate components or units, each with a specific function or purpose, and each with a well-defined interface or contract. Developing independent modules can provide some benefits for security, such as enhancing the usability and the maintainability of the application, preventing or isolating some types of errors or bugs, and supporting the testing and the verification activities. However, developing independent modules is not an application security principle, as it is not a direct or a deliberate measure to improve the security of the application, and it may not address or prevent some types of attacks or vulnerabilities that affect the application as a whole or the interaction between the modules.
A vulnerability assessment report has been submitted to a client. The client indicates that one third of the hosts that were in scope are missing from the report.
In which phase of the assessment was this error MOST likely made?
Enumeration
Reporting
Detection
Discovery
The discovery phase of a vulnerability assessment is the process of identifying and enumerating the hosts, services, and applications that are in scope of the assessment. This phase involves techniques such as network scanning, port scanning, service scanning, and banner grabbing. If one third of the hosts that were in scope are missing from the report, it means that the discovery phase was not performed properly or completely, and some hosts were not detected or included in the assessment. The enumeration, reporting, and detection phases are not likely to cause this error, as they depend on the results of the discovery phase. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7: Security Assessment and Testing, page 857; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 6: Security Assessment and Testing, page 794.
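The host-identification step of the discovery phase can be sketched as a minimal TCP connect sweep. This is illustrative only: the function name is hypothetical, and real assessments rely on dedicated scanners such as Nmap plus ICMP/ARP probes, since hosts may filter these ports.

```python
import socket

def discover_hosts(candidates, ports=(22, 80, 443), timeout=0.5):
    """Mark a candidate address as live if any probed TCP port accepts a
    connection. A host missed here never reaches the later enumeration,
    detection, or reporting phases -- which is why an incomplete report
    usually traces back to discovery."""
    found = []
    for host in candidates:
        for port in ports:
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    found.append(host)
                    break  # one open port is enough to confirm the host
            except OSError:
                continue   # closed or filtered; try the next port
    return found
```

For example, a sweep configured with too narrow a port list, or too short a timeout for a slow network segment, would silently drop live hosts from the result, producing exactly the gap the client observed.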
Who is responsible for the protection of information when it is shared with or provided to other organizations?
Systems owner
Authorizing Official (AO)
Information owner
Security officer
The information owner is the person who has the authority and responsibility for the information within an Information System (IS). The information owner is responsible for the protection of information when it is shared with or provided to other organizations, such as by defining the classification, sensitivity, retention, and disposal of the information, as well as by approving or denying the access requests and periodically reviewing the access rights. The system owner, the authorizing official, and the security officer are not responsible for the protection of information when it is shared with or provided to other organizations, although they may have roles and responsibilities related to the security and operation of the IS. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1: Security and Risk Management, page 48; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 1: Security and Risk Management, page 40.
Which of the following is a responsibility of the information owner?
Ensure that users and personnel complete the required security training to access the Information System (IS)
Defining proper access to the Information System (IS), including privileges or access rights
Managing identification, implementation, and assessment of common security controls
Ensuring the Information System (IS) is operated according to agreed upon security requirements
One of the responsibilities of the information owner is to define proper access to the Information System (IS), including privileges or access rights. This involves determining who can access the data, what they can do with the data, and under what conditions they can access the data. The information owner must also approve or deny the access requests and periodically review the access rights. Ensuring that users and personnel complete the required security training, managing the common security controls, and ensuring the IS is operated according to the security requirements are not the responsibilities of the information owner, but they may involve the information owner’s collaboration or consultation. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1: Security and Risk Management, page 48; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 1: Security and Risk Management, page 40.
Which of the following are important criteria when designing procedures and acceptance criteria for acquired software?
Code quality, security, and origin
Architecture, hardware, and firmware
Data quality, provenance, and scaling
Distributed, agile, and bench testing
Code quality, security, and origin are important criteria when designing procedures and acceptance criteria for acquired software. Code quality refers to the degree to which the software meets the functional and nonfunctional requirements, as well as the standards and best practices for coding. Security refers to the degree to which the software protects the confidentiality, integrity, and availability of the data and the system. Origin refers to the source and ownership of the software, as well as the licensing and warranty terms. Architecture, hardware, and firmware are not criteria for acquired software, but for the system that hosts the software. Data quality, provenance, and scaling are not criteria for acquired software, but for the data that the software processes. Distributed, agile, and bench testing are not criteria for acquired software, but for the software development and testing methods. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8: Software Development Security, page 947; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 7: Software Development Security, page 869.
Which security access policy contains fixed security attributes that are used by the system to determine a
user’s access to a file or object?
Mandatory Access Control (MAC)
Access Control List (ACL)
Discretionary Access Control (DAC)
Authorized user control
The security access policy that contains fixed security attributes used by the system to determine a user’s access to a file or object is Mandatory Access Control (MAC). MAC assigns permissions based on security labels that indicate the sensitivity of objects and the trustworthiness (clearance) of subjects. These labels are fixed attributes enforced by the system or the network rather than by the owner or creator of the object, and users cannot modify or override them. MAC enhances the confidentiality and integrity of data, prevents unauthorized access or disclosure, and supports audit and compliance activities. It is commonly used in military and government environments, where data is classified by sensitivity (top secret, secret, confidential, or unclassified) and users are granted clearances based on their background, role, and need to know. A user may access only objects whose classification does not exceed the user’s clearance. Under the rules of no read up and no write down, a subject may read only data at or below its clearance level and write only data at or above it, which prevents sensitive information from flowing to less trusted levels.
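The no-read-up and no-write-down rules can be sketched in a few lines; the security levels and function names below are illustrative assumptions, not part of any standard API.

```python
# Hypothetical Bell-LaPadula style MAC checks: "no read up, no write down."
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top_secret": 3}

def can_read(subject_clearance, object_label):
    # Simple security property: read only at or below your clearance.
    return LEVELS[subject_clearance] >= LEVELS[object_label]

def can_write(subject_clearance, object_label):
    # Star property: write only at or above your clearance (no write down).
    return LEVELS[subject_clearance] <= LEVELS[object_label]
```

Note that a secret-cleared user can read confidential data but cannot write to it, since writing down could leak secret information into a less protected object.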
Match the name of access control model with its associated restriction.
Drag each access control model to its appropriate restriction access on the right.
Access to which of the following is required to validate web session management?
Log timestamp
Live session traffic
Session state variables
Test scripts
Access to session state variables is required to validate web session management. Web session management is the process of maintaining the state and information of a user across multiple requests and interactions with a web application. Web session management relies on session state variables, which are data elements that store the user’s preferences, settings, authentication status, and other relevant information for the duration of the session. Session state variables can be stored on the client side (such as cookies or local storage) or on the server side (such as databases or files). To validate web session management, it is necessary to access the session state variables and verify that they are properly generated, maintained, and destroyed by the web application. This can help to ensure the security, functionality, and performance of the web application and the user experience. The other options are not required to validate web session management. Log timestamp is a data element that records the date and time of a user’s activity or event on the web application, but it does not store the user’s state or information. Live session traffic is the network data that is exchanged between the user and the web application during the session, but it does not reflect the session state variables that are stored on the client or the server side. Test scripts are code segments that are used to automate the testing of the web application’s features and functions, but they do not access the session state variables directly. References: Session Management - OWASP Cheat Sheet Series; Session Management: An Overview | SecureCoding.com; Session Management in HTTP - GeeksforGeeks.
Which of the following could be considered the MOST significant security challenge when adopting DevOps practices compared to a more traditional control framework?
Achieving Service Level Agreements (SLA) on how quickly patches will be released when a security flaw is found.
Maintaining segregation of duties.
Standardized configurations for logging, alerting, and security metrics.
Availability of security teams at the end of design process to perform last-minute manual audits and reviews.
The most significant security challenge when adopting DevOps practices compared to a more traditional control framework is maintaining segregation of duties. DevOps integrates and automates development and operations through practices such as continuous integration, continuous delivery, continuous testing, continuous monitoring, and continuous feedback, which improves the quality and speed of delivery and deployment. A traditional control framework, by contrast, establishes and enforces security and governance through controls such as risk assessment, change management, configuration management, access control, and audit trails, many of which assume clearly separated roles.
Segregation of duties requires that different roles or functions be assigned to different parties so that no single party can perform every step of a process, such as development, testing, deployment, and maintenance. It improves the accuracy and reliability of a process, helps prevent and detect fraud or errors, and supports audit and compliance activities. Because DevOps deliberately merges development and operations roles and grants engineers broad access to automated pipelines, preserving this separation is difficult and often costly, making it the principal control challenge of DevOps adoption.
Which of the following would BEST support effective testing of patch compatibility when patches are applied to an organization’s systems?
Standardized configurations for devices
Standardized patch testing equipment
Automated system patching
Management support for patching
Standardized configurations for devices can help to reduce the complexity and variability of the systems that need to be patched, and thus facilitate the testing of patch compatibility. Standardized configurations can also help to ensure that the patches are applied consistently and correctly across the organization. Standardized patch testing equipment, automated system patching, and management support for patching are also important factors for effective patch management, but they are not directly related to testing patch compatibility. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5: Security Engineering, page 605; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 3: Security Architecture and Engineering, page 386.
Extensible Authentication Protocol-Message Digest 5 (EAP-MD5) only provides which of the following?
Mutual authentication
Server authentication
User authentication
Streaming ciphertext data
Extensible Authentication Protocol-Message Digest 5 (EAP-MD5) is a type of EAP method that uses the MD5 hashing algorithm to provide user authentication. EAP is a framework that allows different authentication methods to be used in network access scenarios, such as wireless, VPN, or dial-up. EAP-MD5 only provides user authentication, which means that it verifies the identity of the user who is requesting access to the network, but not the identity of the network server who is granting access. Therefore, EAP-MD5 does not provide mutual authentication, server authentication, or streaming ciphertext data. EAP-MD5 is considered insecure and vulnerable to various attacks, such as offline dictionary attacks, man-in-the-middle attacks, or replay attacks, and should not be used in modern networks.
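As a rough sketch, EAP-MD5 reuses the CHAP-style computation MD5(identifier || shared secret || challenge). The helper below is illustrative; note that it proves only the client's identity, which is why the method provides no server authentication and is vulnerable to offline dictionary attacks.

```python
import hashlib
import os

def eap_md5_response(identifier, password, challenge):
    # CHAP-style response that EAP-MD5 reuses:
    # MD5(identifier || shared secret || challenge)
    return hashlib.md5(bytes([identifier]) + password + challenge).digest()

# The server sends a random challenge; the client returns the hash.
# Only the client is authenticated: nothing proves the server's identity,
# and a captured challenge/response pair enables offline password guessing.
challenge = os.urandom(16)
response = eap_md5_response(1, b"s3cret", challenge)
```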
Which of the following is a characteristic of an internal audit?
An internal audit is typically shorter in duration than an external audit.
The internal audit schedule is published to the organization well in advance.
The internal auditor reports to the Information Technology (IT) department
Management is responsible for reading and acting upon the internal audit results
A characteristic of an internal audit is that management is responsible for reading and acting upon the internal audit results. An internal audit is an independent, objective evaluation of an organization’s internal controls, processes, and activities, performed by auditors who are part of the organization, such as the internal audit department reporting to the audit committee. It improves the accuracy and reliability of operations, helps prevent and detect fraud or errors, and supports audit and compliance activities. Management is the primary recipient of the internal audit report and has the authority and accountability to implement its recommendations and, where required, to disclose results to external parties such as regulators, shareholders, or customers.
The other options are not defining characteristics of an internal audit. An internal audit may often be shorter than an external audit, because internal auditors are more familiar with the organization’s controls and have readier access to them, but duration varies with the audit’s objectives, scope, criteria, and methodology. Publishing the audit schedule to the organization well in advance is a good practice for transparency and coordination, not a requirement. Finally, the internal auditor should report to the audit committee or the board, not to the IT department, in order to preserve independence.
Which of the following is the BEST internationally recognized standard for evaluating security products and systems?
Payment Card Industry Data Security Standards (PCI-DSS)
Common Criteria (CC)
Health Insurance Portability and Accountability Act (HIPAA)
Sarbanes-Oxley (SOX)
The best internationally recognized standard for evaluating security products and systems is the Common Criteria (CC), standardized as ISO/IEC 15408. The CC defines criteria for evaluating both the security functionality and the security assurance of IT products and systems such as hardware, software, firmware, and network devices. Vendors state their security claims in a Security Target, often against a reusable Protection Profile, and independent licensed laboratories evaluate products to a numbered Evaluation Assurance Level (EAL1 through EAL7); resulting certificates are recognized across the countries that participate in the Common Criteria Recognition Arrangement. This builds confidence and trust in security products and supports audit and compliance activities.
PCI-DSS, HIPAA, and SOX are compliance regulations or frameworks, not internationally recognized standards for evaluating security products and systems. PCI-DSS defines security requirements for protecting cardholder data, such as the card number, expiration date, and card verification value, and applies to merchants, service providers, and acquirers that process, store, or transmit that data. HIPAA defines security requirements for protected health information, such as medical records, diagnoses, and treatments, and applies to health care providers, health care clearinghouses, and health plans. SOX defines requirements for the integrity of financial information and reports, such as the income statement, balance sheet, and cash flow statement, and applies to companies that are publicly traded in the United States.
Which of the following is the MOST efficient mechanism to account for all staff during a speedy nonemergency evacuation from a large security facility?
Large mantrap where groups of individuals leaving are identified using facial recognition technology
Radio Frequency Identification (RFID) sensors worn by each employee scanned by sensors at each exitdoor
Emergency exits with push bars with coordinators at each exit checking off the individual against a predefined list
Card-activated turnstile where individuals are validated upon exit
Section: Security Operations
An organization’s security policy delegates to the data owner the ability to assign which user roles have access
to a particular resource. What type of authorization mechanism is being used?
Discretionary Access Control (DAC)
Role Based Access Control (RBAC)
Media Access Control (MAC)
Mandatory Access Control (MAC)
Discretionary Access Control (DAC) is a type of authorization mechanism that grants or denies access to resources based on the identity of the user and the permissions assigned by the owner of the resource. The owner of the resource has the discretion to decide who can access the resource and what level of access they can have. For example, the owner of a file can assign read, write, or execute permissions to different users or groups. DAC is flexible and easy to implement, but it also poses security risks, such as unauthorized access, data leakage, or privilege escalation, if the owner is not careful or knowledgeable about the security implications of their decisions.
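The owner-discretion aspect of DAC can be sketched as follows; the class and permission names are hypothetical.

```python
class DacFile:
    """Minimal DAC sketch: the object's owner manages the ACL at their discretion."""

    def __init__(self, owner):
        self.owner = owner
        self.acl = {owner: {"read", "write", "execute"}}

    def grant(self, requester, user, perms):
        # Only the owner may decide who can access the object.
        if requester != self.owner:
            raise PermissionError("only the owner may modify the ACL")
        self.acl.setdefault(user, set()).update(perms)

    def allowed(self, user, perm):
        return perm in self.acl.get(user, set())
```

The risk noted above follows directly from this design: if the owner grants permissions carelessly, the system has no higher authority to override the decision.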
The core component of Role Based Access Control (RBAC) must be constructed of defined data elements.
Which elements are required?
Users, permissions, operations, and protected objects
Roles, accounts, permissions, and protected objects
Users, roles, operations, and protected objects
Roles, operations, accounts, and protected objects
Role Based Access Control (RBAC) is a model of access control that assigns permissions to users based on their roles rather than their individual identities. The core component of RBAC is the role, which is a collection of permissions defining what operations a user may perform on what protected objects. The required data elements are: users, the subjects who are assigned to roles; roles, the named job functions that collect permissions; operations, the actions (such as read, write, or approve) that may be performed; and protected objects, the resources on which operations act. Accounts are not a core RBAC element, which rules out the other options.
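A minimal sketch of how users, roles, operations, and protected objects fit together (the names and role assignments are invented for illustration):

```python
# Permissions pair an operation with a protected object and are attached
# to roles; users acquire permissions only through role membership.
role_permissions = {
    "clerk":   {("read", "ledger")},
    "manager": {("read", "ledger"), ("approve", "ledger")},
}
user_roles = {"alice": {"manager"}, "bob": {"clerk"}}

def authorized(user, operation, obj):
    return any((operation, obj) in role_permissions.get(role, set())
               for role in user_roles.get(user, set()))
```

Changing what a job function may do then means editing one role, not every individual user account.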
An organization has outsourced its financial transaction processing to a Cloud Service Provider (CSP) who will provide them with Software as a Service (SaaS). If there was a data breach who is responsible for monetary losses?
The Data Protection Authority (DPA)
The Cloud Service Provider (CSP)
The application developers
The data owner
The data owner is the person who has the authority and responsibility for the data stored, processed, or transmitted by an Information System (IS). The data owner is responsible for the monetary losses if there was a data breach, as the data owner is accountable for the security, quality, and integrity of the data, as well as for defining the classification, sensitivity, retention, and disposal of the data. The Data Protection Authority (DPA) is not responsible for the monetary losses, but for the enforcement of the data protection laws and regulations. The Cloud Service Provider (CSP) is not responsible for the monetary losses, but for the provision of the cloud services and the protection of the cloud infrastructure. The application developers are not responsible for the monetary losses, but for the development and maintenance of the software applications. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1: Security and Risk Management, page 48; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 1: Security and Risk Management, page 40.
From a security perspective, which of the following assumptions MUST be made about input to an
application?
It is tested
It is logged
It is verified
It is untrusted
From a security perspective, the assumption that must be made about input to an application is that it is untrusted. Untrusted input is any data supplied by an external or unknown source, such as a user, client, network, or file, that the application has not yet validated. It can carry malicious content or commands, such as malware or SQL injection payloads, that compromise the confidentiality, integrity, or availability of the application and any connected data or systems. All input should therefore be treated with caution and passed through security controls before being processed or used. Input validation checks that the input matches the expected format, type, length, range, or value and contains no illegal characters, symbols, or commands. Input sanitization removes or replaces invalid or illegal characters, symbols, or commands. Input filtering allows or blocks input based on a predefined set of rules, such as a whitelist or a blacklist. Input encoding transforms the input into a different or standard representation, such as HTML, URL, or Base64 encoding, so that it cannot be interpreted or executed by the application or the system.
"It is tested," "it is logged," and "it is verified" describe possible outcomes of handling input, not assumptions a defender can rely on. Testing (unit, integration, or penetration testing) evaluates functionality and detects bugs; logging records the input with metadata such as source, destination, timestamp, and status to support auditing; verification authenticates input using mechanisms such as digital signatures, certificates, or tokens. All three are valuable controls, but none of them can be assumed to apply to every piece of input, and none replaces the defensive posture of treating all input as untrusted until it has been validated.
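The validation, sanitization, and encoding controls described above can be sketched with standard-library helpers; the username pattern and allowed character set are arbitrary assumptions for illustration.

```python
import base64
import html
import re
import urllib.parse

UNTRUSTED = '<script>alert("x")</script>'

def is_valid_username(value):
    # Validation: accept only input matching an expected whitelist pattern.
    return re.fullmatch(r"[A-Za-z0-9_]{3,20}", value) is not None

def sanitize(value):
    # Sanitization: strip every character outside the allowed set.
    return re.sub(r"[^A-Za-z0-9_ ]", "", value)

# Encoding: transform input so it cannot be interpreted as markup or code.
html_safe = html.escape(UNTRUSTED)
url_safe = urllib.parse.quote(UNTRUSTED)
b64_safe = base64.b64encode(UNTRUSTED.encode()).decode()
```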
As part of an application penetration testing process, session hijacking can BEST be achieved by which of the following?
Known-plaintext attack
Denial of Service (DoS)
Cookie manipulation
Structured Query Language (SQL) injection
Cookie manipulation is a technique that allows an attacker to intercept, modify, or forge a cookie, which is a piece of data that is used to maintain the state of a web session. By manipulating the cookie, the attacker can hijack the session and gain unauthorized access to the web application. Known-plaintext attack, DoS, and SQL injection are not directly related to session hijacking, although they can be used for other purposes, such as breaking encryption, disrupting availability, or executing malicious commands. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6: Communication and Network Security, page 729; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 4: Communication and Network Security, page 522.
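Cookie manipulation succeeds when the server trusts cookie contents as-is. A common countermeasure, sketched below with an invented secret, is to authenticate the cookie with an HMAC so any tampering is detectable.

```python
import hashlib
import hmac

SECRET = b"server-side-secret"  # hypothetical key; never sent to the client

def sign_cookie(value):
    tag = hmac.new(SECRET, value.encode(), hashlib.sha256).hexdigest()
    return value + "." + tag

def verify_cookie(cookie):
    value, _, tag = cookie.rpartition(".")
    expected = hmac.new(SECRET, value.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected)

cookie = sign_cookie("session_id=42;role=user")
# An attacker who edits the cookie cannot forge a matching tag:
tampered = cookie.replace("role=user", "role=admin")
```

Signing only detects modification; session identifiers must still be random and transmitted over TLS to resist interception.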
What is the foundation of cryptographic functions?
Encryption
Cipher
Hash
Entropy
The foundation of cryptographic functions is entropy. Entropy is a measure of the randomness or unpredictability of a system or a process. Entropy is essential for cryptographic functions, such as encryption, decryption, hashing, or key generation, as it provides the security and the strength of the cryptographic algorithms and keys. Entropy can be derived from various sources, such as physical phenomena, user input, or software applications. Entropy can also be quantified in terms of bits, where higher entropy means higher randomness and higher security. Encryption, cipher, and hash are not the foundation of cryptographic functions, although they are related or important concepts or techniques. Encryption is the process of transforming plaintext or cleartext into ciphertext or cryptogram, using a cryptographic algorithm and a key, to protect the confidentiality and the integrity of the data. Encryption can be symmetric or asymmetric, depending on whether the same or different keys are used for encryption and decryption. Cipher is another term for a cryptographic algorithm, which is a mathematical function that performs encryption or decryption. Cipher can be classified into various types, such as substitution, transposition, stream, or block, depending on how they operate on the data. Hash is the process of generating a fixed-length and unique output, called a hash or a digest, from a variable-length and arbitrary input, using a one-way function, to verify the integrity and the authenticity of the data. Hash can be used for various purposes, such as digital signatures, message authentication codes, or password storage.
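The quantification of entropy in bits mentioned above follows Shannon's formula; the sketch below measures bits of entropy per byte and is illustrative only.

```python
import math
from collections import Counter

def shannon_entropy(data):
    # Bits of entropy per byte: 0.0 for constant data, up to 8.0 for a
    # uniform distribution over all 256 byte values.
    if not data:
        return 0.0
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in Counter(data).values())
```

A key derived from a low-entropy source, such as a dictionary word, is far easier to guess than one drawn from a high-entropy cryptographic random number generator.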
Who is accountable for the information within an Information System (IS)?
Security manager
System owner
Data owner
Data processor
The data owner is the person who has the authority and responsibility for the information within an Information System (IS). The data owner is accountable for the security, quality, and integrity of the data, as well as for defining the classification, sensitivity, retention, and disposal of the data. The data owner must also approve or deny the access requests and periodically review the access rights. The security manager, the system owner, and the data processor are not accountable for the information within an IS, but they may have roles and responsibilities related to the security and operation of the IS. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1: Security and Risk Management, page 48; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 1: Security and Risk Management, page 40.
A software scanner identifies a region within a binary image having high entropy. What does this MOST likely indicate?
Encryption routines
Random number generator
Obfuscated code
Botnet command and control
Obfuscated code is code that is deliberately written or modified to make it difficult to understand or reverse engineer. Obfuscation techniques include changing variable names, removing comments, adding irrelevant code, or encrypting parts of the code. Obfuscated code can have high entropy, meaning a high degree of randomness or unpredictability, so a software scanner that identifies a high-entropy region within a binary image has most likely found obfuscated code. Encryption routines, random number generators, and botnet command and control are not necessarily related to obfuscated code, and a high-entropy region does not by itself indicate them. References: Official (ISC)2 CISSP CBK Reference, 5th Edition, Chapter 8, page 467; CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8, page 508.
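A scanner can flag such regions by computing entropy over a sliding window; the window size and threshold below are arbitrary assumptions, not values from any particular tool.

```python
import math
from collections import Counter

def entropy(data):
    # Shannon entropy in bits per byte.
    if not data:
        return 0.0
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in Counter(data).values())

def high_entropy_regions(image, window=256, threshold=7.0):
    # Offsets of windows whose entropy exceeds the threshold -- a rough
    # indicator of packed, encrypted, or obfuscated regions in a binary.
    return [off for off in range(0, len(image) - window + 1, window)
            if entropy(image[off:off + window]) > threshold]
```

Ordinary machine code and text typically measure well below 7 bits per byte, so windows near the 8-bit maximum stand out.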
By allowing storage communications to run on top of Transmission Control Protocol/Internet Protocol (TCP/IP) with a Storage Area Network (SAN), the
confidentiality of the traffic is protected.
opportunity to sniff network traffic exists.
opportunity for device identity spoofing is eliminated.
storage devices are protected against availability attacks.
By allowing storage communications to run on top of Transmission Control Protocol/Internet Protocol (TCP/IP) with a Storage Area Network (SAN), the opportunity to sniff network traffic exists. A SAN is a dedicated network that connects storage devices, such as disk arrays, tape libraries, or servers, to provide high-speed data access and transfer. A SAN may use different protocols or technologies to communicate with storage devices, such as Fibre Channel, iSCSI, or NFS. By allowing storage communications to run on top of TCP/IP, a common network protocol that supports internet and intranet communications, a SAN may leverage the existing network infrastructure and reduce costs and complexity. However, this also exposes the storage communications to the same risks and threats that affect the network communications, such as sniffing, spoofing, or denial-of-service attacks. Sniffing is the act of capturing or monitoring network traffic, which may reveal sensitive or confidential information, such as passwords, encryption keys, or data. By allowing storage communications to run on top of TCP/IP with a SAN, the confidentiality of the traffic is not protected, unless encryption or other security measures are applied. The opportunity for device identity spoofing is not eliminated, as an attacker may still impersonate a legitimate storage device or server by using a forged or stolen IP address or MAC address. The storage devices are not protected against availability attacks, as an attacker may still disrupt or overload the network or the storage devices by sending malicious or excessive packets or requests.
An auditor carrying out a compliance audit requests passwords that are encrypted in the system to verify that the passwords are compliant with policy. Which of the following is the BEST response to the auditor?
Provide the encrypted passwords and analysis tools to the auditor for analysis.
Analyze the encrypted passwords for the auditor and show them the results.
Demonstrate that non-compliant passwords cannot be created in the system.
Demonstrate that non-compliant passwords cannot be encrypted in the system.
The best response to the auditor is to demonstrate that the system enforces the password policy and does not allow non-compliant passwords to be created. This way, the auditor can verify the compliance without compromising the confidentiality or integrity of the encrypted passwords. Providing the encrypted passwords and analysis tools to the auditor (A) may expose the passwords to unauthorized access or modification. Analyzing the encrypted passwords for the auditor and showing them the results (B) may not be sufficient to convince the auditor of the compliance, as the results could be manipulated or falsified. Demonstrating that non-compliant passwords cannot be encrypted in the system (D) is not a valid response, as encryption does not depend on the compliance of the passwords. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, page 241; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5, page 303.
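Demonstrating that non-compliant passwords cannot be created amounts to showing the auditor a creation-time check that rejects them; the specific rules below are hypothetical assumptions, not taken from the question.

```python
import re

# Hypothetical policy rules: (pattern, requirement description).
POLICY = [
    (r".{12,}",       "at least 12 characters"),
    (r"[A-Z]",        "an uppercase letter"),
    (r"[a-z]",        "a lowercase letter"),
    (r"[0-9]",        "a digit"),
    (r"[^A-Za-z0-9]", "a special character"),
]

def violations(password):
    return [msg for pattern, msg in POLICY if not re.search(pattern, password)]

def is_compliant(password):
    # If account creation rejects any password with violations, no stored
    # password can be non-compliant -- which is what the auditor needs to see.
    return not violations(password)
```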
The type of authorized interactions a subject can have with an object is
control.
permission.
procedure.
protocol.
Permission is the type of authorized interaction a subject can have with an object. A permission is a rule or setting that defines the specific actions or operations a subject can perform on an object, such as read, write, execute, or delete. Permissions are usually granted by the owner or administrator of the object and can be based on the identity, role, or group membership of the subject. Control, procedure, and protocol are not types of authorized interactions between a subject and an object; they relate to other aspects of access control or security. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6, page 355.
Which of the following is an effective method for avoiding magnetic media data remanence?
Degaussing
Encryption
Data Loss Prevention (DLP)
Authentication
Degaussing is an effective method for avoiding magnetic media data remanence, which is the residual representation of data that remains on a storage device after it has been erased or overwritten. Degaussing is a process of applying a strong magnetic field to the storage device, such as a hard disk or a tape, to erase the data and destroy the magnetic alignment of the media. Degaussing can ensure that the data is unrecoverable, even by forensic tools or techniques. Encryption, DLP, and authentication are not methods for avoiding magnetic media data remanence, as they do not erase the data from the storage device, but rather protect it from unauthorized access or disclosure. References: : CISSP All-in-One Exam Guide, Eighth Edition, Chapter 10, page 631. : CISSP For Dummies, 7th Edition, Chapter 9, page 251.
The BEST method of demonstrating a company's security level to potential customers is
a report from an external auditor.
responding to a customer's security questionnaire.
a formal report from an internal auditor.
a site visit by a customer's security team.
The best method of demonstrating a company’s security level to potential customers is a report from an external auditor, who is an independent and qualified third party that evaluates the company’s security policies, procedures, controls, and practices against a set of standards or criteria, such as ISO 27001, NIST, or COBIT. A report from an external auditor provides an objective and credible assessment of the company’s security posture, and may also include recommendations for improvement or certification. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, page 47; CISSP For Dummies, 7th Edition, Chapter 1, page 29.
In Disaster Recovery (DR) and business continuity training, which BEST describes a functional drill?
A full-scale simulation of an emergency and the subsequent response functions
A specific test by response teams of individual emergency response functions
A functional evacuation of personnel
An activation of the backup site
A functional drill is a type of disaster recovery and business continuity training that involves a specific test by response teams of individual emergency response functions, such as fire suppression, medical assistance, or data backup. A functional drill is designed to evaluate the performance, coordination, and effectiveness of the response teams and the emergency procedures. A functional drill is not the same as a full-scale simulation, a functional evacuation, or an activation of the backup site. A full-scale simulation is a type of disaster recovery and business continuity training that involves a realistic and comprehensive scenario of an emergency and the subsequent response functions, involving all the stakeholders, resources, and equipment. A functional evacuation is a type of disaster recovery and business continuity training that involves the orderly and safe movement of personnel from a threatened or affected area to a safe location. An activation of the backup site is a type of disaster recovery and business continuity action that involves the switching of operations from the primary site to the secondary site in the event of a disaster or disruption.
What technique BEST describes antivirus software that detects viruses by watching anomalous behavior?
Signature
Inference
Induction
Heuristic
Heuristic is the technique that best describes antivirus software that detects viruses by watching anomalous behavior. Heuristic is a method of virus detection that analyzes the behavior and characteristics of the program or file, rather than comparing it to a known signature or pattern. Heuristic can detect unknown or new viruses that have not been identified or cataloged by the antivirus software. However, heuristic can also generate false positives, as some legitimate programs or files may exhibit suspicious or unusual behavior. References: What is Heuristic Analysis?; Heuristic Virus Detection.
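A heuristic detector can be pictured as a weighted score over observed behaviors rather than a lookup of known signatures; the behavior names, weights, and threshold below are invented purely to illustrate the idea.

```python
# Toy heuristic scanner: scores a program by the suspicious behaviors it
# exhibits instead of matching a byte-pattern signature.
SUSPICIOUS_WEIGHTS = {
    "writes_to_system_dir": 3,
    "disables_antivirus":   5,
    "self_replicates":      5,
    "opens_network_socket": 1,
}
THRESHOLD = 5  # score at or above this is flagged

def heuristic_score(observed_behaviors) -> int:
    """Sum the weights of every suspicious behavior that was observed."""
    return sum(SUSPICIOUS_WEIGHTS.get(b, 0) for b in observed_behaviors)

def flag_as_malware(observed_behaviors) -> bool:
    return heuristic_score(observed_behaviors) >= THRESHOLD
```

This also shows where false positives come from: a legitimate installer that writes to a system directory and opens a socket scores 4 and passes, but adding one more borderline behavior would push it over the threshold.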
Which Hyper Text Markup Language 5 (HTML5) option presents a security challenge for network data leakage prevention and/or monitoring?
Cross Origin Resource Sharing (CORS)
WebSockets
Document Object Model (DOM) trees
Web Interface Definition Language (IDL)
WebSockets is an HTML5 option that presents a security challenge for network data leakage prevention and/or monitoring, as it enables a bidirectional, full-duplex communication channel between a web browser and a server. WebSockets can bypass the traditional HTTP request-response model and establish a persistent connection that can exchange data in real time. This can pose a risk of data leakage, as the data transmitted over WebSockets may not be inspected or filtered by the network security devices, such as firewalls, proxies, or data loss prevention systems. Cross Origin Resource Sharing (CORS), Document Object Model (DOM) trees, and Web Interface Definition Language (IDL) are not HTML5 options that present a security challenge for network data leakage prevention and/or monitoring, as they are not related to the communication channel between the web browser and the server. References: CISSP For Dummies, 7th Edition, Chapter 4, page 97; Official (ISC)2 CISSP CBK Reference, 5th Edition, Chapter 4, page 211.
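A WebSocket connection begins as an ordinary HTTP request carrying specific upgrade headers (defined in RFC 6455); after the handshake, traffic on the connection is framed WebSocket data that HTTP-oriented DLP filters may no longer inspect. A minimal sketch of how a monitoring device might at least flag the handshake:

```python
def is_websocket_upgrade(http_headers: dict) -> bool:
    """Detect the HTTP handshake that switches a connection to WebSocket.

    Per RFC 6455, the client sends 'Upgrade: websocket' together with a
    'Connection' header that includes the 'upgrade' token.
    """
    upgrade = http_headers.get("Upgrade", "").lower()
    connection = http_headers.get("Connection", "").lower()
    return upgrade == "websocket" and "upgrade" in connection
```

Flagging the handshake tells the monitor which connections are leaving the HTTP request-response model, but inspecting the subsequent framed traffic still requires a WebSocket-aware proxy.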
In Business Continuity Planning (BCP), what is the importance of documenting business processes?
Provides senior management with decision-making tools
Establishes and adopts ongoing testing and maintenance strategies
Defines who will perform which functions during a disaster or emergency
Provides an understanding of the organization's interdependencies
Documenting business processes is an important step in Business Continuity Planning (BCP), as it provides an understanding of the organization’s interdependencies, such as the people, resources, systems, and functions that are involved in each process. This helps to identify the critical processes that need to be prioritized and protected, as well as the potential impact of a disruption on the organization’s operations and objectives. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 10, page 1009; CISSP For Dummies, 7th Edition, Chapter 10, page 339.
The three PRIMARY requirements for a penetration test are
A defined goal, limited time period, and approval of management
A general objective, unlimited time, and approval of the network administrator
An objective statement, disclosed methodology, and fixed cost
A stated objective, liability waiver, and disclosed methodology
The three primary requirements for a penetration test are a defined goal, a limited time period, and an approval of management. A penetration test is a type of security assessment that simulates a malicious attack on an information system or network, with the permission of the owner, to identify and exploit vulnerabilities and evaluate the security posture of the system or network. A penetration test requires a defined goal, which is the specific objective or scope of the test, such as testing a particular system, network, application, or function. A penetration test also requires a limited time period, which is the duration or deadline of the test, such as a few hours, days, or weeks. A penetration test also requires an approval of management, which is the formal authorization and consent from the senior management of the organization that owns the system or network to be tested, as well as the management of the organization that conducts the test. A general objective, unlimited time, and approval of the network administrator are not the primary requirements for a penetration test, as they may not provide a clear and realistic direction, scope, and authorization for the test.
What is the ultimate objective of information classification?
To assign responsibility for mitigating the risk to vulnerable systems
To ensure that information assets receive an appropriate level of protection
To recognize that the value of any item of information may change over time
To recognize the optimal number of classification categories and the benefits to be gained from their use
The ultimate objective of information classification is to ensure that information assets receive an appropriate level of protection in accordance with their importance and sensitivity to the organization. Information classification is the process of assigning labels or categories to information based on criteria such as confidentiality, integrity, availability, and value. Information classification helps the organization to identify the risks and threats to the information, and to apply the necessary controls and safeguards to protect it. Information classification also helps the organization to comply with the legal, regulatory, and contractual obligations related to the information. References: Information Classification - Why it matters?; ISO 27001 & Information Classification: Free 4-Step Guide.
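The link between a label and its protection level can be sketched as a simple mapping from classification to minimum required controls; the labels and control names below are illustrative examples, not a mandated scheme.

```python
# Illustrative mapping: higher classification levels accumulate stronger
# minimum controls. Labels and controls are examples only.
CONTROLS_BY_LEVEL = {
    "public":       set(),
    "internal":     {"access_control"},
    "confidential": {"access_control", "encryption_at_rest"},
    "restricted":   {"access_control", "encryption_at_rest", "audit_logging"},
}

def required_controls(label: str) -> set:
    """Look up the minimum safeguards an asset with this label must receive."""
    if label not in CONTROLS_BY_LEVEL:
        raise ValueError(f"unknown classification label: {label}")
    return CONTROLS_BY_LEVEL[label]
```

The point of the mapping is the exam answer itself: the label exists so that the appropriate level of protection follows from it mechanically.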
Which of the following statements is TRUE for point-to-point microwave transmissions?
They are not subject to interception due to encryption.
Interception only depends on signal strength.
They are too highly multiplexed for meaningful interception.
They are subject to interception by an antenna within proximity.
They are subject to interception by an antenna within proximity. Point-to-point microwave transmissions are line-of-sight media, which means that they can be intercepted by any antenna that is in the direct path of the signal. The interception does not depend on encryption, multiplexing, or signal strength, as long as the antenna is close enough to receive the signal.
Which of the following BEST represents the principle of open design?
Disassembly, analysis, or reverse engineering will reveal the security functionality of the computer system.
Algorithms must be protected to ensure the security and interoperability of the designed system.
A knowledgeable user should have limited privileges on the system to prevent their ability to compromise security capabilities.
The security of a mechanism should not depend on the secrecy of its design or implementation.
This is the principle of open design, which states that the security of a system or mechanism should rely on the strength of its key or algorithm, rather than on the obscurity of its design or implementation. This principle is based on the assumption that the adversary has full knowledge of the system or mechanism, and that the security should still hold even if that is the case. The other options are not consistent with the principle of open design, as they either imply that the security depends on hiding or protecting the design or implementation (A and B), or that the user’s knowledge or privileges affect the security (C). References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 3, page 105; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 3, page 109.
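HMAC is a convenient concrete illustration of open design (Kerckhoffs's principle): the algorithm is fully public, yet an attacker who knows everything except the key cannot forge a valid tag. The key and messages below are placeholders.

```python
import hmac
import hashlib

# The HMAC-SHA256 construction is completely public; only the key is secret.
key = b"secret-key-known-only-to-the-parties"

def tag(message: bytes) -> str:
    """Compute an authentication tag over the message using the secret key."""
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(message: bytes, candidate_tag: str) -> bool:
    """Accept only if the tag matches; constant-time compare avoids timing leaks."""
    return hmac.compare_digest(tag(message), candidate_tag)
```

An adversary with the full source of `tag` but without `key` cannot produce a tag that `verify` accepts for a new message, which is exactly the property option D describes.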
Which one of these risk factors would be the LEAST important consideration in choosing a building site for a new computer facility?
Vulnerability to crime
Adjacent buildings and businesses
Proximity to an airline flight path
Vulnerability to natural disasters
Proximity to an airline flight path is the least important consideration in choosing a building site for a new computer facility, as it poses the lowest risk factor compared to the other options. Proximity to an airline flight path may cause some noise or interference issues, but it is unlikely to result in a major disaster or damage to the computer facility, unless there is a rare case of a plane crash or a terrorist attack. Vulnerability to crime, adjacent buildings and businesses, and vulnerability to natural disasters are more important considerations in choosing a building site for a new computer facility, as they can pose significant threats to the physical security, availability, and integrity of the facility and its assets. Vulnerability to crime can expose the facility to theft, vandalism, or sabotage. Adjacent buildings and businesses can affect the fire safety, power supply, or environmental conditions of the facility. Vulnerability to natural disasters can cause the facility to suffer from floods, earthquakes, storms, or fires. References: Official (ISC)2 CISSP CBK Reference, 5th Edition, Chapter 10, page 543.
Passive Infrared Sensors (PIR) used in a non-climate controlled environment should
reduce the detected object temperature in relation to the background temperature.
increase the detected object temperature in relation to the background temperature.
automatically compensate for variance in background temperature.
detect objects of a specific temperature independent of the background temperature.
Passive Infrared Sensors (PIR) are devices that detect motion by sensing the infrared radiation emitted by objects. In a non-climate controlled environment, the background temperature may vary due to weather, seasons, or other factors. This may affect the sensitivity and accuracy of the PIR sensors, as they may not be able to distinguish between the object and the background. Therefore, the PIR sensors should have a feature that automatically adjusts the threshold or baseline of the background temperature to avoid false alarms or missed detections.
A and B are incorrect because they are not feasible or desirable solutions. Reducing or increasing the detected object temperature in relation to the background temperature would require altering the physical properties of the object or the sensor, which may not be possible or practical. Moreover, this may also affect the performance or functionality of the object or the sensor.
D is incorrect because it is not realistic or reliable. Detecting objects of a specific temperature independent of the background temperature would require the PIR sensors to have a very high resolution and precision, which may not be available or affordable. Moreover, this may also limit the range and scope of the PIR sensors, as they may not be able to detect objects that have different temperatures or emit different amounts of infrared radiation.
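The automatic compensation described in option C can be sketched as a sensor that tracks the ambient temperature with an exponential moving average and triggers only on a sudden jump above that adapting baseline. The adaptation rate and trigger threshold below are illustrative constants, not values from any real sensor.

```python
# Sketch of automatic background-temperature compensation for a PIR sensor.
ALPHA = 0.1          # baseline adaptation rate (illustrative)
TRIGGER_DELTA = 4.0  # degrees above baseline that counts as a detection

class CompensatingPIR:
    def __init__(self, initial_background: float):
        self.baseline = initial_background

    def read(self, measured: float) -> bool:
        """Return True if this reading indicates an object, then adapt."""
        detected = (measured - self.baseline) > TRIGGER_DELTA
        # Slowly track ambient temperature so seasonal or daily drift in a
        # non-climate controlled environment does not cause false alarms.
        self.baseline += ALPHA * (measured - self.baseline)
        return detected
```

Gradual drift moves the baseline along with the environment and is ignored, while an abrupt warm object still exceeds the threshold and is detected.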
Which of the following is TRUE about Disaster Recovery Plan (DRP) testing?
Operational networks are usually shut down during testing.
Testing should continue even if components of the test fail.
The company is fully prepared for a disaster if all tests pass.
Testing should not be done until the entire disaster plan can be tested.
Testing is a vital part of the Disaster Recovery Plan (DRP) process, as it validates the effectiveness and feasibility of the plan, identifies gaps and weaknesses, and provides opportunities for improvement and training. Testing should continue even if components of the test fail, as this will help to evaluate the impact of the failure, the root cause of the problem, and the possible solutions or alternatives. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 10, page 1035; CISSP For Dummies, 7th Edition, Chapter 10, page 351.
Who must approve modifications to an organization's production infrastructure configuration?
Technical management
Change control board
System operations
System users
A change control board (CCB) is a group of stakeholders who are responsible for reviewing, approving, and monitoring changes to an organization’s production infrastructure configuration. A production infrastructure configuration is the set of hardware, software, network, and environmental components that support the operation of an information system. Changes to the production infrastructure configuration can affect the security, performance, availability, and functionality of the system. Therefore, changes must be carefully planned, tested, documented, and authorized before implementation. A CCB ensures that changes are aligned with the organization’s objectives, policies, and standards, and that changes do not introduce any adverse effects or risks to the system or the organization. A CCB is not the same as technical management, system operations, or system users, who may be involved in the change management process, but do not have the authority to approve changes.
What is an effective practice when returning electronic storage media to third parties for repair?
Ensuring the media is not labeled in any way that indicates the organization's name.
Disassembling the media and removing parts that may contain sensitive data.
Physically breaking parts of the media that may contain sensitive data.
Establishing a contract with the third party regarding the secure handling of the media.
When returning electronic storage media to third parties for repair, it is important to ensure that the media does not contain any sensitive or confidential data that could be compromised or disclosed. One way to do this is to establish a contract with the third party that specifies the security requirements and obligations for the handling, storage, and disposal of the media. This contract should also include provisions for auditing, reporting, and remediation in case of any security incidents or breaches. The other options are not effective practices, as they either do not protect the data adequately (A), damage the media unnecessarily (B and C), or are not feasible or practical in some cases. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, page 239; CISSP For Dummies, 7th Edition, Chapter 2, page 43.
Which of the following is an essential element of a privileged identity lifecycle management?
Regularly perform account re-validation and approval
Account provisioning based on multi-factor authentication
Frequently review performed activities and request justification
Account information to be provided by supervisor or line manager
A privileged identity lifecycle management is a process of managing the access rights and activities of users who have elevated permissions to access sensitive data or resources in an organization. An essential element of a privileged identity lifecycle management is to regularly perform account re-validation and approval, which means verifying that the privileged users still need their access rights and have them approved by the appropriate authority. This can help prevent unauthorized or excessive access, reduce the risk of insider threats, and ensure compliance with policies and regulations. Account provisioning based on multi-factor authentication, frequently review performed activities and request justification, and account information to be provided by supervisor or line manager are also important aspects of a privileged identity lifecycle management, but they are not as essential as account re-validation and approval. References: Official (ISC)2 CISSP CBK Reference, 5th Edition, Chapter 5, page 283.
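Periodic re-validation can be sketched as a simple sweep that flags privileged accounts whose last approval is older than the review interval; the field names and the 90-day interval below are assumptions for illustration.

```python
from datetime import date, timedelta

# Hypothetical review interval; real policies vary by organization.
REVALIDATION_INTERVAL = timedelta(days=90)

def accounts_needing_revalidation(accounts, today: date):
    """Return the names of privileged accounts overdue for re-approval."""
    return [
        a["name"]
        for a in accounts
        if today - a["last_validated"] > REVALIDATION_INTERVAL
    ]
```

Running such a sweep on a schedule, and suspending access for accounts that are not re-approved, is the "regularly perform account re-validation and approval" element named in the answer.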
Which of the following statements is TRUE of black box testing?
Only the functional specifications are known to the test planner.
Only the source code and the design documents are known to the test planner.
Only the source code and functional specifications are known to the test planner.
Only the design documents and the functional specifications are known to the test planner.
Black box testing is a method of software testing that does not require any knowledge of the internal structure or code of the software. The test planner only knows the functional specifications, which describe what the software is supposed to do, and tests the software based on the expected inputs and outputs. Black box testing is useful for finding errors in the functionality, usability, or performance of the software, but it cannot detect errors in the code or design. White box testing, on the other hand, requires the test planner to have access to the source code and the design documents, and tests the software based on the internal logic and structure. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 21, page 1313; CISSP For Dummies, 7th Edition, Chapter 8, page 215.
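The input/output-only character of black box testing can be made concrete with a tiny example. The function under test and its specification ("orders of 100 or more receive a 10% discount") are invented stand-ins; the key point is that the test is derived from the specification alone.

```python
def discount(price: float, quantity: int) -> float:
    # In a real black-box test this implementation is hidden from the
    # test planner; only its specified behavior is known.
    return price * 0.9 if quantity >= 100 else price

def test_discount_black_box():
    """Cases derived purely from the functional specification's
    inputs and expected outputs, including the boundary at 100."""
    assert discount(200.0, 100) == 180.0  # boundary: discount applies
    assert discount(200.0, 99) == 200.0   # boundary: no discount

test_discount_black_box()
```

Notice the test exercises the boundary condition stated in the specification but never inspects branches or internal state, which is exactly what distinguishes it from white box testing.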
An organization is designing a large enterprise-wide document repository system. They plan to have several different classification level areas with increasing levels of controls. The BEST way to ensure document confidentiality in the repository is to
encrypt the contents of the repository and document any exceptions to that requirement.
utilize an Intrusion Detection System (IDS) set to drop connections if too many requests for documents are detected.
keep individuals with access to high security areas from saving those documents into lower security areas.
require individuals with access to the system to sign Non-Disclosure Agreements (NDA).
The best way to ensure document confidentiality in the repository is to encrypt the contents of the repository and document any exceptions to that requirement. Encryption is the process of transforming the information into an unreadable form using a secret key or algorithm. Encryption protects the confidentiality of the information by preventing unauthorized access or disclosure, even if the repository is compromised or breached. Encryption also provides integrity and authenticity of the information by ensuring that it has not been modified or tampered with. Documenting any exceptions to the encryption requirement is also important to justify the reasons and risks for not encrypting certain information, and to apply alternative controls if needed. References: What Is a Document Repository and What Are the Benefits of Using One; What is a document repository and why you should have one.
Which of the following is a security feature of Global Systems for Mobile Communications (GSM)?
It uses a Subscriber Identity Module (SIM) for authentication.
It uses encrypting techniques for all communications.
The radio spectrum is divided with multiple frequency carriers.
The signal is difficult to read as it provides end-to-end encryption.
A security feature of Global Systems for Mobile Communications (GSM) is that it uses a Subscriber Identity Module (SIM) for authentication. A SIM is a smart card that contains the subscriber’s identity, phone number, network information, and encryption keys. The SIM is inserted into the mobile device and communicates with the network to authenticate the subscriber and establish a secure connection. The SIM also stores the subscriber’s contacts, messages, and preferences. The SIM provides security by preventing unauthorized access to the subscriber’s account and data, and by allowing the subscriber to easily switch devices without losing their information. References: GSM - Security and Encryption; Introduction to GSM security.
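The SIM's role in authentication is a challenge-response exchange: the network sends a random challenge (RAND) and the SIM answers with a response (SRES) derived from its secret key Ki, which never leaves the card. The sketch below is greatly simplified; real GSM uses the operator's A3 algorithm, and HMAC-SHA256 stands in here purely for illustration.

```python
import hmac
import hashlib

def sim_response(ki: bytes, rand: bytes) -> bytes:
    """Simplified stand-in for A3: derive a 32-bit SRES from Ki and RAND.
    (Real GSM's SRES is also 32 bits; the algorithm here is NOT A3.)"""
    return hmac.new(ki, rand, hashlib.sha256).digest()[:4]

def network_authenticates(ki_on_record: bytes, rand: bytes, sres: bytes) -> bool:
    """The network computes the expected SRES from its copy of Ki and compares."""
    return hmac.compare_digest(sim_response(ki_on_record, rand), sres)
```

Because only Ki-derived responses are exchanged, an eavesdropper who captures RAND and SRES pairs still cannot impersonate the subscriber without the key stored on the SIM.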
What is the MOST effective countermeasure to a malicious code attack against a mobile system?
Sandbox
Change control
Memory management
Public-Key Infrastructure (PKI)
A sandbox is a security mechanism that isolates a potentially malicious code or application from the rest of the system, preventing it from accessing or modifying any sensitive data or resources. A sandbox can be implemented at the operating system, application, or network level, and can provide a safe environment for testing, debugging, or executing untrusted code. A sandbox is the most effective countermeasure to a malicious code attack against a mobile system, as it can prevent the code from spreading, stealing, or destroying any information on the device. Change control, memory management, and PKI are not directly related to preventing or mitigating malicious code attacks on mobile systems. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8, page 507.
In a financial institution, who has the responsibility for assigning the classification to a piece of information?
Chief Financial Officer (CFO)
Chief Information Security Officer (CISO)
Originator or nominated owner of the information
Department head responsible for ensuring the protection of the information
In a financial institution, the responsibility for assigning the classification to a piece of information belongs to the originator or nominated owner of the information. The originator is the person who creates or generates the information, and the nominated owner is the person who is assigned the accountability and authority for the information by the management. The originator or nominated owner is the best person to determine the value and sensitivity of the information, and to assign the appropriate classification level based on the criteria and guidelines established by the organization. The originator or nominated owner is also responsible for reviewing and updating the classification as needed, and for ensuring that the information is handled and protected according to its classification. References: Information Classification Policy; Information Classification and Handling Policy.
To prevent inadvertent disclosure of restricted information, which of the following would be the LEAST effective process for eliminating data prior to the media being discarded?
Multiple-pass overwriting
Degaussing
High-level formatting
Physical destruction
The least effective process for eliminating data prior to the media being discarded is high-level formatting. High-level formatting is the process of preparing a storage device, such as a hard disk or a flash drive, for data storage by creating a file system and marking the bad sectors. However, high-level formatting does not erase the data that was previously stored on the device. The data can still be recovered using data recovery tools or forensic techniques. To prevent inadvertent disclosure of restricted information, more secure methods of data sanitization should be used, such as multiple-pass overwriting, degaussing, or physical destruction. References: Delete Sensitive Data before Discarding Your Media; Best Practices for Media Destruction.
An organization allows ping traffic into and out of their network. An attacker has installed a program on the network that uses the payload portion of the ping packet to move data into and out of the network. What type of attack has the organization experienced?
Data leakage
Unfiltered channel
Data emanation
Covert channel
The organization has experienced a covert channel attack, which is a technique of hiding or transferring data within a communication channel that is not intended for that purpose. In this case, the attacker has used the payload portion of the ping packet, which is normally used to carry diagnostic data, to move data into and out of the network. This way, the attacker can bypass the network security controls and avoid detection. Data leakage (A) is a general term for the unauthorized disclosure of sensitive or confidential data, which may or may not involve a covert channel. Unfiltered channel (B) is a term for a communication channel that does not have any security mechanisms or filters applied to it, which may allow unauthorized or malicious traffic to pass through. Data emanation (C) is a term for the unintentional radiation or emission of electromagnetic signals from electronic devices, which may reveal sensitive or confidential information to eavesdroppers. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4, page 179; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 4, page 189.
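The mechanics of the attack follow directly from the ICMP echo message format (RFC 792): the 8-byte header is followed by an arbitrary payload that echo replies simply mirror back, so smuggled data rides where benign timing/pattern bytes normally sit. The sketch below only constructs a packet to show the layout; nothing is sent.

```python
import struct

def icmp_checksum(data: bytes) -> int:
    """Standard Internet checksum: one's-complement sum of 16-bit words."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack(f"!{len(data) // 2}H", data))
    total = (total >> 16) + (total & 0xFFFF)
    total += total >> 16
    return ~total & 0xFFFF

def build_echo_request(payload: bytes, ident: int = 1, seq: int = 1) -> bytes:
    """Build an ICMP Echo Request whose payload carries arbitrary bytes.
    A covert channel puts smuggled data in the payload field; to a filter
    that only checks the ICMP type, this is an ordinary ping."""
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)  # type 8 = echo request
    checksum = icmp_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, checksum, ident, seq) + payload
```

Because the packet is a syntactically valid ping, defenses that merely allow or block ICMP by type cannot distinguish it; detecting the channel requires inspecting payload contents, sizes, or traffic patterns.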
Which of the following wraps the decryption key of a full disk encryption implementation and ties the hard disk drive to a particular device?
Trusted Platform Module (TPM)
Preboot eXecution Environment (PXE)
Key Distribution Center (KDC)
Simple Key-Management for Internet Protocol (SKIP)
A Trusted Platform Module (TPM) is a hardware device that wraps the decryption key of a full disk encryption implementation and ties the hard disk drive to a particular device. A TPM is a secure cryptoprocessor that generates, stores, and protects cryptographic keys and other sensitive data. A TPM can be used to implement full disk encryption, which is a technique that encrypts the entire contents of a hard disk drive, making it unreadable without the correct decryption key. A TPM can wrap the decryption key, which means that it encrypts the key with another key that is stored in the TPM and can only be accessed by authorized software. A TPM can also tie the hard disk drive to a particular device, which means that it verifies the identity and integrity of the device before allowing the decryption of the hard disk drive. This prevents unauthorized access to the data even if the hard disk drive is physically removed and attached to another device. A Preboot eXecution Environment (PXE), a Key Distribution Center (KDC), and a Simple Key-Management for Internet Protocol (SKIP) are not devices or techniques that wrap the decryption key of a full disk encryption implementation and tie the hard disk drive to a particular device. A PXE is a protocol that enables a device to boot from a network server without a local operating system or storage device. A KDC is a server that issues and manages cryptographic keys and tickets for authentication and encryption in a Kerberos system. A SKIP is a protocol that provides secure key exchange and authentication for IPsec.
An advantage of link encryption in a communications network is that it
makes key management and distribution easier.
protects data from start to finish through the entire network.
improves the efficiency of the transmission.
encrypts all information, including headers and routing information.
An advantage of link encryption in a communications network is that it encrypts all information, including headers and routing information. Link encryption is a type of encryption that is applied at the data link layer of the OSI model, and encrypts the entire packet or frame as it travels from one node to another. Link encryption can protect the confidentiality and integrity of the data, as well as the identity and location of the nodes. Link encryption does not make key management and distribution easier, as it requires each node to have a separate key for each link. Link encryption does not protect data from start to finish through the entire network, as it only encrypts the data while it is in transit, and decrypts it at each node. Link encryption does not improve the efficiency of the transmission, as it adds overhead and latency to the communication. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7, page 419.
Which of the following is considered best practice for preventing e-mail spoofing?
Spam filtering
Cryptographic signature
Uniform Resource Locator (URL) filtering
Reverse Domain Name Service (DNS) lookup
The best practice for preventing e-mail spoofing is to use cryptographic signatures. E-mail spoofing is a technique that involves forging the sender’s address or identity in an e-mail message, usually to trick the recipient into opening a malicious attachment, clicking on a phishing link, or disclosing sensitive information. Cryptographic signatures are digital signatures that are created by encrypting the e-mail message or a part of it with the sender’s private key, and attaching it to the e-mail message. Cryptographic signatures can be used to verify the authenticity and integrity of the sender and the message, and to prevent e-mail spoofing. References: What is Email Spoofing?; How to Prevent Email Spoofing.
How can a forensic specialist exclude from examination a large percentage of operating system files residing on a copy of the target system?
Take another backup of the media in question then delete all irrelevant operating system files.
Create a comparison database of cryptographic hashes of the files from a system with the same operating system and patch level.
Generate a message digest (MD) or secure hash on the drive image to detect tampering of the media being examined.
Discard harmless files for the operating system, and known installed programs.
A forensic specialist can exclude from examination a large percentage of operating system files residing on a copy of the target system by creating a comparison database of cryptographic hashes of the files from a system with the same operating system and patch level. This method is also known as known file filtering or file signature analysis. It allows the forensic specialist to quickly identify and eliminate the files that are part of the standard operating system installation and focus on the files that are unique or relevant to the investigation. This makes the process of exclusion much faster and more accurate than manually deleting or discarding files. References: Computer Forensics: Forensic Techniques, Part 1 [Updated 2019]; Point Checklist: cissp book.
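Known-file filtering reduces to set membership over hashes: build the database from a clean reference system, then keep only the evidence files whose hashes are not in it. The file names and contents below are stand-ins for illustration.

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Cryptographic hash of a file's contents, used as its fingerprint."""
    return hashlib.sha256(data).hexdigest()

# Comparison database built from a clean system with the same OS and
# patch level (contents here are illustrative stand-ins).
known_good = {
    sha256_of(b"standard OS library v1.0"),
    sha256_of(b"standard OS kernel v1.0"),
}

def files_to_examine(evidence_files: dict) -> list:
    """Exclude files whose hashes match the known-good database; only
    unknown or modified files remain for manual examination."""
    return [name for name, data in evidence_files.items()
            if sha256_of(data) not in known_good]
```

Note that a trojaned copy of a system file hashes differently from the reference copy, so it is not excluded; tampering surfaces rather than hides.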
An external attacker has compromised an organization's network security perimeter and installed a sniffer onto an inside computer. Which of the following is the MOST effective layer of security the organization could have implemented to mitigate the attacker's ability to gain further information?
Implement packet filtering on the network firewalls
Require strong authentication for administrators
Install Host Based Intrusion Detection Systems (HIDS)
Implement logical network segmentation at the switches
The most effective layer of security the organization could have implemented to mitigate the attacker’s ability to gain further information is to implement logical network segmentation at the switches. Logical network segmentation is the process of dividing a network into smaller subnetworks or segments based on criteria such as function, role, or access level. This way, the organization can isolate the traffic and data of different segments, and limit the exposure and impact of an attack. If the attacker has installed a sniffer onto an inside computer, logical network segmentation can prevent the sniffer from capturing the traffic and data of other segments, thus reducing the information leakage. The other options are not as effective as logical network segmentation, as they either do not prevent the sniffer from capturing the traffic and data (A and B), or do not detect or stop the attack (C). References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4, page 163; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 4, page 173.
What would be the PRIMARY concern when designing and coordinating a security assessment for an Automatic Teller Machine (ATM) system?
Physical access to the electronic hardware
Regularly scheduled maintenance process
Availability of the network connection
Processing delays
The primary concern when designing and coordinating a security assessment for an Automatic Teller Machine (ATM) system is the availability of the network connection. An ATM system relies on a network connection to communicate with the bank’s servers and process the transactions of the customers. If the network connection is disrupted, degraded, or compromised, the ATM system may not be able to function properly, or may expose the customers’ data or money to unauthorized access or theft. Therefore, a security assessment for an ATM system should focus on ensuring that the network connection is reliable, resilient, and secure, and that there are backup or alternative solutions in case of network failure. References: ATM Security: Best Practices for Automated Teller Machines; ATM Security: A Comprehensive Guide.
The goal of software assurance in application development is to
enable the development of High Availability (HA) systems.
facilitate the creation of Trusted Computing Base (TCB) systems.
prevent the creation of vulnerable applications.
encourage the development of open source applications.
The goal of software assurance in application development is to prevent the creation of vulnerable applications. Software assurance is the process of ensuring that the software is designed, developed, and maintained in a secure, reliable, and trustworthy manner. Software assurance involves applying security principles, standards, and best practices throughout the software development life cycle, such as security requirements, design, coding, testing, deployment, and maintenance. Software assurance aims to prevent or reduce the introduction of vulnerabilities, defects, or errors in the software that could compromise its security, functionality, or quality. References: Software Assurance; Software Assurance - OWASP Cheat Sheet Series.
The BEST way to check for good security programming practices, as well as auditing for possible backdoors, is to conduct
log auditing.
code reviews.
impact assessments.
static analysis.
Code reviews are the best way to check for good security programming practices, as well as auditing for possible backdoors, in a software system. Code reviews involve examining the source code of the software for any errors, vulnerabilities, or malicious code that could compromise the security or functionality of the system. Code reviews can be performed manually by human reviewers, or automatically by tools that scan and analyze the code. The other options are not as effective as code reviews, as they either do not examine the source code directly (A and C), or only detect syntactic or semantic errors, not logical or security flaws (D). References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8, page 463; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 8, page 555.
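Part of a code review can be automated with simple pattern matching over the source. The sketch below scans source lines for patterns that often indicate poor security practice or a planted backdoor; the patterns, sample source, and suspicious strings are illustrative assumptions only, and a real review still requires human judgment of logic and intent.

```python
# Minimal sketch: flag source lines matching patterns associated with
# insecure coding practices or possible backdoors. Patterns are examples.
import re

SUSPICIOUS_PATTERNS = {
    "hardcoded credential": re.compile(
        r"(?i)(password|secret|api_key)\s*=\s*['\"][^'\"]+['\"]"),
    "possible backdoor check": re.compile(
        r"(?i)==\s*['\"](letmein|backdoor|bypass)['\"]"),
    "dynamic code execution": re.compile(r"\beval\s*\(|\bexec\s*\("),
}

def review(source: str) -> list:
    """Return (line_number, finding) pairs for lines matching a pattern."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for label, pattern in SUSPICIOUS_PATTERNS.items():
            if pattern.search(line):
                findings.append((lineno, label))
    return findings

sample = '''\
password = "hunter2"
if user_input == "letmein":
    grant_admin()
result = eval(expression)
'''
findings = review(sample)
```

Such a scan only catches textual red flags; logical backdoors (e.g. a deliberately weakened comparison) still need a human reviewer reading the code, which is why code review as a whole, not the tooling alone, is the best answer.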
TESTED 22 Dec 2024
Copyright © 2014-2024 DumpsBuddy. All Rights Reserved