Storage

PDL vs APD

- Permanent Device Loss (PDL): a condition that occurs when a storage device permanently fails or is administratively removed or excluded. The device is not expected to become available again.
- All Paths Down (APD): a condition that occurs when a storage device becomes inaccessible to the host and no paths to the device are available. ESXi treats this as a transient condition.

iSCSI Authentication

iSCSI storage systems authenticate an initiator by a name and key pair. ESXi supports the CHAP authentication protocol. To use CHAP authentication, the ESXi host and the iSCSI storage system must have CHAP enabled and have common credentials.
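
For illustration, a minimal pyVmomi sketch that enables unidirectional CHAP on a software iSCSI adapter; the vCenter address, credentials, adapter name, initiator name, and secret are placeholders, and the disableSslCertValidation argument assumes a recent pyVmomi release.

    from pyVim.connect import SmartConnect
    from pyVmomi import vim

    si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                      pwd="***", disableSslCertValidation=True)
    content = si.RetrieveContent()
    # First host in the inventory, for illustration only.
    host = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True).view[0]

    # Require the initiator to authenticate itself with CHAP.
    auth = vim.host.InternetScsiHba.AuthenticationProperties(
        chapAuthEnabled=True,
        chapAuthenticationType="chapRequired",
        chapName="iqn.1998-01.com.vmware:esx01",   # placeholder initiator name
        chapSecret="shared-secret")                # placeholder secret
    host.configManager.storageSystem.UpdateInternetScsiAuthenticationProperties(
        iScsiHbaDevice="vmhba64", authenticationProperties=auth)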

VAAI

vSphere APIs for Array Integration. Also referred to as hardware acceleration or hardware offload APIs, this is a set of APIs and SCSI commands used to offload certain functions that are performed more efficiently on the storage array.

VASA

vSphere APIs for Storage Awareness. A set of vSphere APIs designed to let storage vendors advertise proprietary features and capabilities of the storage device to the hypervisor, and to verify whether the storage device meets the VM's requirements.

VOMA

vSphere On-disk Metadata Analyzer. Identifies and fixes incidents of metadata corruption. Its use is recommended when you experience problems with a VMFS datastore or a virtual flash resource. It supports only single-extent datastores.

4K native format with Software Emulation (4Kn)

In 4Kn devices, both physical and logical sectors are 4096 bytes in length. ESXi detects and registers 4Kn devices and automatically emulates them as 512e, so guest operating systems always see a 512n device. Only local SAS/SATA HDDs together with UEFI boot are supported, and only the NMP can claim them.

NVMe Namespaces

In the NVMe storage array, a namespace is a storage volume backed by some quantity of non-volatile memory. In the context of ESXi, the namespace is analogous to a storage device, or LUN.

NVMe over PCIe

In this configuration, your ESXi host uses a PCIe storage adapter to access one or more local NVMe storage devices.

NFS3 vs NFS4.1

- NFS4.1 supports Kerberos security mechanisms, together with AES256 and AES128 encryption.
- NFS3 uses VMware proprietary client-side locking; NFS4.1 uses share reservations.
- NFS4.1 supports multipathing.
- NFS4.1 does not support Storage DRS, Storage I/O Control, or Site Recovery Manager.

512-byte emulation format (512e)

512e is the advanced format in which the physical sector size is 4096 bytes, but the logical sector emulates a 512-byte sector size. It is used to support legacy applications and operating systems.

SAN fabric

A SAN topology with at least one switch present.

Explain VMFS metadata

A VMFS datastore maintains a consistent view of all the mapping information regarding the datastore (its metadata). The metadata is updated during operations such as:
- Creating, growing, or locking a VM file.
- Changing file attributes.
- Changing the VM power state.
- vMotion migrations.
VMFS uses special locking mechanisms to protect this data and prevent multiple hosts from writing to it concurrently.

Pluggable Storage Architecture (PSA)

A VMkernel layer that coordinates software modules such as NMP, HPP and third-party MPPs for multipathing operations.

NVMe Controllers

A controller is associated with one or several NVMe namespaces and provides an access path between the ESXi host and the namespaces in the storage array.

WWPN (World Wide Port Name)

A globally unique identifier for a port that allows certain applications to access the port. The FC switches discover the WWPN of a device or host and assign a port address to the device.

N-Port ID Virtualization (NPIV)

A single FC HBA port (N-port) can register with the fabric by using several WWPNs. This method allows an N-port to claim multiple fabric addresses, each of which appears as a unique entity. When ESXi hosts use a SAN, these multiple, unique identifiers allow the assignment of WWNs to individual virtual machines as part of their configuration. NPIV can be used only for virtual machines with RDM disks. The fabric switches must be NPIV-aware.

iSCSI node

A single discoverable entity on the iSCSI SAN, such as an initiator or a target, represents an iSCSI node. Each node has a node name.

Explain VMFS

A special high-performance file system format that is optimized for storing virtual machines. Both VMFS5 and VMFS6 are currently supported; VMFS3 datastores are automatically upgraded to VMFS5 when mounted. VMFS is set up on block-based storage devices and can span multiple physical devices.
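
For illustration, a minimal pyVmomi sketch that lists the datastores visible to vCenter and, for VMFS datastores, their on-disk version and number of extents; the connection details are placeholders.

    from pyVim.connect import SmartConnect
    from pyVmomi import vim

    si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                      pwd="***", disableSslCertValidation=True)
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datastore], True)
    for ds in view.view:
        if isinstance(ds.info, vim.host.VmfsDatastoreInfo):
            # VMFS datastores expose their version and backing extents.
            print(f"{ds.name}: VMFS {ds.info.vmfs.version}, "
                  f"{len(ds.info.vmfs.extent)} extent(s)")
        else:
            print(f"{ds.name}: {ds.summary.type}")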

SAN

A specialized high-speed network that connects host servers to high-performance storage subsystems.

Active-passive storage system

A system in which one storage processor is actively providing access to a given LUN. The other processors act as a backup for the LUN and can be actively providing access to other LUN I/O. I/O can be successfully sent only to an active port for a given LUN. If access through the active storage port fails, one of the passive storage processors can be activated by the servers accessing it.

iSCSI name

A worldwide unique name for identifying the node. iSCSI uses the iSCSI Qualified Name (IQN) and Extended Unique Identifier (EUI).

Asymmetric Logical Unit Access (ALUA)

ALUA-compliant storage systems provide different levels of access per port. With ALUA, the host can determine the states of target ports and prioritize paths. The host uses some of the active paths as primary, and uses others as secondary.

Virtual Flash Resource

Local flash devices on an ESXi host can be aggregated into a single virtualized caching layer called the virtual flash resource. VFFS is a derivative of VMFS that is optimized for flash devices and is used to group the physical flash devices into a single caching resource pool.

NFS networking considerations

Configure a VMkernel port group for NFS storage. If L3 switches are used, hosts and NFS storage arrays must be on different subnets.

Core Dump Files

A core dump file can be configured from esxcli and designates a file on a datastore for the host to use for core dumps.

iSCSI dynamic discovery

Dynamic discovery obtains a list of accessible targets from the iSCSI storage system.
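
For illustration, a minimal pyVmomi sketch that adds a Send Targets address to a software iSCSI adapter and rescans; the adapter name and target address are placeholders.

    from pyVim.connect import SmartConnect
    from pyVmomi import vim

    si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                      pwd="***", disableSslCertValidation=True)
    content = si.RetrieveContent()
    host = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True).view[0]

    storage = host.configManager.storageSystem
    # Dynamic (Send Targets) discovery: give the adapter one portal address and
    # let it query the array for the full list of accessible targets.
    portal = vim.host.InternetScsiHba.SendTarget(address="192.168.10.50", port=3260)
    storage.AddInternetScsiSendTargets(iScsiHbaDevice="vmhba64", targets=[portal])
    storage.RescanAllHba()   # make newly discovered devices visible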

Explain NFS

ESXi supports both NFS3 and NFS4.1 datastores. The latter brings multipathing and Kerberos authentication, but has no support for Storage DRS, Storage I/O Control, or Site Recovery Manager. The two versions use different locking mechanisms, which can wreak havoc if both are used to mount the same share simultaneously.
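
For illustration, a minimal pyVmomi sketch that mounts an NFS export as a datastore on a single host; the server, export path, and datastore name are placeholders.

    from pyVim.connect import SmartConnect
    from pyVmomi import vim

    si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                      pwd="***", disableSslCertValidation=True)
    content = si.RetrieveContent()
    host = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True).view[0]

    spec = vim.host.NasVolume.Specification(
        remoteHost="nfs01.example.com",
        remotePath="/exports/vmstore",
        localPath="nfs-vmstore",        # datastore name on the host
        accessMode="readWrite",
        type="NFS")                     # "NFS41" would request an NFS4.1 mount
    host.configManager.datastoreSystem.CreateNasDatastore(spec)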

What are datastores?

ESXi uses datastores to store virtual disks. Also supports objects like ISOs and templates.

Datastore signatures

Each VMFS datastore created on a storage device has a unique signature (UUID) stored in the file system superblock. When a storage device contains a VMFS datastore copy, you can mount the datastore with the existing signature or assign a new signature. Keep signature if you want to maintain synchronized copies of VMs at a secondary site. Resignature if you want to retain the data stored on the VMFS datastore copy.
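
For illustration, a hedged pyVmomi sketch that lists unresolved VMFS copies detected by a host and resignatures one of them; the device path is a placeholder and the resignature call issues an asynchronous task.

    from pyVim.connect import SmartConnect
    from pyVmomi import vim

    si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                      pwd="***", disableSslCertValidation=True)
    content = si.RetrieveContent()
    host = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True).view[0]

    ds_system = host.configManager.datastoreSystem
    # VMFS copies whose on-disk signature does not match the backing device.
    for vol in ds_system.QueryUnresolvedVmfsVolumes():
        print(vol.vmfsLabel, [extent.devicePath for extent in vol.extent])

    # Write a new signature so the copy can be mounted alongside the original.
    spec = vim.host.UnresolvedVmfsResignatureSpec(
        extentDevicePath=["/vmfs/devices/disks/naa.600xxxxxxxxxxxxx:1"])
    ds_system.ResignatureUnresolvedVmfsVolume_Task(resolutionSpec=spec)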

Expand an Existing Datastore vs Add an Extent

Expanding increases the size of an existing datastore extent, limited by the free space on the backing storage device. Adding an extent increases the capacity of an existing VMFS datastore by adding new storage devices to the datastore.
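
For illustration, a hedged pyVmomi sketch that asks a host how a datastore could be expanded into free space on its backing device and applies the first returned option; the datastore name is a placeholder.

    from pyVim.connect import SmartConnect
    from pyVmomi import vim

    si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                      pwd="***", disableSslCertValidation=True)
    content = si.RetrieveContent()
    host = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True).view[0]

    ds = next(d for d in host.datastore if d.name == "datastore1")
    ds_system = host.configManager.datastoreSystem

    # Each option carries a ready-made expand spec for free space on the backing device.
    options = ds_system.QueryVmfsDatastoreExpandOptions(datastore=ds)
    if options:
        ds_system.ExpandVmfsDatastore(datastore=ds, spec=options[0].spec)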

NVMe Subsystem

Generally, an NVMe subsystem is a storage array that might include several NVMe controllers, several namespaces, a non-volatile memory storage medium, and an interface between the controller and non-volatile memory storage medium. The subsystem is identified by a subsystem NVMe Qualified Name (NQN).

Multipathing

Multipathing allows you to have more than one physical path from the ESXi host to a LUN on a storage system. If any component of the path fails, the host selects another available path for I/O. The process of detecting a failed path and switching to another is called path failover.
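
For illustration, a minimal pyVmomi sketch that prints every LUN's path-selection policy and the state of each path as the host reports them; connection details are placeholders.

    from pyVim.connect import SmartConnect
    from pyVmomi import vim

    si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                      pwd="***", disableSslCertValidation=True)
    content = si.RetrieveContent()
    host = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True).view[0]

    multipath = host.configManager.storageSystem.storageDeviceInfo.multipathInfo
    for lun in multipath.lun:
        # Path states are reported as active, standby, disabled, or dead.
        states = ", ".join(f"{p.name}={p.state}" for p in lun.path)
        print(lun.id, "policy:", lun.policy.policy, "paths:", states)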

NVMe

NVMe is a method for connecting and transferring data between a host and a target storage system. NVMe is designed for use with faster storage media equipped with non-volatile memory, such as flash devices. This type of storage can achieve low latency, low CPU usage, and high performance, and generally serves as an alternative to SCSI storage.

Explain the VMFS locking mechanisms

Newly formatted VMFS5/VMFS6 datastores use the ATS-only mechanism; older datastores may use ATS+SCSI.
- ATS: hardware-assisted locking that supports discrete locking per disk sector.
- SCSI reservations: lock an entire storage device while an operation that requires metadata protection is performed. The reservation is released when the operation completes.

RDM

Raw Device Mapping. A mapping file in a separate VMFS volume that acts as a proxy for a raw physical storage device. With the RDM, a VM can access and use the storage device directly. The RDM contains metadata for managing and redirecting disk access to the physical device.
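
For illustration, a hedged pyVmomi sketch that attaches a raw LUN to an existing VM through an RDM mapping file; the VM name, device path, controller key, unit number, and capacity are placeholders that would have to match the real VM and LUN.

    from pyVim.connect import SmartConnect
    from pyVmomi import vim

    si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                      pwd="***", disableSslCertValidation=True)
    content = si.RetrieveContent()
    vm = next(v for v in content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True).view if v.name == "web01")

    backing = vim.vm.device.VirtualDisk.RawDiskMappingVer1BackingInfo(
        deviceName="/vmfs/devices/disks/naa.600xxxxxxxxxxxxx",  # raw LUN (placeholder)
        compatibilityMode="virtualMode",   # "physicalMode" passes SCSI commands through
        diskMode="persistent")
    disk = vim.vm.device.VirtualDisk(
        backing=backing,
        controllerKey=1000,                # existing SCSI controller (placeholder)
        unitNumber=1,                      # free unit on that controller (placeholder)
        capacityInKB=1048576)              # placeholder; normally the raw LUN's size
    change = vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
        fileOperation=vim.vm.device.VirtualDeviceSpec.FileOperation.create,
        device=disk)
    vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change]))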

VMware High-Performance Plug-in (HPP)

Replaces the NMP for high-speed devices such as NVMe. It can improve the performance of ultra-fast flash devices installed locally on the ESXi host and is also used for NVMe-oF targets.

Zoning

SAN uses zoning to restrict server access to storage arrays not allocated to that server. Zones define which HBAs can connect to which SPs. Devices outside a zone are not visible to the devices inside the zone.

iSCSI static discovery

Static discovery can access only a particular target by target name and address.

Storage Array Type Plug-ins (SATPs)

Submodules of the NMP that are responsible for determining the state of array-specific paths, performing path activation, and detecting path errors.

Path Selection Plug-ins (PSPs)

Submodules of the NMP that are responsible for selecting a physical path for I/O requests.

Virtual port storage system (iSCSI)

Supports access to all available LUNs through a single virtual port. Virtual port storage systems are active-active storage devices, but hide their multiple connections through a single port.

Active-active storage system

Supports access to the LUNs simultaneously through all the storage ports that are available without significant performance degradation. All the paths are active, unless a path fails.

NFS server configuration

The NFS volume must be exported using NFS over TCP. The NFS shares must be exported as either NFS3 or NFS4.1. Not both.

NVMe Transports

The NVMe storage can be directly attached to a host using a PCIe interface or indirectly through different fabric transports. VMware NVMe over Fabrics (NVMe-oF) provides distance connectivity between a host and a target storage device on a shared storage array.
- NVMe over PCIe
- NVMe over RDMA
- NVMe over Fibre Channel (FC-NVMe)

PMem-datastores

The PMem datastore is used to store virtual NVDIMM devices and traditional virtual disks of a VM. The VM home directory with the vmx and vmware.log files cannot be placed on the PMem datastore.

iSCSI Initiator

The client, called iSCSI initiator, operates on your ESXi host. It initiates iSCSI sessions by issuing SCSI commands and transmitting them, encapsulated into the iSCSI protocol, to an iSCSI server.

Native Multipathing Plug-in (NMP)

The default VMkernel multipathing module. Associates physical paths with a specific storage device and provides a default path selection algorithm based on array type.

Storage Policies

The policies control which type of storage is provided for the virtual machine and how the virtual machine is placed within storage. They also determine data services that the virtual machine can use.

iSCSI target

The server is known as an iSCSI target. Typically, the iSCSI target represents a physical storage system on the network. The target can also be a virtual iSCSI SAN, for example, an iSCSI target emulator running in a virtual machine. The iSCSI target responds to the initiator's commands by transmitting required iSCSI data.

Snapshots

The state of the virtual disk is preserved, which prevents the guest operating system from writing to it, and a delta (child) disk is created. The delta represents the difference between the current state of the VM disk and the state that existed when you took the previous snapshot. If you revert to the snapshot, all changes made in the delta disks are discarded and operations continue from the preserved state. If you delete the snapshot, all changes in the delta disks have to be written back to the virtual disk.
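
For illustration, a minimal pyVmomi sketch of the snapshot lifecycle described above; the VM name is a placeholder and, in real code, each *_Task call would be awaited before issuing the next one.

    from pyVim.connect import SmartConnect
    from pyVmomi import vim

    si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                      pwd="***", disableSslCertValidation=True)
    content = si.RetrieveContent()
    vm = next(v for v in content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True).view if v.name == "web01")

    # Take a snapshot: the current disk state is preserved, writes go to a new delta disk.
    vm.CreateSnapshot_Task(name="pre-patch", description="before OS patching",
                           memory=False, quiesce=True)

    # Revert: changes accumulated in the delta since the snapshot are discarded.
    vm.RevertToCurrentSnapshot_Task()

    # Delete: the delta is consolidated (written back) into the base virtual disk.
    vm.snapshot.currentSnapshot.RemoveSnapshot_Task(removeChildren=False)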

Multipathing Plug-ins (MPPs)

Third-party multipathing plug-ins that use the VMkernel APIs. They can provide specific load-balancing and failover functionality and have to be installed on the ESXi hosts.

VMware NVMe over FC

This technology maps NVMe onto the Fibre Channel protocol to enable the transfer of data and commands between a host computer and a target storage device.

NVMe over RDMA (RoCE v2)

This technology uses a remote direct memory access (RDMA) transport between two systems on the network. The transport enables data exchange in the main memory bypassing the operating system or the processor of either system. ESXi supports RDMA over Converged Ethernet v2 (RoCE v2) technology, which enables a remote direct memory access over an Ethernet network.

512-byte native format (512n)

The traditional format, in which both physical and logical sectors are 512 bytes in length.

Fibre Channel

Uses a switching fabric to connect storage LUNs to hosts. Encapsulates SCSI commands in Fibre Channel frames.

Compare VMFS5 vs VMFS6

- VMFS6 requires ESXi 6.5 or later.
- VMFS5 does not support 4Kn storage devices.
- VMFS6 supports automatic space reclamation; VMFS5 supports only manual reclamation.
- VMFS5 uses VMFSsparse for virtual disks smaller than 2 TB and SEsparse for larger disks; VMFS6 uses only SEsparse.

SEsparse vs VMFSsparse

VMFSsparse is a redo log that starts empty. Upon snapshot creation, the VM's disk is switched from the base vmdk to the newly created sparse vmdk. SEsparse is more space efficient and supports a space reclamation technique: blocks that the guest OS deletes are marked, and the system sends commands to the SEsparse layer to unmap those blocks.

NVMe Controller Discovery

With this mechanism, the ESXi host first contacts a discovery controller. The discovery controller returns a list of available controllers. After you select a controller for your host to access, all namespaces associated with this controller become available to your host.

Port_ID (or port address)

Within a SAN, each port has a unique port ID that serves as the FC address for the port. This unique ID enables routing of data through the SAN to that port. The FC switches assign the port ID when the device logs in to the fabric. The port ID is valid only while the device is logged on.

NVMe Controller Connection

Your ESXi host connects to the controller that you specify. All namespaces associated with this controller become available to your host.

iSCSI Extensions for RDMA (iSER)

iSER differs from traditional iSCSI in that it replaces the TCP/IP data transfer model with the Remote Direct Memory Access (RDMA) transport. Using the direct data placement technology of RDMA, the iSER protocol can transfer data directly between the memory buffers of the ESXi host and storage devices. This method eliminates unnecessary TCP/IP processing and data copying, and can also reduce latency and the CPU load on the storage device.

