Nutanix NCP


AOS

AOS is the base operating system that runs on each CVM.

VLAN Best Practices

Add the Controller VM and the AHV host to the same VLAN. By default, AHV assigns the Controller VM and the hypervisor to VLAN 0, which effectively places them on the native VLAN configured on the upstream physical switch. Do not add other devices, such as guest VMs, to the same VLAN as the CVM and hypervisor. Isolate guest VMs on one or more separate VLANs.

Prism Image Post-Import Action

After you import an image, you can perform several actions: clone a VM from the image, leave the image in the service for future deployments, or delete the imported image. Once the image is imported, you must clone a VM from it and then delete the imported image.

OVS Bonded Port (Bond0) Best Practices

Aggregate the host 10 GbE interfaces to an OVS bond on br0. Trunk these interfaces on the physical switch. By default, the 10 GbE interfaces in the OVS bond operate in the recommended active-backup mode. Note: Nutanix does not recommend nor support the mixing of bond modes across AHV hosts in the same cluster. LACP configurations might work but might have limited support.

Stargate Component

- A distributed system that presents storage to other systems (such as a hypervisor) needs a unified component for receiving and processing data that it receives. The Nutanix cluster has a software component called Stargate that manages this responsibility. All read and write requests are sent across an internal vSwitch to the Stargate process running on that node. Stargate depends on Medusa to gather metadata and Zeus to gather cluster configuration data. From the perspective of the hypervisor, Stargate is the main point of contact for the Nutanix cluster. Note: If Stargate cannot reach Medusa, the log files include an HTTP timeout. Zeus communication issues can include a Zookeeper timeout.

Nutanix Cluster

- A Nutanix cluster is a logical grouping of physical and logical components. - The nodes in a block can belong to the same or different clusters. - Joining multiple nodes in a cluster allows for the pooling of resources. - Acropolis presents storage as a single pool via the Controller VM (CVM). - As part of the cluster creation process, all storage hardware - SSDs, HDDs, and Non-Volatile Memory Express (NVMe) - is presented as a single storage pool.

Nutanix Block

- A block is a chassis that holds one to four nodes, and contains power, cooling, and the backplane for the nodes. - The number of nodes and drives depends on the hardware chosen for the solution.

IPAM

- A managed network is a VLAN plus IP Address Management (IPAM). - IPAM is the cluster capability to function like a DHCP server, to assign an IP address to a VM that sits on the managed network. - The Acropolis Master acts as an internal DHCP server for all managed networks. - The OVS is responsible for encapsulating DHCP requests from the VMs in VXLAN and forwarding them to the Acropolis Master. - VMs receive their IP addresses from the Acropolis Master's responses. - The IP address assigned to a VM is persistent until you delete the VNIC or destroy the VM. - The Acropolis Master runs the CVM administrative process to track device IP addresses. This creates associations between the interface's MAC addresses, IP addresses and defined pool of IP addresses for the AOS DHCP server.

Nutanix Node

- A node is an x86 server with compute and storage resources. - A single Nutanix cluster can have an unlimited number of nodes. - Different hardware platforms are available to address varying workload needs for compute and storage.

Storage Container

- A storage container is a logical segmentation of the storage pool and a subset of the available storage within it. - Storage containers are thin provisioned and carry configuration options such as compression, deduplication, and replication factor (RF), letting an administrator apply rules or transformations to a data set. - Storage containers hold the virtual disks (vDisks) used by virtual machines. Selecting a storage pool for a new storage container defines the physical disks where the vDisks are stored. - ncli ctr ls displays existing containers.
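A hedged nCLI sketch for creating and listing containers (container01 and default-storage-pool are placeholder names; the sp-name parameter is an assumption, so confirm with ncli ctr create help on your AOS version):
ncli ctr create name=container01 sp-name=default-storage-pool
ncli ctr ls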

Storage Pool

- A storage pool is a group of physical storage devices for the cluster, including PCIe SSD, SSD, and HDD devices, and is a logical representation of the physical drives from all nodes. The storage pool spans multiple nodes and scales as the cluster expands. - A storage device can only belong to one storage pool at a time, which can provide physical storage separation between VMs that use different storage pools. - Nutanix recommends creating a single storage pool that holds all disks within the cluster. - ncli sp ls displays existing storage pools.

vDisk Component

- A vDisk is a logical component: any file over 512 KB on DSF, including .vmdk files and VM hard disks. - A vDisk is created on a storage container and is composed of extents that are grouped and stored as extent groups. Extents consist of a number of contiguous blocks and are dynamically distributed among extent groups to provide data striping across nodes and disks, improving performance. - A VM virtual disk (such as VM-flat.vmdk) and a VM swap file (VM.vswp) are also vDisks.

VLANs

- AHV supports two different ways to provide VM connectivity: managed and unmanaged networks. - With unmanaged networks, VMs get a direct connection to their VLAN of choice. Each virtual network in AHV maps to a single VLAN and bridge. All VLANs allowed on the physical switch port to the AHV host are available to the CVM and guest VMs. You can create and manage virtual networks, without any additional AHV host configuration, using Prism Element, the Acropolis CLI (aCLI), or the REST API. - Acropolis binds each virtual network it creates to a single VLAN.

Prism Operational Insight (Pro)

- Advanced machine learning technology - Built-in heuristics and business intelligence - Customizable dashboards - Built-in and custom reporting - Single-click query

One Way Authentication

- Authenticate to the server

Bonds

- Bonded ports aggregate the physical interfaces on the AHV host. - By default, the system creates a bond named br0-up in bridge br0 containing all physical interfaces. - Nutanix recommends using the name br0-up to quickly identify this interface as the bridge br0 uplink. - Only utilize NICs of the same speed within the same bond.

Prism Starter Edition

- Prism Element and Prism Central are collectively referred to as Prism Starter. Prism Central for a single cluster is free of charge; you must purchase a license to manage multiple clusters.

Bridges

- Bridges act as virtual switches to manage traffic between physical and virtual network interfaces. - The default AHV configuration includes an OVS bridge called br0 and a native Linux bridge called virbr0. - The virbr0 Linux bridge carries management and storage communication between the CVM and AHV host. All other storage, host, and VM network traffic flows through the br0 OVS bridge.

Two Way Authentication

- Client authenticates to server and server authenticates to client

DARE

- Data at Rest Encryption (DARE) - Data at Rest Encryption (DARE) secures data while at rest using built-in key-based access management. - Data is encrypted on all drives at all times. - Data is inaccessible in the event of drive or node theft. - Data on a drive can be securely destroyed. - Key authorization enables password rotation at arbitrary times. - Protection can be enabled or disabled at any time. - No performance penalty is incurred despite encrypting all data. - Nutanix provides a software-only option for data-at-rest security with the Ultimate license. This does not require the use of self-encrypting drives.

Active-Backup Bond Mode

- Default mode. - With the active-backup bond mode, one interface in the bond carries traffic and the other interfaces in the bond are used only when the active link fails. - Active-backup is the simplest bond mode, easily allowing connections to multiple upstream switches without any additional switch configuration. - The active-backup bond mode requires no special hardware, and you can use different physical switches for redundancy. - Traffic from all VMs uses only a single active link within the bond at one time. - This mode only offers failover (no traffic load balancing). If the active link goes down, a backup or passive link activates to provide continued connectivity. AHV transmits all traffic, including traffic from the CVM and VMs, across the active link. All traffic shares 10 Gbps of network bandwidth.

Virtual Bridge Best Practices

- Do not delete or rename OVS bridge br0. - Do not modify the native Linux bridge virbr0.

Nutanix Hardware Product Mixing Constraints

- Due to the diversity of hardware platforms, there are several product mixing restrictions: - Nodes with different Intel processor families can be part of the same cluster but cannot be located in the same block. - Hardware from different vendors cannot be part of the same cluster. - For more product mixing restrictions, check the compatibility matrix in the Nutanix Support Portal. - You can have a mix of nodes with self-encrypting drives (SED) and standard (non-SED) disks; however, you cannot use the Data at Rest Encryption (DARE) hardware encryption feature in that case. You can mix models (nodes) in the same cluster, but not in the same block (physical chassis).

Prism Pro Edition

- Every edition of Acropolis includes Prism Starter for single (Prism Element) and multiple site (Prism Central) management. - Prism Pro is a set of features providing advanced analytics and intelligent insights into managing a Nutanix environment. These features include performance anomaly detection, capacity planning, custom dashboards, reporting, and advanced search capabilities. You can license the Prism Pro feature set to unlock it within Prism Central. - Adds operational insight, capacity planning and performance monitoring (license required)

nCLI Component

- Get status and configure entities within a cluster. - Download the nCLI installer to a local machine from Prism Element. This requires Java Runtime Environment (JRE) version 5.0 or higher. - Connect via the nCLI client: ncli -s management_ip_addr -u 'username' -p 'user_password' - Command syntax: ncli> entity action parameter1=value parameter2=value
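A hedged example session (10.0.0.50 and the admin credentials are placeholders; cluster info is a commonly used entity/action pair, verify with ncli help on your version):
ncli -s 10.0.0.50 -u 'admin' -p 'admin_password'
ncli> cluster info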

Information Not Shared By Pulse

- Guest VMs - User data - Metadata - Administrator credentials - Identification data - Private information

Network Segmentation During an AOS Upgrade

- If the new AOS release supports network segmentation, AHV automatically creates the eth2 interface on each CVM. However, the network remains unsegmented and the cluster services on the CVM continue to use eth0 until you configure network segmentation. - Do not delete the eth2 interface that AHV creates on the CVMs, even if you are not using the network segmentation feature.

Information Shared By Pulse

- Information collected and shared: - System alerts - Current Nutanix software version - Nutanix processes and CVM information - Hypervisor details - System-level statistics - Configuration information

Changing Passwords

- It is possible to change 4 different sets of passwords in a Nutanix cluster: user, CVM, IPMI, and the hypervisor host. - Nutanix provides administrators with password complexity features such as forcing the use of upper/lower case letters, symbols, numbers, change frequency, and password length. After you have successfully changed a password, the new password is synchronized across all Controller VMs and interfaces (Prism web console, nCLI, and SSH). - By default, the admin user password does not expire and can be changed at any time. If you do change the admin password, you will also need to update any applications and scripts that use the admin credentials for authentication. For authentication purposes, Nutanix recommends that you create a user with an admin role, instead of using the admin account.

aCLI Component

- Manage the Acropolis portion of the Nutanix environment: hosts, networks, snapshots and VMs.

Network Segmentation

- Network segmentation is designed to manage backplane (storage and CVM) traffic. - It separates storage traffic from routable management traffic for security purposes and creates separate virtual networks for each traffic type. - You can segment the network on a Nutanix cluster in the following ways: on an existing cluster, by using the Prism web console; or when creating a cluster, by using Nutanix Foundation 3.11.2 or higher versions.

Unsupported Network Segmentation Configurations

- Network segmentation is not supported in the following configurations: - Clusters on which the CVMs have a manually created eth2 interface. - Clusters on which the eth2 interface on one or more CVMs has a manually assigned IP address. - ESXi clusters where the CVM connects to a VMware distributed virtual switch. - Clusters that have two (or more) vSwitches or bridges for CVM traffic isolation. The CVM management network (eth0) and the CVM backplane network (eth2) must reside on a single vSwitch or bridge. Do not create these CVM networks on separate vSwitches or bridges.

AHV

- Nutanix AHV is a comprehensive enterprise virtualization solution tightly integrated into Acropolis and is provided with no additional license cost. - AHV delivers the features required to run enterprise applications, for example: combined VM operations and performance monitoring via Nutanix Prism; backup, disaster recovery, and host and VM high availability; dynamic scheduling (intelligent placement and resource contention avoidance); and broad ecosystem support (Certified Citrix Ready, Microsoft validated via the Server Virtualization Validation Program, SVVP). - You manage AHV through the Prism web console (GUI), command line interfaces (nCLI/aCLI), and REST APIs.

NGT

- Nutanix Guest Tools (NGT) is an in-guest agent framework that enables advanced VM management functionality through the Nutanix Platform. - The NGT bundle consists of the following components: - Nutanix Guest Agent (NGA) service. Communicates with the Nutanix Controller VM. - File Level Restore CLI. Performs self-service file-level recovery from the VM snapshots. For more information about self-service restore, see the Acropolis Advanced Setup Guide. - Nutanix VM mobility drivers. Provides drivers for VM migration between ESXi and AHV, in-place hypervisor conversion, and cross-hypervisor disaster recovery (CH-DR) features. For more information about cross-hypervisor disaster recovery, see this article on the Support Portal. - VSS requestor and hardware provider for Windows VMs. Enables application-consistent snapshots of AHV or ESXi Windows VMs. For more information about Nutanix VSS-based snapshots for the Windows VMs, see the Application-Consistent Snapshot Guidelines on the Support Portal.

Nutanix Key Management and Administration

- Nutanix nodes are authenticated by a key management server (KMS). - SEDs generate new encryption keys, which are uploaded to the KMS. - In the event of power failure or a reboot, keys are retrieved from the KMS and used to unlock the SEDs. - You can instantly reprogram security keys. - Crypto Erase can be used to instantly erase all data on an SED while generating a new key.

Prism Central

- Provides multicluster management through a single web console and runs as a separate VM. - Allows you to manage different clusters across separate physical locations on one screen and offers an organizational view into a distributed Nutanix environment. - You can deploy Prism Central manually (import a VM template) or with one click from Prism Element. - You can run the Prism Central VM at any size; the only difference is the amount of CPU and memory available to the Prism Central VM for VM management. - You can deploy a Prism Central instance initially as a scale-out cluster or, if you are running it as a single VM, easily scale it out with one click using Prism Element. - Management of multiple Acropolis clusters via a single UI (license required).

Prism Performance Monitoring

- Provides real-time performance behavior of VMs and workloads. - Utilizes predictive monitoring based on behavioral analysis to detect anomalies. - Detects bottlenecks and provides guidance for VM resource allocation.

Pulse Component

- Pulse is enabled by default and monitors cluster health and proactively notifies customer support if a problem is detected. - Collects cluster data automatically and unobtrusively with no performance impact - Sends diagnostic data via e-mail to both Support and the user once per day, per node - Proactive monitoring (different from alerts) - Disabling Pulse is not recommended, since Support will not be notified if you have an issue. - Pulse sends alerts to Nutanix Support by default, but administrators can define additional recipients. - Basic statistics include Zeus, Stargate, Cassandra, and Curator subsystem information; Controller VM information; hypervisor and VM information; cluster configuration; and performance information.

SCMA

- Security Configuration Management Automation (SCMA) - Monitors over 800 security entities covering storage, virtualization, and management - Detects unknown or unauthorized changes and can self-heal to maintain compliance - Logs SCMA output/actions to syslog - The SCMA framework ensures that services are constantly inspected for variance to the security policy. - With SCMA, you can schedule the STIG to run hourly, daily, weekly, or monthly.

STIGs

- Security Technical Implementation Guides - STIGs lock down IT environments and reduce security vulnerabilities in infrastructure. - Nutanix has created custom STIGs that are based on the guidelines outlined by The Defense Information Systems Agency (DISA) to keep the Enterprise Cloud Platform within compliance and reduce attack surfaces. - Nutanix provides the STIGs in machine-readable XCCDF.xml format and PDF - Nutanix Controller VM conforms to RHEL 7 (Linux 7) STIG as published by DISA. Additionally, Nutanix maintains its own STIG for the Acropolis Hypervisor (AHV).

Configuring Network Segmentation For an Existing RDMA Cluster

- Segment the network on an existing RDMA cluster by using the Prism web console. The network segmentation process: - Creates a separate network for RDMA communications on the existing default virtual switch. - Places the rdma0 interface (created on the CVMs during upgrade) and the host interfaces on the newly created network. - For new RDMA networks, you must specify a nonroutable subnet. AHV automatically assigns the interfaces on this network IP addresses from the subnet, so reserve the entire subnet for this network alone. - If you plan to specify a VLAN for the RDMA network, configure the VLAN on the physical switch ports to which the nodes connect. If you specify the optional VLAN ID, AHV places the newly created interfaces on the VLAN. Nutanix highly recommends a separate VLAN for the RDMA network to achieve true segmentation.

Prism Infrastructure Management

- Streamline common hypervisor and VM tasks. - Deploy, configure, and manage clusters for storage and virtualization. - Deploy, configure, migrate, and manage virtual machines. - Create datastores, manage storage policies, and administer DR.

Built-In Roles

- Super Admin: Full administrator privileges. - Prism Admin: Full administrator privileges except for creating or modifying the user accounts. - Prism Viewer: View-only privileges. - Self-Service Admin: Manages all cloud-oriented resources and services. This is the only cloud administration role available. - Project Admin: Manages cloud objects (roles, VMs, Apps, Marketplace) belonging to a project. You can specify a role for a user when you assign a user to a project, so individual users or groups can have different roles in the same project. - Developer: Develops, troubleshoots, and tests applications in a project. - Consumer: Accesses the applications and blueprints in a project.

CLI Powershell Cmdlets

- Task Administration: Get-NTNXTask, Poll-NTNXTask - Acropolis VM Administration Operations: Add-NTNXVMDisk, Get-NTNXVMDisk, Remove-NTNXVMDisk, Set-NTNXVMDisk, Stop-NTNXVMDisk, Stop-NTNXVMMove, Add-NTNXVMNIC, Get-NTNXVMNIC, Remove-NTNXVMNIC - Network Administration: Get-NTNXNetwork, New-NTNXNetwork, Remove-NTNXNetwork, Set-NTNXNetwork, Get-NTNXNetworkAddressTable, Reserve-NTNXNetworkIP, UnReserve-NTNXNetworkIP - Snapshot Administration: Clone-NTNXSnapshot, Get-NTNXSnapshot, New-NTNXSnapshot, Remove-NTNXSnapshot

Prism Image Configuration

- The Nutanix web console, Prism, allows you to import and configure operating system ISO and disk image files. - This image service allows you to assemble a repository of image files in different formats (raw, vhd, vhdx, vmdk, vdi, iso, and qcow2) that you can later use when creating virtual machines. - If connected to Prism Central, you can migrate your images over to Prism Central for centralized management. - This will not remove your images from Prism Element, but will allow management only in Prism Central. - When you create a VM using that image, the image is copied to other Prism Element clusters, is made active, and is then available for use on all Prism Element clusters managed by that instance of Prism Central. - The QCOW2 format decouples the physical storage layer from the virtual layer by adding a mapping between the logical and physical blocks.

Network Best Practices

- The best practice is to use only the 10 GbE NICs and to disconnect the 1 GbE NICs if you do not need them, or to put them in a separate bond for use on noncritical networks. - Connections from the server to the physical switch use 10 GbE or higher interfaces. You can establish connections between the switches with 40 GbE or faster direct links, or through a leaf-spine network topology. The IPMI management interface of the Nutanix node also connects to the out-of-band management network, which may connect to the production network, but this is not mandatory.

Acropolis

- The foundation for a platform that starts with hyperconverged infrastructure then adds built-in virtualization, storage services, virtual networking, and cross-hypervisor application mobility. - Nutanix Acropolis includes three foundational components: - Distributed Storage Fabric (DSF) - App Mobility Fabric - AHV - AHV is the hypervisor while DSF and App Mobility Fabric are functional layers in the Controller VM (CVM). - Acropolis also refers to the base software running on each node in the cluster.

Nutanix Management Interfaces

- There are several methods to manage a Nutanix implementation. - Graphical UI: Prism Element and Prism Central. This is the preferred method for management because you can manage the entire environment (when using Prism Central). - Command line interfaces: nCLI - get status and configure entities within a cluster; aCLI - manage the Acropolis portion of the Nutanix environment. - Nutanix PowerShell cmdlets - for use with Windows PowerShell. - REST API - exposes all GUI components for orchestration and automation.

Changing the IPMI Password

- This procedure helps prevent the BMC password from being retrievable on port 49152. - The maximum allowed length of the IPMI password is 19 characters, except on ESXi hosts, where the maximum length is 15 characters. Do not use the following special characters in the IPMI password: & ; ` ' \ " | * ? ~ < > ^ ( ) [ ] { } $ \n \r - Perform these steps on every IPMI host in the cluster. 1. Sign in to the IPMI web interface as the administrative user. 2. Navigate to the administrative user configuration and modify the user 3. Update the password

Two Factor Authentication

- Username/password sent for authentication as well as a valid certificate to verify identity

Nutanix HCI

- Uses off-the-shelf x86 servers with local flash drives (SSD) and spinning hard disks (HDD) to create a cluster of compute and storage resources. - Easily scales out compute and storage resources with the addition of nodes. - Tolerates one or two node failures with built-in resiliency. - Restores resiliency after a node failure by replicating nonredundant data to other nodes. - Provides a set of REST API calls that you can use for automation.

Controller Virtual Machine

- What makes a node "Nutanix" is the Controller VM (CVM). - There is one CVM per node. - CVMs linked together across multiple nodes form a cluster. - The CVM has direct access to the local SSDs and HDDs of the node. - A CVM communicates with all other cluster CVMs across a network to pool storage resources from all nodes. This is the Distributed Storage Fabric (DSF). - The CVM provides the user interface known as the Prism web console. - The CVM allows for cluster-wide operations of VM-centric software-defined services: snapshots, clones, High Availability, Disaster Recovery, deduplication, compression, erasure coding, storage optimization, and so on. Hypervisors (AHV, ESXi, Hyper-V, XenServer) communicate with DSF using the industry-standard protocols NFS, iSCSI, and SMB3.

Creating Custom RBAC Roles

- When creating custom roles for your organization, remember to: - Clearly understand the specific set of tasks a user will need to perform their job - Identify permissions that map to those tasks and assign them accordingly - Document and verify your custom roles to ensure that the correct privileges have been assigned

Configuring Role Mapping

- When user authentication is enabled, the following permissions are applied: - Directory-service-authorized users are assigned full administrator permissions by default. - SAML-authorized users are not assigned any permissions by default; they must be explicitly assigned. - You can refine the authentication process by assigning a role (with associated permissions) to users or groups. To assign roles: 1. Navigate to the Role Mapping section of the Settings page. 2. Create a role mapping and provide information for the directory or provider, role, entities that should be assigned to the role, and then save. Repeat this process for each role that you want to create.

Changing User Passwords

- You can change user passwords, including for the default admin user, in the web console or nCLI. Changing the password through either interface changes it for both. - Using the web console: Log on to the web console as the user whose password is to be changed and select Change Password from the user icon pull-down list of the main menu. - Using nCLI: Specify the username and passwords. $ ncli -u 'username' -p 'old_pw' user change-password current-password="curr_pw" new-password="new_pw"

Nutanix Cluster Lockdown

- You can restrict access to a Nutanix cluster. - SSH sessions can be restricted through nonrepudiated keys. - Each node employs a public/private key-pair - Cluster secured by distributing these keys - You can disable remote logon with a password. - You can completely lock down SSH access by disabling remote logon and deleting all keys except for the interCVM and CVM to host communication keys.

Configuring Network Segmentation On Existing Cluster

- You can segment the network on an existing cluster by using the Prism web console. The network segmentation process: - Creates a separate network for backplane communications on the existing default virtual switch. - Configures the eth2 interfaces that AHV creates on the CVMs during upgrade. - Places the host interfaces on the newly created network. - From the specified subnet, AHV assigns IP addresses to each new interface. Each node requires two IP addresses. For new backplane networks, you must specify a nonroutable subnet. The interfaces on the backplane network are automatically assigned IP addresses from this subnet, so reserve the entire subnet for the backplane network alone.

Nutanix VirtIO

- A collection of drivers for paravirtual devices that enhance the stability and performance of virtual machines on AHV. - Enables Windows 64-bit VMs to recognize AHV virtual hardware. - Contains network and storage drivers and a balloon driver (used to gather stats from Windows guest VMs). - If not added as an ISO (CD-ROM), the VM may not boot. - Most modern Linux distributions already include the drivers. - Support Portal: Downloads > Tools and Firmware

REST API

- Allows an external system to interrogate a cluster using a script that makes REST API calls. It uses HTTP requests (GET, POST, PUT, and DELETE) to retrieve information or to make changes to the cluster. - Responses are coded in JSON format. - Prism Element includes a REST API Explorer. - Displays a list of cluster objects that can be managed by the API. - Sample API calls can be made to see output.
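A hedged example of a read-only call with curl (the path assumes the Prism Element v2.0 API under PrismGateway/services/rest/v2.0, and 10.0.0.50 is a placeholder cluster virtual IP; -k skips validation of the default self-signed certificate and curl prompts for the admin password):
curl -k -u admin https://10.0.0.50:9440/PrismGateway/services/rest/v2.0/vms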

1 GbE and 10 GbE Interface Best Practices

If you want to use the 10 GbE interfaces for guest VM traffic, make sure that the guest VMs do not use the VLAN over which the Controller VM and hypervisor communicate. If you want to use the 1 GbE interfaces for guest VM connectivity, follow the hypervisor manufacturer's switch port and networking configuration guidelines. Note: Do not include the 1 GbE interfaces in the same bond as the 10 GbE interfaces. Also, to avoid loops, do not add the 1 GbE interfaces to bridge br0, either individually or in a second bond. Use them on other bridges.

Sysprep

Sysprep is a utility that prepares a Windows installation for duplication (imaging) across multiple systems. Sysprep is most often used to generalize a Windows installation. During generalization, Sysprep removes system-specific information and settings such as the security identifier (SID) and leaves installed applications untouched. You can capture an image of the generalized installation and use the image with an answer file to customize the installation of Windows on other systems. The answer file contains the information that Sysprep needs to complete an unattended installation. Sysprep customization requires a reference image: Log into the Web Console and browse to the VM dashboard. Select a VM to clone, click Launch Console, and log in with Administrator credentials. Configure Sysprep with a system cleanup. Specify whether or not to generalize the installation, then choose to shut down the VM. - Do not power on VM after this step
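A hedged sketch of the equivalent command-line invocation inside the reference VM, matching the generalize-and-shut-down step above (the unattend file path is a placeholder):
C:\Windows\System32\Sysprep\sysprep.exe /generalize /oobe /shutdown /unattend:C:\unattend.xml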

Creating a VM in Prism Self Service

This process is slightly different from creating a VM with administrative permissions, because self-service VMs are based on a source file stored in the Prism Central catalog. To create a VM using Prism Self-Service: In Prism Central, navigate to the VM dashboard, click the List tab, and click Create VM. Select source images for the VM, including the VM template and disk images. In the Deploy VM tab, provide the following information: VM name, target project, disks, network, and advanced settings (vCPUs and memory). After all the fields have been updated and verified, click Save to create the VM.

Create Image From CLI

To create an image (testimage) from an image located at http://example.com/disk_image, you can use the following command: <acropolis> image.create testimage source_url=http://example.com/image_iso container=default image_type=kIsoImage

Managing a VM

To modify a VM's configuration: Select the VM and click Update. The Update VM dialog box includes the same fields as the Create VM dialog box. Make the required changes and click Save. To delete a VM: Select the VM and click Delete. A confirmation prompt will appear; click OK to delete the VM. To clone a VM: Select the VM and click Clone. The Clone VM dialog box includes the same fields as the Create VM dialog box, but all fields are populated with information based on the VM that you are cloning. You can either enter a name for the cloned VM and click Save, or change the information in some of the fields as desired and then click Save. Other operations that are possible for a VM via one-click operations in Prism Central are: launch console, power on/off, pause/suspend, resume, take snapshot, migrate (to move the VM to another host), assign a category value, quarantine/unquarantine, enable/disable Nutanix Guest Tools, configure host affinity, add snapshot to self-service portal template (Prism Central Administrator only), and manage VM ownership (for self-service VMs).

allssh commands

Use extreme caution when executing allssh commands. The allssh command executes an SSH command on all CVMs in the cluster.
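A hedged example, run from any CVM (uptime is only an illustrative, harmless command; whatever you pass runs on every CVM):
allssh "uptime"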

Physical Network Layout Best Practices

Use redundant top-of-rack switches in a traditional leaf-spine architecture. The flat network design is well suited for a highly distributed, shared-nothing compute and storage architecture. Add all the nodes that belong to a given cluster to the same Layer-2 network segment. Nutanix supports other network layouts as long as you follow all other Nutanix recommendations.

AHV VM Features

VM features: intelligent placement, live migration, converged backup/DR, image management, VM operations, analytics, and data path optimization.

NGT OS Requirements

Versions: Windows 2008 or later, Windows 7 or later. Only the 64-bit operating system is supported. You must install the SHA-2 code signing support update before installing NGT. Apply the security update in KB3033929 to enable SHA-2 code signing support on the Windows OS. If the installation of the security update in KB3033929 fails, apply one of the following rollups: KB3185330 (October 2016 Security Rollup), KB3197867 (November 2016 Security Only Rollup), or KB3197868 (November 2016 Quality Rollup). For Windows Server Edition VMs, ensure that Microsoft VSS Services is enabled before starting the NGT installation. Versions: CentOS 6.5 and 7.0, Red Hat Enterprise Linux (RHEL) 6.5 and 7.0, Oracle Linux 6.5 and 7.0, SUSE Linux Enterprise Server (SLES) 11 SP4 and 12, Ubuntu 14.04 or later. The self-service restore feature is not available on Linux VMs. The SLES operating system is only supported for the application-consistent snapshots with VSS feature. The SLES operating system is not supported for the cross-hypervisor disaster recovery feature.

NGT Requirements and Limitations

You must configure the cluster virtual IP address on the Nutanix cluster. If the virtual IP address of the cluster changes, it will impact all the NGT instances that are running in your cluster. For more information, see Impact of Changing Virtual IP Address of the Cluster. VMs must have at least one empty IDE CD-ROM slot to attach the ISO. Port 2074 should be open to communicate with the NGT-Controller VM service. The hypervisor should be ESXi 5.1 or later, or AHV 20160215 or later version. You should connect the VMs to a network that you can access by using the virtual IP address of the cluster.

ncli entity help

provides a list of all actions and parameters associated with the entity, as well as which parameters are required, and which are optional.

ncli entity action help

provides a list of all parameters associated with the action, as well as a description of each parameter.

ncli help

provides a list of entities and their corresponding actions.
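A hedged illustration of the three help levels above, using the container entity already shown elsewhere in these notes (exact entity and action names vary by AOS version; run ncli help to list yours):
ncli help
ncli container help
ncli container create help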

LACP with Balance TCP Bond Mode

- Nutanix recommends dynamic link aggregation with LACP instead of static link aggregation due to improved failure detection and recovery. - Ensure that you have appropriately configured the upstream switches before enabling LACP. On the switch, link aggregation is commonly referred to as port channel or LAG, depending on the switch vendor. Using multiple upstream switches may require additional configuration such as MLAG or vPC. Configure switches to fall back to active-backup mode in case LACP negotiation fails (sometimes called fallback or no suspend-individual). This setting assists with node imaging and initial configuration, where LACP may not yet be available. - With link aggregation negotiated by LACP, multiple links to separate physical switches appear as a single layer-2 (L2) link. A traffic-hashing algorithm such as balance-tcp can split traffic between multiple links in an active-active fashion. Because the uplinks appear as a single L2 link, the algorithm can balance traffic among bond members without any regard for switch MAC address tables. Nutanix recommends using balance-tcp when using LACP and link aggregation, because each TCP stream from a single VM can potentially use a different uplink in this configuration. - Configure link aggregation with LACP and balance-tcp using commands such as those shown below on all Nutanix CVMs in the cluster. - If upstream LACP negotiation fails, the default AHV host configuration disables the bond, thus blocking all traffic. - In the AHV host and on most switches, the default OVS LACP timer configuration is slow, or 30 seconds. - Nutanix recommends setting lacp-time to fast to decrease the time it takes to detect link failure from 90 seconds to 3 seconds.
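A hedged sketch of the reconfiguration (flag names follow the manage_ovs and ovs-vsctl syntax commonly documented for AHV; verify against your AOS/AHV release and change one node at a time). From each CVM, update the bond to balance-tcp with LACP and fallback enabled:
manage_ovs --bridge_name br0 --bond_name br0-up --interfaces 10g --bond_mode balance-tcp --lacp_mode fast --lacp_fallback true update_uplinks
On the AHV host, set the faster LACP timer:
ovs-vsctl set port br0-up other_config:lacp-time=fast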

SSL Certificate Authentication

- Nutanix supports SSL certificate-based authentication for console access. - AOS includes a self-signed SSL certificate by default to enable secure communication with a cluster. - AOS allows you to replace the default certificate through the Prism web console.

OVS

- Open vSwitch (OVS) is an open source software switch implemented in the Linux kernel and designed to work in a multiserver virtualization environment. - By default, OVS behaves like a layer-2 switch that maintains a MAC address table. - OVS supports many popular switch features, such as VLAN tagging, load balancing, and Link Aggregation Control Protocol (LACP). - Each AHV server maintains an OVS instance, and all OVS instances combine to form a single logical switch. Constructs called bridges manage the switch instances residing on the AHV hosts.

Changing the CVM Password

- Perform these steps on any one Controller VM in the cluster to change the password of the nutanix user. - After you have successfully changed the password, the new password is synchronized across all Controller VMs in the cluster.
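A minimal sketch of the flow, assuming an SSH session to one CVM as the nutanix user and that the change is made with the standard Linux passwd utility (respond to the prompts with the current and new password):
ssh nutanix@<cvm_ip_address>
passwd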

Changing the Acropolis Host Password

- Perform these steps on every Acropolis host in the cluster 1. Log on to the AHV host with SSH. 2. Change the root password. 3. Respond to the prompts, providing the current and new root password.

Ports

- Ports are logical constructs created in a bridge that represent connectivity to the virtual switch. - Nutanix uses several port types, including internal, tap, VXLAN, and bond. - An internal port with the same name as the default bridge (br0) provides access for the AHV host. - Tap ports connect virtual NICs presented to VMs. - Use VXLAN ports for IP address management functionality provided by Acropolis. - Bonded ports provide NIC teaming for the physical interfaces of the AHV host.

Prism Capacity Planning (Pro)

- Predictive analytics based on capacity usage and workload behavior - Capacity optimization advisor - Capacity expansion forecast

RBAC

- Prism Central supports role-based access control (RBAC) that you can configure to provide customized access permissions for users based on their assigned roles. - Prism Central includes a set of predefined roles. - You can also define additional custom roles. - Configuring authentication confers default user permissions that vary depending on the type of authentication (full permissions from a directory service or no permissions from an identity provider). You can configure role maps to customize these user permissions. - You can refine access permissions even further by assigning roles to individual users or groups that apply to a specified set of entities.

Prism

- Prism is the management plane that provides a unified management interface generating actionable insights for optimizing virtualization, infrastructure management, and everyday operations. - Prism gives Nutanix administrators an easy way to manage and operate their end-to-end virtual environments. Prism includes two software components: Prism Element and Prism Central. - Prism is an end-to-end management solution for any virtualized datacenter, with additional functionality for AHV clusters, and streamlines common hypervisor and VM tasks. - Prism provides one-click infrastructure management for virtual environments and is hypervisor agnostic. With AHV installed, Prism and aCLI (Acropolis Command Line Interface) provide more VM and networking options and functionality. - Runs on every node in the cluster and elects a leader. All requests are forwarded from followers to the leader using Linux iptables (allowing access using any Controller VM IP). - Communicates with Zeus for cluster configuration data and Cassandra for statistics to present to the user. Also communicates with ESXi for VM status and related info.

Prism Features

- Prism streamlines common hypervisor and VM tasks and focuses on common operational tasks in four areas: - Cluster management - Operational insight - Capacity planning - Performance monitoring

Prism Element

- Provides a graphical user interface to manage most activities in a Nutanix cluster. - Some of the major tasks you can perform using Prism Element include: View or modify cluster parameters. Create a storage container. Add nodes to the cluster. Upgrade the cluster to newer Acropolis versions. Update disk firmware and other upgradeable components. Add, update, and delete user accounts. Specify alert policies. - Service built into the platform for every Nutanix cluster deployed. - Provides the ability to fully configure, manage, and monitor a single Nutanix cluster running any hypervisor. - Localized 1-to-1 infrastructure management and operations

Balance-slb Bond Mode

- Balance-slb takes advantage of the bandwidth provided by multiple upstream switch links and uses measured traffic load to rebalance VM traffic from highly used to less used interfaces. - Traffic from some source MAC hashes may move to a less active link to more evenly balance bond member utilization. - Each individual VM NIC uses only a single bond member interface at a time, but a hashing algorithm distributes multiple VM NICs' multiple source MAC addresses across bond member interfaces. - The default rebalance interval is 10 seconds, but Nutanix recommends setting this interval to 30 seconds to avoid excessive movement of source MAC address hashes between upstream switches. - Do not use link aggregation technologies such as LACP with balance-slb. The balance-slb algorithm assumes that upstream switch links are independent L2 interfaces. It handles broadcast, unicast, and multicast (BUM) traffic by selectively listening for this traffic on only a single active adapter in the bond. - Do not use IGMP snooping on physical switches connected to Nutanix servers using balance-slb. Balance-slb forwards inbound multicast traffic on only a single active adapter and discards multicast traffic from other adapters. Switches with IGMP snooping may discard traffic to the active adapter and only send it to the backup adapters. This mismatch leads to unpredictable multicast traffic behavior. Disable IGMP snooping or configure static IGMP groups for all switch ports connected to Nutanix servers using balance-slb. IGMP snooping is often enabled by default on physical switches.
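A hedged sketch of switching a bond to balance-slb and raising the rebalance interval to 30 seconds (manage_ovs flags and the other_config key follow commonly documented AHV/OVS syntax, so verify for your release; the OVS interval is expressed in milliseconds). From a CVM, set the bond mode:
manage_ovs --bridge_name br0 --bond_name br0-up --interfaces 10g --bond_mode balance-slb update_uplinks
On the AHV host, set the rebalance interval:
ovs-vsctl set port br0-up other_config:bond-rebalance-interval=30000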

DARE Implementation

1. Install SEDs for all data drives in a cluster. The drives are FIPS 140-2 Level 2 validated and use FIPS 140-2 validated cryptographic modules. 2. When you enable data protection for the cluster, the Controller VM must provide the proper key to access data on a SED. 3. Keys are stored in a key management server that is outside the cluster, and the Controller VM communicates with the key management server using the Key Management Interoperability Protocol (KMIP) to upload and retrieve drive keys. 4. When a node experiences a full power off and power on (and cluster protection is enabled), the Controller VM retrieves the drive keys from the key management server and uses them to unlock the drives. - Use Prism to manage key management device and certificate authorities. Each Nutanix node automatically: 1. Generates an authentication certificate and adds it to the key management device 2. Auto-generates and sets PINs on its respective FIPS-validated SED. - The Nutanix controller in each node then adds the PINs (aka KEK, key encryption key) to the key management device. - Once the PIN is set on an SED, you need the PIN to unlock the device (lose the PIN, lose data). - ESXi and NTNX boot partition remain unencrypted. SEDs support encrypting individual disk partitions selectively using the "BAND" feature (a range of blocks).

Curator Component

A Curator master node periodically scans the metadata database and identifies cleanup and optimization tasks that Stargate should perform. Curator shares analyzed metadata across other Curator nodes. Curator depends on Zeus to learn which nodes are available, and Medusa to gather metadata. Based on that analysis, it sends commands to Stargate.

Nutanix Enterprise Cloud

A converged, scale-out compute and storage system that is purpose-built to host and store virtual machines. The foundational unit for the cluster is a Nutanix node.

Cassandra Component

Cassandra is a distributed, high-performance, scalable database that stores all metadata about the guest VM data stored in a Nutanix datastore. Cassandra runs on all nodes of the cluster. The Cassandra monitor (Level-2) periodically sends a heartbeat to the daemon that includes information about the load, schema, and health of all the nodes in the ring. The Cassandra monitor (L2) depends on Zeus/Zookeeper for this information.

Medusa Component

Distributed systems that store data for other systems (for example, a hypervisor that hosts virtual machines) must have a way to keep track of where that data is. In the case of a Nutanix cluster, it is also important to track where the replicas of that data are stored. Medusa is a Nutanix abstraction layer that sits in front of the database that holds metadata. The database is distributed across all nodes in the cluster, using a modified form of Apache Cassandra.

OVS Best Practices

Do not modify the OpenFlow tables that are associated with the default OVS bridge br0.

Controller VM Best Practices

Do not remove the Controller VM from either the OVS bridge br0 or the native Linux bridge virbr0.

IPMI Port on Hypervisor Host Best Practices

Do not trunk switch ports that connect to the IPMI interface. Configure the switch ports as access ports for management simplicity.

Creating a VM in AHV

In Prism Central, navigate to the VM dashboard, click the List tab, and click Create VM. In the Cluster Selection window, select the target cluster for your VM and click OK. In the Create VM dialog box, update the following information as required for your VM: name, description (optional), timezone, vCPUs, number of cores per vCPU, memory, GPU and GPU mode, disks (CD-ROM or disk drives), network interface (NIC), VLAN name, ID, and UUID, network connection state, network address/prefix, IP address (for NICs on managed networks only), and VM host affinity. After all fields have been updated and verified, click Save to create the VM. When creating a VM, you can also provide a user data file for Linux VMs, or an answer file for Windows VMs, for unattended provisioning. There are 3 ways to do this: If the file has been uploaded to a storage container on a cluster, click ADSF path and enter the path to the file. If the file is available on your local computer, click Upload a File, click Choose File, and then upload the file. If you want to create the file or paste the contents directly, click Type or paste script and then use the text box that is provided. You can also copy or move files to a location on the VM for Linux VMs, or to a location in the ISO file for Windows VMs, during initialization. To do this, you need to specify the source file ADSF path and the destination path in the VM. To add other files or directories, repeat this process as necessary.
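For comparison, a hedged aCLI sketch of the same basic operation (demo-vm, the sizes, container, and network name are placeholders; parameter names assume the common acli vm.create/vm.disk_create/vm.nic_create syntax, so confirm in the <acropolis> shell on your release):
<acropolis> vm.create demo-vm num_vcpus=2 memory=4G
<acropolis> vm.disk_create demo-vm create_size=50G container=default
<acropolis> vm.nic_create demo-vm network=vlan0
<acropolis> vm.on demo-vm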

Upstream Physical Switch Best Practices

Nutanix does not recommend the use of Fabric Extenders (FEX) or similar technologies for production use cases. While initial, low load implementations might run smoothly with such technologies, poor performance, VM lockups, and other issues might occur as implementations scale upward. Nutanix recommends the use of 10Gbps, line-rate, nonblocking switches with larger buffers for production workloads. Use an 802.3-2012 standards-compliant switch that has a low latency, cut-through design and provides predictable, consistent traffic latency regardless of packet size, traffic pattern, or the features enabled on the 10 GbE interfaces. Port-to-port latency should be no higher than 2 microseconds. Use fast-convergence technologies (such as Cisco PortFast) on switch ports connected to the hypervisor host. Avoid using shared buffers for the 10 GbE ports. Use a dedicated buffer for each port.

Nutanix Health Monitoring

Nutanix provides a range of status checks to monitor the health of a cluster. Summary health status information for VMs, hosts, and disks displays on the home dashboard. In-depth health status information for VMs, hosts, and disks is available through the Health dashboard. You can: Customize the frequency of scheduled health checks. Run Nutanix Cluster Check (NCC) health checks directly from Prism. Collect logs for all the nodes and components. - If the Cluster Health service status is down for more than 15 minutes, an alert email is sent by the AOS cluster to configured addresses and to Nutanix Support (if selected). In this case, no alert is generated in the Prism web console. The email is sent once every 24 hours. You can run the NCC check cluster_services_down_check to see the service status.

Cloud-Init

On non-Windows VMs, cloud-config files, special scripts designed to be run by the Cloud-Init process, are generally used for initial configuration on the very first boot of a server. The cloud-config format implements a declarative syntax for many common configuration items and also allows you to specify arbitrary commands for anything that falls outside of the predefined declarative capabilities. This lets the file act like a configuration file for common tasks, while maintaining the flexibility of a script for more complex functionality. You must pre-install the utility in the operating system image used to create VMs. Cloud-Init runs early in the boot process and configures the operating system on the basis of data that you provide. You can use Cloud-Init to automate tasks such as: setting a host name and locale, creating users and groups, generating and adding SSH keys so that users can log on, installing packages, copying files, and bootstrapping other configuration management tools such as Chef, Puppet, and Salt.
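A minimal cloud-config sketch covering a few of the tasks above (the hostname, user, key, and package are placeholders; the httpd package assumes a RHEL-family image):
#cloud-config
hostname: demo-vm01
users:
  - name: demo-user
    ssh_authorized_keys:
      - ssh-rsa AAAA...example-public-key
packages:
  - httpd
runcmd:
  - [ systemctl, enable, --now, httpd ]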

Zeus Component

Zeus is an interface to access the information stored within Zookeeper and is the Nutanix library that all other components use to access the cluster configuration. A key element of a distributed system is a method for all nodes to store and update the cluster's configuration. This configuration includes details about the physical components in the cluster, such as hosts and disks, and logical components, like storage containers.

Zookeeper Component

Zookeeper stores information about physical components, including their IP addresses, capacities, and data replication rules, in the cluster configuration. Zookeeper runs on either three or five nodes, depending on the redundancy factor (number of data block copies) applied to the cluster. Zookeeper uses multiple nodes to prevent stale data from being returned to other components. An odd number provides a method for breaking ties if two nodes have different information. Of these nodes, Zookeeper elects one as the leader. The leader receives all requests for information and confers with the follower nodes. If the leader stops responding, a new leader is elected automatically. Zookeeper has no dependencies, meaning that it can start without any other cluster components running.

