Virtualization and Networking

Basic elements of a processor (4)

Arithmetic logic unit (ALU), floating point unit (FPU), registers, L1/L2 cache memory

Batch file

= batch script = shell script

VPN

A VPN connection is the process of establishing a private and secure link or path between one or more local and remote network devices. A VPN connection is similar to a WAN connection but offers more privacy and security. A VPN connection is generally established through a VPN manager (client/server) that uses networking protocols such as Point-to-Point Tunneling Protocol (PPTP) and Layer 2 Tunneling Protocol (L2TP). The VPN connection typically exists between a VPN client and a VPN server device. It creates a VPN tunnel between the local and remote devices and ensures secure communication between them. The VPN connection is established only after the client device authenticates itself to the VPN server or gateway.

Execution stack (call stack)

A call stack, in C#, is the list of methods called at run time from the beginning of a program until the execution of the current statement. A call stack is mainly intended to keep track of the point to which each active subroutine should return control when it finishes executing. The call stack acts as a tool for debugging an application when the method to be traced can be called in more than one context; this is a better alternative than adding tracing code to all methods that call the given method. Whenever an exception is thrown anywhere in user code, the Common Language Runtime (CLR) unwinds the call stack and searches for a catch block matching the specific exception type; if there is no appropriate handler, the CLR terminates the application. The call stack, therefore, tells the execution pointer where to go next. It is organized as a "stack," a data structure in memory for storing items in a last-in, first-out manner: the caller of a subroutine pushes the return address onto the stack, and the called subroutine, after finishing, pops the return address off the call stack to transfer control to that address. In C#, any application begins with a "Main" method, which in turn calls other methods. On every call to a method, the method is added to the top of the stack and is removed from the stack when it returns to the caller. The scope of a variable declared in a block lasts from the time its value is pushed onto the stack until execution leaves the block, when the variable is popped off. Thus, the stack maintains both local variables (value types) and the call stack (stack frames), whose size indicates the complexity of a program. [IN CONTEXT OF C#]
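
The push/pop mechanics generalize beyond C#. A minimal sketch in Python (substituted here for C# purely to keep the example self-contained): each call pushes a frame, and the runtime can print the chain of active frames, much as the CLR walks frames when unwinding after an exception.

    import traceback

    def inner():
        # Dump the chain of active frames, oldest first: module -> outer -> inner.
        traceback.print_stack()

    def outer():
        inner()

    outer()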

Circuit-Level Gateway

A circuit-level gateway applies its security checks once a connection, such as a Transmission Control Protocol (TCP) connection, is established and packets start to move.

Computer Network

A computer network is a group of computer systems and other computing hardware devices that are linked together through communication channels to facilitate communication and resource-sharing among a wide range of users. Networks are commonly categorized based on their characteristics.

Hypervisor

A hypervisor is a hardware virtualization technique that allows multiple guest operating systems (OS) to run on a single host system at the same time. The guest OSes share the hardware of the host computer, such that each OS appears to have its own processor, memory and other hardware resources. A hypervisor is also known as a virtual machine manager (VMM). The term hypervisor was first coined in 1965 by IBM to refer to software programs distributed with the IBM RPQ for the IBM 360/65; the hypervisor program installed on the computer allowed the sharing of its memory. The hypervisor installed on the server hardware controls the guest operating systems running on the host machine. Its main job is to cater to the needs of the guest operating systems and to manage them so that the instances of multiple operating systems do not interrupt one another.

LAN

A local area network (LAN) is a computer network within a small geographical area such as a home, school, computer laboratory, office building or group of buildings. A LAN is composed of interconnected workstations and personal computers that are each capable of accessing and sharing data and devices, such as printers, scanners and data storage devices, anywhere on the LAN. LANs are characterized by higher communication and data transfer rates and the lack of any need for leased communication lines. In the 1960s, large colleges and universities had the first local area networks (LANs). In the mid-1970s, Ethernet was developed by Xerox PARC (Xerox Palo Alto Research Center) and deployed in 1976. Chase Manhattan Bank in New York had the first commercial use of a LAN, in December 1977. In the late 1970s and early 1980s, it was common to have dozens or hundreds of individual computers located at the same site, and many users and administrators were attracted to the concept of multiple computers sharing expensive disk space and laser printers. From the mid-1980s through the 1990s, Novell's NetWare dominated the LAN software market. Over time, competitors such as Microsoft released comparable products, to the point where local networking is now considered base functionality for any operating system.

LUNS logical units

A logical unit number (LUN) is a number used for identifying a logical unit relating to computer storage. A logical unit is a device addressed by protocols related to Fibre Channel, Small Computer System Interface (SCSI), Internet SCSI (iSCSI) and other comparable interfaces. LUNs are essential for managing the block storage arrays of a storage area network (SAN). A typical LUN is used with any component supporting read/write processes. LUNs are commonly used for logical disks created on a SAN. The term LUN originated with the SCSI protocol and provided a method for identifying specific disk drives within a common component such as a disk array. Frequently, the term LUN is used in reference to the actual disk drive itself, which is not technically accurate. Additionally, a LUN may refer to an input/output (I/O) access channel within selected programming languages. Today, LUNs are found not only on disk drives, but also on virtual partitions or on volumes of redundant arrays of independent disks (RAID) using multiple drives.

MIB

A management information base (MIB) is a hierarchical virtual database of network (or other entity) objects describing a device being monitored by a network management system (NMS). A MIB is used by Simple Network Management Protocol (SNMP) and remote monitoring 1 (RMON1). The MIB database of objects is intended to reference a complete collection of management information on an entity, such as a computer network; however, it is often used to refer to a subset of the database, more properly called a MIB module. Each MIB object is addressed or identified using an object identifier (OID), which often corresponds to a device's setting or status. The OID uniquely identifies a managed object in the MIB hierarchy. Each managed object is made up of one or more variables called object instances, which are also identified by OIDs. To remove ambiguous meanings and repair data defects, MIBs are updated, but these changes must conform to Section 10 of RFC 2578 (a Request for Comments specification). The protocols SNMP and RMON1 both use MIBs: SNMP gathers data from a single type of MIB, while RMON1 gathers data from nine additional types of MIBs that provide a richer set of data, though the objects (devices such as routers, switches and hubs) must be designed to use the data. There are two types of managed objects, scalar objects and tabular objects, which define a single object instance or multiple related object instances grouped in MIB tables, respectively.
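
Because every managed object is identified by a dotted OID in a tree, subtree membership is just a prefix test. A minimal sketch in Python; the OID shown is sysDescr.0 from the standard MIB-2 subtree, used only as an example:

    # sysDescr.0 lives under the standard MIB-2 subtree (1.3.6.1.2.1).
    sys_descr = "1.3.6.1.2.1.1.1.0"

    def oid_parts(oid):
        return [int(part) for part in oid.split(".")]

    def is_under(oid, subtree):
        # An OID belongs to a subtree when the subtree's arcs are its prefix.
        o, s = oid_parts(oid), oid_parts(subtree)
        return o[:len(s)] == s

    print(is_under(sys_descr, "1.3.6.1.2.1"))  # True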

MAC address

A media access control address (MAC address) is a unique identifier for an Ethernet or other network adapter on a network. It distinguishes different network interfaces and is used in a number of network technologies, particularly most IEEE 802 networks, including Ethernet. In the OSI model, MAC addresses occur in the media access control protocol sublayer. A MAC address is also known as a physical address, hardware address or burned-in address. MAC addresses are generally assigned by the vendor/manufacturer of each network interface card (NIC). They are implemented in most network types, but unlike IP addresses, MAC addresses are permanent and generally can't be changed. A MAC address is created using the specifications provided by the IEEE. Each MAC address consists of 12 hexadecimal digits, embedded within the NIC firmware: a six-digit manufacturer's organizationally unique identifier (OUI) followed by a six-digit serialized or random unique identifier.
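
The 12 hex digits split cleanly into the manufacturer OUI and the device-specific part. A minimal sketch in Python; the address itself is made up for illustration:

    mac = "00:1A:2B:3C:4D:5E"          # hypothetical address

    digits = mac.replace(":", "")
    octets = bytes(int(part, 16) for part in mac.split(":"))
    assert len(octets) == 6            # 48 bits in total

    oui = digits[:6]                   # first 6 hex digits: manufacturer OUI
    device = digits[6:]                # last 6 hex digits: serialized/random part
    print(f"OUI={oui} device={device}")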

Network Computer

A network computer is an inexpensive personal computer designed for a centrally-managed network -- that is, data are stored and updated on a network server -- and lacks a disk drive, CD-ROM drive or expansion slots. A network computer depends on network servers for processing power and data storage. A network computer is sometimes referred to as a thin client. Network computers may also be referred to as diskless nodes or hybrid clients. Network computers designed to connect to the Internet may be called Internet boxes, NetPCs or Internet appliances. A network computer offers the following advantages: lower production costs, lower operating costs and quiet operation. This reduced total cost of ownership (TCO) makes this kind of computer very popular among corporations. Network computers are also often used in hazardous environments where more expensive computers could get damaged or destroyed.

NFS

A network file system (NFS) is a file system mechanism that enables the storage and retrieval of data from multiple disks and directories across a shared network. A network file system enables local users to access remote data and files in the same way they are accessed locally. NFS is derived from the distributed file system mechanism and is generally implemented in computing environments where centralized management of data and resources is critical. NFS works on all IP-based networks and uses TCP or UDP for data access and delivery, depending on the version in use. It is implemented in a client/server computing model, where an NFS server manages the authentication, authorization and management of clients, as well as all the data shared within a specific file system. Once authorized, clients can view and access the data through their local systems much as they would access an internal disk drive. NFS is thus a client/server application that lets a computer user view, and optionally store and update, files on a remote computer as though they were on the user's own computer. The NFS protocol is one of several distributed file system standards for network-attached storage (NAS). NFS allows the user or system administrator to mount (designate as accessible) all or a portion of a file system on a server. The mounted portion of the file system can be accessed by clients with whatever privileges are assigned to each file (read-only or read-write). NFS uses Remote Procedure Calls (RPC) to route requests between clients and servers.
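
As a rough sketch of the mount-then-access flow on a Unix-like client (the server address, export path and mount point are hypothetical, and the mount step requires root privileges and an installed NFS client):

    import subprocess

    # Mount the remote export locally (hypothetical server and paths).
    subprocess.run(
        ["mount", "-t", "nfs", "192.0.2.10:/export/projects", "/mnt/projects"],
        check=True,
    )

    # Once mounted, remote files are read exactly like local ones.
    with open("/mnt/projects/readme.txt") as f:
        print(f.read())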

Registers

A processor register (CPU register) is one of a small set of data holding places that are part of the computer processor. A register may hold an instruction, a storage address, or any kind of data (such as a bit sequence or individual characters). Some instructions specify registers as part of the instruction. For example, an instruction may specify that the contents of two defined registers be added together and then placed in a specified register.

Server

A server is a computer, a device or a program that is dedicated to managing network resources. Servers are often referred to as dedicated because they carry out hardly any tasks other than their server tasks. There are a number of categories of servers, including print servers, file servers, network servers and database servers. In theory, whenever computers share resources with client machines they are considered servers. Nearly all personal computers are capable of serving as network servers, but usually a dedicated server machine has hardware and software features configured just for this task. For example, dedicated servers may have high-performance RAM, a faster processor and several high-capacity hard drives. In addition, dedicated servers may be connected to redundant power supplies, several networks and other servers. Such connection features and configurations are necessary as many client machines and client programs may depend on them to function efficiently, correctly and reliably.

Virtual Machine

A software computer that, like a physical computer, runs an operating system and applications. A VM consists of a set of specification and configuration files and is backed by the physical resources of the host. VMs have virtual devices that provide the same functionality as physical hardware, with additional benefits in terms of portability, manageability, and security. Key files that make up a virtual machine are the configuration file, virtual disk file, NVRAM setting file, and the log file.

Virtual LUN

A virtual logical unit number (virtual LUN) is an identifier for a storage area not directly linked to a physical disk drive or set of drives. A traditional LUN corresponds to a physical hard disk or storage device. By contrast, virtual LUNs are labels for virtual storage spaces or partitions from one or more hard disks. In general, virtual LUNs are used in different kinds of storage area networks with storage systems such as SCSI or Fibre Channel setups. The fact that these storage identifiers are not linked to a specific physical hard disk makes them more versatile in many ways. In fact, one of the basic ideas behind virtual LUNs is that administrators can allocate smaller amounts of storage space across one or more hardware locations. That's why some call a virtual LUN a thin LUN or refer to its use in thin provisioning, where storage spaces are set up according to more conservative estimates of user needs, rather than according to heavier projected demands for storage space. The end result of some thin provisioning strategies is that less storage space goes unused. Another way to use virtual LUNs is to provide fault tolerance by writing data across more than one hard disk. These systems help with enterprise resource planning and data backup/recovery strategies.

VPC

A virtual private cloud (VPC) is a hybrid model of cloud computing in which a private cloud solution is provided within a public cloud provider's infrastructure. A VPC is a cloud computing service in which a public cloud provider isolates a specific portion of its public cloud infrastructure to be provisioned for private use. The VPC infrastructure is managed by the public cloud vendor; however, the resources allocated to a VPC are not shared with any other customer. VPCs were introduced specifically for customers interested in taking advantage of the benefits of cloud computing but who have concerns over certain aspects of the cloud. Common concerns involve privacy, security and the loss of control over proprietary data. In response to this customer need, many public cloud vendors designed VPC offerings: a part of the vendor's public infrastructure with dedicated cloud servers, virtual networks, cloud storage and private IP addresses reserved for a VPC customer. VPCs are sometimes referred to as private clouds, but there is a slight difference: VPCs are private clouds sourced over a third-party vendor's infrastructure rather than over an enterprise IT infrastructure.

WAN

A wide area network (WAN) is a network that exists over a large-scale geographical area. A WAN connects different smaller networks, including local area networks (LANs) and metro area networks (MANs). This ensures that computers and users in one location can communicate with computers and users in other locations. WAN implementation can be done either with the help of the public transmission system or a private network. A WAN connects more than one LAN and is used for larger geographical areas. WANs are similar to a banking system, where hundreds of branches in different cities are connected with each other in order to share their official data. A WAN works in a similar fashion to a LAN, just on a larger scale. Typically, TCP/IP is the protocol used for a WAN in combination with devices such as routers, switches, firewalls and modems.

Zombie Network

A zombie network is a network or collection of compromised computers or hosts that are connected to the Internet. A compromised computer becomes a zombie that is remotely controlled through standards-based networking protocols like HTTP and Internet Relay Chat (IRC). A zombie network is also known as a botnet. Computers become part of a zombie network through malicious software (malware) that is unknowingly installed by users, automatically installed through a network security back door, or planted by exploiting Web browser vulnerabilities. The malware leaves specified networking ports open, allowing computer access by outside users. Separate zombie networks may run similar types of malware yet be operated by different criminal entities (cyber or otherwise). Types of attacks perpetrated by a zombie network include denial-of-service attacks, adware, spyware, spam and click fraud.

.VMXF file

Additional VM configuration files

ATA

Advanced Technology Attachment (ATA) is a standard physical interface for connecting storage devices within a computer. ATA allows hard disks and CD-ROMs to be internally connected to the motherboard and perform basic input/output functions. ATA is also known as Integrated Drive Electronics (IDE), and its extension is referred to as ATA with Packet Interface (ATAPI). The ATA interface standard was designed to connect supported, integrated and portable storage devices without the need for an external controller. The ATA interface is basically a set of thin wires merged within a cable bus that are used to transfer data in and out of the disk drives. Initially, ATA supported parallel communication and was also called Parallel ATA (PATA); it used a 40-pin connector and transferred data 16 bits at a time. However, PATA was replaced by Serial ATA (SATA) - which has faster data I/O speeds - in computer systems developed from 2007 onwards.

The internet

All servers and nodes connected to the main backbones maintained by individual NSPs (NSPs provide the infrastructure that makes up the internet).

Staggered Spin-Up

Allows sequential hard disk drive startup, which helps even out power load distribution during system booting. Staggered spin-up is a physical performance strategy for Serial ATA hard disk drives or RAID disk drive systems. With staggered spin-up, engineers handle the electrical load and system capacity during startup by staggering the times at which disk drives begin input/output (I/O) operations. With the traditional strategy, all drives spin up when device or system power is turned on; staggered spin-up delays the spin-up of some drives to present a more stable demand to the power supply.

Port Multipliers

Allows the connection of up to 15 drives to a single SATA controller port, which facilitates the building of disk enclosures. A port multiplier is a device that lets multiple SATA devices share a single SATA host port.

Intelligent Hub

Also known as manageable hubs, these hubs allow system administrators to monitor data passing through and to configure each port, meaning to determine which devices or network segments are plugged into the port. Some ports may even be left open with no connection.

AGP

An accelerated graphics port (AGP) is a point-to-point channel used for high-speed video output. This port connects a graphics card to a computer's motherboard. The primary purpose of an AGP is to accelerate 3D graphics output for high-definition video, and it provides much faster connectivity and throughput than PCI. An AGP is primarily designed for 3D graphics, high-definition games and engineering/architecture graphics.

API

An application programming interface (API) is a set of protocols, routines, functions and/or commands that programmers use to develop software or facilitate interaction between distinct systems. APIs are available for both desktop and mobile use, and are typically useful for programming GUI (graphical user interface) components, as well as for allowing a software program to request and accommodate services from another program. An API can be seen as composed of two fundamental elements: a technical specification that establishes how information can be exchanged between programs (which itself is made up of request-for-processing and data-delivery protocols) and a software interface that somehow publishes that specification. The basic concept behind the API has existed in some form for the entire history of digital technology, as the interaction between unique programs and digital systems has been a primary objective for much of that technology's existence. But with the rise of the World Wide Web, and the subsequent turn-of-the-millennium dot-com boom, the incentive for this technology reached an unprecedented level.
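
As a minimal sketch of the request/delivery contract, in Python using only the standard library (the endpoint is hypothetical; any JSON-over-HTTP API follows the same shape):

    import json
    import urllib.request

    url = "https://api.example.com/v1/status"   # hypothetical endpoint

    # Request for processing ...
    with urllib.request.urlopen(url) as resp:
        # ... and data delivery, here as JSON.
        data = json.load(resp)

    print(data)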

Enterprise Network

An enterprise network is an enterprise's communications backbone that helps connect computers and related devices across departments and workgroup networks, facilitating insight and data accessibility. An enterprise network reduces the number of communication protocols in use, facilitating system and device interoperability as well as improved internal and external enterprise data management. An enterprise network is also known as a corporate network. The key purpose of an enterprise network is to eliminate isolated users and workgroups: all systems should be able to communicate and provide and retrieve information. Additionally, physical systems and devices should be able to maintain and provide satisfactory performance, reliability and security. Enterprise computing models are developed for this purpose, facilitating the exploration and improvement of established enterprise communication protocols and strategies. In scope, an enterprise network may include local and wide area networks (LAN/WAN), depending on operational and departmental requirements. An enterprise network can integrate all systems, including Windows and Apple computers and operating systems (OS), Unix systems, mainframes and related devices like smartphones and tablets. A tightly integrated enterprise network effectively combines and uses different device and system communication protocols.

I/O (Devices)

An input/output (I/O) device is a hardware device that can both accept data as input and produce or transmit processed data as output. It can acquire data from media as input to a computer or send computer data to storage media as output. Input devices provide input to a computer, while output devices provide a way for a computer to output data for communication with users or other computers. An I/O device is a device with both functionalities. Because I/O device data is bidirectional, such devices are usually categorized under storage or communications. Examples of I/O storage devices are CD/DVD-ROM drives, USB flash drives and hard disk drives. Examples of communication I/O devices are network adapters, Bluetooth adapters/dongles and modems.

PCI controller

Bus on the virtual motherboard that communicates with components such as hard disks and other devices.

Cache memory

Cache memory, also called CPU memory, is random access memory (RAM) that a computer microprocessor can access more quickly than it can access regular RAM. This memory is typically integrated directly with the CPU chip or placed on a separate chip that has a separate bus interconnect with the CPU. The basic purpose of cache memory is to store program instructions that are frequently re-referenced by software during operation. Fast access to these instructions increases the overall speed of the software program.

(System) call control

Call control is a function in a business telephone switch or PBX that routes telephone calls to the proper destination. Call control also maintains the connection between two endpoints of a call. It is one of the major categories of communications traffic in VoIP systems. Call control is also known as call processing. Call control is a feature of PBX systems which determines where calls are routed and maintains the connections. For example, call control might detect when a call has ended, or can restart a call if it was terminated abruptly. Other phone services, such as call waiting, are implemented in the call control system. Because PBX systems must be reliable, writing the software for call control can be a lengthy process. Call control is even more complex with the rise of VoIP and unified communications systems in the enterprise. In VoIP systems, call control uses the Q.931 protocol. Modern VoIP systems can include not only voice calls, but videoconferencing, which makes call control even more complicated than in traditional PBX systems.

CPU

Central Processing Unit - the unit that performs most of the processing inside a computer - controls instructions and data flow to and from other parts of the computer - relies heavily on a chip set, which is a group of microchips located on the motherboard.

Control Unit (CPU)

Component of CPU that extracts instructions from memory and decodes and executes them.

Arithmetic Logic Unit (ALU)(CPU)

Component of CPU that handles arithmetic and logical operations.

Host

Computer that uses virtualization software (ESX) to run virtual machines. Hosts provide the CPU and memory resources that VMs use and give VMs access to storage and network connectivity.

Datacenter

Container of objects like hosts and virtual machines

Kernel

Core component of the operating system. It uses interprocess communication and system calls, acting as a bridge between applications and the data processing performed at the hardware level. Handles disk management, task management, and memory management. When the OS is loaded into memory, the kernel is loaded first, and it remains in memory until the OS is shut down.
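
The bridging role is visible from user space: high-level calls bottom out in kernel system calls. A minimal sketch in Python on a Unix-like system (the file path is arbitrary):

    import os

    print(os.getpid())                     # getpid(): ask the kernel who we are

    fd = os.open("/tmp/demo.txt", os.O_CREAT | os.O_WRONLY)  # open()
    os.write(fd, b"hello kernel\n")        # write(): disk I/O done by the kernel
    os.close(fd)                           # close(): release the descriptor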

Data redundancy

Data redundancy is a condition created within a database or data storage technology in which the same piece of data is held in two separate places. This can mean two different fields within a single database, or two different spots in multiple software environments or platforms. Whenever data is repeated, this basically constitutes data redundancy. This can occur by accident, but is also done deliberately for backup and recovery purposes. Within the general definition of data redundancy, there are different classifications based on what is considered appropriate in database management, and what is considered excessive or wasteful. Wasteful data redundancy generally occurs when a given piece of data does not have to be repeated, but ends up being duplicated due to inefficient coding or process complexity. A positive type of data redundancy works to safeguard data and promote consistency. Many developers consider it acceptable for data to be stored in multiple places. The key is to have a central, master field or space for this data, so that there is a way to update all of the places where data is redundant through one central access point. Otherwise, data redundancy can lead to big problems with data inconsistency, where one update does not automatically update another field. As a result, pieces of data that are supposed to be identical end up having different values.

Configuration File

In computer science, configuration files provide the parameters and initial settings for the operating system and some computer applications. Configuration files are usually written in ASCII encoding and contain all necessary data about the specific application, computer, user or file. Configuration files can be used for a wide range of reasons, though they are mostly used by operating systems and applications to customize the environment. Configuration files are used for operating system settings, server processes and software applications. Configuration files are also known as config files, and can be identified by extensions such as .cnf, .cfg or .conf. Most computer applications and operating systems read their configuration files at bootup or startup, and certain applications periodically check the configuration files for changes. Administrators or authorized users can instruct applications to re-read the configuration files and apply any changes to the running process, or even to read arbitrary files as configuration files. There are no predefined conventions or standards for configuration files. Certain applications provide tools for modifying, creating or verifying the syntax of their configuration files, and some configuration files can be created, viewed or modified with a text editor. In Windows operating systems, the most important configuration data is stored in the Registry and in MIF files. System administrators can use configuration files to set policies for how applications should run on the enterprise's devices and computers, and users can use them to change settings without recompiling applications, programs or operating systems.
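
A minimal sketch in Python of reading an INI-style config file with the standard library; the section and keys are hypothetical:

    import configparser

    INI = "[server]\nhost = 0.0.0.0\nport = 8080\n"

    config = configparser.ConfigParser()
    config.read_string(INI)   # in practice: config.read("app.cfg")

    # Typed accessors parse the stored strings.
    print(config["server"]["host"])         # '0.0.0.0'
    print(config.getint("server", "port"))  # 8080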

TCP/IP

Transmission Control Protocol/Internet Protocol (TCP/IP) is the language a computer uses to access the Internet. It consists of a suite of protocols designed to establish a network of networks and provide a host with access to the Internet. TCP/IP is responsible for full-fledged data connectivity and transmitting data end-to-end by providing other functions, including addressing, mapping and acknowledgment. TCP/IP contains four layers, which differ slightly from the OSI model. Nearly all computers today support TCP/IP. TCP/IP is not a single networking protocol - it is a suite of protocols named after the two most important protocols or layers within it - TCP and IP. As with any form of communication, two things are needed: a message to transmit and the means to reliably transmit the message. The TCP layer handles the message part: the message is broken down into smaller units, called packets, which are then transmitted over the network, received by the corresponding TCP layer in the receiver and reassembled into the original message. The IP layer is primarily concerned with the transmission portion, which is handled by means of a unique IP address assigned to each and every active recipient on the network. TCP/IP is considered a stateless protocol suite because each client connection is newly made without regard to whether a previous connection had been established.
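
The packetizing and reassembly is hidden behind the socket interface. A minimal sketch in Python: open a TCP connection, send a request, read the reply (example.org is a public test host; any reachable TCP service behaves the same way):

    import socket

    with socket.create_connection(("example.org", 80)) as s:
        # TCP splits this message into packets, delivers and reorders them.
        s.sendall(b"HEAD / HTTP/1.0\r\nHost: example.org\r\n\r\n")
        reply = s.recv(4096)

    print(reply.splitlines()[0])   # e.g. b'HTTP/1.0 200 OK'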

Desktop Virtualization

Desktop virtualization is a virtualization technology that separates an individual's PC applications from his or her desktop. Virtualized desktops are generally hosted on a remote central server, rather than the hard drive of the personal computer. Because the client-server computing model is used in virtualizing desktops, desktop virtualization is also known as client virtualization. Desktop virtualization provides a way for users to maintain their individual desktops on a single, central server. The users may be connected to the central server through a LAN, WAN or over the Internet. Desktop virtualization has many benefits, including a lower total cost of ownership (TCO), increased security, reduced energy costs, reduced downtime and centralized management. Limitations of desktop virtualization include difficulty in maintenance and set up of printer drivers; increased downtime in case of network failures; complexity and costs involved in VDI deployment and security risks in the event of improper network management.

Daemon

Disk and execution monitor - a daemon is a background process run in multitasking operating systems, usually started at bootstrap time, to perform administrative tasks or monitor services. Common daemon processes include email handlers, print spoolers and other programs that perform OS administrative tasks. Daemons also perform specified operations at predefined times or in response to events. Unix daemon names generally have a "d" suffix; for example, "identd" refers to a daemon that provides the identity of a TCP connection. Daemon parent processes are often the initialization process: a process becomes a daemon by forking a child process and exiting the parent process, causing init to adopt the child process. Systems often start daemons at boot time to respond to network requests or hardware activity, or to perform other specified tasks. Daemons are also able to configure hardware and run scheduled tasks.

DRS

Distributed Resource Scheduler - a scheduling tool for virtualization. Hypervisor software links various virtual machines, each of which draws resources from a given pool. DRS helps provide resources in real time through active or passive allocation of virtualization resources like virtual memory and CPU.

DHCP

Dynamic Host Configuration Protocol - network management protocol used to dynamically assign IP addresses to new nodes entering the network.

DLL

Dynamic link library - a shared program module with ordered code, methods, functions, enumerations, and structures that may be dynamically called by an executing program during run time.
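
A minimal sketch in Python of the "dynamically called during run time" part, loading the C standard library with ctypes on a Unix-like system (on Windows the same pattern loads a .dll by name):

    import ctypes
    import ctypes.util

    # Resolve and load the C runtime shared library at run time.
    libc = ctypes.CDLL(ctypes.util.find_library("c"))

    # Call a symbol that was resolved dynamically, not at compile time.
    print(libc.abs(-42))   # 42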

FTP server

File transfer protocol - allows you to transfer files from one place to another - used to upload files from your computer onto your website - these servers "sit" on the internet and allow for larger file upload, and are normally included in a web server.

Port Selectors

Facilitates redundancy for two hosts connected to a single drive, allowing the second host to take over in the event of a primary host failure. A port selector is a 2-input-to-1-output SATA analog multiplexer for host controller failover applications; it can be used with various backplanes and cable lengths with minimal power consumption and package size.

Primary functions of a processor(4)

Fetch, decode, execute, write back
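
A toy illustration of the cycle in Python; the three-instruction machine is invented purely for this sketch:

    program = [("LOAD", 7), ("ADD", 5), ("HALT", 0)]

    acc, pc = 0, 0
    while True:
        op, arg = program[pc]   # fetch and (trivially) decode
        pc += 1
        if op == "LOAD":        # execute ...
            acc = arg           # ... and write back to the accumulator
        elif op == "ADD":
            acc += arg
        elif op == "HALT":
            break

    print(acc)   # 12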

Packet Filtering

Firewalls filter packets that attempt to enter or leave a network and either accept or reject them depending on the predefined set of filter rules.

GNU

GNU's Not Unix. UNIX-compatible OS - collection of software applications, libraries, and developer tools AND a program to allocate resources and communicate with the hardware, or the kernel. Main components: GNU compiler collection, GNU C library, GNU Emacs text editor, GNOME desktop environment.

HA

High Availability - durable systems with continuous operation made possible through failover processes, RAID memory, and the automation of functions like rebooting VMs on stable hosts when failure occurs.

Motherboard components

I/O ports, peripheral connections, PCI expansion slots, bus and power connections, heat sinks and mounting points for fans and major components (like the CPU), supporting chipset for the CPU bus and external components, BIOS, memory sockets for RAM, ROM, and cache, interconnecting circuitry

Thread

In computer programming, a thread is placeholder information associated with a single use of a program that can handle multiple concurrent users. From the program's point of view, a thread is the information needed to serve one individual user or a particular service request. If multiple users are using the program, or concurrent requests from other programs occur, a thread is created and maintained for each of them. The thread allows a program to know which user is being served as the program is alternately re-entered on behalf of different users. (One way thread information is kept is by storing it in a special data area and putting the address of that data area in a register. The operating system always saves the contents of the register when the program is interrupted and restores it when it gives the program control again.)
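
A minimal sketch in Python: one thread per simulated user, each with its own stack and local variables (the user names are hypothetical):

    import threading

    def serve(user):
        # Each thread keeps its own call stack, so users don't interfere.
        print(f"serving {user} on {threading.current_thread().name}")

    threads = [threading.Thread(target=serve, args=(u,)) for u in ("alice", "bob")]
    for t in threads:
        t.start()
    for t in threads:
        t.join()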

Error log

In computer science, an error log is a record of critical errors encountered by an application, operating system or server while in operation. Common entries in an error log include table corruption and configuration corruption. Error logs in many cases serve as extremely useful tools for troubleshooting and managing systems, servers and even networks. Error logs for different applications, operating systems, networks or servers are set up in different ways: some are configured to capture every single error that occurs in the system, whereas others are designed to selectively store error information pertaining to specific error codes. Some error logs capture only certain information about the error, whereas others are programmed to capture all available information, such as timestamp, system information, user location and user entry. In many cases, access to error logs requires special administrative rights, serving as a security measure that keeps unauthorized users from seeing the error documentation or details. Error logs are useful in many respects. In the case of servers and office networks, error logs track issues faced by users and help in root-cause analysis of those issues; a network or system administrator can resolve errors more quickly and easily with the information available from the error logs. For webmasters, error log analysis provides information about the issues users encounter, so issues can be resolved proactively before anyone reports them. Error logs can also provide insights on hacking attempts, as most hacking attempts on systems and servers result in errors with a high probability of being captured in error logs as the hackers attempt to compromise the system.
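
A minimal sketch in Python of writing an error log with timestamps (the file path is arbitrary):

    import logging

    logging.basicConfig(
        filename="error.log",    # hypothetical location
        level=logging.ERROR,
        format="%(asctime)s %(levelname)s %(message)s",
    )

    try:
        1 / 0
    except ZeroDivisionError:
        # Records the message plus the full traceback for troubleshooting.
        logging.exception("calculation failed")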

ITIL

Information technology infrastructure library - a widely accepted best-practices framework for IT service management (ITSM) - includes practices, checklists, tasks, and procedures documenting the role of the ITSM function.

IrDA

Infrared (IR) is a wireless mobile technology used for device communication over short ranges. IR communication has major limitations because it requires line-of-sight, has a short transmission range and is unable to penetrate walls. IR transceivers are quite cheap and serve as short-range communication solutions. Because of IR's limitations, communication interception is difficult; in fact, Infrared Data Association (IrDA) device communication is usually exchanged on a one-to-one basis, so data transmitted between IrDA devices is normally unencrypted. IR-enabled devices are known as IrDA devices because they conform to standards set by the Infrared Data Association (IrDA). IR light-emitting diodes (LEDs) are used to transmit IR signals, which pass through a lens and focus into a beam of IR data. The beam source is rapidly switched on and off for data encoding. The IR beam data is received by an IrDA device equipped with a silicon photodiode, which converts the IR beam into an electric current for processing. Because ambient IR light changes much more slowly than a rapidly pulsating IrDA signal, the silicon photodiode can filter the IrDA signal out of the ambient IR. IrDA transmitters and receivers are classified as directed and non-directed: a transmitter or receiver that uses a focused and narrow beam is directed, whereas one that uses an omnidirectional radiation pattern is non-directed.

Internet Protocol (IP)

Internet Protocol (IP) is the principal set (or communications protocol) of digital message formats and rules for exchanging messages between computers across a single network or a series of interconnected networks, using the Internet Protocol Suite (often referred to as TCP/IP). Messages are exchanged as datagrams, also known as data packets or just packets. IP is the primary protocol in the Internet Layer of the Internet Protocol Suite, which is a set of communications protocols consisting of four abstraction layers: link layer (lowest), Internet layer, transport layer and application layer (highest). The main purpose and task of IP is the delivery of datagrams from the source host (source computer) to the destination host (receiving computer) based on their addresses. To achieve this, IP includes methods and structures for putting tags (address information, which is part of metadata) within datagrams. The process of putting these tags on datagrams is called encapsulation.
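
Encapsulation can be illustrated with a toy header: prepend address metadata ("tags") to a payload. Real IPv4 headers carry many more fields (version, TTL, checksum, and so on); this Python sketch packs only the two addresses, using RFC 5737 documentation addresses:

    import socket
    import struct

    src = socket.inet_aton("192.0.2.1")       # 4-byte source address
    dst = socket.inet_aton("198.51.100.7")    # 4-byte destination address
    payload = b"hello"

    # Encapsulate: tag the payload with its addressing metadata.
    datagram = struct.pack("!4s4s", src, dst) + payload
    print(datagram.hex())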

ISP

Internet service provider - sells services and provides connectivity to everybody else, especially the end consumer.

Microkernel

Kernel that defines a simple abstraction over hardware, using primitives or system calls to implement minimal OS services such as multitasking, memory management, and inter-process communication.

Linux OS

The Linux kernel is the defining component of the OS. Linux supports the following file systems: Ext2, Ext3, Ext4, JFS, ReiserFS, XFS, Btrfs, FAT, FAT32, NTFS. It has Bash (the Bourne-Again Shell) as its text-mode interface; this is the Linux default shell, and it can support multiple command-line interfaces. The OS family is GNU, and it is programmed in the C language.

Live Migration

Live migration is the process of transferring a live virtual machine from one physical host to another without disrupting its normal operation. Live migration enables the porting of virtual machines and is carried out in a systematic manner to ensure minimal operational downtime. Live migration is generally performed when the host physical computer/server needs maintenance or updating, or when the VM must be switched between different hosts. To start, the data in the virtual machine's memory is transferred to the target physical machine. Once the memory copying process is complete, an operational resource state consisting of CPU, memory and storage is created on the destination machine. After that, the virtual machine is suspended on the original site and copied and started on the destination machine along with its installed applications. The whole process incurs a minimal downtime of a few seconds during migration - chiefly while copying the remaining memory content - which can be reduced further by techniques such as pre-paging and modeling the probability density function of memory accesses.

Load balancing

Load balancing is an even division of processing work between two or more computers and/or CPUs, network links, storage devices or other devices, ultimately delivering faster service with higher efficiency. Load balancing is accomplished through software, hardware or both, and it often uses multiple servers that appear to be a single computer system (also known as computer clustering). Management of heavy Web traffic relies on load balancing, which is accomplished either by assigning each request from one or more websites to a separate server, or by balancing work between two servers with a third server, which is often programmed with varied scheduling algorithms to determine each server's work. Load balancing is usually combined with failover (the ability to switch to a backup server in case of failure) and/or data backup services.
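
The simplest scheduling algorithm is round robin. A toy sketch in Python with hypothetical server names:

    import itertools

    servers = ["app1:8080", "app2:8080", "app3:8080"]   # hypothetical backends
    pool = itertools.cycle(servers)

    for request_id in range(5):
        # Each request goes to the next server in rotation.
        print(f"request {request_id} -> {next(pool)}")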

Processor

Logic circuitry that responds to and processes the basic instructions that drive a computer

MCSE

Microsoft Certified Systems Engineer (MCSE) is an IT professional who is certified in Microsoft Windows NT and 2000 operating systems (OS), Microsoft BackOffice Server products, networking and related desktop computer systems. In the IT industry, the MCSE certification serves as proof that an individual has the abilities, skills and knowledge required to administer certain IT roles. The MCSE certification is one of many Microsoft certifications that may be obtained by passing a set of exams designed to test proficiency on a combination of complementary Microsoft products.

master-slave model

Master/slave is a model of communication for hardware devices in which one device has unidirectional control over one or more other devices. It is often used in electronic hardware, where one device acts as the controller while the other devices are the ones being controlled; in short, one is the master and the others are slaves controlled by the master. The most common example of this is the master/slave configuration of IDE disk drives attached to the same cable, where the master is the primary drive and the slave is the secondary drive. The master/slave model is commonly used in the technology industry, not just in electronics but in mechanics as well. In electronic technology, it is often used to simplify communication: instead of having a separate interface to communicate with each disk drive, most of them can be connected via one interface and cable, and the computer only has to communicate with the one drive serving as the master; any control command is simply propagated down to the slaves from the master. In mechanical technology, the term can refer to the configuration of motors, such as two motors connected to different drives acting on the same load: one drive is defined as the master, handling the speed and control of the load, while the slave is there to help increase the torque. Pneumatic and hydraulic systems also have master cylinders, which feed pressure to and control the slave cylinders.

Parallel Port

Most commonly used to connect a computer with a printer

vms0

NSX software virtual machine

NAT

Network address translation (NAT) is a router function that bridges private and public networks, allowing many devices on a private network to communicate through a single public IP address. While there are many private networks worldwide, the supply of public IPv4 addresses is limited. NAT was introduced as an effective, timely solution to heavy network volume traffic. There are more than 350 million Internet users and approximately 100 million hosts; users want to connect with each other, but IPv4 has a limited number of individual IP addresses to handle that client volume. NAT was introduced to resolve this problem: it manages multiple client requests from a private network over the one public IP address required by public networks. At NAT's center is the router, which hides the actual private network addresses and readdresses traffic with a public IP address. To external networks, this new address may appear to be that of the router, although this is not always the case.
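
A toy sketch of the translation table at the heart of NAT: many private (address, port) pairs share one public address and are told apart by the public-side port. The addresses below come from the RFC 1918 private and RFC 5737 documentation ranges:

    PUBLIC_IP = "203.0.113.5"   # documentation address standing in for the router

    nat_table = {}
    next_public_port = 40000

    def translate(private_ip, private_port):
        global next_public_port
        key = (private_ip, private_port)
        if key not in nat_table:        # first packet of this flow
            nat_table[key] = next_public_port
            next_public_port += 1
        return (PUBLIC_IP, nat_table[key])

    print(translate("192.168.0.12", 52311))   # ('203.0.113.5', 40000)
    print(translate("192.168.0.99", 52311))   # ('203.0.113.5', 40001)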

Network Infrastructure

Network infrastructure is the hardware and software resources of an entire network that enable network connectivity, communication, operations and management of an enterprise network. It provides the communication path and services between users, processes, applications, services and external networks/the internet. Network infrastructure is typically part of the IT infrastructure found in most enterprise IT environments. The entire network infrastructure is interconnected, and can be used for internal communications, external communications or both. A typical network infrastructure includes networking hardware (routers, switches, LAN cards, wireless routers, cables), networking software (network operations and management, operating systems, firewalls, network security applications) and network services (T-1 lines, DSL, satellite, wireless protocols, IP addressing).

Network redundancy

Network redundancy is a process through which additional or alternate instances of network devices, equipment and communication mediums are installed within network infrastructure. It is a method for ensuring network availability in case of a network device or path failure and unavailability. As such, it provides a means of network failover. Network redundancy is primarily implemented in enterprise network infrastructure to provide a redundant source of network communications. It serves as a backup mechanism for quickly swapping network operations onto redundant infrastructure in the event of unplanned network outages. Typically, network redundancy is achieved through the addition of alternate network paths, which are implemented through redundant standby routers and switches. When the primary path is unavailable, the alternate path can be instantly deployed to ensure minimal downtime and continuity of network services.

NSP

Network service provider - a business entity that provides/sells services such as network access into its backbone infrastructure or access to its network access points (to the internet).

Network Virtualization

Network virtualization refers to the management and monitoring of an entire computer network as a single administrative entity from a single software-based administrator's console. Network virtualization also may include storage virtualization, which involves managing all storage as a single resource. Network virtualization is designed to allow network optimization of data transfer rates, flexibility, scalability, reliability and security. It automates many network administrative tasks, which actually disguise a network's true complexity. All network servers and services are considered one pool of resources, which may be used without regard to the physical components. Network virtualization is especially useful for networks experiencing a rapid, large and unpredictable increase in usage. The intended result of network virtualization is improved network productivity and efficiency, as well as job satisfaction for the network administrator. Network virtualization involves dividing available bandwidth into independent channels, which are assigned, or reassigned, in real time to separate servers or network devices. Network virtualization is accomplished by using a variety of hardware and software and combining network components. Software and hardware vendors combine components to offer external or internal network virtualization. The former combines local networks, or subdivides them into virtual networks, while the latter configures single systems with containers, creating a network in a box. Still other software vendors combine both types of network virtualization.

Storage

Non-volatile storage: hard drives, solid-state drives. Retains its content even when the power is turned off.

Server Operating System

OS created to be more robust and to operate in higher traffic environments and for very long, continuous time periods - more complicated to work with, less UI if any at all, less navigation/service tools than desktop OS has.

RabbitMQ

Open-source message broker - helps globally share and monitor messages in a multi-client environment and enable communication between different connected systems.

Firewall components

Packet Filtering, Application Gateway, Circuit-Level Gateway, Proxy Servers, Stateful Inspection or Dynamic Packet Filtering

Key properties of VMs

Partitioning, isolation, encapsulation, hardware independence

Node

Point of intersection/connection within a network - major center where internet traffic is typically routed.

Isolation

Provides fault isolation and security at the hardware level; preserves performance with advanced resource controls.

Authentication

Process that ensures and confirms a user's identity. The user proves access rights and identity (usernames, passwords).
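
A minimal sketch in Python of the password side of authentication: store a salted hash, never the password itself, and compare in constant time (the iteration count and parameters are illustrative only):

    import hashlib
    import hmac
    import os

    salt = os.urandom(16)
    stored = hashlib.pbkdf2_hmac("sha256", b"s3cret", salt, 100_000)

    def verify(attempt: bytes) -> bool:
        candidate = hashlib.pbkdf2_hmac("sha256", attempt, salt, 100_000)
        return hmac.compare_digest(candidate, stored)

    print(verify(b"s3cret"))   # True
    print(verify(b"guess"))    # False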

Proxy Servers

Proxy servers can mask real network addresses and intercept every message that enters or leaves a network.

PKI

Public key infrastructure - authentication that uses digital certificates to prove a user's identity (RSA authentication)

Puppet

Puppet is an open source systems management tool for centralizing and automating configuration management. Configuration management is the detailed recording and updating of information that describes an enterprise's hardware and software. Puppet has two layers: a configuration language to describe how the hosts and services should look, and an abstraction layer that allows the administrator to implement the configuration on a variety of platforms, including Unix, Linux, Windows and OS X. Administrators can encode the configuration of a service as a policy, which Puppet then monitors and enforces. Puppet is written in Ruby and uses its own domain specific language (DSL) for creating and managing modules. The basic version of Puppet configuration management, which is called Open Source Puppet, is available directly from Puppet's website and is licensed under the Apache 2.0 license. Puppet Enterprise has additional functionality including orchestration, role-based access control (RBAC) and compliance reporting.

RAID memory

Redundant array of independent disks (RAID) is a method of storing duplicate data on two or more hard drives. It is used for data backup, fault tolerance, improved throughput, increased storage functions and enhanced performance. RAID is attained by combining two or more hard drives and a RAID controller into a logical unit. The OS sees RAID as a single logical hard drive, called a RAID array. There are different levels of RAID, each distributing data across the hard drives with its own attributes and features. Originally there were five levels, but RAID has advanced to many levels, with numerous nonstandard and nested levels. The levels are numbered RAID 0, RAID 1, RAID 2, etc. They are standardized by the Storage Networking Industry Association and are defined in the Common RAID Disk Data Format (DDF) standard data structure. RAID is mostly used for data protection, keeping two copies of the data, one on each drive; it is often used in high-end servers and some small workstations. When RAID duplicates data, the physical disks form a RAID array that the OS reads as one single disk instead of multiple disks. The RAID objective for each disk is to provide better input/output (I/O) operations and enhanced data reliability. RAID levels can be individually defined or be nonstandard, as well as nested levels combining two or more basic levels of RAID.
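
Parity-based RAID levels rest on XOR: the parity block is the XOR of the data blocks, so any single lost block can be rebuilt from the survivors. A toy sketch in Python:

    block_a = b"\x10\x20\x30"
    block_b = b"\x0f\x0e\x0d"
    parity  = bytes(a ^ b for a, b in zip(block_a, block_b))

    # Simulate losing block_a: XOR the survivors to reconstruct it.
    rebuilt = bytes(p ^ b for p, b in zip(parity, block_b))
    assert rebuilt == block_a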

Resource Pooling

Resource pooling is an IT term used in cloud computing environments to describe a situation in which providers serve multiple clients, customers or "tenants" with provisional and scalable services. These services can be adjusted to suit each client's needs without any changes being apparent to the client or end user. The idea behind resource pooling is that through modern scalable systems involved in cloud computing and software as a service (SaaS), providers can create a sense of infinite or immediately available resources by controlling resource adjustments at a meta level. This allows customers to change their levels of service at will without being subject to any of the limitations of physical or virtual resources. The kinds of services that can apply to a resource pooling strategy include data storage services, processing services and bandwidth provided services. Other related terms include rapid elasticity, which also involves the dynamic provisioning of services, and on-demand self-service, where customers could change their levels of service without actually contacting a service provider. All of this automated service provisioning is a lot like other kinds of business process automation, which replaced more traditional, labor-intensive strategies with new innovations that rely on increasingly powerful virtual networks and data handling resources. In these cases, the goal is to separate the client experience from the actual administration of assets, so that the process of delivery is opaque and the services seem to be automatically and infinitely available.

Partitioning

Running multiple operating systems on one physical machine - divide the system resources between virtual machines

Encapsulation

Save the entire state of a virtual machine to files - move and copy virtual machines as easily as moving and copying files

SSH

Secure shell - cryptographic protocol and interface for executing network services, shell services, and secure network communication with a remote computer - lets users log on to a remote computer and perform shell and network services.
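
A rough sketch of running one remote command over SSH from Python by shelling out to the standard client (the host and user are hypothetical, and key-based authentication is assumed to be configured):

    import subprocess

    result = subprocess.run(
        ["ssh", "admin@example.com", "uptime"],   # hypothetical remote host
        capture_output=True,
        text=True,
    )
    print(result.stdout)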

SATA

Serial Advanced Technology Attachment II (SATA II) is the second generation of computer bus interfaces used to connect motherboard host adapters to high-capacity storage devices, such as hard/optical/tape drives. SATA is a successor to the parallel Integrated Drive Electronics (IDE)/Advanced Technology Attachment (ATA) interface technologies. SATA II runs at 3.0 Gbps - a throughput rate that nearly doubled the initial SATA specification of 1.5 Gbps. The SATA II standard delivers additional improvements to SATA, provided in increments. SATA II was introduced in 2002 to provide higher data transfer rates (DTR) for server and network storage requirements. Subsequent SATA II releases focused on enhanced cabling, failover capabilities and higher signal speeds.

Server redundancy

Server redundancy refers to the amount and intensity of backup, failover or redundant servers in a computing environment. It defines the ability of a computing infrastructure to provide additional servers that may be deployed at runtime for backup, load balancing or temporarily halting a primary server for maintenance purposes. Server redundancy is implemented in an enterprise IT infrastructure where server availability is of paramount importance. To enable server redundancy, a server replica is created with the same computing power, storage, applications and other operational parameters. A redundant server is kept offline. That is, it powers on with network/Internet connectivity but is not used as a live server. In case of failure, downtime or excessive traffic at the primary server, a redundant server can be brought online to take the primary server's place or share its traffic load.

Server virtualization

Server virtualization is a virtualization technique that involves partitioning a physical server into a number of small, virtual servers with the help of virtualization software. In server virtualization, each virtual server runs its own operating system instance, so a single physical machine hosts multiple instances at the same time. Typical enterprise data centers contain a huge number of servers. Many of these servers sit idle as the workload is distributed to only some of the servers on the network. This results in a waste of expensive hardware resources, power, maintenance and cooling requirements. Server virtualization attempts to increase resource utilization by partitioning physical servers into several virtual servers, each running its own operating system and applications. Server virtualization makes each virtual server look and act like a physical server, multiplying the capacity of every single physical machine. The concept of server virtualization is widely applied in IT infrastructure as a way of minimizing costs by increasing the utilization of existing resources. Virtualizing servers is often a good solution for small- to medium-scale applications. This technology is widely used for providing cost-effective web hosting services.

ServiceNow

ServiceNow is a company that provides service management software as a service. It specializes in IT services management (ITSM), IT operations management (ITOM) and IT business management (ITBM). The company's offerings are centered on the creation of what it calls a "service model" that corrects the root cause of service issues and enables self-service. ServiceNow's tasks, activities and processes occur as cloud services, overseen as part of a comprehensive managed workflow that supports real-time communication, collaboration and resource sharing. ServiceNow has service management offerings for IT, human resources, security, customer service, software development, facilities, field service, marketing, finance and legal enterprise needs.

SNMP (agent)

Simple Network Management Protocol (SNMP) is a set of protocols for network management and monitoring. These protocols are supported by many typical network devices such as routers, hubs, bridges, switches, servers, workstations, printers, modem racks and other network components and devices. Supported devices are all network-attached items that must be monitored to detect conditions that must be addressed for proper, appropriate and ongoing network administration. SNMP standards include an application layer protocol, a set of data objects and a methodology for storing, manipulating and using data objects in a database schema. The SNMP protocol is included in the application layer of TCP/IP as defined by the Internet Engineering Task Force (IETF). Typically, the Simple Network Management Protocol uses one or several administrative computers, called managers, which oversee groups of networked computers and associated devices. A constantly running software program, called an agent, feeds information to the managers by way of SNMP. The agents create variables out of the data and organize them into hierarchies. The hierarchies, along with metadata such as the type and description of each variable, are described by management information bases (MIBs) - hierarchical virtual databases of network objects. The three key components of a network managed by SNMP are the managed devices (routers, servers, switches, etc.), software agents, and a network management system (NMS). There may be more than one NMS on a given managed network.
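
A minimal sketch of a manager polling an agent, assuming the Net-SNMP command-line tools and an agent that accepts the "public" community string (hostname is a placeholder):

    # read one object (the system description) from the agent's MIB
    snmpget -v2c -c public router.example.com sysDescr.0
    # walk the agent's entire system subtree
    snmpwalk -v2c -c public router.example.com system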

SCSI Port

Small computer system interface port - used to connect up to 7 devices, such as hard disks, tape drives and printers, to the same port - these can support higher data transmission speeds than serial or parallel ports

Shell

Software that provides an interface for an operating system's users, giving access to the kernel's services. Can be invoked through the shell command in the command-line interface (CLI). For some OSes, the shell can be considered a place where applications run in protected memory space with resources shared among multiple shells.

Storage vs Memory

Storage is non-volatile: it holds programs and data until they are purposely changed or removed by the user. Memory is volatile: it is a temporary workspace for retrieving programs and processing data.

A+

The A+ certification is a basic certification that demonstrates proficiency with computer hardware and operating systems (OS). It is governed by the nonprofit trade association CompTIA. The A+ certification helps prove the recipient's proficiency with the use of computers and related devices. Core elements of the A+ certification criteria include knowledge of computer anatomy, which is why many experts suggest that those pursuing this credential practice assembling and disassembling a physical computer. Other areas involve operating systems and knowledge of Microsoft products. Those seeking A+ certification also should be knowledgeable about certain tasks, like booting up a computer with various installed operating systems. In addition to hardware configuration aspects, the A+ test also covers computer data usage elements, such as the basic structure of binary data and various aspects of file input/output (I/O). Test prep materials and other resources showing specific A+ certification test topics are available.

AMQP

The Advanced Message Queuing Protocol (AMQP) is an open standard that provides complete functional interoperability for business message communication between organizations or applications. The protocol helps in connecting systems and in providing business processes with the required data; it is also capable of transmitting instructions to achieve those goals. The protocol brings great benefits to organizations, such as savings through commoditization, open standard-based connections to business partners, and connections between different applications working on different platforms. The Advanced Message Queuing Protocol was designed to be open, standardized, reliable, interoperable and secure. It helps in connecting organizations, times, spaces and technologies. The protocol is binary, with features like negotiation, multichannel communication, portability, efficiency and asynchronous messaging. It is commonly split into two layers, namely, a functional layer and a transport layer. The functional layer defines the commands for functioning on the part of the application, whereas the transport layer carries the different techniques such as framing, channel multiplexing, data representation, etc., between the server and the application. The Advanced Message Queuing Protocol provides some key features that are beneficial for organizations as well as for applications. Rapid and guaranteed message delivery, as well as reliability and message acknowledgments, are the main features of the protocol. These abilities help in the distribution of messages in a multi-client environment, in the delegation of time-consuming tasks and in letting a server tackle immediate requests faster. The protocol also has the capability to globally share and monitor updates and to enable communication between different connected systems. Another advantage of the protocol is full asynchronous functionality for systems, as well as improved reliability and better uptime with regard to application deployments.

DOS

The Microsoft Disk Operating System (MS-DOS) is an operating system developed for PCs with x86 microprocessors. It is a command-line-based system, where all commands are entered in text form and there is no graphical user interface. MS-DOS was the most commonly used member of the family of disk operating systems. It was the main choice as an operating system for IBM PC-compatible computer systems from the 1980s to the mid-1990s. MS-DOS was gradually replaced by systems with graphical user interfaces, particularly Microsoft Windows.

Application Gateway

The application gateway technique applies security mechanisms to specific applications, such as Telnet and File Transfer Protocol (FTP) servers

Client-server model

The client-server model is a distributed communication framework of network processes among service requestors (clients) and service providers. The client-server connection is established through a network or the Internet. The client-server model is a core network computing concept that also underpins email exchange and Web/database access. Web technologies and protocols built around the client-server model include Hypertext Transfer Protocol (HTTP), Domain Name System (DNS), Simple Mail Transfer Protocol (SMTP) and Telnet. Clients include Web browsers, chat applications and email software, among others. Servers include Web, database, application, chat and email servers, etc. A server manages most processes and stores all data. A client requests specified data or processes. The server relays process output to the client. Clients sometimes handle processing, but require server data resources for completion. The client-server model differs from a peer-to-peer (P2P) model, in which communicating systems act as both client and server, each with equal status and responsibilities. The P2P model is decentralized networking; the client-server model is centralized networking. One client-server drawback is that too many client requests can overwhelm a server and lead to improper functioning or total shutdown. Hackers often use such tactics to terminate specific organizational services through distributed denial-of-service (DDoS) attacks.
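
A quick way to watch the request/response exchange, assuming the curl client and any reachable web server (URL is a placeholder):

    # the client sends an HTTP request; -v prints both the request sent
    # and the response the server relays back
    curl -v http://example.com/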

Network Hub

These are common connection points for network devices, which connect segments of a LAN (local area network) and may contain multiple ports - an interface for connecting network devices such as printers, storage devices, workstations and servers. A data packet arriving at one hub's port may be copied to other ports allowing all segments of the network to have access to the data packet.

Switching Hub

These hubs actually read the attributes of each unit of data. The data is then forwarded to the correct or intended port.

Passive Hub

These only serve as paths or conduits for data passing from one device, or network segment, to another.

Hot Plugging

This feature helps users add, change or remove storage devices even while the computer is running - the ability to replace or install a device without shutting down the attached computer. Hot plugging is used when a peripheral device is added or removed; when a device or working system requires reconfiguration; when a defective component requires replacement; or when a device and computer require data synchronization. Hot swapping allows easy access to equipment and the convenience of uninterrupted systems.

Stateful Inspection or Dynamic Packet Filtering

This method compares not just the header information, but also a packet's most important inbound and outbound data parts. These are then compared to a trusted information database for characteristic matches. This determines whether the information is authorized to cross the firewall into the network.
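
On Linux, the iptables connection-tracking module applies this idea; a hedged sketch of typical rules:

    # accept packets belonging to connections the firewall has already vetted
    iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
    # allow only new inbound SSH connections
    iptables -A INPUT -p tcp --dport 22 -m conntrack --ctstate NEW -j ACCEPT
    # drop anything that matches neither rule
    iptables -P INPUT DROP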

Wireless

Today, smartphones can be used as data modems, creating a wireless access point for a personal computer's internet connection or connection to a proprietary network - nearly all cell phones support the Hayes command set standard, allowing the phone to appear as an external modem when connected via USB, serial cable, IrDA infrared or Bluetooth wireless - WiFi and WiMAX standards may also be used for wireless firewall, serial, or USB modems operating at microwave frequencies ... these modems may attach to a desktop, laptop, or a PDA.

USB Port

Universal Serial Bus port - ports used to connect many devices, including all previously mentioned plus keyboards, scanners, external hard drives, cameras, cell phones, and other peripheral devices

Native Command Queuing (NCQ)

Usually, the commands reach a disk for reading or writing from different locations on the disk. When the commands are carried out in the order in which they arrive, a substantial amount of mechanical overhead is generated because of the constant repositioning of the read/write head. SATA II drives make use of an algorithm to identify the most effective order in which to carry out commands. This helps to reduce mechanical overhead and improve performance. Native command queuing (NCQ) is a technology enabling SATA hard drives to accept more than one command at a time by optimizing the order in which read and write commands are executed. This increases the performance of the drive by limiting the number of drive head movements when multiple read/write requests are queued. NCQ replaces tagged command queuing (TCQ), which is used with parallel ATA (PATA). The manner in which TCQ interacts with the operating system (OS) taxes the CPU in return for little performance gain. NCQ must be supported and enabled both in the hard drive and in the SATA host bus adapter, and the proper driver must be loaded into the OS. Some OSs include the required generic drivers (such as Windows Vista and Windows 7), whereas others, like Windows XP, require vendor-specific drivers to be loaded to enable NCQ. NCQ may also be used in solid-state drives (SSDs) - drives that store data in non-volatile memory chips and contain no moving parts. Here, latency (the delay in processing commands) is found on the host, not on the drive. The drive uses NCQ to ensure it has commands to process while the host adapter is processing CPU tasks.
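
On Linux, NCQ support can be checked through sysfs and the kernel log; a sketch with a placeholder device name:

    # a queue depth greater than 1 suggests NCQ is active for this drive
    cat /sys/block/sda/device/queue_depth
    # SATA link messages in the kernel log also report NCQ capability
    dmesg | grep -i ncq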

.NVRAM file

VM BIOS or EFI configuration file

.VMX file

VM configuration file

.VMSN file

VM snapshot data file

.VMSD file

VM snapshot metadata file

.VMSS file

VM suspend file

.VSWP file

VM swap file

VMware ESX(i)

VMware ESX is an enterprise-level product developed by VMware Inc. that is used for server virtualization. It runs without requiring an existing operating system on the physical machine. VMware ESX is embedded hypervisor software and is available in two versions: ESX Server and ESXi Server. VMware is one of the largest companies producing different kinds of software and products for virtualization. VMware products play a role in providing virtualization deployment in the IT industry, helping make IT infrastructure more reliable, flexible and accessible compared to traditional hardware-based IT solutions. VMware ESX and ESXi can be installed as part of the VMware infrastructure to allow centralized administration of enterprise desktops and data center applications.

EVC

VMware EVC is a feature in VMware vSphere virtualization that allows virtual machines (VMs) to move between ESX/ESXi hosts on different CPUs. VMware EVC stands for VMware Enhanced vMotion Compatibility. VMware EVC hides relevant CPU features that do not match across all vMotion-enabled hosts, such as clock speed or number of cores. This feature works for different versions of CPUs from the same chipmaker. VMware EVC cannot enable vMotions between AMD and Intel processors, however.

vCloud Director

VMware vCloud Director (vCD) is deployment, automation and management software for virtual infrastructure resources in multi-tenant cloud environments. VMware vCD is available to cloud service providers to make infrastructure services, such as storage, security and virtualized networking, available as catalog-based services to internal users through a Web portal. VMware vCD features policy controls to apply pre-determined limits on users to regulate the consumption of resources and restrict access.

vSphere

VMware vSphere is the brand name for VMware's suite of virtualization products. VMware vSphere, which is a necessary component of the vCloud Suite for cloud computing, includes:
VMware ESXi - abstracts processor, memory, storage, and other resources into multiple virtual machines (VMs).
VMware vCenter Server - central control point for data center services such as access control, performance monitoring and alarm management.
VMware vSphere Client - allows users to remotely connect to ESXi or vCenter Server from any Windows PC.
VMware vSphere Web Client - allows users to remotely connect to vCenter Server from a variety of Web browsers and operating systems (OSes).
VMware vSphere SDKs - provide interfaces for accessing vSphere components.
vSphere Virtual Machine File System (VMFS) - provides a high-performance cluster file system for ESXi VMs.
vSphere Virtual SMP - allows a single virtual machine to use multiple physical processors at the same time.
vSphere vMotion - allows live migration for powered-on virtual machines in the same data center.
vSphere Storage vMotion - allows virtual disks or configuration files to be moved to a new data store while a VM is running.
vSphere High Availability (HA) - allows virtual machines to be restarted on other available servers.
vSphere Distributed Resource Scheduler (DRS) - divides and balances computing capacity for VMs dynamically across collections of hardware resources.
vSphere Storage DRS - divides and balances storage capacity and I/O across collections of data stores dynamically.
vSphere Fault Tolerance - provides continuous availability.
vSphere Distributed Switch (VDS) - allows VMs to maintain network configurations as the VMs migrate across multiple hosts.
Host Profiles - provides a way to create user-defined configuration policies.

Remote Access Server

VPN servers that allow you to connect to a server to get access to the rest of the network

VMK

Virtual Machine Kernel - virtualization infrastructure (hypervisor) that enables, emulates and provides for the creation of VMs on operating systems - supports multiple guest operating system images - allocates separate virtualized computing resources for each virtual machine (processing, storage, memory)

.VMDK file

Virtual disk characteristics file

Virtualization

Virtualization refers to the creation of a virtual resource such as a server, desktop, operating system, file, storage or network. The main goal of virtualization is to manage workloads by radically transforming traditional computing to make it more scalable. Virtualization has been a part of the IT landscape for decades now, and today it can be applied to a wide range of system layers, including operating system-level virtualization, hardware-level virtualization and server virtualization. The most common form is operating system-level virtualization, in which it is possible to run multiple operating systems on a single piece of hardware. Virtualization technology involves separating the physical hardware and software by emulating hardware using software. When a different OS is operating on top of the primary OS by means of virtualization, it is referred to as a virtual machine. A virtual machine is nothing but a data file on a physical computer that can be moved and copied to another computer, just like a normal data file. The computers in the virtual environment use two types of file structures: one defining the hardware and the other defining the hard drive. The virtualization software, or the hypervisor, offers caching technology that can be used to cache changes to the virtual hardware or the virtual hard disk for writing at a later time. This technology enables a user to discard changes made to the operating system, allowing it to boot from a known state. Virtualization can be categorized into different layers: desktop, server, file, storage and network. Each layer of virtualization has its own set of advantages and complexities. The technology offers many benefits, including low- or no-cost deployment, full resource utilization, operational cost savings and power savings. However, deploying virtualization technology requires careful planning and skilled technical experts. Since the virtual machines share the same physical resources, heavy contention can lead to slow performance.

Memory

Volatile. Made up of RAM chips. RAM loses all of its content when the power is turned off.

Bourne Shell

a UNIX shell (command processor) that is used for scripting. Known for the "sh" command and the dollar sign ($) used in its command prompt. This shell also executes commands and functions that are predefined or integrated, files that follow a command path, and text file commands.

Hybrid Client

a client application that can do most of its processes on its own but may rely on a server for critical data or for storage.

Thick/Fat Client

a client application that can do most of its processing and does not necessarily rely on a central server BUT may need to connect with one for some information, uploading, or to update data or the program itself.

Thin Client

a client application with minimal functions that uses the resources provided by the host computer - its job is to display the results processed by a server; it relies on the server to do most or all of the processing.

Zenoss

a commercial, open-source software suite for monitoring and managing physical, virtual, and cloud-based IT infrastructure - virtual, physical, and cloud monitoring from a single agent-less product ... automated discovery and monitoring of physical and virtual devices ... dynamic service impact analysis to maintain operational health ... automated root-cause analysis.

File Server

a computer on a network that has files that others on the network can access

Print Server

a computer with a printer attached that can share out that printer to other computers on the network

Web Server

a computer with special software installed in it that allows the computer to present a website to users trying to access the server computer

Port (General)

a connection point or interface between a computer and an external or internal device

Router

a device that analyzes the contents of data packets transmitted within a network or to another network - routers determine whether the source and destination are on the same network or whether data must be transferred from one network type to another, which requires encapsulating the data packet with routing protocol header information for the new network type. When several routers are used in a collection of interconnected networks, they exchange and analyze information, and then build a table of the preferred routes and the rules for determining routes and destinations for that data. As a network interface, routers convert computer signals from one standard protocol to another that's more appropriate for the destination network. Large routers determine interconnectivity within an enterprise, between enterprises and the Internet, and between different internet service providers (ISPs); small routers determine interconnectivity for office or home networks. ISPs and major enterprises exchange routing information using Border Gateway Protocol (BGP)
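
The routing table such a device builds can be inspected and extended on any Linux host acting as a router; a sketch with placeholder addresses:

    # show the current routing table
    ip route show
    # send traffic for a remote subnet through a specific gateway
    ip route add 10.20.0.0/16 via 192.168.1.1
    # a default route catches everything not matched by a more specific entry
    ip route add default via 192.168.1.254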

Wireless router

a device that enables wireless network packet forwarding and routing, and serves as an access point in a local area network. It works much like a wired router but replaces wires with wireless radio signals to communicate within and to external network environments. It can function as a switch and as an Internet router and access point.

Bluecoat proxy

a device that ensures real-time web protection with advanced security features including web security, WAN optimization, personal security, service provider caching, etc.

Gzip

a file format and a software application used for file compression and decompression
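
For example (file name is a placeholder):

    # compress a file in place (produces notes.txt.gz, removes the original)
    gzip notes.txt
    # decompress it again
    gunzip notes.txt.gz
    # keep the original by writing the compressed stream to stdout
    gzip -c notes.txt > archive.gz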

Hub (Networking)

a hardware device that relays communication data. A hub sends data packets (frames) to all devices on a network, regardless of any MAC addresses contained in the data packet. A switch is different than a hub in that it keeps a record of all MAC addresses of all connected devices. Thus, it knows which device or system is connected to which port. When a data packet is received, the switch immediately knows which port to send it to. Unlike a hub, a 10/100 Mbps switch will allocate the full 10/100 Mbps to each of its ports, and users always have access to the maximum bandwidth - a huge advantage of a switch over a hub.

Core Switch

a high-capacity switch generally positioned within the backbone or physical core of a network. Core switches serve as the gateway to a wide area network (WAN) or the Internet - they provide the final aggregation point for the network and allow multiple aggregation modules to work together. A core switch is also known as a tandem switch or a backbone switch. In a public WAN, a core switch interconnects edge switches that are positioned on the edges of related networks. In a local area network (LAN), this switch interconnects work group switches, which are relatively low-capacity switches that are usually positioned in geographic clusters. As the name implies, a core switch is central to the network and needs to have significant capacity to handle the load sent to it. There isn't a precise definition of how powerful this must be, but it is clearly much bigger than an average desktop switch.

Switch

a high-speed device that receives incoming data packets and redirects them to their destination on a local area network (LAN). A LAN switch operates at the data link layer (Layer 2) or the network layer of the OSI Model and, as such, it can support all types of packet protocols. Essentially, switches are the traffic cops of a simple local area network. A switch in an Ethernet-based LAN reads incoming TCP/IP data packets/frames containing destination information as they pass into one or more input ports. The destination information in the packets is used to determine which output ports will be used to send the data on to its intended destination. Switches are similar to hubs, only smarter. A hub simply connects all the nodes on the network - communication is essentially haphazard, with any device trying to communicate at any time, resulting in many collisions. A switch, on the other hand, creates an electronic tunnel between source and destination ports for a split second that no other traffic can enter. This results in communication without collisions. Switches are similar to routers as well, but a router has the additional ability to forward packets between different networks, whereas a switch is limited to node-to-node communication on the same network.

Virtual Disk

a large physical file, or a set of files, that can be copied, moved, archived, and backed up as easily as any other file.

Wireless Modem

a modem that bypasses the telephone system and connects directly to a wireless network, through which it can directly access the internet connectivity provided by an ISP

Modem

a network device that both modulates and demodulates analog carrier signals (sine waves) for encoding and decoding digital information for processing - responsible for sending and receiving digital information between personal computers

Ping

a network diagnostic tool used to test the connectivity between two nodes or devices
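
For example, on a Unix-like system (target host is a placeholder; -c sets the echo-request count, whereas Windows uses -n):

    # send four ICMP echo requests and report round-trip times and packet loss
    ping -c 4 server.example.com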

OpenStack

a platform for developing, deploying and hosting cloud computing solutions using open source software. The OpenStack project is a suite of open source software, services and standards, primarily designed for IaaS cloud offerings, and provides five different solution stacks: OpenStack compute, object storage, image service, identity and dashboard. The primary objective behind the OpenStack project is to create a global standard and software stack for developing cloud solutions, helping cloud providers and end users alike. The project aims to build a unified cloud operating platform, where all the participating organizations build cloud solutions that are not only scalable, elastic and secure but also globally accessible, executable and portable to other platforms without any vendor lock-in.

Power port (Hard drive)

a port that is connected by a cable, which carries the power the hard drive needs from the computer's power supply

Shell Script

a small computer program that is designed to be run or executed in sequence by the UNIX shell (CLI). Basically, it is a set of commands that the shell in a UNIX-based OS follows - the commands in the script can contain parameters and sub-commands that tell the shell what to do. Shell scripts are used for repetitive tasks that are time-consuming.
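
A minimal sketch of such a script automating a repetitive backup task (paths are hypothetical):

    #!/bin/sh
    # back up the directory passed as the first parameter (default: /home)
    SRC=${1:-/home}
    # name the archive after today's date
    tar -czf "/tmp/backup-$(date +%F).tar.gz" "$SRC"
    echo "backup of $SRC complete"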

Virtual Switch (vSwitch)

a software application that allows communication between virtual machines. A vSwitch does more than just forward data packets; it intelligently directs the communication on a network by checking data packets before moving them to a destination. Virtual switches are usually embedded into installed software, but they may also be included in a server's hardware as part of its firmware. A virtual switch is completely virtual and can connect to a network interface card (NIC). The vSwitch merges physical switches into a single logical switch. This helps to increase bandwidth and create an active mesh between server and switches. A virtual switch is meant to provide a mechanism to reduce the complexity of network configuration. This is achieved by reducing the number of switches that need to be managed after taking the network size, data packets and architecture into account. Because a virtual switch is intelligent, it can also ensure the integrity of the virtual machine's profile, which includes network and security settings. This is a big help to network administrators, as moving virtual machines across physical hosts can be time-consuming and pose security risks.

Module

a software component or part of a program that contains one or more routines - one or more independently developed modules make up a program - lets programmers use pre-written code for new applications - allows programmers to focus on only one area of the functionality.

Heartbeat cable

a specific piece of hardware used to connect multiple servers in a process called failover. In this kind of setup, the heartbeat cable handles a "pulse", or recurring signal from the first server to the second server. If the first server encounters a problem, the second server can be programmed to assist when the heartbeat from the cable is interrupted. Various kinds of connections can be programmed in different ways. In some cases, a failover event triggered by a missing pulse on a heartbeat cable connection can send a message to technicians or generate some other warning. This sort of failover connection allows for automated takeover of tasks that a given piece of hardware cannot complete. Failover setups are often part of emergency planning by businesses and organizations that want to protect productivity from natural disasters or other kinds of threats to operations at a given site. Part of the issue with using a heartbeat cable involves accurate configuration. In many cases, a cable can be connected from "RF out" to "RF in" on two servers, but there may be complications. Installers may need to read relevant manuals to figure out, for example, how to ensure a particular display result for hardware status.

Edge Switch

a switch located at the meeting point of two networks. These switches connect end-user local area networks (LANs) to Internet service provider (ISP) networks. Edge switches can be routers, routing switches, integrated access devices (IADs), multiplexers and a variety of MAN and WAN devices that provide entry points into enterprise or service provider core networks. Edge switches are also referred to as access nodes or service nodes. Edge switches are located closer to client machines than the backbone of the network. They query route servers for address resolution when destination stations are outside attached LANs. Edge devices also convert LAN frames into asynchronous transfer mode (ATM) cells and vice versa. They set up a switched virtual circuit in an ATM network, map the LAN frames into ATM frames and forward traffic to the ATM backbone. As such, they perform functions associated with routers and become major components in a LAN environment with an ATM backbone. On the other hand, edge devices also translate between different types of protocols. For instance, Ethernet uses an asynchronous transfer mode backbone to connect to other core networks. These networks send data in cells and use connection-oriented virtual circuits. IP networks are packet-oriented, so if ATM is used as a core, packets will be encapsulated in cells and the destination address converted to a virtual circuit identifier. Edge switches for WANs are multiservice units supporting a wide variety of communication technologies, including Integrated Services Digital Networks (ISDNs), frame relays, T1 circuits and ATMs. Edge switches also provide enhanced services such as virtual private networking support, VoIP and quality of service (QoS).

Batch Script

a text file that contains certain commands that are executed in sequence - used to simplify certain repetitive tasks or routines in an OS ... also used in complex network and system administration - the commands in a batch script (file) are executed by a special interface called a shell.

Shell Variable

a variable that is only available to the current shell. A shell is the operating system's command interpreter - it processes the commands entered on the command line or read from a shell script file.
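
For example, in a Bourne-style shell:

    # assign a shell variable (no spaces around =) and read it back with $
    greeting="hello"
    echo "$greeting, world"
    # until exported, the variable is visible only to the current shell
    export greeting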

VMware NSX

a virtual networking and security software product family - provisions virtual networking environments without command line interfaces or other direct administrator intervention - abstracts network operations from the underlying hardware onto a distributed virtualization layer (like server virtualization for processing power and operating systems) - this software works to expose logical firewalls, switches, routers, ports, and other networking elements to allow virtual networking among vendor-agnostic hypervisors, cloud management systems, and associated network hardware.

Image

an exact replica of the contents of a storage device (like a hard drive) stored on a second storage device

Docker

an open platform that helps with the universal distribution of applications. It has become a standard for certain types of container virtualization systems and has been adopted by various companies as a software container strategy. It is a tool that helps ship code to servers in efficient ways. Docker deals with complex software stacks and distributed hardware infrastructure, helping IT people avoid difficulties in bridging development, QA and production, as well as what Docker founder Solomon Hykes calls the "matrix from hell" - a situation where developers have to look closely at every type of distribution over every type of hardware and software scenario. The philosophy behind Docker is to help provide universal execution, using inherent Linux kernel properties that offer support for easier application handling. For example, instead of utilizing methods that would allow for library interdependency or other difficulties, Docker provides a smooth separation or "sandboxing," where a given library is installed multiple times in different containers so that each individual library instance is not interdependent with any other.
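
A hedged sketch of the basic workflow, assuming the docker CLI is installed (image and container names are placeholders):

    # fetch an image and start an isolated container from it
    docker run -d --name web -p 8080:80 nginx
    # each container has its own sandboxed filesystem and libraries
    docker exec web ls /
    # stop and remove the container when done
    docker stop web && docker rm web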

Debian

an open-source and free operating system (OS) that offers a graphical user interface (GUI). It incorporates GNU project tools and capabilities and is packaged with thousands of software applications for easy installation and execution. It may be used as a desktop, server or embedded OS and supports a number of processor frameworks, including Intel, AMD and ARM. Default Debian installation packages are bundled with utility/development tools, communications/email software, networking services and other applications designed for desktops and servers.

Load balancer

any software or hardware device that facilitates the load balancing process for most computing appliances, including computers, network connections and processors. It enables the optimization of computing resources, reduces latency and increases output and the overall performance of a computing infrastructure. A load balancer is primarily implemented in computer networking processes that distribute and manage loads across several devices, resources and services to increase network performance. A load balancer is implemented through software and hardware. A software load balancer may be a DNS load balancing solution, software-based switch or router that evenly balances network traffic between different devices and network connections. Similarly, hardware-based load balancers are in the form of physical switches, routers or servers that manage the workload distribution within several devices to reduce or normalize overall load.

Tomcat

application server from the Apache Software Foundation that executes Java servlets and renders web pages that include JavaServer Pages (JSP) coding

Avamar

backup and recovery software and system that facilitates fast, efficient daily full backups using integrated variable-length deduplication technology

Internal port

connection point between the computer and a hard drive or other internal component

External port

connection point between the computer and modems, printers, mice, and other external devices

Failover

constant capability to automatically and seamlessly switch to a highly reliable backup. Can be done redundantly or in a stand-by operational mode upon the failure of a primary server, application, or other primary system component. Failover for a server uses a heartbeat cable connecting two servers - when the pulse between the servers changes, the secondary server is triggered to take over the primary server's work and alert the data center.

.LOG file

current VM log file

vCenter

data center management server application developed by VMware Inc. to monitor virtualized environments. vCenter Server provides centralized management and operation, resource provisioning and performance evaluation of virtual machines residing on a distributed virtual data center. VMware vCenter Server is designed primarily for vSphere, VMware's platform for building virtualized cloud infrastructures. vCenter Server is installed at the primary server of a virtualized data center and operates as the virtualization or virtual machine manager for that environment. It also provides data center administrators with a central management console to manage all the system's virtual machines. vCenter provides statistical information about the resource use of each virtual machine and provisions the ability to scale and adjust the compute, memory, storage and other resource management functions from a central application.

ECC RAM

error-correcting code memory - continuously checks data read from and written to RAM, detecting and correcting single-bit errors on the fly so that corrupted data does not propagate

Components often integrated into the Motherboard

external storage, video display, sound controllers, disk controllers, graphic controllers, sound card output, Ethernet network controller, USB controller, IrDA controller, temp/voltage, fan-speed sensors, etc...

Cluster

groups of hosts that share resources in a resource pool

5 pillars of information assurance

confidentiality, integrity, availability, authentication, and non-repudiation

Virtual serial port

interface for connecting peripherals to the VM. It can connect to a physical serial port, to a file on the host computer, or over the network - it can establish a direct connection between two VMs or between a VM and an application on the host computer.

Virtual parallel port

interface for connecting peripherals to the virtual machine

Monolithic Kernel

kernel in which all operating system services run along the kernel's main thread and reside in the same memory area, which provides powerful and rich hardware access.

Disk Storage

A magnetic disk is a storage device that uses a magnetization process to write, rewrite, and access data. It is covered in a magnetic coating and stores data in the form of tracks, spots, and sectors. It consists of a rotating magnetic surface and a mechanical arm that moves over it to read and write data using a magnetization process.

Floating point unit (FPU)

math (numeric) co-processor - specialized co-processor that manipulates numbers more quickly than the basic microprocessor circuitry can.

Server consolidation

maximizing the use of server resources and reducing the amount of servers required through server virtualization

Data port (Hard drive)

port that uses either a serial advanced technology attachment (SATA) or an advanced technology attachment (ATA) interface, which connects to the computer's hard drive, enabling communication with the motherboard.

Drive (Internal Hard Drive)

primary storage device located in a computer system - usually contains pre-installed software applications, the OS, and other files - makes use of two main ports - contains all the computer's vital applications and the user's personal files.

Hardware Independence

provision or migrate any virtual machine to any physical server

Email server

servers (software) specially designed to route email - these take all the email from a given organization and they route the mail files to the right locations.

Firewall

software used to maintain the security of a private network. Firewalls block unauthorized access to or from private networks and are often employed to prevent unauthorized Web users or illicit software from gaining access to private networks connected to the Internet. A firewall may be implemented using hardware, software, or a combination of both. A firewall is recognized as the first line of defense in securing sensitive information. For better safety, the data can be encrypted.

Server Hardware

special hardware for continuous server operation, capable of handling large amounts of data processing

JumpBox (Jump Server)

special purpose computer (server) on a network used to manage devices in a separate security zone - device that spans two dissimilar security zones and provides a controlled means of access between them.

Database server

store data and allow outside applications to put data into the database, or to pull data out, or to manipulate data - normally resides in a web server (or alongside the web server).

Hard Disk

stores the VM's OS, program files, and other data associated with its activities.

vpxa

the vCenter agent (ESX side)

vpxd

the vCenter daemon (VC side)

Virtual Memory

the amount of memory which applications that are running inside the VM have available to them

Motherboard

the computer's main circuit board - its components are attached to a fixed planar surface

Hub (Computing)

the connection point in a computer device where data from many directions converge and are then sent out in many directions to respective devices. A hub may also act as a switch by preventing specific data packets from proceeding to a destination. In addition to receiving and transmitting communication data, a hub may also serve as a switch. For example, an airport acts much like a hub in the sense that passengers converge there and head out in many different directions. Suppose that an airline passenger arrives at the airport hub and is then called back home unexpectedly, or receives instructions to change his or her destination. The same may occur with a computing hub when it acts as a switch by preventing specific data packets from proceeding to a destination, while sending other data packets on a specific route. Where packets are sent depends on attributes (MAC addresses) within the data packets. A switch may also act as a hub.

Cloud Computing

the delivery of shared computing resources on demand through the internet

Virtual Disk Image (VDI)

the image of a virtual hard disk - the logical disk associated with a virtual machine - replicates a VM's hard disk so it can be used later for backup or restoration, or copied to a new VM.

Internet Routing

the process of transmitting and routing IP packets over the Internet between two or more nodes. It is the same as standard routing procedures but incorporates packet routing techniques and processes on external networks or those that are hosted or Internet enabled. It utilizes IP-based networks, but mainly those which are publicly accessible such as that of ISPs.

Client

the receiving end of a service or the requester of a service in a client-server model type of system. A client can be a simple application or a whole system that accesses services provided by a server.

When an application runs...

the software and data are copied from the storage to memory (RAM), and the memory is where all calculating and comparing takes place.

Serial port

used for connections to mice and modems

vcpdb

vCenter server database?

vccmt

vCloud connector node

vcddb

vCloud director database?

