Unit 3 Linux Boot Process


GRUB preface (some more information)

- A bootloader is the first software program that runs when a computer starts. It is responsible for loading and transferring control to the Linux kernel. The kernel, in turn, initializes the rest of the operating system.
- The name GRUB officially refers to version 2 of the software, see [2]. If you are looking for the article on the legacy version, see GRUB Legacy.
- GRUB supports Btrfs as root (without a separate /boot file system) compressed with either zlib or LZO.
- GRUB does not support F2FS as root, so you will need a separate /boot with a supported file system.

Runlevels and their purposes

0 - A transitional runlevel, meaning that it's used to shift the computer from one state to another. Specifically, it shuts down the system. On modern hardware, the computer should completely power down. If not, you're expected to either reboot the computer manually or power it off.
1, s, or S - Single-user mode. What services, if any, are started at this runlevel varies by distribution. It's typically used for low-level system maintenance that may be impaired by normal system operation, such as resizing partitions.
2 - On Debian and its derivatives, a full multi-user mode with X running and a graphical login. Most other distributions leave this runlevel undefined.
3 - On Fedora, Mandriva, Red Hat, and most other distributions, a full multi-user mode with a console (non-graphical) login screen.
4 - Usually undefined by default and therefore available for customization.
5 - On Fedora, Mandriva, Red Hat, and most other distributions, the same behavior as runlevel 3 with the addition of having X run with an XDM (graphical) login.
6 - Used to reboot the system. This runlevel is also a transitional runlevel. Your system is completely shut down, and then the computer reboots automatically.

If you run into peculiar runlevel numbers, consult /etc/inittab; it defines them and often contains comments explaining the various runlevels.
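As a hedged illustration (the kernel path and root device are placeholders, not values from this document), you can override the default runlevel for a single boot by appending the desired runlevel number to the kernel line at the GRUB prompt:

kernel /boot/vmlinuz-2.6.32 ro root=/dev/sda1 1

Appending 1 (or, on many distributions, the word single) boots into single-user mode just once, without changing the default in /etc/inittab.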

General Linux Boot Process

1. BIOS
BIOS stands for Basic Input/Output System. It performs some system integrity checks, then searches for, loads, and executes the boot loader program. It looks for the boot loader on floppy, CD-ROM, or hard drive. You can press a key (typically F12 or F2, but it depends on your system) during the BIOS startup to change the boot sequence. Once the boot loader program is detected and loaded into memory, the BIOS gives control to it. So, in simple terms, the BIOS loads and executes the MBR boot loader.

2. MBR
MBR stands for Master Boot Record. It is located in the first sector of the bootable disk, typically /dev/hda or /dev/sda. The MBR is 512 bytes in size and has three components: 1) primary boot loader info in the first 446 bytes, 2) partition table info in the next 64 bytes, and 3) the MBR validation check in the last 2 bytes. It contains information about GRUB (or LILO on old systems). So, in simple terms, the MBR loads and executes the GRUB boot loader.

3. GRUB
GRUB stands for Grand Unified Bootloader. If you have multiple kernel images installed on your system, you can choose which one to execute. GRUB displays a splash screen and waits for a few seconds; if you don't enter anything, it loads the default kernel image as specified in the GRUB configuration file. GRUB has knowledge of the filesystem (the older Linux loader LILO didn't understand filesystems). The GRUB configuration file is /boot/grub/grub.conf (/etc/grub.conf is a link to it). The following is a sample grub.conf from CentOS:

#boot=/dev/sda
default=0
timeout=5
splashimage=(hd0,0)/boot/grub/splash.xpm.gz
hiddenmenu
title CentOS (2.6.18-194.el5PAE)
        root (hd0,0)
        kernel /boot/vmlinuz-2.6.18-194.el5PAE ro root=LABEL=/
        initrd /boot/initrd-2.6.18-194.el5PAE.img

As you can see from the above, it contains kernel and initrd images. So, in simple terms, GRUB just loads and executes the kernel and initrd images.

4. Kernel
The kernel mounts the root file system as specified by the "root=" argument in grub.conf and executes the /sbin/init program. Since init is the first program executed by the Linux kernel, it has the process ID (PID) of 1. Do a 'ps -ef | grep init' and check the PID. initrd stands for Initial RAM Disk. It is used by the kernel as a temporary root file system until the kernel has booted and the real root file system is mounted. It also contains the necessary drivers compiled inside, which let the kernel access the hard drive partitions and other hardware.

5. Init
Init looks at the /etc/inittab file to decide the Linux run level. The following run levels are available:
0 - halt
1 - Single user mode
2 - Multiuser, without NFS
3 - Full multiuser mode
4 - unused
5 - X11
6 - reboot
Init identifies the default run level from /etc/inittab and uses it to load all the appropriate programs. Execute 'grep initdefault /etc/inittab' on your system to identify the default run level. If you want to get into trouble, you can set the default run level to 0 or 6; since you know what 0 and 6 mean, you probably won't do that. Typically you would set the default run level to either 3 or 5.

6. Runlevel programs
When the Linux system is booting up, you might see various services getting started. For example, it might say "starting sendmail .... OK". Those are the runlevel programs, executed from the run level directory defined by your run level. Depending on your default run level setting, the system will execute the programs from one of the following directories.
Run level 0 - /etc/rc.d/rc0.d/
Run level 1 - /etc/rc.d/rc1.d/
Run level 2 - /etc/rc.d/rc2.d/
Run level 3 - /etc/rc.d/rc3.d/
Run level 4 - /etc/rc.d/rc4.d/
Run level 5 - /etc/rc.d/rc5.d/
Run level 6 - /etc/rc.d/rc6.d/
Please note that there are also symbolic links to these directories directly under /etc, so /etc/rc0.d is linked to /etc/rc.d/rc0.d. Under the /etc/rc.d/rc*.d/ directories, you will see programs whose names start with S or K. Programs starting with S are used during startup (S for startup); programs starting with K are used during shutdown (K for kill). The numbers right next to S and K in the program names are the sequence numbers in which the programs should be started or killed. For example, S12syslog starts the syslog daemon and has the sequence number 12; S80sendmail starts the sendmail daemon and has the sequence number 80. So the syslog program will be started before sendmail; a sample directory listing appears below. There you have it. That is what happens during the Linux boot process. http://www.thegeekstuff.com/2011/02/linux-boot-process/
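To see this layout on a real SysV system, list one of the runlevel directories. The listing below is a hedged illustration; the exact script names and sequence numbers will differ on your system:

# ls /etc/rc.d/rc3.d/
K35smb  S12syslog  S80sendmail  S99local

Each entry is a symbolic link back to the real script in /etc/init.d (or /etc/rc.d/init.d).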

GRUB and Linux Boot Loaders

After the BIOS initializes the hardware and finds the first device to boot, the boot loader takes over. On a normal Linux server this will be the GRUB program, although in the past a different program called LILO was also used. GRUB is normally used when you boot from a hard drive, while systems that boot from USB, CD-ROM, or the network might use syslinux, isolinux, or pxelinux, respectively, as their boot loader instead of GRUB. Although the specifics of syslinux and other boot loaders are different from GRUB, they all essentially load some sort of software and read a configuration file that tells them what operating systems they can boot, where to find their respective kernels, and what settings to give the system as it boots.

When GRUB is loaded, a small bit of code (what it calls stage 1) is executed from the MBR. Since you can fit only 446 bytes of boot code into the MBR (the rest contains your partition table), GRUB's stage 1 code is just enough for it to locate the rest of the code on disk and execute that. The next stage of GRUB code allows it to access Linux file systems, and it uses that ability to read and load a configuration file that tells it what operating systems it can boot, where they are on the disk, and what options to pass them. In the case of Linux, this might include a number of different kernel versions on the disk and often includes special rescue modes that can help with troubleshooting. Usually the configuration file also describes some kind of menu you can use to see and edit all of your boot options.

On most modern systems GRUB can display a nice splash screen, sometimes with graphics and often with a countdown. Usually you will see a menu that gives you a list of operating systems you can boot from (Figure 3-1), although sometimes you have to press a key like Esc (or Shift with GRUB 2) to see the menu. GRUB also allows you to view and edit specific boot-time settings, which can be handy during troubleshooting since you can fix mistakes that you might have made in your GRUB configuration without a rescue disk.

Can't Mount the Root File System

Apart from GRUB errors, one of the most common boot problems is not being able to mount the root file system. After GRUB loads the kernel and initrd file into RAM, the initrd file is expanded into an initramfs temporary root file system in RAM. This file system contains kernel modules and programs the kernel needs to locate and mount the root file system and continue the boot process. To troubleshoot any problem in which the kernel can't mount the root file system, it's important to understand how the kernel knows where the root file system is to begin with. (You can get a partition's UUID by typing "blkid".)

The Root Kernel Argument

The kernel knows where the root file system is because of the root option passed to it by GRUB. If you were to look in a GRUB configuration file for the line that contains kernel arguments, you might see something like root=/dev/sda2, root=LABEL=/, or root=UUID=528c6527-24bf-42d1-b908-c175f7b06a0f. In the first example, the kernel is given an explicit disk partition, /dev/sda2. This method of specifying the root device is most common on older systems and has been replaced with either disk labels or UUIDs, because any time a disk is added or repartitioned, it's possible that what was once /dev/sda2 is now /dev/sdb2 or /dev/sda3.

To get around the problem of device names changing, distributions started labeling partitions with their mount point, so the root partition might be labeled / or root and the /home partition might be labeled home or /home. Then, instead of specifying the device on the root= line, in GRUB you would specify the device label, such as root=LABEL=/. That way, if the actual device names changed, the labels would still remain the same and the kernel would be able to find the root partition.

Labels seemed to solve the problem of device names changing but introduced a different problem: what happens when two partitions are labeled the same? What started happening is that someone would add a second disk to a server that used to be in a different system. This new disk might have its own / or /home label already, and when added to the new system, the kernel might not end up mounting the labels you thought it should. To get around this issue, some distributions started assigning partitions UUIDs (Universally Unique Identifiers). UUIDs are long strings of characters that are guaranteed to be unique across all disk partitions in the world, so you can add any disk to your system and feel confident that you will never have the same UUID twice. Now, instead of specifying a disk label at the boot prompt, you would specify a UUID like root=UUID=528c6527-24bf-42d1-b908-c175f7b06a0f.

The Root Device Changed

One of the most common reasons a kernel can't mount the root partition is that the root partition it was given has changed. When this happens you might get an error along the lines of "ALERT! /dev/sdb2 does not exist" and then get dropped to a basic initramfs shell. On systems that don't use UUIDs, this is most often because a new disk was added and the device names have switched around (so, for instance, your old root partition was on /dev/sda2 and now it's on /dev/sdb2). If you know that you have added a disk to the system recently, go to the GRUB menu and press E to edit the boot arguments. If you notice that you set the root argument to a disk device, experiment with changing the device letter. So, for instance, if you have root=/dev/sda2, change it to root=/dev/sdb2 or root=/dev/sdc2.
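As a hedged illustration of where this argument lives, a kernel line in a GRUB 1 configuration file might look like the following; the kernel version is a placeholder, and the UUID simply reuses the example above:

kernel /boot/vmlinuz-2.6.32 ro root=UUID=528c6527-24bf-42d1-b908-c175f7b06a0f

Changing the root= portion of this line, whether permanently in the configuration file or temporarily from the GRUB menu, is how you point the kernel at a different root device.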
If you aren't sure how your disk devices have been assigned, you might need to boot into a rescue disk and then look through the output of a command like fdisk -l as the root user to see all of the available partitions on the system. Here's some example output of fdisk -l that shows two disks, /dev/sda and /dev/sdb. The /dev/sda disk has three partitions: /dev/sda1, /dev/sda2, and /dev/sda3; /dev/sdb has only one: /dev/sdb1.

# fdisk -l

Disk /dev/sda: 11.6 GB, 11560550400 bytes
4 heads, 32 sectors/track, 176400 cylinders
Units = cylinders of 128 * 512 = 65536 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0009c896

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1       76279     4881840   83  Linux
/dev/sda2           76280       91904     1000000   82  Linux swap / Solaris
/dev/sda3           91905      168198     4882816   83  Linux

Disk /dev/sdb: 52.4 GB, 52429848576 bytes
4 heads, 32 sectors/track, 800016 cylinders
Units = cylinders of 128 * 512 = 65536 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000c406f

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1      762924    48827120   83  Linux

If you find that the system does boot correctly once you change to a different device, then you can edit your GRUB configuration file (/boot/grub/grub.conf or /boot/grub/menu.lst for GRUB 1; most GRUB 2 systems use UUIDs and auto-detect the root partition) and change the root= line permanently. If the device did change, you will probably need to change the same entry in /etc/fstab as well.

For systems that use partition labels, the inability to mount the root file system might be caused either by a disk being added that has a partition with the same label, or by an administrator who changed the root partition label. The best way to diagnose disk label problems is to edit the root= line and, instead of a label, specify the disk device itself. Again, if you don't know how your disks are laid out, boot into a rescue disk and type fdisk -l. If you find that you are able to successfully boot once you set root to a disk device instead of a label, you can either update your GRUB configuration file to use the disk device instead of a label, or you can use the e2label program to change the partition label of your root partition back to what it should be. So, for instance, to assign a label of / to /dev/sda2, you would type:

e2label /dev/sda2 /

In the case of duplicate labels, use e2label as well to rename the duplicate root partition to something else. You can type e2label with just the disk device name (like e2label /dev/sda2) to display what the current label is set to.

If your system uses UUIDs and the kernel can't find the root partition, it's possible that the UUID changed. Normally the UUID is assigned when a partition is formatted, so it is unusual for this to happen to a root partition. That said, it often happens when someone clones a system based on one that uses UUIDs. When they create the root partition for the cloned system, it gets a new UUID, yet when they copy over the GRUB configuration files, those still specify the old UUID. As with disk label problems, a quick way to troubleshoot this issue is to edit the boot prompt at the GRUB menu and change the root= line to specify a specific device instead of a UUID.
If you find you get further along in the boot process that way, then you can use the blkid command to see the UUID that's assigned to a particular device:

$ sudo blkid -s UUID /dev/sda2
/dev/sda2: UUID="528c6527-24bf-42d1-b908-c175f7b06a0f"

Once you know what the UUID should be, you can then edit your GRUB configuration file (and /etc/fstab) so that it references the proper UUID.

The Root Partition Is Corrupt or Failed

The other main reason why a kernel may not be able to mount the root file system is that it is corrupt or the disk itself has completely failed. When a file system gets mounted, if errors are detected on the file system, it will automatically start a repair process; however, in many cases the corruption is significant enough that the boot process will drop you to a basic shell so you can manually attempt to repair the file system. If your boot process gets to this state, go to the Repair Corrupted File Systems section of Chapter 4 for details on how to correct the errors. If you fear that your disk has completely failed, check out Chapter 10, which talks about how to diagnose hardware issues.

Basics of the /etc/inittab File

Entries in /etc/inittab follow a simple format. Each line consists of four colon-delimited fields:

id:runlevels:action:process

Each of these fields has a specific meaning:

Identification Code - The id field consists of a sequence of one to four characters that identifies its function.
Applicable Runlevels - The runlevels field consists of a list of runlevels for which this entry applies. For instance, 345 means the entry is applicable to runlevels 3, 4, and 5.
Action to Be Taken - Specific codes in the action field tell init how to treat the process. For instance, wait tells init to start the process once when entering a runlevel and to wait for the process's termination, and respawn tells init to restart the process whenever it terminates (which is great for login processes). Several other actions are available; consult the man page for inittab for details.
Process to Run - The process field is the process to run for this entry, including any options and arguments that are required.

The part of /etc/inittab that tells init how to handle each runlevel looks like this:

l0:0:wait:/etc/init.d/rc 0
l1:1:wait:/etc/init.d/rc 1
l2:2:wait:/etc/init.d/rc 2
l3:3:wait:/etc/init.d/rc 3
l4:4:wait:/etc/init.d/rc 4
l5:5:wait:/etc/init.d/rc 5
l6:6:wait:/etc/init.d/rc 6

These lines start with codes that begin with an l (a lowercase letter L, not a number 1) followed by the runlevel number—for instance, l0 for runlevel 0, l1 for runlevel 1, and so on. These lines specify scripts or programs that are to be run when the specified runlevel is entered. In this example, all the scripts are the same (/etc/init.d/rc), but the script is passed the runlevel number as an argument. Some distributions call specific programs for certain runlevels, such as shutdown for runlevel 0.
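For instance, a typical login-terminal entry uses the respawn action so that init restarts getty after every logout. This is a hedged sketch; the getty program, baud rate, and tty names vary by distribution:

1:2345:respawn:/sbin/getty 38400 tty1

Here the id is 1, the entry applies to runlevels 2 through 5, respawn is the action, and /sbin/getty 38400 tty1 is the process to run.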

Changing Runlevels on a Running System

Sometimes you may want to change runlevels on a running system. You might do this to get more services, such as going from a console to a graphical login runlevel, or to shut down or reboot your computer. This can be accomplished with the init (or telinit), shutdown, halt, reboot, and poweroff commands.

Changing Runlevels with init or telinit

The init process is the first process run by the Linux kernel, but you can also use it to have the system reread the /etc/inittab file and implement changes it finds there, or to change to a new runlevel. The simplest case is to have it change to the runlevel you specify. For instance, to change to runlevel 1 (the runlevel reserved for single-user or maintenance mode), you would type this command:

# init 1

To reboot the system, you can use init to change to runlevel 6 (the runlevel reserved for reboots):

# init 6

A variant of init is telinit. This program can take a runlevel number just like init to change to that runlevel, but it can also take the Q or q option to have the tool reread /etc/inittab and implement any changes it finds there. Thus, if you've made a change to the runlevel in /etc/inittab, you can immediately implement that change by typing telinit q.

Changing runlevels with shutdown

Although you can shut down or reboot the computer with init, doing so has some problems. One issue is that it's simply an unintuitive command for this action. Another is that changing runlevels with init causes an immediate change to the new runlevel. This may cause other users on your system some aggravation because they'll be given no warning about the shutdown. Thus, it's better to use the shutdown command in a multi-user environment when you want to reboot, shut down, or switch to single-user mode. This command supports extra options that make it friendlier in such environments.

The shutdown program sends a message to all users who are logged into your system and prevents other users from logging in during the process of changing runlevels. The shutdown command also lets you specify when to effect the runlevel change so that users have time to exit editors and safely stop other processes they may have running. When the time to change runlevels is reached, shutdown signals the init process for you. In the simplest form, shutdown is invoked with a time argument like this:

# shutdown now

This changes the system to runlevel 1, the single-user or maintenance mode. The now parameter causes the change to occur immediately. Other possible time formats include hh:mm, for a time in 24-hour clock format (such as 6:00 for 6:00 a.m. or 13:30 for 1:30 p.m.), and +m for a time m minutes in the future.

You can add extra parameters to specify that you want to reboot or halt the computer. Specifically, -r reboots the system, -H halts it (terminates operation but doesn't power it off), and -P powers it off. The -h option may halt or power off the computer, but usually it powers it off. For instance, you can type shutdown -r +10 to reboot the system in 10 minutes. To give people some warning about the impending shutdown, you can add a message to the end of the command:

# shutdown -h +15 "system going down for maintenance"

If you schedule a shutdown but then change your mind, you can use the -c option to cancel it:

# shutdown -c "never mind"

Upstart and systemd provide shutdown commands of their own that function like the shutdown command of SysV. You may want to check your computer's man page for shutdown to verify that it works in the way described here; with development active in the realm of startup systems, you may find some surprises!

Changing Runlevels with the halt, reboot, and poweroff Commands

Three additional shortcut commands are halt, reboot, and poweroff. (In reality, reboot and poweroff are usually symbolic links to halt. This command behaves differently depending on the name with which it's called.) As you might expect, these commands halt the system (shut it down without powering it off), reboot it, or shut it down and (on hardware that supports this feature) turn off the power, respectively. As with telinit and shutdown, these commands are available in SysV, Upstart, and systemd.

Checking your runlevel

Sometimes it's necessary to check your current runlevel. Typically, you'll do this prior to changing the runlevel or to check the status if something isn't working correctly. Two different runlevel checks are possible: checking your default runlevel and checking your current runlevel.

Checking and Changing Your Default Runlevel

On a SysV-based system, you can determine your default runlevel by inspecting the /etc/inittab file with the less command or opening it in an editor. Alternatively, you may use the grep command to look for the line specifying the initdefault action. On a Debian system, you'll see something like this:

# grep :initdefault: /etc/inittab
id:2:initdefault:

If grep returns nothing, chances are you've either mistyped the command or your computer is using Upstart, systemd, or some other initialization tool. On some systems, the second colon-delimited field will contain a 3, 5, or some value other than the 2 shown here. You may notice that the id line doesn't define a process to run. In the case of the initdefault action, the process field is ignored.

If you want to change the default runlevel for the next time you boot your system, edit the initdefault line in /etc/inittab and change the runlevel field to the value you want. If your system lacks an /etc/inittab file, create one that contains only an initdefault line that specifies the runlevel you want to enter by default. If your system doesn't use SysV, you'll need to adjust the default runlevel in some other way, as described later in "Using Alternative Boot Systems."

Determining Your Current Runlevel

If your system is up and running, you can determine your runlevel information with the runlevel command:

# runlevel
N 2

The first character is the previous runlevel. When the character is N, this means the system hasn't switched runlevels since booting. It's possible to switch to different runlevels on a running system with the init and telinit programs, as described earlier. The second character in the runlevel output is your current runlevel. Both Upstart and systemd provide runlevel commands for compatibility with SysV. These alternatives don't technically use runlevels, though, so the information is a sort of "translation" of what the startup system is using to SysV terms.
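As a hedged alternative, the who command on most Linux systems can also report the current runlevel, along with the time it was entered (the output shape here is illustrative):

$ who -r
         run-level 2  2015-05-04 09:32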

initramfs vs initrd

Differences between initramfs and initrd:

initramfs
- A Linux 2.6 and later feature consisting of a cpio archive of files that enables an initial root filesystem and init program to reside in kernel memory cache, rather than on a ramdisk as with initrd.
- With initramfs, you create an archive of files that the kernel extracts to a tmpfs.
- initramfs can increase boot-time flexibility, memory efficiency, and simplicity.
- dracut is the tool used to create the initramfs image.
- initramfs location of init: /init

initrd
- For Linux kernels 2.4 and lower.
- Deprecated and replaced by initramfs.
- Requires at least one file system driver to be compiled into the kernel.
- A RAM-based block device, which means it requires a fixed block of memory even if unused, and as a block device it requires a file system; initramfs is file based (a cpio archive of files).
- kdump uses an initrd: /boot/initrd-2.6.32-358.2.1.el6.x86_64kdump.img
- mkinitrd is the tool used to create the initrd image.
- initrd location of init: /sbin/init
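If you want to see what's inside an initramfs image, you can extract its cpio archive by hand. This is a hedged sketch that assumes a gzip-compressed image; some images use other compression or have a microcode archive prepended, and dracut-based systems ship an lsinitrd tool that lists the contents for you:

# mkdir /tmp/initramfs && cd /tmp/initramfs
# zcat /boot/initramfs-$(uname -r).img | cpio -idmv
# ls    (you should see init at the top level, plus bin, lib, and so on)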

Fix GRUB

The difficulty in identifying and fixing problems with GRUB is that without a functioning boot loader, you can't boot into your system and use the tools you would need to repair GRUB. There are a few different ways that GRUB might be broken on your system, but before we discuss those, you should understand that in the interest of booting quickly, some systems set GRUB with a short timeout of only a few seconds before they boot the default OS, even on servers. What's worse, some systems even hide the initial GRUB prompt from the user, so you have to press a special key (Esc for GRUB 1 releases, also known as GRUB Legacy, and Shift for GRUB 2, also just known as GRUB) within a second or two after your BIOS has passed off control to GRUB. If you don't know which version of GRUB you have installed, you may have to boot the system a few times and try both Esc and Shift to see if you can get some sort of GRUB window to display. After that, you might still have to deal with a short timeout before GRUB boots the default OS, so you'll need to press a key (arrow keys are generally safe) to disable the timeout. The following sections discuss a few of the ways GRUB might be broken and then follow up with some general approaches to repair it.

No GRUB Prompt

The first way GRUB might be broken on your system is that it could have been completely removed from your MBR. Unfortunately, since GRUB is often hidden from the user even when it works correctly, you may not be able to tell whether GRUB is configured wrong or not installed at all. Test by pressing either the Esc or Shift keys during the boot process to confirm that no GRUB prompt appears.

It's rather rare for GRUB to disappear from the MBR completely, but it most often happens on dual-boot systems where you might load both Linux and Windows. The Windows install process has long been known to wipe out the boot code in the MBR, in which case you would get no GRUB prompt at all and instead would boot directly into Windows. Dual-boot setups are fairly rare on servers, however, so most likely if GRUB was completely removed from your MBR, your only clue would be some error from the BIOS stating that it couldn't find a suitable boot device. If you have already gone through the steps listed earlier to test your boot device order in your BIOS and still get this error, somehow GRUB was erased from the MBR. This error might also occur on systems using Linux software RAID where the primary disk may have died. While some modern installs of GRUB can automatically install themselves to the MBR on all disks involved in a RAID, if your install doesn't default to that mode (or you are using an old version of GRUB and didn't manually install GRUB to the MBR of the other disks in your RAID array), when the primary disk dies there will be no other instance of GRUB on the remaining disks you can use.

Stage 1.5 GRUB Prompt

Another way GRUB can fail is that it can still be installed in the MBR but, for some reason, can't locate the rest of the code it needs to boot the system. Remember that GRUB's first stage has to fit in only 446 bytes inside the MBR, so it contains just the code it needs to locate and load the rest of the GRUB environment. GRUB normally loads what it calls stage 1.5 (GRUB 2 calls this core.img), which contains the code that can read Linux file systems and access the final GRUB stage, stage 2.
Once stage 2 or core.img is loaded, GRUB can read its default configuration file from the file system, load any extra modules it needs, and display the normal GRUB menu. When GRUB can't find the file system that contains stage 2 or its configuration files, you might be left with a message that reads "loading stage 1.5" followed either by an error or by a simple grub> prompt. If you get an error that loading stage 1.5 failed, move on to the section that talks about how to repair GRUB. If you get as far as a grub> prompt, that means that at least stage 1.5 did load, but it might be having trouble either loading stage 2 or reading your GRUB configuration file. This can happen if the GRUB configuration file or the stage 2 file gets corrupted, or if the file system that contains those files gets corrupted (in which case you'll want to read Chapter 4 on how to repair file systems).

If you are particularly savvy with GRUB commands, or don't have access to a rescue disk, it might be possible to boot your system from the basic grub> prompt by typing the same GRUB boot commands that would be configured in your GRUB configuration file. In fact, if GRUB gets as far as the final stage and displays a prompt, you can use GRUB commands to attempt to read partitions and do some basic troubleshooting. That said, most of the time it's just simpler and faster to boot into a rescue disk and repair GRUB from there.

Misconfigured GRUB Prompt

Finally, you might find that you have a full GRUB menu loaded, but when you attempt to boot the default boot entry, GRUB fails and either returns you to the boot menu or displays an error. This usually means there are errors in your GRUB configuration file and the disk or partition referenced in the file has changed (or the UUID changed; more on that in the upcoming section on how to fix a system that can't mount its root file system). If you get to this point and have an alternative older kernel or a rescue mode configured in your GRUB menu, try those and see if you can boot the system with an older config. If so, you can follow the steps in the next section to repair GRUB from the system itself. Otherwise, if you are familiar with GRUB configuration, you can press E and attempt to tinker with the GRUB configuration from the GRUB prompt, or you can boot to a rescue disk.

Repair GRUB from the Live System

If you are fortunate enough to be able to boot into your live system (possibly with an older kernel or by tinkering with GRUB options), then you might have an easier time repairing GRUB. If you can boot into your system, GRUB was probably able to at least get to stage 2 and possibly even read its configuration file, so it's clearly installed in the MBR; the next section will go over the steps to reinstall GRUB to the MBR. Once you are booted into the system, if the problem was with your GRUB configuration file, you can simply open up the configuration file (/boot/grub/menu.lst for GRUB 1, or /etc/default/grub for GRUB 2). In the case of GRUB 2, the real configuration file is /boot/grub/grub.cfg, but that file is usually generated by a script and isn't intended to be edited by regular users, so once you edit /etc/default/grub, you will need to run the /usr/sbin/update-grub script to generate the new grub.cfg file. Even in the case of GRUB 1, the menu.lst file might be automatically generated by a script like update-grub, depending on your distribution.
If so, the distribution will usually say as much in a comment at the top of the file, along with providing instructions on how to edit and update the configuration file.

Repair GRUB with a Rescue Disk

Most of the time when you have a problem with GRUB, it prevents you from booting into the system to repair it, so the quickest way to repair it is with a rescue disk. Most distributions make the process simpler for you by including a rescue disk as part of the install disk, either on CD-ROM or as a USB image. For instance, on a Red Hat or CentOS install disk you can type linux rescue at the boot prompt to enter the rescue mode. On an Ubuntu install disk, the rescue mode is listed as one of the options in the boot menu. For either rescue disk you should read the official documentation to find out all of the features of the rescue environment, but we will now discuss the basic steps to restore GRUB using either disk.

In the case of the Ubuntu rescue disk, after the disk boots it will present you with an option to reinstall the GRUB boot loader. You would select this option if you got no GRUB prompt at all when the system booted. Otherwise, if you suspect you just need to regenerate your GRUB configuration file, select the option to open a shell in the root environment, run update-grub to rebuild the configuration file, type exit to leave the shell, and then reboot the system.

In the case of the Red Hat or CentOS rescue disk, boot with the linux rescue boot option, then type chroot /mnt/sysimage to mount the root partition. Once the root partition is mounted and you have a shell prompt, if you need to reinstall GRUB to the MBR, type:

/sbin/grub-install /dev/sda

Replace /dev/sda with your boot disk device (if you are unsure what the device is, type df at this prompt and look to see what device it claims /mnt/sysimage is). From this prompt you can also view the /boot/grub/grub.conf file in case you need to make any custom changes to the options there.
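If you are using a generic live CD rather than a distribution's rescue mode, you can usually perform the same repair by hand with a chroot. This is a hedged sketch assuming the root file system is on /dev/sda1 and GRUB belongs on the MBR of /dev/sda; adjust the devices to match your layout:

# mount /dev/sda1 /mnt
# mount --bind /dev /mnt/dev
# mount --bind /proc /mnt/proc
# mount --bind /sys /mnt/sys
# chroot /mnt
# grub-install /dev/sda       (grub2-install on some distributions)
# update-grub                 (GRUB 2 systems; regenerates grub.cfg)
# exit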

Choosing between GPT and MBR

GUID Partition Table (GPT) is an alternative, contemporary partitioning style; it is intended to replace the old Master Boot Record (MBR) system. GPT has several advantages over MBR, which has quirks dating back to MS-DOS times. With the recent developments to the formatting tools fdisk (MBR) and gdisk (GPT), it is equally easy to get good dependability and performance for GPT or MBR. Consider the following when choosing between GPT and MBR:

- If using GRUB Legacy as the bootloader, one must use MBR.
- To dual-boot with Windows (both 32-bit and 64-bit) using legacy BIOS, one must use MBR.
- To dual-boot with 64-bit Windows using UEFI instead of BIOS, one must use GPT.
- If none of the above apply, choose freely between GPT and MBR; since GPT is more modern, it is recommended in this case.
- It is recommended to always use GPT for UEFI boot, as some UEFI firmwares do not allow UEFI-MBR boot.
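To check which partitioning style a disk currently uses, you can print its partition table; this is a hedged example, and /dev/sda is a placeholder:

# parted /dev/sda print

Look for the "Partition Table" line in the output: msdos indicates MBR, while gpt indicates GPT.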

Linux Boot Process

As mentioned earlier, if you want to be good at troubleshooting, it's important that you understand how systems work. That philosophy definitely applies to troubleshooting boot problems, especially since they can have so many different causes.
1. BIOS - Basic Input/Output System executes the MBR.
2. MBR - Master Boot Record executes GRUB.
3. GRUB - Grand Unified Bootloader executes the kernel.
4. Kernel - The kernel executes /sbin/init.
5. Init - Init executes runlevel programs.
6. Runlevel - Runlevel programs are executed from /etc/rc.d/rc*.d/.

Managing Runlevel Services

The SysV startup scripts in the runlevel directories are symbolic links back to the original script. This is done so you don't need to copy the same script into each runlevel directory. Instead, you can modify the original script without having to track down its copies in all the SysV runlevel directories. You can also modify which programs are active in a runlevel by editing the link filenames. Numerous utility programs are available to help you manage these links, such as chkconfig, update-rc.d, and rc-update. I describe the first of these tools because it's supported on many distributions. If your distribution doesn't support these tools, you should check distribution-centric documentation. These tools may provide impaired functionality on systems that don't use SysV natively; you may need to locate Upstart- or systemd-specific tools instead.

To list the services and their applicable runlevels with chkconfig, use the --list option. The output looks something like this but is likely to be much longer:

# chkconfig --list
pcmcia          0:off  1:off  2:on   3:on   4:on   5:on   6:off
nfs-common      0:off  1:off  2:off  3:on   4:on   5:on   6:off
xprint          0:off  1:off  2:off  3:on   4:on   5:on   6:off
setserial       0:off  1:off  2:off  3:off  4:off  5:off  6:off

This output shows the status of the services in all seven runlevels. For instance, you can see that nfs-common is inactive in runlevels 0−2, active in runlevels 3−5, and inactive in runlevel 6. If you're interested in a specific service, you can specify its name:

# chkconfig --list nfs-common
nfs-common      0:off  1:off  2:off  3:on   4:on   5:on   6:off

To modify the runlevels in which a service runs, use a command like this:

# chkconfig --level 23 nfs-common on
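On Debian-based systems that lack chkconfig, update-rc.d manages the same symbolic links. A hedged sketch, with apache2 as an example service name:

# update-rc.d apache2 defaults     (create S/K links at the default runlevels)
# update-rc.d apache2 disable      (turn the S links into K links)
# update-rc.d -f apache2 remove    (delete the links entirely)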

Can't Mount Secondary File Systems

Many servers have multiple file systems that might get mounted automatically as the system boots. These file systems are defined in the /etc/fstab file and might look somewhat like the following:

# /etc/fstab: static file system information.
# <file system>  <mount point>  <type>  <options>  <dump>  <pass>
proc             /proc          proc    defaults   0       0
/dev/sda1        /              ext3    defaults   0       0
/dev/sda2        swap           swap    defaults   0       0
/dev/sda3        /var           ext3    defaults   0       0
/dev/sdb1        /home          ext3    defaults   0       0

In this example you can see that in addition to the / partition that's on /dev/sda1, the system also mounts /var from /dev/sda3 and /home from /dev/sdb1. If either /var or /home is corrupted and can't automatically be repaired, or can't be found, the boot process will stop and drop you to a shell prompt where you can investigate further. In these circumstances, just repeat the same troubleshooting steps you might perform for a problem with a root file system: look for device names that have changed, new labels, or different UUIDs.
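After editing /etc/fstab, a prudent check (a hedged suggestion, not from the original text) is to test it from the running system before rebooting. mount -a attempts to mount every file system listed in fstab that isn't already mounted, so mistakes surface immediately instead of at boot:

# mount -a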

The Kernel and Initrd

Once you select a particular kernel in GRUB (or the countdown times out and it picks one for you), GRUB will load the Linux kernel into RAM, execute it, and pass along any boot-time arguments that were configured for it. Usually GRUB will also load an initrd (initial RAM disk) along with the kernel. This file, on a modern Linux system, is a gzipped cpio archive known as an initramfs file, and it contains a basic, small Linux root file system. On that file system are some crucial configuration files, kernel modules, and programs that the kernel needs in order to find and mount the real root file system.

In the old days all of this boot-time capability would be built directly into the Linux kernel. However, as hardware support grew to include a number of different file systems and SCSI and IDE devices along with extra features like software RAID, LVM, and file system encryption, the kernel got too large. Therefore, these features were split out into individual modules so that you could load only the modules you needed for your system. Since the disk drivers and file system support were split out into modules, you were faced with a chicken-or-egg problem: if the modules are on the root file system, but you need those modules to read the root file system, how can you mount it? The solution was to put all those crucial modules into the initrd.

As the kernel boots, it extracts the initramfs file into RAM and then runs a script called init in the root of that initramfs. This script is just a standard shell script that does some hardware detection, creates some mount points, and then mounts the root file system. The kernel knows where the root file system is because it was passed as one of the boot arguments (root=) by GRUB when it first loaded the kernel. The final step for the initramfs file after it mounts the real root file system is to execute the /sbin/init program, which takes over the rest of the boot process.
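Once the system is up, you can confirm exactly which arguments GRUB passed to the kernel, including root=, by reading /proc/cmdline. The output below is an illustrative sketch reusing the example UUID from earlier; your kernel version and arguments will differ:

$ cat /proc/cmdline
BOOT_IMAGE=/boot/vmlinuz-2.6.32 ro root=UUID=528c6527-24bf-42d1-b908-c175f7b06a0f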

PV-GRUB

Amazon Machine Images that use paravirtual (PV) virtualization use a system called PV-GRUB during the boot process. PV-GRUB is a paravirtual boot loader that runs a patched version of GNU GRUB 0.97. When you start an instance, PV-GRUB starts the boot process and then chain loads the kernel specified by your image's menu.lst file.

PV-GRUB understands standard grub.conf or menu.lst commands, which allows it to work with all currently supported Linux distributions. Older distributions such as Ubuntu 10.04 LTS, Oracle Enterprise Linux, or CentOS 5.x require a special "ec2" or "xen" kernel package, while newer distributions include the required drivers in the default kernel package.

Most modern paravirtual AMIs use a PV-GRUB AKI by default (including all of the paravirtual Linux AMIs available in the Amazon EC2 Launch Wizard Quick Start menu), so there are no additional steps that you need to take to use a different kernel on your instance, provided that the kernel you want to use is compatible with your distribution. The best way to run a custom kernel on your instance is to start with an AMI that is close to what you want, then compile the custom kernel on your instance and modify the menu.lst file (as shown in Configuring GRUB) to boot with that kernel.

You can verify that the kernel image for an AMI is a PV-GRUB AKI by executing the following command with the Amazon EC2 command line tools (substituting the kernel image ID you want to check):

$ ec2-describe-images -a -F image-id=aki-880531cd
IMAGE aki-880531cd amazon/pv-grub-hd0_1.04-x86_64.gz ...

The name field of the output should contain pv-grub.

Limitations: PV-GRUB has the following limitations:
- You can't use the 64-bit version of PV-GRUB to start a 32-bit kernel, or vice versa.
- You can't specify an Amazon ramdisk image (ARI) when using a PV-GRUB AKI.
- AWS has tested and verified that PV-GRUB works with these file system formats: EXT2, EXT3, EXT4, JFS, XFS, and ReiserFS. Other file system formats might not work.
- PV-GRUB can boot kernels compressed using the gzip, bzip2, lzo, and xz compression formats.
- Cluster AMIs don't support or need PV-GRUB, because they use full hardware virtualization (HVM). While paravirtual instances use PV-GRUB to boot, HVM instance volumes are treated like actual disks, and the boot process is similar to the boot process of a bare metal operating system with a partitioned disk and bootloader.
- PV-GRUB versions 1.03 and earlier don't support GPT partitioning; they support MBR partitioning only.
- If you plan to use a logical volume manager (LVM) with Amazon EBS volumes, you need a separate boot partition outside of the LVM. Then you can create logical volumes with the LVM.

Configuring GRUB
To boot PV-GRUB, a GRUB menu.lst file must exist in the image; the most common location for this file is /boot/grub/menu.lst.
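A minimal menu.lst for a paravirtual instance might look like the following sketch. The title, kernel filename, root device, and console argument are illustrative assumptions rather than values from this document; match them to your AMI:

default 0
timeout 3

title My Custom Kernel
    root (hd0)
    kernel /boot/vmlinuz-myversion ro root=/dev/sda1 console=hvc0
    initrd /boot/initramfs-myversion.img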

Upstart

System V init is a good system and has worked well on Linux for years; however, it is not without some drawbacks. For one, init scripts don't automatically have a mechanism to respawn if the service dies. So, for instance, if the cron daemon crashes for some reason, you would have to create some other tool to monitor and restart that process. Another issue with init scripts is that they are generally affected only by changes in runlevel or when the system starts up, but otherwise are not executed unless you do so manually.

Init scripts that depend on a network connection are a good example. On Red Hat and Debian-based systems an init script, called network or networking, respectively, establishes the network connection. Any init scripts that depend on a network connection are named with a higher number than this init script to ensure they run after the network script has run. What if you unplug the network cable from a server and then start it up? Well, the networking script would run, but all of the init scripts that need a network connection would time out one by one. Eventually you would get a login prompt and be able to log in. Now after you logged in, if you plugged in the network cable and restarted the networking service, you would be on the network, yet none of the services that need a network connection would automatically restart. You would have to start them manually one by one.

Upstart was designed not only to address some of the shortcomings of the System V init process, but also to provide a more robust system for managing services. One main feature of Upstart is that it is event-driven. Upstart constantly monitors the system for certain events to occur, and when they do, Upstart can be configured to take action based on those events. Some sample events might be system start-up, system shutdown, the Ctrl-Alt-Del sequence being pressed, the runlevel changing, or an Upstart script starting or stopping.

To see how an event-driven system can improve on traditional init scripts, let's take the previous example of a system booted with an unplugged network cable. You could create an Upstart script that is triggered when a network cable is plugged in. That script could then restart the networking service for you. You could then configure any services that require a network connection to be triggered whenever the networking service starts successfully. Now when the system boots, you could just plug in the network cable and Upstart scripts would take care of the rest.

Upstart does not yet completely replace System V init, at least when it comes to services on the system. At the moment, Upstart does replace the functionality of init and the /etc/inittab file, and it manages changes to runlevels, system start-up and shutdown, and console ttys. More and more core functionality is being ported to Upstart scripts, but you will still find some of the standard init scripts in /etc/init.d and all of the standard symlinks in /etc/rc?.d. The difference is that Upstart now starts and stops services when runlevels change. Upstart scripts reside in /etc/init and have a different syntax from init scripts, since they aren't actually shell scripts. To help illustrate the syntax, here's an example Upstart script (/etc/init/rc.conf) used to change between runlevels.
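The original listing isn't preserved in these notes, so what follows is a hedged reconstruction of what Ubuntu's /etc/init/rc.conf looked like in that era; treat the details as approximate rather than authoritative:

# rc - System V runlevel compatibility
#
# This task runs the old SysV init scripts for the runlevel
# that has just been entered.

start on runlevel [0123456]
stop on runlevel [!$RUNLEVEL]

export RUNLEVEL

task
exec /etc/init.d/rc $RUNLEVEL

Note the event-driven syntax: the start on and stop on lines name the runlevel events that trigger the job, and exec hands off to the familiar SysV rc script.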

Classic System V Init, or Runlevels

System V refers to a particular version of the original UNIX operating system that was developed by AT&T. In this style of init, the init process reads a configuration file called /etc/inittab to discover its default runlevel, discussed next. It then enters that runlevel and starts processes that have been configured to run at that runlevel.

The System V init process is defined by different system states known as runlevels. Runlevels are labeled by numbers ranging from 0 to 6, and each number can potentially represent a completely different system state. For instance, runlevel 0 is reserved for a halted system state. When you enter runlevel 0, the system shuts down all running processes, unmounts all file systems, and powers off. Likewise, runlevel 6 is reserved for rebooting the machine. Runlevel 1 is reserved for single-user mode—a state where only a single user can log in to the system. Generally, few processes are started in single-user mode, so it is a very useful runlevel for diagnostics when a system won't fully boot. Even in the default GRUB menu you will notice a recovery mode option that boots you into runlevel 1.

Runlevels 2 through 5 are left for the distribution, and finally you, to define. The idea behind having so many runlevels is to allow you to create different modes the server could enter. Traditionally a number of Linux distributions have set one runlevel for a graphical desktop (in Red Hat, this was runlevel 5) and another runlevel for a system with no graphics (Red Hat used runlevel 3 for this). You could define other runlevels too—for instance, one that starts up a system without network access. Then when you boot, you could pass an argument at the boot prompt to override the default runlevel with the runlevel of your choice. Once the system is booted, you can also change the current runlevel with the init command followed by the runlevel. So, to change to single-user mode, you might type sudo init 1.

In addition to /etc/inittab, a number of other important files and directories for a System V init system organize start-up and shutdown scripts, or init scripts, for all of the major services on the system:

• /etc/init.d This directory contains all of the start-up scripts for every service at every runlevel. Typically these are standard shell scripts, and they conform to a basic standard. Each script accepts at least two arguments, start and stop, which respectively start up or stop a service (such as, say, your web server). In addition, init scripts commonly accept a few extra options such as restart (stops and then starts the service), status (returns the current state of a service), reload (tells the service to reload its settings from its configuration files), and force-reload (forces the service to reload its settings). When you run an init script with no arguments, it should generally return a list of arguments that it accepts. (A minimal skeleton of such a script appears at the end of this section.)

• /etc/rc0.d through /etc/rc6.d These directories contain the init scripts for each respective runlevel. In practice, these are generally symlinks to the actual files under /etc/init.d. What you will notice, however, is that the init scripts in these directories have special names assigned to them that start with an S (start), K (kill), or D (disable) and then a number. When init enters a runlevel, it runs every script that begins with a K in numerical order and passes the stop argument, but only if the corresponding init script was started in the previous runlevel.
Then init runs every script that begins with an S in numerical order and passes the start argument. Any scripts that start with D, init ignores—this allows you to temporarily disable a script in a particular runlevel, or you could just remove the symlink altogether. So if you have two scripts, S01foo and S05bar, init would first run S01foo start and then S05bar start when it entered that particular runlevel.

• /etc/rcS.d In this directory you will find all of the system init scripts that init runs at start-up before it changes to a particular runlevel. Be careful when you tinker with scripts in this directory, because if they stall, they could prevent you from even entering single-user mode.

• /etc/rc.local Not every distribution uses rc.local, but traditionally this is a shell script set aside for the user to edit. It's generally executed at the end of the init process, so you can put extra scripts in here that you want to run without having to create your own init script.

Here is an example boot process for a standard System V init system. First init starts and reads /etc/inittab to determine its default runlevel, which in this example is runlevel 2. Then init goes to /etc/rcS.d and runs each script that begins with an S in numerical order with start as an argument. Then init does the same for the /etc/rc2.d directory. Finally init is finished but stays running in the background, waiting for the runlevel to change.
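To make the start/stop convention concrete, here is a minimal sketch of the case-statement structure most init scripts follow. The daemon name and path (mydaemon) are hypothetical placeholders, and a real script on your distribution will include more bookkeeping (PID files, LSB headers, status helpers):

#!/bin/sh
# Minimal SysV init script skeleton (illustrative only).
case "$1" in
  start)
    echo "Starting mydaemon"
    /usr/sbin/mydaemon &          # hypothetical daemon path
    ;;
  stop)
    echo "Stopping mydaemon"
    killall mydaemon
    ;;
  restart)
    "$0" stop
    "$0" start
    ;;
  status)
    if pgrep mydaemon >/dev/null; then echo "mydaemon is running"; else echo "mydaemon is stopped"; fi
    ;;
  *)
    echo "Usage: $0 {start|stop|restart|status}"
    exit 1
    ;;
esac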

The SysV Startup Scripts

The /etc/init.d/rc or /etc/rc.d/rc script performs the crucial task of running all the scripts associated with the runlevel. The runlevel-specific scripts are stored in /etc/rc.d/rc?.d, /etc/init.d/rc?.d, /etc/rc?.d, or a similar location. (The precise location varies between distributions.) In all these cases, ? is the runlevel number. When entering a runlevel, rc passes the start parameter to all the scripts with names that begin with a capital S and passes the stop parameter to all the scripts with names that begin with a capital K.

These SysV startup scripts start or stop services depending on the parameter they're passed, so the naming of the scripts controls whether they're started or stopped when a runlevel is entered. These scripts are also numbered, as in S10network and K35smb. The rc program runs the scripts in numeric order. This feature enables distribution designers to control the order in which scripts run by giving them appropriate numbers. This control is important because some services depend on others. For instance, network servers must normally be started after the network is brought up.

In reality, the files in the SysV runlevel directories are symbolic links to the main scripts, which are typically stored in /etc/rc.d, /etc/init.d, or /etc/rc.d/init.d (again, the exact location depends on the distribution). These original SysV startup scripts have names that lack the leading S or K and number, as in smb instead of K35smb. To determine which services are active in a runlevel, search the appropriate SysV startup script directory for scripts with filenames that begin with an S. Alternatively, you can use a runlevel management tool, as described next.

Additional info

The /etc/inittab file is one SysV feature that may not be used by newer startup systems, such as Upstart and systemd. Ubuntu 12.04, which uses Upstart, provides no /etc/inittab file at all. Fedora 17, which uses systemd, provides an /etc/inittab file that contains nothing but comments noting its obsolescence. OpenSUSE 12.1 is also based on systemd, and it provides an /etc/inittab file, but it's no longer used in any meaningful way. Some other distributions, such as Debian, continue to use SysV, and the exam continues to emphasize SysV (including /etc/inittab).

/sbin/init

The /sbin/init program is the parent process of every program running on the system. This process always has a PID of 1 and is responsible for starting the rest of the processes that make up a running Linux system. Those of you who have been using Linux for a while know that init on Ubuntu Server is different from what you might be used to. There are a few different standards for how to initialize a UNIX operating system, but most classic Linux distributions have used what is known as the System V init model (described momentarily), whereas some modern Linux distributions have switched to other systems like Upstart or, most recently, systemd. For instance, Ubuntu Server has switched to Upstart but has still retained most of the outward structure of System V init, such as runlevels and /etc/rc?.d directories, for backward compatibility; however, Upstart now manages everything under the hood. Since the two most common init systems you will run across on a server are System V init and Upstart, the following sections describe both.

Using GRUB Legacy as the Boot Loader

The Grand Unified Bootloader (GRUB) is the default boot loader for most Linux distributions; however, GRUB is really two boot loaders: GRUB Legacy and GRUB 2. Although these two boot loaders are similar in many ways, they differ in many important details. GRUB Legacy is, as you might expect, the older of the two boot loaders. It used to be the dominant boot loader for Linux, but it's been eclipsed by GRUB 2. Nonetheless, because the two boot loaders are so similar, I describe GRUB Legacy first and in more detail; the upcoming section, "Using GRUB 2 as the Boot Loader," focuses on its differences from GRUB Legacy. In the following pages, I describe how to configure, install, and interact with GRUB Legacy.

Configuring GRUB Legacy

The usual location for GRUB Legacy's configuration file on a BIOS-based computer is /boot/grub/menu.lst. Some distributions (such as Fedora, Red Hat, and Gentoo) use the filename grub.conf rather than menu.lst. The GRUB configuration file is broken into global and per-image sections, each of which has its own options. Before getting into section details, though, you should understand a few GRUB quirks: GRUB Legacy separates partition numbers from drive numbers with a comma, as in (hd0,0) for the first partition on the first disk.

Master Boot Record

The Master Boot Record (MBR) is the first 512 bytes of a storage device. It contains an operating system bootloader and the storage device's partition table. Note: As a newer partitioning scheme, the GUID Partition Table (GPT, part of the Unified Extensible Firmware Interface specification) can also be used on BIOS systems via a protective MBR. GPT solves some legacy problems with MBR but may also introduce compatibility problems.

Installing Boot Loaders

The computer's boot process begins with a program called a boot loader. This program runs before any OS has loaded, although you normally install and configure it from within Linux (or some other OS). Boot loaders work in particular ways that depend on both the firmware you use and the OS you're booting. Understanding your boot loader's principles is necessary to configure it properly, so before delving into the details of specific boot loaders, I describe these boot loader principles. In Linux, the most-used boot loader is the Grand Unified Boot Loader (GRUB), which is available in two versions: GRUB Legacy (with version numbers up to 0.97) and GRUB 2 (with version numbers from 1.9x to 2.x, with 2.00 being the latest as I write).

The boot process begins with the BIOS. You tell the BIOS which boot device to use—a hard disk, a floppy disk, a CD-ROM drive, or something else. Assuming you pick a hard disk as the primary boot device (or if higher-priority devices aren't bootable), the BIOS loads code from the Master Boot Record (MBR), which is the first sector on the hard disk. This code is the primary boot loader code. In theory, it could be just about anything, even a complete (if tiny) OS. In practice, the primary boot loader does one of two things:

- It examines the partition table and locates the partition that's marked as bootable. The primary boot loader then loads the boot sector from that partition and executes it. This boot sector contains a secondary boot loader, which continues the process by locating an OS kernel, loading it, and executing it.
- It locates an OS kernel, loads it, and executes it directly. This approach bypasses the secondary boot loader entirely.

Installing GRUB Legacy

The command for installing GRUB Legacy on a BIOS-based computer is grub-install. You must specify the boot sector by device name when you install the boot loader. The basic command looks like

# grub-install /dev/sda

or

# grub-install '(hd0)'

Either command will install GRUB Legacy into the first sector (that is, the MBR) of your first hard drive. In the second example, you need single quotes around the device name. If you want to install GRUB Legacy in the boot sector of a partition rather than in the MBR, you include a partition identifier, as in /dev/sda1 or (hd0,0).

Using GRUB 2 as the Boot Loader

In principle, configuring GRUB 2 is much like configuring GRUB Legacy; however, some important details differ. First, the GRUB 2 configuration file is /boot/grub/grub.cfg. (Some distributions place this file in /boot/grub2, enabling simultaneous installations of GRUB Legacy and GRUB 2.) GRUB 2 adds a number of features, such as support for loadable modules for specific filesystems and modes of operation, that aren't present in GRUB Legacy. (The insmod command in the GRUB 2 configuration file loads modules.) GRUB 2 also supports conditional logic statements, enabling loading modules or displaying menu entries only if particular conditions are met.
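Because of this added complexity, grub.cfg is normally generated rather than edited by hand. A typical workflow looks like the following (a sketch; file locations and command names vary by distribution):

# Adjust defaults such as GRUB_TIMEOUT, then regenerate the configuration
nano /etc/default/grub
grub-mkconfig -o /boot/grub/grub.cfg

On Fedora/Red Hat-style systems, the equivalents are grub2-mkconfig and /boot/grub2/grub.cfg.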

The Linux Boot Process: The BIOS

The very first system involved in the boot process is the BIOS (Basic Input Output System). This is the first screen you will see when you boot, and although the look varies from system to system, the BIOS initializes your hardware, including detecting hard drives, USB disks, CD-ROMs, network cards, and any other hardware it can boot from. The BIOS will then go step-by-step through each boot device based on the boot device order it is configured to follow until it finds one it can successfully boot from. In the case of a Linux server, that usually means reading the MBR (master boot record: the first 512 bytes on a hard drive) and loading and executing the boot code inside the MBR to start the boot process.

Virtualization types

Amazon Machine Images use one of two types of virtualization: paravirtual (PV) or hardware virtual machine (HVM). All current generation instance types support HVM AMIs. Some previous generation instance types, such as T1, C1, M1, and M2, do not support Linux HVM AMIs. Some current generation instance types, such as T2, I2, R3, G2, and C4, do not support PV AMIs. The main difference between PV and HVM AMIs is the way in which they boot and whether they can take advantage of special hardware extensions (CPU, network, and storage) for better performance.

Note: For the best performance, we recommend that you use current generation instance types and HVM AMIs when you launch new instances. For more information on current generation instance types, see the Amazon EC2 Instances detail page. If you are using previous generation instance types and you are curious about upgrade paths, see Upgrade Paths.

Paravirtual (PV)

Paravirtual AMIs boot with a special boot loader called PV-GRUB, which starts the boot cycle and then chain loads the kernel specified in the menu.lst file on your image. Paravirtual guests can run on host hardware that does not have explicit support for virtualization, but they cannot take advantage of special hardware extensions such as enhanced networking or GPU processing. Historically, PV guests had better performance than HVM guests in many cases, but because of enhancements in HVM virtualization and the availability of PV drivers for HVM AMIs, this is no longer true. For more information on PV-GRUB and its use in Amazon EC2, see PV-GRUB.

Hardware Virtual Machine (HVM)

HVM AMIs are presented with a fully virtualized set of hardware and boot by executing the master boot record of the root block device of your image. This virtualization type provides the ability to run an operating system directly on top of a virtual machine without any modification, as if it were run on bare-metal hardware. The Amazon EC2 host system emulates some or all of the underlying hardware that is presented to the guest. Unlike PV guests, HVM guests can take advantage of hardware extensions that provide fast access to the underlying hardware on the host system. For more information on CPU virtualization extensions available in Amazon EC2, see Server Virtualization on the Intel website. HVM AMIs are required to take advantage of enhanced networking and GPU processing. In order to pass through instructions to specialized network and GPU devices, the OS needs to have access to the native hardware platform; HVM virtualization provides this access. For more information, see Enhanced Networking and Linux GPU Instances.

PV on HVM

Paravirtual guests traditionally performed better with storage and network operations than HVM guests because they could leverage special drivers for I/O that avoided the overhead of emulating network and disk hardware, whereas HVM guests had to translate these instructions to emulated hardware. Now these PV drivers are available for HVM guests, so operating systems that cannot be ported to run in a paravirtualized environment can still see performance advantages in storage and network I/O by using them. With these PV on HVM drivers, HVM guests can get the same, or better, performance than paravirtual guests.
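One way to check which virtualization type a given AMI uses is via the AWS CLI (a sketch assuming the CLI is installed and configured; the AMI ID is a placeholder):

# Query the virtualization type ("paravirtual" or "hvm") of an AMI
aws ec2 describe-images --image-ids ami-0123456789abcdef0 --query 'Images[].VirtualizationType' --output text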

What's an initial RAM disk

The initial RAM disk (initrd) is an initial root file system that is mounted before the real root file system is available. The initrd is bound to the kernel and loaded as part of the kernel boot procedure. The kernel then mounts this initrd as part of the two-stage boot process to load the modules that make the real file systems available and get at the real root file system. The initrd contains a minimal set of directories and executables to achieve this, such as the insmod tool to install kernel modules into the kernel. In the case of desktop or server Linux systems, the initrd is a transient file system: its lifetime is short, and it serves only as a bridge to the real root file system. In embedded systems with no mutable storage, the initrd is the permanent root file system.
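To see what such an image contains, one approach is to unpack it by hand (a sketch; the image name matches the CentOS sample earlier and is an assumption, and this assumes a gzip-compressed cpio archive, the format most "initrd" images actually use; genuinely old initrds were compressed file system images mounted via loopback instead):

# Unpack an initrd/initramfs image into a scratch directory and inspect it
mkdir /tmp/initrd && cd /tmp/initrd
zcat /boot/initrd-2.6.18-194.el5PAE.img | cpio -idmv
ls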

How to extend an EXT4 root volume (GPT) on HVM-based instances (e.g., cc2.8xlarge)

https://megamind.amazon.com/node/2144
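The linked page is internal to Amazon. As a general sketch of the usual approach (assuming the root device is /dev/xvda with the root file system on partition 1, and that the cloud-utils growpart tool is installed; resize2fs can grow a mounted ext4 file system online):

# Grow partition 1 to fill the enlarged disk, then grow the ext4 file system
growpart /dev/xvda 1
resize2fs /dev/xvda1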

Backup partition table

You can back up your partition table. If the disk has an msdos (MBR) label, use sfdisk:

sfdisk -d /dev/sda > sda.partition

Replace /dev/sda with your actual disk name when you boot into a live CD. If the disk has a GPT label, you can record the layout with parted:

parted /dev/sda print > sda.gpt.partition

There are other ways. Depending on whether you are using MBR or UEFI, the boot sector/partition is different; for MBR, it is just the first 512-byte sector of the disk, which you can save with dd.
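A minimal dd sketch for that last point (the device name is an example; be careful, since writing to the wrong device is destructive):

# Save the full MBR: 446 bytes of boot code + 64-byte partition table + 2-byte signature
dd if=/dev/sda of=mbr-backup.bin bs=512 count=1
# Restore only the boot code, leaving the on-disk partition table untouched
dd if=mbr-backup.bin of=/dev/sda bs=446 count=1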

/etc/init.d and /etc/rc0.d through /etc/rc6.d runlevel directories

• /etc/init.d This directory contains all of the start-up scripts for every service at every runlevel. Typically these are standard shell scripts, and they conform to a basic standard. Each script accepts at least two arguments, start and stop, which respectively start up or stop a service (such as, say, your web server). In addition, init scripts commonly accept a few extra options such as restart (stops and then starts the service), status (returns the current state of a service), reload (tells the service to reload its settings from its configuration files), and force-reload (forces the service to reload its settings). When you run an init script with no arguments, it should generally return a list of the arguments it accepts. (A minimal script skeleton follows this list.)

• /etc/rc0.d through /etc/rc6.d These directories contain the init scripts for each respective runlevel. In practice, these are generally symlinks to the actual files under /etc/init.d. What you will notice, however, is that the init scripts in these directories have special names assigned to them that start with an S (start), K (kill), or D (disable) and then a number. When init enters a runlevel, it first runs every script that begins with a K in numerical order and passes the stop argument, but only if the corresponding init script was started in the previous runlevel. Then init runs every script that begins with an S in numerical order and passes the start argument. Any scripts that start with D, init ignores; this allows you to temporarily disable a script in a particular runlevel, or you could just remove the symlink altogether. So if you have two scripts, S01foo and S05bar, init would first run S01foo start and then S05bar start when it entered that particular runlevel.
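Here is the minimal SysV init script skeleton promised above, illustrating the start/stop convention (illustrative only; "foo" is a hypothetical daemon, and real distribution scripts add LSB headers, PID-file management, and error checking):

#!/bin/sh
# Minimal SysV-style init script for a hypothetical daemon "foo"
case "$1" in
  start)
    echo "Starting foo"
    /usr/sbin/foo &                       # assumed daemon path
    ;;
  stop)
    echo "Stopping foo"
    kill "$(cat /var/run/foo.pid)"        # assumes foo writes its PID here
    ;;
  restart)
    "$0" stop
    "$0" start
    ;;
  *)
    echo "Usage: $0 {start|stop|restart}"
    exit 1
    ;;
esac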

