Copyright © 2004 Red Hat, Inc.
The following topics are covered in this document:
Changes to the Red Hat Enterprise Linux installation program (Anaconda)
Using the zfcp driver
Using mdadm to configure RAID-based and multipath storage
Configuring IPL from a SCSI device
Changes to drivers and hardware support
Changes to packages
The following section includes information specific to the Red Hat Enterprise Linux installation program, Anaconda.
In order to upgrade an already-installed Red Hat Enterprise Linux 3 system to Update 2, you must use Red Hat Network to update those packages that have changed. The use of Anaconda to upgrade to Update 2 is not supported.
Use Anaconda only to perform a fresh install of Red Hat Enterprise Linux 3 Update 2.
If you are copying the contents of the Red Hat Enterprise Linux 3 Update 2 CD-ROMs (in preparation for a network-based installation, for example), be sure to copy only the CD-ROMs for the operating system. Do not copy the Extras CD-ROM or any of the layered product CD-ROMs, as doing so will overwrite files necessary for Anaconda's proper operation.
These CD-ROMs must be installed after Red Hat Enterprise Linux has been installed.
This section contains general information not specific to any other section of this document.
For information regarding various system configuration limits, refer to:
To speed login when NIS is used, it is now possible to request the use of the netid.byname map instead of the groups.byname map for providing group-related information to NIS clients. This map is traditionally not used for this purpose, but in most configurations contains the necessary information, and is generated by default on recent Linux and Solaris™ NIS servers.
To enable this feature, find the following line in /etc/default/nss:
Next, use a text editor to remove the leading '#' character, saving your changes when done.
No cross-checks of the netid.byname map are done by either the NIS server or client. Therefore, the responsibility of ensuring that netid.byname contains appropriate information rests with the system administrator.
It is also possible to improve NIS performance by using the services.byservicename map. If this map exists and has been built properly, its use can be enabled by the following setting in /etc/default/nss:
The services.byservicename map must contain, as keys, both service names and service aliases, each with and without the protocol specified. Recently-updated Red Hat Enterprise Linux and Solaris NIS servers provide properly-built services.byservicename maps.
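As a sketch of what the two settings discussed above typically look like, the relevant lines of glibc's /etc/default/nss might read as follows (the variable names here are assumptions based on the glibc NSS defaults mechanism; verify them against the file shipped on your system):

```shell
# /etc/default/nss -- glibc NSS tunables (illustrative fragment; verify
# the exact variable names against the file on your system)

# Use the netid.byname map instead of groups.byname for group lookups
NETID_AUTHORITATIVE=TRUE

# Use the services.byservicename map for service lookups
SERVICES_AUTHORITATIVE=TRUE
```

Each setting ships commented out with a leading '#'; removing that character enables the corresponding behavior.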
The Red Hat Enterprise Linux 3 Update 2 Extras CD-ROM includes the fonts-monotype package. This optional package contains the Albany™, Cumberland™, and Thorndale™ fonts by Agfa Monotype. These fonts provide a core set of document fonts with metrics close to those of core fonts included with other common operating systems.
Red Hat Enterprise Linux 3 Update 2 features LAuS, the Linux Auditing System. This system is composed of kernel-resident and user-space components that facilitate highly-configurable and robust logging of system call use. This document provides an overview of how the auditing system is put together and basic information on how to get it running. Pointers to relevant documentation are also provided that should help in making the best use of this new capability.
LAuS consists of two types of components:
The kernel component
The user-space components
The default kernel provided with Red Hat Enterprise Linux 3 Update 2 contains modifications that enable system-call auditing. When auditing is not in use, these modifications are performance-neutral. The kernel component provides access to the auditing facilities through a character-special device — /dev/audit. Through this device, a user-space daemon (auditd) can enable or disable auditing and can provide the kernel with the rulesets it is to use to determine when an invocation of a system call must be logged. This device is also used by auditd to retrieve audit records from the kernel for transfer to the audit log. Refer to the audit(4) man page for information about supported ioctl() calls and /proc/ interfaces for managing and tuning auditing behavior.
There are a number of programs provided that transfer audit records from the kernel to the audit log and manipulate the resulting data. These programs and their documentation are found in the laus package.
Auditing is performed for a process if that process registers itself with the kernel as auditable. This registration is propagated to any process started from a registered process. Modifications were made to PAM to assure the auditing of all user sessions when kernel auditing is enabled.
The audit daemon can be run as a service and configured with chkconfig. The audit daemon reads a number of files from /etc/audit/ at startup.
The contents of /etc/audit/audit.conf specify how and where to write audit records and what to do if the logs overrun available disk space. The contents of /etc/audit/filesets.conf and /etc/audit/filters.conf specify the rulesets the kernel uses to determine whether a system call is auditable. The audit daemon can also be invoked with the -r option, which instructs auditd to reload the rulesets and communicate any changes to the kernel. Refer to the auditd(8), audit-filters(5), audit-conf(5), and audit-filesets(5) man pages for more information.
This program enables an auditing context for itself and execs the program specified on its command line. This can be used to enable auditing on processes that are not generally part of a user session. Refer to the aurun(8) man page for more information.
This program writes the contents of the audit log to standard output. There are also options for specifying the level of detail required. Refer to the aucat(1) man page for more information.
This program writes audit log records matching specified patterns to standard output. Refer to the augrep(1) man page for more information.
The Pluggable Authentication Modules package has been modified to log authentication activity. Failed and successful authentications are logged to the audit log. PAM marks for auditing all sessions which are started from successful authentication and generates an audit record when the session is terminated.
To use the SCSI-over-Fibre Channel driver (known as zfcp), perform the following steps:
1. Add the appropriate device map to the zfcp module options information in /etc/modules.conf
2. Make the appropriate device file(s)
3. Partition the disk(s) as desired
4. Use mkinitrd to generate a new initrd file
5. Run zipl to update the system bootloader
The first step is to create a device map appropriate for your system configuration. The exact format of the device map varies depending on the following variables:
· The number of devices
· The number of paths to those devices
· The number of LUNs to be presented by those devices
The following section illustrates several different device maps.
DEVICE MAPS FOR SINGLE DEVICE/SINGLE PATH/SINGLE LUN
In the following sample /etc/modules.conf file, a single SCSI LUN is presented to the kernel as /dev/sda; using device 0x4000, it presents the SCSI LUN 0x5010 to the Linux kernel via the supplied World-Wide Port Name (WWPN) 0x5105076300c213e9:
alias eth0 qeth
options dasd_mod dasd=200-201
options scsi_mod max_scsi_luns=50
options zfcp 'map="0x4000 0x01:0x5105076300c213e9 0x0:0x5010000000000000;"'
When the zfcp module is installed, console messages similar to the following show the drive identified as sda:
zfcp: zfcp_module_init: driver version 0x3009d
  Vendor: IBM       Model: 2105F20          Rev: .674
  Type:   Direct-Access                     ANSI SCSI revision: 03
Attached scsi disk sda at scsi0, channel 0, id 1, lun 0
SCSI device sda: 7812544 512-byte hdwr sectors (4000 MB)
DEVICE MAPS FOR SINGLE DEVICE/SINGLE PATH/MULTIPLE LUNS
In the following sample /etc/modules.conf file, multiple SCSI LUNs are presented using a single path:
alias eth0 qeth
options dasd_mod dasd=200-201
options scsi_mod max_scsi_luns=50
options zfcp 'map="
    0x4000 0x01:0x5105076300c213e9 0x0:0x5010000000000000;
    0x4000 0x01:0x5105076300c213e9 0x1:0x5011000000000000;
    0x4000 0x01:0x5105076300c213e9 0x2:0x5012000000000000
    "'
Although the map above is shown with line breaks for readability, the actual zfcp options entry must not contain line breaks; it must appear entirely on one line.
When the zfcp module is installed, console messages similar to the following show the drives identified as sda through sdc:
zfcp: zfcp_module_init: driver version 0x3009d
  Vendor: IBM       Model: 2105F20          Rev: .674
  Type:   Direct-Access                     ANSI SCSI revision: 03
  Vendor: IBM       Model: 2105F20          Rev: .674
  Type:   Direct-Access                     ANSI SCSI revision: 03
  Vendor: IBM       Model: 2105F20          Rev: .674
  Type:   Direct-Access                     ANSI SCSI revision: 03
Attached scsi disk sda at scsi0, channel 0, id 1, lun 0
Attached scsi disk sdb at scsi0, channel 0, id 1, lun 1
Attached scsi disk sdc at scsi0, channel 0, id 1, lun 2
SCSI device sda: 7812544 512-byte hdwr sectors (4000 MB)
SCSI device sdb: 7812544 512-byte hdwr sectors (4000 MB)
SCSI device sdc: 7812544 512-byte hdwr sectors (4000 MB)
DEVICE MAPS FOR MULTIPLE DEVICES/SINGLE PATH/MULTIPLE LUNS
In the following sample /etc/modules.conf file, multiple SCSI LUNs are presented using multiple devices. The map defines four disks using three devices (0x4000 through 0x4002) via two WWPNs (0x5105076300c213e9 and 0x5105076300cb13e9):
alias eth0 qeth
options dasd_mod dasd=200-201
options scsi_mod max_scsi_luns=50
options zfcp 'map="
    0x4000 0x01:0x5105076300c213e9 0x0:0x5010000000000000;
    0x4000 0x02:0x5105076300cb13e9 0x0:0x5011000000000000;
    0x4001 0x01:0x5105076300c213e9 0x0:0x5012000000000000;
    0x4002 0x01:0x5105076300c213e9 0x0:0x5013000000000000
    "'
When the zfcp module is installed, console messages similar to the following show the drives identified as sda through sdd:
zfcp: zfcp_module_init: driver version 0x3009d
  Vendor: IBM       Model: 2105F20          Rev: .674
  Type:   Direct-Access                     ANSI SCSI revision: 03
  Vendor: IBM       Model: 2105F20          Rev: .674
  Type:   Direct-Access                     ANSI SCSI revision: 03
  Vendor: IBM       Model: 2105F20          Rev: .674
  Type:   Direct-Access                     ANSI SCSI revision: 03
  Vendor: IBM       Model: 2105F20          Rev: .674
  Type:   Direct-Access                     ANSI SCSI revision: 03
Attached scsi disk sda at scsi0, channel 0, id 1, lun 0
Attached scsi disk sdb at scsi0, channel 0, id 2, lun 0
Attached scsi disk sdc at scsi1, channel 0, id 1, lun 0
Attached scsi disk sdd at scsi2, channel 0, id 1, lun 0
SCSI device sda: 7812544 512-byte hdwr sectors (4000 MB)
SCSI device sdb: 7812544 512-byte hdwr sectors (4000 MB)
SCSI device sdc: 7812544 512-byte hdwr sectors (4000 MB)
SCSI device sdd: 7812544 512-byte hdwr sectors (4000 MB)
The next step is to create the necessary device files. This is done via the mknod command. For example, to create a device file for the first SCSI disk, use the following command:
# mknod /dev/sda b 8 0
Note that you will likely require additional device files for partition access (sda1, for example). For more information, refer to the mknod man page.
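SCSI disks follow a fixed numbering pattern: block major 8, with 16 minor numbers per disk (minor 0 is the whole disk sda, minors 1 through 15 its partitions, minor 16 is sdb, and so on). The following sketch prints the mknod commands for a disk's partition nodes rather than executing them, so they can be reviewed first; the helper function name is purely illustrative:

```shell
#!/bin/sh
# Print mknod commands for the partition nodes of the Nth SCSI disk.
# SCSI disks use block major 8 and 16 minor numbers per disk, so
# minor = disk_index * 16 + partition_number.
print_mknod_cmds() {
    disk_index=$1     # 0 for sda, 1 for sdb, ...
    partitions=$2     # number of partition nodes wanted

    # derive the device letter (a, b, c, ...) from the disk index
    letter=$(printf 'abcdefghijklmnop' | cut -c$((disk_index + 1)))

    p=1
    while [ "$p" -le "$partitions" ]; do
        echo "mknod /dev/sd${letter}${p} b 8 $((disk_index * 16 + p))"
        p=$((p + 1))
    done
}

print_mknod_cmds 0 2   # commands for /dev/sda1 and /dev/sda2
```

Running the sketch as shown prints `mknod /dev/sda1 b 8 1` and `mknod /dev/sda2 b 8 2`; pipe the output through sh (as root) to actually create the nodes.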
Next, the disk can be partitioned. In addition to dividing the disk into more appropriately-sized partitions, this step also provides confirmation that the device map and device file have been properly created.
The following example uses the fdisk utility, although parted, a more sophisticated and flexible disk partitioning utility, could also be used.
# fdisk /dev/sda

Command (m for help): p

Disk /dev/sda: 4000 MB, 4000022528 bytes
124 heads, 62 sectors/track, 1016 cylinders
Units = cylinders of 7688 * 512 = 3936256 bytes

   Device Boot    Start       End    Blocks   Id  System
/dev/sda1             1       900   3459569   83  Linux
/dev/sda2           901      1016    445904   fd  Linux raid autodetect

Command (m for help):
Once the disk is accessible and has been partitioned as desired, it is necessary to ensure that the disk is accessible during IPL. To do this, a new initial ramdisk (often referred to as "initrd") file must be created. The mkinitrd command is used to do this:
# mkinitrd -f --with=zfcp /boot/initrd-2.4.21-9.EL.img-zfcp 2.4.21-9.EL
Depending on your system environment, you may need to modify the mkinitrd command to include additional modules. For example, the sample below includes the module supporting RAID level 0:
# mkinitrd -f --with=zfcp --with=raid0 /boot/initrd-2.4.21-9.EL.img-zfcp 2.4.21-9.EL
Next, change the ramdisk value in the zipl.conf file so that it points to your newly-created initrd file:
[defaultboot]
default=linux
target=/boot/
[linux]
    image=/boot/vmlinuz-2.4.21-9.EL
    ramdisk=/boot/initrd-2.4.21-9.EL.img-zfcp
    parameters="root=LABEL=/"
Finally, run zipl to install the updated boot loader. When zipl completes, verify the changes by performing an IPL.
The mdadm command may be new to readers of this document. Like the various tools comprising the raidtools package, mdadm can perform all the functions necessary to administer multiple-device sets. In this section, we show how mdadm can be used to:
· Create a RAID device
· Create a multipath device
Creating a RAID Device With mdadm
To create a RAID device, edit the /etc/mdadm.conf file to define appropriate DEVICE and ARRAY values:
DEVICE /dev/sd[abcd]1
ARRAY /dev/md0 devices=/dev/sda1,/dev/sdb1,/dev/sdc1,/dev/sdd1
In this example, the DEVICE line uses traditional file name globbing (refer to the glob(7) man page for more information) to define the four SCSI devices /dev/sda1, /dev/sdb1, /dev/sdc1, and /dev/sdd1.
The ARRAY line defines a RAID device (/dev/md0) composed of the SCSI devices defined by the DEVICE line.
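The glob pattern can be checked from the shell before committing it to /etc/mdadm.conf. A small sketch, using plain files in a scratch directory rather than real device nodes:

```shell
#!/bin/sh
# Demonstrate the globbing used by the DEVICE line, using ordinary
# files in a scratch directory instead of real device nodes.
dir=$(mktemp -d)
touch "$dir"/sda1 "$dir"/sdb1 "$dir"/sdc1 "$dir"/sdd1 "$dir"/sde1

# sd[abcd]1 matches sda1 through sdd1 but not sde1 -- the same
# expansion applied to the DEVICE line in /etc/mdadm.conf
( cd "$dir" && echo sd[abcd]1 )

rm -rf "$dir"
```

The echo prints `sda1 sdb1 sdc1 sdd1`, confirming that sde1 is excluded by the bracket expression.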
Prior to the creation or usage of any RAID devices, the /proc/mdstat file shows no active RAID devices:
Personalities :
read_ahead not set
Event: 0
unused devices: <none>
Next, use the above configuration and the mdadm command to create a RAID 0 array:
mdadm -C /dev/md0 --level=raid0 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
Continue creating array? yes
mdadm: array /dev/md0 started.
Once created, the RAID device can be queried at any time to provide status information. The following example shows the output from the command mdadm --detail /dev/md0:
/dev/md0:
        Version : 00.90.00
  Creation Time : Mon Mar  1 13:49:10 2004
     Raid Level : raid0
     Array Size : 15621632 (14.90 GiB 15.100 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Mon Mar  1 13:49:10 2004
          State : dirty, no-errors
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

     Chunk Size : 64K

    Number   Major   Minor   RaidDevice   State
       0       8        1        0        active sync   /dev/sda1
       1       8       17        1        active sync   /dev/sdb1
       2       8       33        2        active sync   /dev/sdc1
       3       8       49        3        active sync   /dev/sdd1
           UUID : 25c0f2a1:e882dfc0:c0fe135e:6940d932
         Events : 0.1
Creating a Multipath Device With mdadm
In addition to creating RAID arrays, mdadm can also be used to take advantage of hardware supporting more than one I/O path to individual SCSI LUNs (disk drives). The goal of multipath storage is continued data availability in the event of hardware failure or individual path saturation. Because this configuration contains multiple paths (each acting as an independent virtual controller) accessing a common SCSI LUN (disk drive), the Linux kernel detects each shared drive once "through" each path. In other words, the SCSI LUN (disk drive) known as /dev/sda may also be accessible as /dev/sdb, /dev/sdc, and so on, depending on the specific configuration.
In order to provide a single device that can remain accessible if an I/O path fails or becomes saturated, mdadm includes an additional parameter to its --level option. This parameter — multipath — directs the md layer in the Linux kernel to re-route I/O requests from one pathway to another in the event of an I/O path failure.
To create a multipath device, edit the /etc/mdadm.conf file to define values for the DEVICE and ARRAY lines that reflect your hardware configuration.
Unlike the previous RAID example (where each device specified in /etc/mdadm.conf must represent different physical disk drives), each device in this file refers to the same shared disk drive.
The command used for the creation of a multipath device is similar to that used to create a RAID device; the difference is the replacement of a RAID level parameter with the multipath parameter:
mdadm -C /dev/md0 --level=multipath --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
Continue creating array? yes
mdadm: array /dev/md0 started.
In this example, the hardware consists of one SCSI LUN presented as four separate SCSI devices, each accessing the same storage by a different pathway. Once the multipath device /dev/md0 is created, all I/O operations referencing /dev/md0 will be directed to /dev/sda1, /dev/sdb1, /dev/sdc1, or /dev/sdd1 (depending on which path is currently active and operational).
The configuration of /dev/md0 can be examined more closely using the command mdadm --detail /dev/md0 to verify that it is, in fact, a multipath device:
/dev/md0:
        Version : 00.90.00
  Creation Time : Tue Mar  2 10:56:37 2004
     Raid Level : multipath
     Array Size : 3905408 (3.72 GiB 3.100 GB)
   Raid Devices : 1
  Total Devices : 4
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Tue Mar  2 10:56:37 2004
          State : dirty, no-errors
 Active Devices : 1
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 3

    Number   Major   Minor   RaidDevice   State
       0       8       49        0        active sync   /dev/sdd1
       1       8       17        1        spare         /dev/sdb1
       2       8       33        2        spare         /dev/sdc1
       3       8        1        3        spare         /dev/sda1
           UUID : 4b564608:fa01c716:550bd8ff:735d92dc
         Events : 0.1
Another feature of mdadm is the ability to force a device (be it a member of a RAID array or a path in a multipath configuration) to be removed from an operating configuration. In the following example, /dev/sda1 is flagged as being faulty, is then removed, and finally is added back into the configuration. For a multipath configuration, these actions would not impact any I/O activity taking place at the time:
# mdadm /dev/md0 -f /dev/sda1
mdadm: set /dev/sda1 faulty in /dev/md0
# mdadm /dev/md0 -r /dev/sda1
mdadm: hot removed /dev/sda1
# mdadm /dev/md0 -a /dev/sda1
mdadm: hot added /dev/sda1
#
Currently, Anaconda does not support direct installations of Red Hat Enterprise Linux 3 to SCSI devices. However, for those system administrators interested in the ability to IPL from a SCSI device, the following manual procedure can be used.
This procedure presents a step-by-step guide to migrating a Red Hat Enterprise Linux 3 Update 2 system environment (either as a z/VM guest or running in an LPAR) from an ECKD DASD installation to a SCSI disk installation. To do that, you must start with Red Hat Enterprise Linux 3 installed on an ECKD DASD. You must also have access to an empty SCSI disk.
You must start with Red Hat Enterprise Linux 3 Update 2.
(1) Load the SCSI and zFCP device drivers
Log in to your ECKD installation as root and load the device drivers needed to access your target SCSI disk.
First, add the SCSI device driver:
# modprobe scsi_mod
Next, add the zfcp device driver and specify the parameters required to identify your SCSI disk.
The following numbers are examples; you must determine the proper numbers for your system configuration.
· The device number of your zFCP adapter (0x5480)
· The WWPN (World Wide Port Name) of your storage device (0x5005076300cb93cb)
· The LUN (Logical Unit Number) of your SCSI disk (0x5123000000000000)
The most common errors when specifying these parameters are missing zeros or forgetting to start each number with "0x".
# modprobe zfcp loglevel="0x00000000" map="0x5480 0x1:0x5005076300cb93cb 0x0:0x5123000000000000"
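Because these errors are so easy to make, it can help to sanity-check the map entry before loading the module. The sketch below uses a grep pattern derived from the examples in this document (the function name and the exact field widths are assumptions; adjust the pattern to your configuration):

```shell
#!/bin/sh
# Sanity-check one zfcp map entry of the form:
#   devno  scsi-id:WWPN  scsi-lun:FCP-LUN
# The pattern is illustrative, based on the examples in this document.
check_map_entry() {
    echo "$1" | grep -Eq \
      '^0x[0-9a-f]{4} 0x[0-9a-f]+:0x[0-9a-f]{16} 0x[0-9a-f]+:0x[0-9a-f]{16}$'
}

if check_map_entry "0x5480 0x1:0x5005076300cb93cb 0x0:0x5123000000000000"; then
    echo "map entry looks well-formed"
fi

# A missing space or a missing "0x" prefix fails the check:
if ! check_map_entry "0x54800x1:0x5005076300cb93cb 0x0:0x5123000000000000"; then
    echo "malformed map entry detected"
fi
```

Both messages print for the sample strings above: the first entry passes, the run-together second one is rejected.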
Add the SCSI disk device driver:
# modprobe sd_mod
Make sure you are using the right SCSI disk. There should be only one entry in the /proc/scsi/zfcp/map file. If there is more than one entry, you must use the corresponding device node. In this example, it is /dev/sda because it is the first one listed; if your SCSI disk were the second one listed, you would use /dev/sdb instead. Another option is to unload the three device drivers using the rmmod command; this removes all /proc/scsi/zfcp/map entries, leaving you free to load them again with only one disk.
# cat /proc/scsi/zfcp/map 0x5480 0x1:0x5005076300cb93cb 0x0:0x5123000000000000
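The order of entries in /proc/scsi/zfcp/map determines the device node: the first entry corresponds to /dev/sda, the second to /dev/sdb, and so on. A sketch that numbers the entries of a saved copy of the map (the helper name and the second LUN are made-up example values):

```shell
#!/bin/sh
# Pair each entry of a zfcp map with its /dev/sdX device node.
# Reads a saved copy of the map so it can run without the zfcp driver.
map_to_nodes() {
    awk '{
        # entry N (counting from 1) becomes /dev/sd + Nth letter
        printf "/dev/sd%c  %s\n", 97 + NR - 1, $0
    }' "$1"
}

# sample map with two entries; the second LUN is a made-up example
cat > /tmp/zfcp-map.sample <<'EOF'
0x5480 0x1:0x5005076300cb93cb 0x0:0x5123000000000000
0x5480 0x1:0x5005076300cb93cb 0x0:0x5124000000000000
EOF

map_to_nodes /tmp/zfcp-map.sample
```

For the sample above, the first line of output starts with /dev/sda and the second with /dev/sdb, matching the ordering rule described in the text.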
You should now detect your SCSI disk in the /proc/ file system:
# cat /proc/scsi/scsi
Attached devices:
Host: scsi0 Channel: 00 Id: 01 Lun: 00
  Vendor: IBM      Model: 2105F20          Rev: 2.91
  Type:   Direct-Access                    ANSI SCSI revision: 03
(2) Partitioning, formatting, and mounting
Using the fdisk command, create one partition on your SCSI disk.
The following example uses the fdisk utility, although parted, a more sophisticated and flexible disk partitioning utility, could also be used.
# fdisk /dev/sda

Command (m for help): p

Disk /dev/sda: 2000 MB, 2000027648 bytes
62 heads, 62 sectors/track, 1016 cylinders
Units = cylinders of 3844 * 512 = 1968128 bytes

   Device Boot    Start       End    Blocks   Id  System

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1016, default 1): <Enter>
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-1016, default 1016): <Enter>
Using default value 1016

Command (m for help): p

Disk /dev/sda: 2000 MB, 2000027648 bytes
62 heads, 62 sectors/track, 1016 cylinders
Units = cylinders of 3844 * 512 = 1968128 bytes

   Device Boot    Start       End    Blocks   Id  System
/dev/sda1             1      1016   1952721   83  Linux

Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
Create an ext3 file system on the new partition using the mke2fs command. The -j option creates the journal, turning an ext2 file system into ext3.
Make sure you specify the partition (/dev/sda1) instead of the entire device (/dev/sda).
# mke2fs -j /dev/sda1
mke2fs 1.32 (09-Nov-2002)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
244736 inodes, 1952721 blocks
97636 blocks (5.00%) reserved for the super user
First data block=1
239 block groups
8192 blocks per group, 8192 fragments per group
1024 inodes per group
Superblock backups stored on blocks:
        8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409, 663553, 1024001

Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 38 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
Mount the newly-created file system on /mnt. Specify the file system type to the mount command using the -t option.
# mount -t ext3 /dev/sda1 /mnt/
(3) Create the file structure
Next, you must create the same file structure as on your ECKD DASD. Start by creating a number of directories:
# cd /mnt/
# mkdir mnt proc tmp
# ls -l
total 15
drwx------    2 root     root        12288 Jan 30 13:38 lost+found
drwxr-xr-x    2 root     root         1024 Jan 30 13:41 mnt
drwxr-xr-x    2 root     root         1024 Jan 30 13:41 proc
drwxr-xr-x    2 root     root         1024 Jan 30 13:41 tmp
Next, copy all other files and directories to your SCSI disk.
# cp -r --no-dereference --preserve=all --target-directory=/mnt \
> /bin /boot /dev /etc /home /initrd /lib /lib64 /misc /opt /root /sbin /usr /var
# ls
bin  boot  dev  etc  home  initrd  lib  lib64  lost+found  misc  mnt  opt  proc  root  sbin  tmp  usr  var
Next, change the permissions of the /tmp/ directory. Each user should be able to write to the /tmp/ directory, but each user should only be able to delete their own files and not the files of other users; this is what the sticky bit (the leading 1 in mode 1777) enforces:
# chmod 1777 /mnt/tmp/
# ls -l /mnt/ | grep tmp
drwxrwxrwt    2 root     root         1024 Jan 30 13:41 tmp
(4) Edit configuration files
Edit your parameter file, and specify your new root file system. This should be your SCSI disk:
# cat /mnt/boot/parmfile.1
root=LABEL=/1
# vi /mnt/boot/parmfile.1
# cat /mnt/boot/parmfile.1
root=/dev/sda1
Edit the /etc/fstab file and change the root file system device to your SCSI disk (/dev/sda1, in our example). Since the /boot/ directory has also been copied to the SCSI disk, its line is no longer required:
For the purposes of this example, a swap partition has not been included.
# cat /mnt/etc/fstab
LABEL=/1                /            ext3    defaults        1 1
LABEL=/boot1            /boot        ext3    defaults        1 2
none                    /dev/pts     devpts  gid=5,mode=620  0 0
none                    /proc        proc    defaults        0 0
/dev/dasda3             swap         swap    defaults        0 0
# vi /mnt/etc/fstab
# cat /mnt/etc/fstab
/dev/sda1               /            ext3    defaults        1 1
none                    /dev/pts     devpts  gid=5,mode=620  0 0
none                    /proc        proc    defaults        0 0
Finally, it is necessary to add the mapping of the SCSI disk to the modules.conf configuration file. This mapping will be passed to the zfcp device driver while booting and is required to mount the SCSI root file system. Be careful and check for typos.
Due to the way insmod parameter parsing takes place, you must surround the entire list of parameters with apostrophes "'" (sometimes referred to as "single quotes"). You must also use double quotes to surround the mapping parameters. The following example shows the correct use of both single and double quotes.
# cat /mnt/etc/modules.conf
alias eth0 qeth
options dasd_mod dasd=5c31
# vi /mnt/etc/modules.conf
# cat /mnt/etc/modules.conf
alias eth0 qeth
options dasd_mod dasd=5c31
options zfcp 'map="0x5480 0x1:0x5005076300cb93cb 0x0:0x5123000000000000"'
(5) Create a new ramdisk
Next, you must create a new ramdisk with the required device drivers. For example, the zfcp device driver is required in order to mount the root file system, which is on a SCSI disk. In other words, the zfcp driver module has to be loaded from the SCSI disk at some point in the boot process, but FCP is already needed to access this disk and to mount the root file system — a ramdisk resolves this apparent paradox.
In order to create the ramdisk for your mounted SCSI disk without overwriting your existing DASD ramdisk (and to use the edited modules.conf file rather than the original ECKD DASD modules.conf), you must use a change-root (chroot) environment. Doing so causes the /mnt/ directory to temporarily become the root directory:
Do not forget the chroot command. Otherwise, you may change your DASD ramdisk and use the wrong mapping due to the use of the incorrect modules.conf file.
# chroot /mnt/
One way to check that the proper chroot command was issued is to look for files in the /mnt/ directory. It should be empty, because /mnt/ now corresponds to the /mnt/ directory on your SCSI disk. The directories on your DASD are no longer accessible.
It is now time to build a new ramdisk using the following command:
# cd boot
# mkinitrd -v --with=scsi_mod --with=zfcp --with=sd_mod initrd-2.4.21-9.EL.scsi.img 2.4.21-9.EL
Looking for deps of module ide-disk
Looking for deps of module ext3     jbd
Looking for deps of module jbd
Looking for deps of module scsi_mod
Looking for deps of module zfcp     scsi_mod qdio
Looking for deps of module scsi_mod
Looking for deps of module qdio
Looking for deps of module sd_mod   scsi_mod
Looking for deps of module scsi_mod
Using modules: ./kernel/fs/jbd/jbd.o ./kernel/fs/ext3/ext3.o ./kernel/drivers/scsi/scsi_mod.o \
    ./kernel/drivers/s390/qdio.o ./kernel/drivers/s390/scsi/zfcp.o ./kernel/drivers/scsi/sd_mod.o
Using loopback device /dev/loop0
/sbin/nash -> /tmp/initrd.nUPsUg/bin/nash
/sbin/insmod.static -> /tmp/initrd.nUPsUg/bin/insmod
`/lib/modules/2.4.21-9.EL/./kernel/fs/jbd/jbd.o' -> `/tmp/initrd.nUPsUg/lib/jbd.o'
`/lib/modules/2.4.21-9.EL/./kernel/fs/ext3/ext3.o' -> `/tmp/initrd.nUPsUg/lib/ext3.o'
`/lib/modules/2.4.21-9.EL/./kernel/drivers/scsi/scsi_mod.o' -> `/tmp/initrd.nUPsUg/lib/scsi_mod.o'
`/lib/modules/2.4.21-9.EL/./kernel/drivers/s390/qdio.o' -> `/tmp/initrd.nUPsUg/lib/qdio.o'
`/lib/modules/2.4.21-9.EL/./kernel/drivers/s390/scsi/zfcp.o' -> `/tmp/initrd.nUPsUg/lib/zfcp.o'
`/lib/modules/2.4.21-9.EL/./kernel/drivers/scsi/sd_mod.o' -> `/tmp/initrd.nUPsUg/lib/sd_mod.o'
Loading module jbd
Loading module ext3
Loading module scsi_mod
Loading module qdio
Loading module zfcp with options 'map="0x5480 0x1:0x5005076300cb93cb 0x0:0x5123000000000000"'
Loading module sd_mod
Check the output of the mkinitrd command and make sure the zfcp device driver is present and has the correct options (including the quotes) as shown in the example. If everything looks correct, you can now exit the chroot environment:
# exit
exit
(6) Make your SCSI disk bootable
The SCSI disk must be prepared with the zipl tool. In this example, the zipl command-line options are being used instead of the zipl configuration file.
Make sure you specify the newly-built ramdisk, and that you change directory to the SCSI disk before issuing the zipl command. The "-t ." option writes the boot record to the disk on which your current directory resides.
# cd /mnt/boot/
# /root/s390-tools-1.2.4/zipl/src/zipl -V -t . -i vmlinuz-2.4.21-9.EL -p parmfile.1 \
  -r initrd-2.4.21-9.EL.scsi.img
Target device information
  Device..........................: 08:00
  Partition.......................: 08:01
  Device name.....................: sda
  Type............................: disk partition
  Disk layout.....................: SCSI
  Geometry - heads................: 62
  Geometry - sectors..............: 62
  Geometry - cylinders............: 1016
  Geometry - start................: 62
  File system block size..........: 1024
  Physical block size.............: 512
  Device size in physical blocks..: 3905442
Building bootmap './bootmap'
Adding IPL section
  kernel image......: vmlinuz-2.4.21-9.EL at 0x10000
  kernel parmline...: 'root=/dev/sda1 ' at 0x1000
  initial ramdisk...: initrd-2.4.21-9.EL.scsi.img at 0x800000
Preparing boot device: sda.
Detected SCSI PCBIOS disk layout.
Writing SCSI master boot record.
Syncing disks...
Done.
Note the references to SCSI in the output from the zipl command; this is one way to be sure that the issued command was correct.
The process of preparing the SCSI disk is now complete. Unmount the SCSI disk and remove the device drivers, which are no longer needed. You can then shut down your ECKD Linux environment:
# cd
# umount /mnt/
# rmmod sd_mod
# rmmod zfcp
# rmmod scsi_mod
# halt
Only the IPL steps under z/VM are described here.
First, log in to a CMS session and attach an FCP adapter to your VM guest:
att 5480 *
00: FCP  5480 ATTACHED TO LINUX17  5480
Ready; T=0.01/0.01 14:39:52
q v fcp
00: FCP  5480 ON FCP   5480 CHPID 50 SUBCHANNEL = 000E
00:     5480 QDIO-ELIGIBLE    QIOASSIST-ELIGIBLE
Ready; T=0.01/0.01 14:39:57
At this point the adapter is available; the other required parameters for SCSI IPL are specified next. This can be done using the new set loaddev CP command (z/VM 4.4). Note that there is a special syntax for this command; for example, there must be a blank after the first 8 characters of a number. Refer to the z/VM documentation for details:
set loaddev port 50050763 00cb93cb lun 51230000 00000000
Ready; T=0.01/0.01 14:36:13
q loaddev
PORTNAME 50050763 00CB93CB    LUN  51230000 00000000    BOOTPROG 0
BR_LBA   00000000 00000000
Ready; T=0.01/0.01 14:36:17
The last step is to IPL, using the FCP adapter device number as the parameter:
i 5480
00: HCPLDI2816I Acquiring the machine loader from the processor controller.
00: HCPLDI2817I Load completed from the processor controller.
00: HCPLDI2817I Now starting machine loader version 0001.
01: HCPGSP2630I The virtual machine is placed in CP mode due to a SIGP stop and store status from CPU 00.
02: HCPGSP2630I The virtual machine is placed in CP mode due to a SIGP stop and store status from CPU 00.
03: HCPGSP2630I The virtual machine is placed in CP mode due to a SIGP stop and store status from CPU 00.
00: MLOEVL012I: Machine loader up and running (version 0.13).
00: MLOPDM003I: Machine loader finished, moving data to final storage location.
Linux version 2.4.21-9.EL (firstname.lastname@example.org) (gcc version 3.2.3 20030502
(Red Hat Linux 3.2.3-26)) #1 SMP Thu Jan 8 17:26:32 EST 2004
We are running under VM (64 bit mode)
On node 0 totalpages: 65536
zone(0): 65536 pages.
zone(1): 0 pages.
zone(2): 0 pages.
Kernel command line: root=/dev/sda1
Highest subchannel number detected (hex) : 000E
Calibrating delay loop... 2241.33 BogoMIPS
Memory: 245936k/262144k available (2371k kernel code, 0k reserved, 1003k data, 320k init)
At this point, your Linux environment should come up properly. Finally, log in to your Linux system and confirm that the root file system is, in fact, located on your SCSI disk:
> ssh root@53v15g17
root@53v15g17's password: <password>
Last login: Mon Feb 16 14:37:16 2004
# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1             1.9G  1.4G  387M  79% /
(9) Hints and Tips
A prepared SCSI disk cannot be SCSI IPL'd using a different adapter device number (devno) or WWPN, because these parameters are stored in the ramdisk and are used to mount the root file system. If the original preparation path is not available, your Linux environment will exit with a kernel panic once the SCSI IPL itself is complete. This is the same behavior as on ECKD DASDs.
Should you find yourself in this situation, the following steps may help you:
(a) Mount your SCSI disk at /mnt/ on a Linux system running from ECKD DASD
(b) Change the mapping in /mnt/etc/modules.conf (using the new devno and/or new WWPN)
(c) Rebuild your ramdisk (making sure to use chroot and mkinitrd as described previously)
(d) Rewrite your boot configuration to your SCSI disk (using the zipl step described previously)
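The recovery steps above can be sketched as shell commands. Note that every device name, devno value, and kernel version in this sketch is an illustrative placeholder, not a value taken from this document; substitute the values from your own configuration:

```shell
# Recovery sketch for a SCSI disk prepared under a different devno/WWPN.
# All device names, devnos, and versions below are hypothetical examples.

# (a) From a Linux system running on ECKD DASD, mount the SCSI disk's root:
mount /dev/sda1 /mnt

# (b) Point the zfcp mapping in modules.conf at the new devno
#     (0x5481 -> 0x5480 is an example substitution):
sed -i 's/0x5481/0x5480/g' /mnt/etc/modules.conf

# (c) Rebuild the ramdisk inside the mounted root:
chroot /mnt mkinitrd -v /boot/initrd-2.4.21-9.EL.img 2.4.21-9.EL

# (d) Rewrite the boot record, then clean up:
chroot /mnt zipl
umount /mnt
```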
This section contains information related to the Red Hat Enterprise Linux 3 Update 2 kernel.
Red Hat Enterprise Linux 3 Update 2 includes a modification to the way the Linux kernel timer interrupt is handled. Normally, a hardware timer is set to generate periodic interrupts at a fixed rate (100 times a second for most architectures). These periodic timer interrupts are used by the kernel to schedule various internal housekeeping tasks, such as process scheduling, accounting, and maintaining system uptime.
While a timer-based approach works well for a system environment where only one copy of the kernel is running, it can cause additional overhead when many copies of the kernel are running on a single system (as z/VM® guests, for example). In these cases, having 1,000 copies of the kernel each generating interrupts many times a second can result in excessive system overhead.
Therefore, Red Hat Enterprise Linux 3 Update 2 now includes the ability to turn off periodic timer interrupts. This is done through the /proc/ file system; to disable periodic timer interrupts, issue the following command:
echo "0" > /proc/sys/kernel/hz_timer
To enable periodic timer interrupts, issue the following command:
echo "1" > /proc/sys/kernel/hz_timer
By default, periodic timer interrupts are enabled.
This can also be set at boot-time; to do so, add the following line to /etc/sysctl.conf to disable periodic timer interrupts:
kernel.hz_timer = 0
Disabling periodic timer interrupts can violate basic assumptions in system accounting tools. Should you notice a malfunction related to system accounting, verify that the malfunction disappears if periodic timer interrupts are enabled, then submit a bug at http://bugzilla.redhat.com/ (for malfunctioning bundled tools), or inform the tool vendor (for malfunctioning third-party tools).
This update includes bug fixes for a number of drivers. The more significant driver updates are listed below. In some cases, the original driver has been preserved under a different name, and is available as a non-default alternative for organizations that wish to migrate their driver configuration to the latest versions at a later time.
The migration to the latest drivers should be completed before the next Red Hat Enterprise Linux update is applied, because in most cases only one older-revision driver will be preserved for each update.
These release notes also indicate which older-revision drivers have been removed from this kernel update. These drivers have the base driver name with the revision digits appended; for example, megaraid_2002.o. You must remove these drivers from /etc/modules.conf before installing this kernel update.
Keep in mind that the only definitive way to determine what drivers are being used is to review the contents of /etc/modules.conf. Use of the lsmod command is not a substitute for examining this file.
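As a quick check, the configured drivers can be listed directly from /etc/modules.conf. The alias names shown here (scsi_hostadapter, eth0) are the conventional ones; the exact entries on a given system may differ:

```shell
# List SCSI host adapter and network driver aliases configured in
# /etc/modules.conf, the authoritative driver configuration on
# Red Hat Enterprise Linux 3 (lsmod only shows currently loaded modules).
grep -E '^alias (scsi_hostadapter|eth)' /etc/modules.conf
```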
IBM ServeRAID (ips driver)
The ips driver has been updated from 6.10.52 to 6.11.07
The new driver is scsi/ips.o
The older driver has been preserved as addon/ips_61052/ips_61052.o
The 6.00.26 driver (ips_60026.o) has been removed
LSI Logic RAID (megaraid driver)
The megaraid2 driver has been updated from v2.00.9 to v22.214.171.124
The new driver is scsi/megaraid2.o
The older driver has been preserved as addon/megaraid_2009/megaraid_2009.o
The default driver remains the v1.18k driver (megaraid.o)
LSI Logic MPT Fusion (mpt* drivers)
These drivers have been updated from 2.05.05+ to 2.05.11.03
The new drivers are located in message/fusion/
The older drivers have been preserved in addon/fusion_20505/
Compaq SA53xx Controllers (cciss driver)
The cciss driver has been updated from 2.4.47.RH1 to 2.4.50.RH1
QLogic Fibre Channel (qla2xxx driver)
These drivers have been updated from 6.06.00b11 to 6.07.02-RH2
The new drivers are located in addon/qla2200/
The older drivers have been preserved in addon/qla2200_60600b11/
Note that the QLA2100 adapter has been retired by QLogic. This adapter is no longer supported by QLogic or Red Hat. Therefore, the driver is located in the kernel-unsupported package.
Intel PRO/1000 (e1000 driver)
This driver has been updated from 5.2.20-k1 to 126.96.36.199-k1
Broadcom Tigon3 (tg3 driver)
This driver has been updated from v2.3 to v2.7
Network Bonding (bonding driver)
This driver has been updated from 2.2.14 to 2.4.1
Serial ATA (libata driver)
This driver has been updated to version 1.01
This section contains listings of packages that have been updated or added from Red Hat Enterprise Linux 3 as part of Update 2.
These package lists include packages from all variants of Red Hat Enterprise Linux 3. Your system may not include every one of the packages listed here.
The following packages have been updated from the original release of Red Hat Enterprise Linux 3:
The following packages have been added to Red Hat Enterprise Linux 3 Update 2:
The following packages have been removed from Red Hat Enterprise Linux 3 Update 2:
(s390x)