Monday, December 15, 2008

File System Snapshot

File system snapshots have been available for JFS2 file systems since AIX 5L V5.2. Originally, a snapshot had to be created in a separate logical volume. This is called a “JFS2 external snapshot”.

Starting with AIX V6.1, IBM added the ability to store snapshots within the file system itself. This is called a “JFS2 internal snapshot”. Internal snapshots are stored under /fsmountpoint/.snapshot/snapshotname.

Both internal and external snapshots keep track of changes to the snapped file system by saving the modified or deleted file blocks, providing a point-in-time (PIT) image of the source file system. Snapshots are typically used to take a PIT image (backup) of a file system during production runtime.

Advantages of Internal Snapshot:

a) No superuser permissions are necessary to access data from a snapshot, since no initial mount operation is required.
b) No additional file system or logical volume needs to be maintained and monitored.
c) Snapshots are easily NFS-exported, since they are held in the same file system.

Management of Internal snapshots:

A JFS2 file system must be created with the new -a isnapshot=yes option. Existing file systems created without the isnapshot option cannot be used for internal snapshots; they must be re-created, or must use external snapshots instead.
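For example, a snapshot-capable file system might be created with crfs as follows (a sketch; the volume group, mount point, and size shown here are only illustrative):

# crfs -v jfs2 -g rootvg -m /oracle -A yes -a size=10G -a isnapshot=yes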

To create an internal snapshot:

# snapshot -o snapfrom=/oracle -n snap10
Snapshot "snap10" for file system /oracle created.

To create an external snapshot:
# snapshot -o snapfrom=/oracle /dev/snaporacle
This command creates a snapshot for the /oracle file system on the /dev/snaporacle logical volume, which already exists.

To list all snapshots for a file system:

# snapshot -q /oracle
Snapshots for /oracle
Current Name Time
* snap10 Mon Dec 15 09:17:51 CDT 2008

Files in the snapshot “snap10” are available under the /oracle/.snapshot/snap10 directory. These files are read-only; no modifications are allowed.
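Because the snapshot appears as an ordinary read-only directory tree, a single file can be restored simply by copying it back out (a sketch; the file name here is hypothetical):

# cp /oracle/.snapshot/snap10/dbf/users01.dbf /oracle/dbf/users01.dbf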

To delete an internal snapshot:

# snapshot -d -n snap10 /oracle


SMIT Screens:

To access these functions through SMIT, follow the menu path below:

smitty > System Storage Management > File Systems > Add / Change / Show / Delete File Systems > Enhanced Journaled File Systems

This shows the following options:

List Snapshots for an Enhanced Journaled File System
Create Snapshot for an Enhanced Journaled File System
Mount Snapshot for an Enhanced Journaled File System
Remove Snapshot for an Enhanced Journaled File System
Unmount Snapshot for an Enhanced Journaled File System
Change Snapshot for an Enhanced Journaled File System
Rollback an Enhanced Journaled File System to a Snapshot
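The rollback option can also be performed from the command line with the rollback command. A sketch, assuming the internal snapshot from the earlier example (the file system must be unmounted before rolling back):

# unmount /oracle
# rollback -n snap10 /oracle
# mount /oracle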

Some points on Internal snapshots:

1. A snapped file system can be mounted read-only on previous AIX 5L versions; the snapshot itself cannot be accessed. The file system must be in a clean state; run the fsck command to ensure that this is true.

2. A file system created with the ability for internal snapshots can still have external snapshots.

3. Once a file system has been enabled to use internal snapshots, this cannot be undone.

4. If the fsck command has to modify the file system, any internal snapshots for the file system will be deleted by fsck.

5. Snapped file systems cannot be shrunk.

6. The defragfs command cannot be run on a file system with internal snapshots.


Some points on Internal and External snapshots:

1. A file system can use only one type of snapshot (internal or external) at a time.

2. External snapshots are persistent across a system reboot.

3. Typically, a snapshot needs two to six percent of the space of the snapped file system. For a highly active file system, estimate 15 percent.

4. During the creation of a snapshot, only read access to the snapped file system is allowed.

5. There is reduced performance for write operations to a snapped file system. Read operations are not affected.

6. Snapshots are not a replacement for backups. A snapshot always depends on the snapped file system, while a backup has no dependency on its source.

7. Neither the mksysb nor alt_disk_install commands will preserve snapshots.

8. A file system with snapshots cannot be managed by DMAPI. A file system being managed by DMAPI cannot create a snapshot.
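As a worked example of the sizing rule of thumb in point 3 above, the following POSIX shell sketch computes suggested snapshot sizes for a hypothetical 100 GB file system:

```shell
#!/bin/sh
# Rule-of-thumb snapshot sizing: 2-6% of the snapped file system,
# and about 15% for a highly active file system.
fs_mb=102400                   # example: a 100 GB file system, in MB

low=$((fs_mb * 2 / 100))       # typical lower bound
high=$((fs_mb * 6 / 100))      # typical upper bound
active=$((fs_mb * 15 / 100))   # highly active file system

echo "typical: ${low}-${high} MB, highly active: ${active} MB"
```

So a 100 GB file system would typically need a 2-6 GB snapshot, or about 15 GB if it is highly active.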

File systems on CD-ROM and DVD disks

Normally, CD and DVD media must be mounted, unmounted, and ejected manually. AIX offers automatic management of CD/DVD media through a daemon called cdromd, which manages a set of CD/DVD drives.

You can manually mount a read/write UDFS with the following command:

# mount -V udfs DevName MtPt
where DevName is the name of the DVD drive and MtPt is the mount point for the file system.
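For example, assuming the drive is cd0 and the mount point /mnt/dvd already exists:

# mount -V udfs /dev/cd0 /mnt/dvd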

To mount media in cd0, which is managed by cdromd:
# cdmount cd0

To have the cdromd daemon enabled on each system startup, add the following line to /etc/inittab:

cdromd:23456789:wait:/usr/bin/startsrc -s cdromd
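Alternatively, rather than editing /etc/inittab by hand, the same entry can be added with the mkitab command:

# mkitab "cdromd:23456789:wait:/usr/bin/startsrc -s cdromd"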

Use the following commands to start, stop, and query the cdromd daemon:

# startsrc -s cdromd

# stopsrc -s cdromd

# lssrc -s cdromd

Here are some examples of managing media:

To eject media from cd0,
# cdeject cd0

To ask cdromd if cd0 is managed,
# cdcheck -a cd0

To ask cdromd if media is not present in cd0,
# cdcheck -e cd0

To unmount a file system on cd0,
# cdumount cd0

To suspend management of cd0 by cdromd, (this ejects the media)
# cdutil -s cd0

To suspend management of cd0 by cdromd without ejecting the media,
# cdutil -s -k cd0

To resume management of cd0 by cdromd,
# cdutil -r cd0

Thursday, December 11, 2008

WPAR

Workload partitions (WPARs) are a new feature introduced in AIX 6.1. Prior to WPARs, we used logical partitions to isolate operating environments; this is no longer always necessary, as multiple WPARs can run in a single LPAR. If you are a Solaris or HP-UX admin, WPARs are roughly similar to Solaris zones and HP-UX virtual machines.

There are two types of WPARs:

a. System WPAR
System WPARs are autonomous virtual system environments with their own private file systems, users and groups, login, network space and administrative domain.
For system WPARs, local file system spaces, such as /home and /usr, are constructed from isolated sections of the file system space of the global environment. By default, these spaces are located in the /wpars directory of the global environment. To processes running within the WPAR, all paths appear relative to the WPAR's base directory. For example, users in the WPAR “part1” would see the /wpars/part1/usr directory as the /usr directory.

b. Application WPAR
Application workload partitions (WPARs) provide an environment for the isolation of applications and their resources to enable checkpoint, restart, and relocation at the application level. These WPARs share the global environment's file system namespace. When an application WPAR is created, it has access to all mounts available to the global environment's file system.


Creating an application wpar:

You can create an application WPAR using the wparexec command.

You must supply the path to the application or command that you want to create an application WPAR for, and you must supply any command line arguments when you run the wparexec command. The application can either come from a specification file, or be specified on the command line. Unlike system WPARs, it is not necessary to assign an explicit name to an application WPAR. Although both WPAR types require a name, the names for application WPARs are generated based on the name of the application running in the WPAR.

Complete the following steps to create an application WPAR:

1. Log in as the root user to the system where you want to create and configure the workload partition. This login places you into the global environment.

2. To create and configure the workload partition, run the following command:
wparexec -n wparname -- /usr/bin/ps -ef > /ps.out

The output should look similar to the following:
wparexec: Verifying filesystems...
wparexec: Workload partition wparname created successfully.
startwpar: COMMAND START, ARGS: wparname
startwpar: Starting workload partition 'wparname'
startwpar: Mounting all workload partition file systems
startwpar: Loading workload partition
startwpar: Shutting down all workload partition processes
rmwpar: Removing workload partition wparname
rmwpar: Return Status = SUCCESS
startwpar: Return Status = SUCCESS
You have now successfully created an application WPAR.
Application WPARs start as soon as the wparexec command is issued, and stop as soon as the application completes its operation. When the operation is complete, the configuration for the application WPAR is destroyed.

System WPAR:

To create a system WPAR:

Here is an example for the system WPAR creation,

# mkwpar -n system1
mkwpar: Creating file systems...
/
/home
/opt
/proc
/tmp
/usr
/var

<<>>

FILESET STATISTICS
------------------
241 Selected to be installed, of which:
241 Passed pre-installation verification
----
241 Total to be installed

+-----------------------------------------------------------------------------+
Installing Software...
+-----------------------------------------------------------------------------+


Filesets processed: 6 of 241 (Total time: 2 secs).

installp: APPLYING software for:
X11.base.smt 6.1.0.1
Filesets processed: 7 of 241 (Total time: 3 secs).
installp: APPLYING software for:
X11.help.EN_US.Dt.helpinfo 6.1.0.0
Filesets processed: 8 of 241 (Total time: 3 secs).
installp: APPLYING software for:
bos.acct 6.1.0.1
Filesets processed: 9 of 241 (Total time: 3 secs).
installp: APPLYING software for:
bos.acct 6.1.0.2
Filesets processed: 10 of 241 (Total time: 4 secs).
installp: APPLYING software for:
bos.adt.base 6.1.0.0
bos.adt.insttools 6.1.0.0
Filesets processed: 12 of 241 (Total time: 4 secs).
installp: APPLYING software for:
bos.compat.links 6.1.0.0
bos.compat.net 6.1.0.0
bos.compat.termcap 6.1.0.0

Workload partition system1 created successfully.
mkwpar: 0960-390 To start the workload partition, execute the
following as root: startwpar [-v] system1


It normally takes 2 to 4 minutes for the creation of a system WPAR.

By default, the file systems for a new system WPAR are located in the /wpars/wpar_name directory.

You can override the default location using the following command:
mkwpar -n wpar_name -d /newfs/wpar_name

To change the name of a system WPAR:
chwpar -n new_name wpar_name

Configure networks for system WPARs:
You can configure the network for a system WPAR using the -h flag or the -N flag for the mkwpar command or the chwpar command.

If you do not specify any network information when you create a system WPAR, the name of the WPAR resolves to an IP address on the same network as any active global interface.

Here is an example of creating a system WPAR and configuring an IP address on it:

# mkwpar -n wpar_name -N interface=en0 address=224.128.9.3 \
netmask=255.255.255.0 broadcast=224.128.9.255

This creates an alias IP address on the network interface en0 in the global environment.

You can change the IP address later with the following command:

# chwpar -N address=224.128.9.3 netmask=255.255.255.128 \
broadcast=224.128.9.127 wpar_name

Changing the hostname in a system WPAR:
By default, the name for a system WPAR is used as its host name. You can use the -h flag with the mkwpar command or the chwpar command to change the host name for a system WPAR.
Example: # chwpar -h new_hostname wpar_name


Removing a network from a system WPAR:
You can remove a network from a system WPAR using the chwpar command with the -K flag.

Example: # chwpar -K -N address=124.128.9.3 wpar_name

Configuring domain resolution for system WPARs:
You can configure the domain resolution for system WPARs using the -r flag for the mkwpar command.

The following command copies the global environment’s domain resolution configuration into the system WPAR:

# mkwpar -n wpar_name -r

Configuring system WPAR-specific routing:
You can configure a WPAR to use its own routing table using the -i flag and the -I flag for the mkwpar command, the wparexec command, or the chwpar command.

Configuring resource controls for system WPARs:
You can configure the resource controls to limit the physical resources a system WPAR has access to using the -R flag for the mkwpar command and chwpar command.
To initialize resource control settings, run the following mkwpar command:
mkwpar -n wpar_name -R active=yes CPU=10%-20%,50% totalProcesses=1024
In this example, the WPAR is entitled to the following system resources:

· A minimum of 10% of the global environment’s processors upon request
· A maximum of 20% of the global environment’s processors when there is contention
· A maximum of 50% of the global environment’s processors when there is no contention
· A maximum of 1024 processes at a time
To change resource control settings dynamically for an existing active or inactive application WPAR run the following chwpar command:
chwpar -R totalThreads=2048 shares_memory=100 wpar_name
Note: You can also use the -K flag for the chwpar command to remove individual attributes from the profile and restore those controls to their default, as follows:
chwpar -K -R totalProcesses shares_CPU wpar_name

Starting a System WPAR:

After logging into the global environment, run the below command to start a system WPAR
# startwpar wpar_name

To start in a maintenance mode,
# startwpar -m wpar_name
Note: You cannot start WPARs that rely on NFS-mounted file systems in maintenance mode.

Stopping a System WPAR:
You can stop a WPAR from the global environment using the stopwpar command.
Stopping a system WPAR follows a similar paradigm to the shutdown command and the halt command for AIX®. For application WPARs, running the stopwpar command is equivalent to removing the WPAR with the rmwpar command.

To stop a system WPAR in the same way that the shutdown command stops a system, run the following command:
# stopwpar wpar_name

To stop a system WPAR quickly in the same way that the halt command stops a system, run the following command:
# stopwpar -F wpar_name

Software update in system WPARs:
When you install software in the global environment, it is not always automatically available for use within your system WPAR. You can use the syncwpar command or the syncroot command to make software available.

Application workload partitions share their file systems with the global environment and do not create new file systems. Therefore, the syncwpar command and the syncroot command are applicable only to system WPARs.

To make software available in one or more WPARs, run the following command in the global environment:

# syncwpar wpar_name1 wpar_name2

The syncroot command performs the same function as the syncwpar command, but the syncroot command operates only within the WPAR where it is issued.
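For example, to synchronize a single WPAR from inside it, you might log in with clogin and then run syncroot there (a sketch):

# clogin wpar_name
# syncroot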

Listing WPARs:
You can list summary data for system WPARs and application WPARs using the lswpar command.

For example, to list the WPARs on a system with names that start with "mypar_", run the following command:
# lswpar 'mypar_*'

Listing WPAR identifiers:
You can list the identifier for a WPAR using the lparstat command, or the uname command with the -W flag.
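As a quick check, uname -W prints the numeric WPAR identifier of the current environment (0 when run in the global environment), and lparstat -W lists WPAR details:

# uname -W
# lparstat -W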

Logging into a WPAR:
After you configure and activate a system WPAR, you can log in to it locally using the clogin command.

To log in to a system WPAR and create a shell as the root user, run the following command:
# clogin wpar_name

To log in to a system WPAR and create a shell as a different user, run the following command:
# clogin -l username wpar_name

Note: You can also log in to a system WPAR remotely using a network-based login command, such as the rlogin command, the telnet command, or the rsh command.

Backing up WPARs:

You can back up a WPAR using the savewpar command, the mkcd command, or the mkdvd command.

The savewpar command uses the data created by the mkwpardata command to back up your WPAR. If these files are not already on your system, the savewpar command will call the mkwpardata command to create these files.

The image files contain the following information:
· A list of logical volumes and their sizes
· A list of file systems and their sizes
· A list of volume groups
· The WPAR name

To back up a WPAR to the default tape device, run the following command:
# savewpar wparname

To back up a WPAR to a file, run the following command:
# savewpar -f file wparname

You can also back up a WPAR to a CD device using the mkcd -W command or to a DVD device using the mkdvd -W command.

Restoring WPARs:
You can restore a WPAR using the restwpar command. You can restore a WPAR from a backup image created by the savewpar command, the mkcd command, or the mkdvd command.

To restore the backup image from the /dev/rmt1 device, run the following command:
restwpar -f /dev/rmt1

Removing WPARs:
You can remove a WPAR using the rmwpar command.

To remove a WPAR, it must be in the defined state, and you must provide the name of the WPAR.

To remove a WPAR, run the following command:
rmwpar wpar_name

To stop a WPAR before removing it, run the following rmwpar command with the -s flag:
rmwpar -s wpar_name