Monday, December 15, 2008

File System Snapshot

File system snapshots have been available for JFS2 file systems since AIX 5L v5.2. Initially, snapshots had to be created in a separate logical volume. This is called a “JFS2 External Snapshot”.

Starting with AIX V6.1, IBM offers the ability to create snapshots within the same file system. This is called a “JFS2 Internal Snapshot”. These internal snapshots are stored under /fsmountpoint/.snapshot/snapshotname.

Both internal and external snapshots keep track of changes to the snapped file system by saving the modified or deleted file blocks. A snapshot provides a point-in-time (PIT) image of the source file system, so snapshots are typically used to take a PIT image (backup) of a file system during production runtime.

Advantages of Internal Snapshot:

a) No superuser permissions are necessary to access data from a snapshot, since no initial mount operation is required.
b) No additional file system or logical volume needs to be maintained and monitored.
c) Snapshots are easily NFS-exported, since they are held in the same file system.

Management of Internal snapshots:

A JFS2 file system must be created with the new -a isnapshot=yes option. Existing file systems created without the isnapshot option cannot be used for internal snapshots; they have to be re-created, or they have to use external snapshots instead.
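For example, to create a new JFS2 file system with internal snapshot support (a minimal sketch; the volume group datavg, the 1G size, and the mount point /oracle are just placeholders):

# crfs -v jfs2 -g datavg -m /oracle -a size=1G -a isnapshot=yes
# mount /oracle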

To create an internal snapshot:

# snapshot -o snapfrom=/oracle -n snap10
Snapshot "snap10" for file system /oracle created.

To create an external snapshot:
# snapshot -o snapfrom=/oracle /dev/snaporacle
This command creates a snapshot for the /oracle file system on the /dev/snaporacle logical volume, which already exists.
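If the target logical volume does not exist yet, the snapshot command should also be able to create one for you when given a size instead of a device name; a hedged variant (the 64M size is only an illustration):

# snapshot -o snapfrom=/oracle -o size=64M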

To list all snapshots for a file system:

# snapshot -q /oracle
Snapshots for /oracle
Current  Name     Time
*        snap10   Mon Dec 15 09:17:51 CDT 2008

Files under the snapshot “snap10” are available in the /oracle/.snapshot/snap10 directory. These files are read-only; no modifications are allowed.
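Since the snapshot appears as an ordinary directory tree, restoring a single file is just a copy; a sketch, where config.ora is a hypothetical file name:

# cp -p /oracle/.snapshot/snap10/config.ora /oracle/config.ora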

To delete an internal snapshot:

# snapshot -d -n snap10 /oracle


SMIT Screens:

To access the SMIT menu items, follow the path below:

smitty > System Storage Management > File Systems > Add / Change / Show / Delete File Systems > Enhanced Journaled File Systems

This shows the following options:

List Snapshots for an Enhanced Journaled File System
Create Snapshot for an Enhanced Journaled File System
Mount Snapshot for an Enhanced Journaled File System
Remove Snapshot for an Enhanced Journaled File System
Unmount Snapshot for an Enhanced Journaled File System
Change Snapshot for an Enhanced Journaled File System
Rollback an Enhanced Journaled File System to a Snapshot
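The last menu item corresponds to the rollback command. A hedged example of rolling /oracle back to the internal snapshot snap10 (to my knowledge the file system must be unmounted during the rollback):

# umount /oracle
# rollback -n snap10 /oracle
# mount /oracle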

Some points on Internal snapshots:

1. A snapped file system can be mounted read only on previous AIX 5L versions. The snapshot itself cannot be accessed. The file system must be in a clean state; run the fsck command to ensure that this is true.

2. A file system created with the ability for internal snapshots can still have external snapshots.

3. Once a file system has been enabled to use internal snapshots, this cannot be undone.

4. If the fsck command has to modify the file system, any internal snapshots for the file system will be deleted by fsck.

5. Snapped file systems cannot be shrunk.

6. The defragfs command cannot be run on a file system with internal snapshots.


Some points on Internal and External snapshots:

1. A file system can use only one type of snapshot at a time.

2. External snapshots are persistent across a system reboot.

3. Typically, a snapshot needs two to six percent of the space of the snapped file system; for a highly active file system, plan for about 15 percent.

4. During the creation of a snapshot, only read access to the snapped file system is allowed.

5. Write performance on a snapped file system is reduced; read operations are not affected.

6. Snapshots are not a replacement for backups. A snapshot always depends on the snapped file system, while a backup has no dependency on its source.

7. Neither the mksysb nor alt_disk_install commands will preserve snapshots.

8. A file system with snapshots cannot be managed by DMAPI. A file system being managed by DMAPI cannot create a snapshot.

File systems on CD-ROM and DVD disks

Normally, CD and DVD media have to be mounted, unmounted, and ejected manually. AIX offers automatic management of CD/DVD media through a daemon called cdromd, which manages a set of CD/DVD drives.

You can manually mount a read/write UDFS with the following command:

# mount -V udfs DevName MtPt
where DevName is the name of the DVD drive and MtPt is the mount point for the file system.
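For example, assuming the drive is cd0 and /mnt is the mount point:

# mount -V udfs /dev/cd0 /mnt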

To mount media in cd0, which is managed by cdromd:
# cdmount cd0

To have the cdromd daemon enabled on each system startup, add the following line to /etc/inittab:

cdromd:23456789:wait:/usr/bin/startsrc -s cdromd
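You can verify the entry afterwards with the lsitab command:

# lsitab cdromd
cdromd:23456789:wait:/usr/bin/startsrc -s cdromd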

Use the commands below to start, stop, and list the cdromd daemon:

# startsrc -s cdromd

# stopsrc -s cdromd

# lssrc -s cdromd

Here are some examples of managing media:

To eject media from cd0:
# cdeject cd0

To ask cdromd if cd0 is managed:
# cdcheck -a cd0

To ask cdromd if no media is present in cd0:
# cdcheck -e cd0

To unmount a file system on cd0:
# cdumount cd0

To suspend management of cd0 by cdromd (this ejects the media):
# cdutil -s cd0

To suspend management of cd0 by cdromd without ejecting the media:
# cdutil -s -k cd0

To resume management of cd0 by cdromd:
# cdutil -r cd0

Thursday, December 11, 2008

WPAR

WPARs (workload partitions) are a new feature introduced in AIX 6.1. Prior to WPARs, we used logical partitions to isolate operating environments; this is no longer always necessary, as multiple WPARs can run in a single LPAR. If you are a Solaris or HP-UX admin, you will have seen zones in Solaris and virtual machines in HP-UX; WPARs are more or less similar to those.

There are two types of WPARs:

a. System WPAR
System WPARs are autonomous virtual system environments with their own private file systems, users and groups, login, network space and administrative domain.
For system WPARs, local file system spaces, such as /home and /usr, are constructed from isolated sections of the file system space of the global environment. By default, these spaces are located in the /wpars directory of the global environment. Processes running within the WPAR see their paths relative to the base directory of the WPAR. For example, users in the WPAR “part1” would see the /wpars/part1/usr directory as the /usr directory.

b. Application WPAR
Application workload partitions (WPARs) provide an environment for the isolation of applications and their resources to enable checkpoint, restart, and relocation at the application level. These WPARs share the global environment's file system namespace. When an application WPAR is created, it has access to all mounts available to the global environment's file system.


Creating an application wpar:

You can create an application WPAR using the wparexec command.

You must supply the path to the application or command that you want to create an application WPAR for, along with any command-line arguments, when you run the wparexec command. The application can either come from a specification file or be specified on the command line. Unlike system WPARs, it is not necessary to assign an explicit name to an application WPAR; although both WPAR types require a name, the name for an application WPAR can be generated from the name of the application running in it.

Complete the following steps to create an application WPAR:

1. Log in as the root user to the system where you want to create and configure the workload partition. This login places you into the global environment.

2. To create and configure the workload partition, run the following command:
wparexec -n wparname -- /usr/bin/ps -ef > /ps.out

The output should look similar to the following:
wparexec: Verifying filesystems...
wparexec: Workload partition wparname created successfully.
startwpar: COMMAND START, ARGS: wparname
startwpar: Starting workload partition 'wparname'
startwpar: Mounting all workload partition file systems
startwpar: Loading workload partition
startwpar: Shutting down all workload partition processes
rmwpar: Removing workload partition wparname
rmwpar: Return Status = SUCCESS
startwpar: Return Status = SUCCESS
You have now successfully created an application WPAR.
Application WPARs start as soon as the wparexec command is issued, and stop as soon as the application completes its operation. When the operation is complete, the configuration for the application WPAR is destroyed.
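For example (a sketch using sleep as a stand-in for a real application), you can background the wparexec call and watch the WPAR from the global environment while it runs:

# wparexec -n tempwpar -- /usr/bin/sleep 300 &
# lswpar tempwpar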

System WPAR:

To create a system WPAR:

Here is an example for the system WPAR creation,

# mkwpar -n system1
mkwpar: Creating file systems...
/
/home
/opt
/proc
/tmp
/usr
/var

<<>>

FILESET STATISTICS
------------------
241 Selected to be installed, of which:
241 Passed pre-installation verification
----
241 Total to be installed

+-----------------------------------------------------------------------------+
Installing Software...
+-----------------------------------------------------------------------------+


Filesets processed: 6 of 241 (Total time: 2 secs).

installp: APPLYING software for:
X11.base.smt 6.1.0.1
Filesets processed: 7 of 241 (Total time: 3 secs).
installp: APPLYING software for:
X11.help.EN_US.Dt.helpinfo 6.1.0.0
Filesets processed: 8 of 241 (Total time: 3 secs).
installp: APPLYING software for:
bos.acct 6.1.0.1
Filesets processed: 9 of 241 (Total time: 3 secs).
installp: APPLYING software for:
bos.acct 6.1.0.2
Filesets processed: 10 of 241 (Total time: 4 secs).
installp: APPLYING software for:
bos.adt.base 6.1.0.0
bos.adt.insttools 6.1.0.0
Filesets processed: 12 of 241 (Total time: 4 secs).
installp: APPLYING software for:
bos.compat.links 6.1.0.0
bos.compat.net 6.1.0.0
bos.compat.termcap 6.1.0.0

Workload partition system1 created successfully.
mkwpar: 0960-390 To start the workload partition, execute the
following as root: startwpar [-v] system1


It normally takes 2 to 4 minutes for the creation of a system WPAR.

By default, the file systems for a new system WPAR are located in the /wpars/wpar_name directory.

You can override the default location using the following command:
mkwpar -n wpar_name -d /newfs/wpar_name

To change the name of a system WPAR:
chwpar -n new_name wpar_name

Configure networks for system WPARs:
You can configure the network for a system WPAR using the -h flag or the -N flag for the mkwpar command or the chwpar command.

If you do not specify any network information when you create a system WPAR, the name of the WPAR resolves to an IP address on the same network as any active global interface.

Here is an example that creates a system WPAR and configures an IP address on it:

# mkwpar -n wpar_name -N interface=en0 address=224.128.9.3 \
netmask=255.255.255.0 broadcast=224.128.9.255

This creates an alias IP address on the network interface en0 in the global environment.

You can change the IP address later with the following command:

# chwpar -N address=224.128.9.3 netmask=255.255.255.128 \
broadcast=224.128.9.127 wpar_name

Changing the hostname in a system WPAR:
By default, the name for a system WPAR is used as its host name. You can use the -h flag with the mkwpar command or the chwpar command to change the host name for a system WPAR.
Example: # chwpar -h new_hostname wpar_name


Removing a network from a system WPAR:
You can remove a network from a system WPAR using the chwpar command with the -K flag.

Example: # chwpar -K -N address=124.128.9.3 wpar_name

Configuring domain resolution for system WPARs:
You can configure the domain resolution for system WPARs using the -r flag for the mkwpar command.

The following command copies the global environment’s domain resolution configuration into the system WPAR:

# mkwpar -n wpar_name -r

Configuring system WPAR-specific routing:
You can configure a WPAR to use its own routing table using the -i flag and the -I flag for the mkwpar command, the wparexec command, or the chwpar command.
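A hedged example; the rtdest and rtgateway attribute names and the gateway address here are based on my reading of the chwpar documentation, so verify them against your system:

# chwpar -I rtdest=default rtgateway=192.168.1.1 wpar_name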

Configuring resource controls for system WPARs:
You can configure the resource controls to limit the physical resources a system WPAR has access to using the -R flag for the mkwpar command and chwpar command.
To initialize resource control settings, run the following mkwpar command:
mkwpar -n wpar_name -R active=yes CPU=10%-20%,50% totalProcesses=1024
In this example, the WPAR is entitled to the following system resources:

• A minimum of 10% of the global environment’s processors upon request
• A maximum of 20% of the global environment’s processors when there is contention
• A maximum of 50% of the global environment’s processors when there is no contention
• A maximum of 1024 processes at a time
To change resource control settings dynamically for an existing active or inactive WPAR, run the following chwpar command:
chwpar -R totalThreads=2048 shares_memory=100 wpar_name
Note: You can also use the -K flag for the chwpar command to remove individual attributes from the profile and restore those controls to their default, as follows:
chwpar -K -R totalProcesses shares_CPU wpar_name

Starting a System WPAR:

After logging in to the global environment, run the following command to start a system WPAR:
# startwpar wpar_name

To start in maintenance mode:
# startwpar -m wpar_name
Note: You cannot start WPARs that rely on NFS-mounted file systems in maintenance mode.

Stopping a System WPAR:
You can stop a WPAR from the global environment using the stopwpar command.
Stopping a system WPAR follows a similar paradigm to the shutdown command and the halt command for AIX®. For application WPARs, running the stopwpar command is equivalent to removing the WPAR with the rmwpar command.

To stop a system WPAR in the same way that the shutdown command stops a system, run the following command:
# stopwpar wpar_name

To stop a system WPAR quickly in the same way that the halt command stops a system, run the following command:
# stopwpar -F wpar_name

Software update in system WPARs:
When you install software in the global environment, it is not always automatically available for use within your system WPAR. You can use the syncwpar command or the syncroot command to make software available.

Application workload partitions share their file systems with the global environment and do not create new file systems. Therefore, the syncwpar command and the syncroot command are applicable only to system WPARs.

To make software available in one or more WPARs, run the following command in the global environment:

# syncwpar wpar_name1 wpar_name2

The syncroot command performs the same function as the syncwpar command, but the syncroot command operates only within the WPAR where it is issued.
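For example, to update a single WPAR from within it (a sketch):

# clogin wpar_name
# /usr/sbin/syncroot
# exit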

Listing WPARs:
You can list summary data for system WPARs and application WPARs using the lswpar command.

For example, to list the WPARs on a system with names that start with "mypar_", run the following command:
# lswpar 'mypar_*'

Listing WPAR identifiers:
You can list the identifiers for a WPAR using the lparstat command, or the uname command with the -W flag.
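For example, uname -W prints 0 in the global environment and a nonzero identifier inside a WPAR:

# uname -W
0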

Logging into a WPAR:
After you configure and activate a system WPAR, you can log in to it locally using the clogin command.

To log in to a system WPAR and create a shell as the root user, run the following command:
# clogin wpar_name

To log in to a system WPAR and create a shell as a different user, run the following command:
# clogin -l username wpar_name

Note: You can also log in to a system WPAR remotely using a network-based login command, such as the rlogin command, the telnet command, or the rsh command.

Backing up WPARs:

You can back up a WPAR using the savewpar command, the mkcd command, or the mkdvd command.

The savewpar command uses the data created by the mkwpardata command to back up your WPAR. If these files are not already on your system, the savewpar command will call the mkwpardata command to create these files.

The image files contain the following information:
• A list of logical volumes and their sizes
• A list of file systems and their sizes
• A list of volume groups
• The WPAR name

To back up a WPAR to the default tape device, run the following command:
# savewpar wparname

To back up a WPAR to a file, run the following command:
# savewpar -f file wparname

You can also back up a WPAR to a CD device using the mkcd -W command or to a DVD device using the mkdvd -W command.

Restoring WPARs:
You can restore a WPAR using the restwpar command. You can restore a WPAR from a backup image created by the savewpar command, the mkcd command, or the mkdvd command.

To restore the backup image from the /dev/rmt1 device, run the following command:
restwpar -f /dev/rmt1

Removing WPARs:
You can remove a WPAR using the rmwpar command.

To remove a WPAR, it must be in the defined state, and you must provide the name of the WPAR.

To remove a WPAR, run the following command:
rmwpar wpar_name

To stop a WPAR before removing it, run the following rmwpar command with the -s flag:
rmwpar -s wpar_name

Tuesday, November 11, 2008

AIX Commands – Part I

Volume Group Commands

Display all VGs:

# lsvg

Display all active VGs:

# lsvg -o

Display info about rootvg:

# lsvg rootvg

Display info about all LVs in all VGs:

# lsvg -o | lsvg -il

Display info about all PVs in rootvg:

# lsvg -p rootvg

Create a VG (with a system-generated name) on hdisk1 with partition size 8 MB:

# mkvg -s 8 hdisk1

Create a VG named sivg on hdisk1 with partition size 8 MB:

# mkvg -s 8 -y sivg hdisk1

Create sivg on hdisk1 with PP size 4 MB and a maximum of 2 * 1016 partitions per PV:

# mkvg -s 4 -t 2 -y sivg hdisk1

To make the VG newvg activate automatically at startup:

# chvg -a y newvg

To disable automatic activation at startup:

# chvg -a n newvg

To change the maximum number of PPs per PV to 2032 on the VG newvg:

# chvg -t 2 newvg

To disable quorum on the VG newvg:

# chvg -Q n newvg

To reorganize the PP allocation of the VG newvg:

# reorgvg newvg

To add the PVs hdisk3 and hdisk4 to the VG newvg:

# extendvg newvg hdisk3 hdisk4

To export the VG newvg:

# exportvg newvg

To import hdisk2 with the name newvg and assign major number 44:

# importvg -V 44 -y newvg hdisk2

To remove the PV hdisk3 from the VG newvg:

# reducevg newvg hdisk3

To deactivate the VG newvg:

# varyoffvg newvg

To activate the VG newvg:

# varyonvg newvg

To sync the mirrored LVs in the VG sivg:

# syncvg -v sivg

To mirror the LVs of sivg onto hdisk2 (-m for an exact mirror, -S for background sync):

# mirrorvg -S -m sivg hdisk2

To remove the mirror copy on hdisk2:

# unmirrorvg sivg hdisk2

To synchronize the ODM with the LVM (VGDA) for datavg:

# synclvodm datavg

File system commands … in the next part.

Multibos

 

Is it possible to have two BOS instances in a single rootvg and do patching on only one instance?

Yes. It is possible with the introduction of multibos.

The multibos command allows the administrator to create multiple instances of AIX on the same rootvg. The multibos setup operation creates a standby BOS that boots from a distinct boot logical volume (BLV). This creates two bootable sets of the BOS on a single rootvg. The administrator can boot from either instance of the BOS by specifying the respective BLV as an argument to the bootlist command.

The multibos command allows the administrator to access, install maintenance and technology levels for, update, and customize the standby BOS either during setup or in subsequent customization operations. Installing maintenance and technology updates to the standby BOS does not change system files on the active BOS. This allows for concurrent update of the standby BOS, while the active BOS remains in production.

In addition, the multibos command has the ability to copy or share logical volumes and file systems. By default, the BOS file systems (currently /, /usr, /var, and /opt) and the boot logical volume are copied. You can also make copies of other file systems using the -L flag.
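A hedged example of the -L flag (my assumption is that it takes a file listing the extra file systems or logical volumes to copy, one per line; /tmp/extra_fs is a made-up name):

# cat /tmp/extra_fs
/home
# multibos -sX -L /tmp/extra_fs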

All other file systems and logical volumes are shared between the instances of the BOS. Separate log device logical volumes (for example, those that are not contained within the file system) are not supported for copying and will be shared.

But there are some restrictions for implementing multibos.

  • The multibos command is supported on AIX 5L Version 5.3 with the 5300-03 Recommended Maintenance package and later.
  • The current rootvg must have enough space for each BOS object copy. BOS object copies are placed on the same disk or disks as the original.
  • The total number of copied logical volumes cannot exceed 128. The total number of copied logical volumes and shared logical volumes are subject to volume group limits.

 

Let's see some examples:

1. To preview the creation of a standby BOS,

# multibos -sXp

My recommendation: always preview (-p flag) the setup operation before proceeding with the actual operation.

2. To create a standby BOS,

# multibos -sX

3. To mount a standby BOS,

# multibos -m

4. To unmount a standby BOS,

# multibos -u

5. To remove a standby BOS,

# multibos -R

6. To start a standby BOS shell,

# multibos -S

After starting a standby BOS shell, you can do patching or customize the BOS.

7. To set up the boot list for the standby BOS,

# bootlist -m normal hdisk0 blv=bos_hd5

where bos_hd5 is the BLV for the standby BOS, which was created by the multibos setup operation.
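You can verify the boot order before rebooting; the -o flag displays the current list:

# bootlist -m normal -o
hdisk0 blv=bos_hd5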

If you want to boot from the old BOS, please boot from hd5.

I started using multibos to avoid alt_disk_install during patching, and it saves a lot of downtime for me. After creating the standby BOS, we patch the standby BOS during production time, with no downtime; downtime is needed only to reboot the machine from the standby BLV. It's very simple. Have a try.

Man Page for multibos

Friday, November 7, 2008

IBM AIX - Features and Limits

It's a bit of an old table, but it's a really good doc for people working on different versions of AIX.

AIX Quick Sheet


How to get partition number on AIX Partitions

I hope you guys already know about the "lparstat -i" command, which shows details about a partition such as the partition ID, CPU and memory allocations, etc.

Here is another option to get the partition ID. You can also use it to find out whether your server is an LPAR or not.

How to Get the Partition Number on Aix Partitions

AIX 5.1 Installation Document

Here is a nice doc on AIX 5.1 installation. It has screen-by-screen slides.

AIX Etherchannel Load Balancing Options

Here is a cool document about EtherChannel load balancing options in AIX.

Thursday, October 30, 2008

AIX Version 6.0 Study Materials

Here are some study materials from IBM





Below are AIX v6 materials from rootvg







Few tips from other bloggers

Few tips on AIX

1. AIX CPU deallocation: how to replace a failed CPU dynamically on AIX

2. AIX 32 bit/64 bit Kernel Support

3. Directories to monitor in AIX

4. AIX/LVM Quick Reference

5. Change IP with one command

6. Display Slot information in AIX

7. Delete multiple gateways

8. How to set backspace in UNIX terminal

9. LVM information crash with ODM

10. Use "screen" to run programs in a dedicated session

Korn Shell Script Lessons 1-7

Korn Shell - Lesson 1 - Print statements and Comment


Korn Shell - Lesson 2a - Variables


Korn Shell - Lesson 2b - Reading input


Korn Shell - Lesson 3 - Debugging


Korn Shell - Lesson 4 - A Little About Arrays/Lists


Korn Shell - Lesson 5 - Basic Math


Korn Shell - Lesson 6a - if statement


Korn Shell - Lesson 6b - if statement - compound tests


Korn Shell - Lesson 6c - if/else statement


Korn Shell - Lesson 6d - if/then/elif statement

Korn Shell - Lesson 6e - if/then/elif/else statement

Korn Shell - Lesson 7a - String Tests w/ if

Bourne Shell Scripting (Part 61-76)

Bourne Shell Scripting - Part 61:


Bourne Shell Scripting - Part 62:


Bourne Shell Scripting - Part 63:


Bourne Shell Scripting - Part 64:


Bourne Shell Scripting - Part 65:


Bourne Shell Scripting - Part 66:


Bourne Shell Scripting - Part 67:


Bourne Shell Scripting - Part 68:


Bourne Shell Scripting - Part 69:


Bourne Shell Scripting - Part 70:


Bourne Shell Scripting - Part 71:


Bourne Shell Scripting - Part 72:


Bourne Shell Scripting - Part 73:

Bourne Shell Scripting - Part 74:



Bourne Shell Scripting - Part 75:

Bourne Shell Scripting - Part 76:

Bourne Shell Scripting (Part 41-60)

Bourne Shell Scripting - Part 41:


Bourne Shell Scripting - Part 42:


Bourne Shell Scripting - Part 43:


Bourne Shell Scripting - Part 44:


Bourne Shell Scripting - Part 45:


Bourne Shell Scripting - Part 46:


Bourne Shell Scripting - Part 47:


Bourne Shell Scripting - Part 48:

Bourne Shell Scripting - Part 49:


Bourne Shell Scripting - Part 50:


Bourne Shell Scripting - Part 51:


Bourne Shell Scripting - Part 52:


Bourne Shell Scripting - Part 53:


Bourne Shell Scripting - Part 54:


Bourne Shell Scripting - Part 55:


Bourne Shell Scripting - Part 56:


Bourne Shell Scripting - Part 57:


Bourne Shell Scripting - Part 58:


Bourne Shell Scripting - Part 59:


Bourne Shell Scripting - Part 60:

Bourne Shell Scripting (Part 21-40)

Bourne Shell Scripting - Part 21:


Bourne Shell Scripting - Part 22:


Bourne Shell Scripting - Part 23:


Bourne Shell Scripting - Part 24:


Bourne Shell Scripting - Part 25:


Bourne Shell Scripting - Part 26:


Bourne Shell Scripting - Part 27:


Bourne Shell Scripting - Part 28:


Bourne Shell Scripting - Part 29:


Bourne Shell Scripting - Part 30:


Bourne Shell Scripting - Part 31:


Bourne Shell Scripting - Part 32:


Bourne Shell Scripting - Part 33:


Bourne Shell Scripting - Part 34:


Bourne Shell Scripting - Part 35:


Bourne Shell Scripting - Part 36:


Bourne Shell Scripting - Part 37:


Bourne Shell Scripting - Part 38:


Bourne Shell Scripting - Part 39:


Bourne Shell Scripting - Part 40:

Bourne Shell Scripting (Part 1-20)

Bourne Shell Scripting - Part 1:

Bourne Shell Scripting - Part 2:

Bourne Shell Scripting - Part 3:

Bourne Shell Scripting - Part 4:


Bourne Shell Scripting - Part 5:


Bourne Shell Scripting - Part 6:


Bourne Shell Scripting - Part 7:


Bourne Shell Scripting - Part 8:

Bourne Shell Scripting - Part 9:

Bourne Shell Scripting - Part 10:


Bourne Shell Scripting - Part 11:


Bourne Shell Scripting - Part 12:


Bourne Shell Scripting - Part 13:

Bourne Shell Scripting - Part 14:

Bourne Shell Scripting - Part 15:


Bourne Shell Scripting - Part 16:


Bourne Shell Scripting - Part 17:


Bourne Shell Scripting - Part 18:


Bourne Shell Scripting - Part 19:


Bourne Shell Scripting - Part 20:

(bash) Shell Script Variables

(bash) Shell Script Variables (Part 1):

(bash) Shell Script Variables (Part 2):


(bash) Shell Script Variables (Part 3):

Basic Work With Text Files (YouTube video)

Basic Work With Text Files (part 1/8):


Basic Work With Text Files (part 2/8):


Basic Work With Text Files (part 3/8):


Basic Work With Text Files (part 4/8):


Basic Work With Text Files (part 5/8):


Basic Work With Text Files (part 6/8):


Basic Work With Text Files (part 7/8):


Basic Work With Text Files (part 8/8):

Introduction to VI editor (YouTube video)

Introduction to VI editor (Part 1):


Introduction to VI editor (Part 2):


Introduction to VI editor (Part 3):

YouTube video on Basic Unix/Linux File Permissions

Basic Unix/Linux File Permissions (Part 1):


Basic Unix/Linux File Permissions (Part 2):


Basic Unix/Linux File Permissions (Part 3):


Basic Unix/Linux File Permissions (Part 4):


Basic Unix/Linux File Permissions (Part 5):


Basic Unix/Linux File Permissions (Part 6):


Basic Unix/Linux File Permissions (Part 7):


Basic Unix/Linux File Permissions (Part 8):


Basic Unix/Linux File Permissions (Part 9):


Basic Unix/Linux File Permissions (Part 10):

YouTube video on Basic Unix/Linux Commands Arguments

Basic Unix/Linux Commands Arguments (Part 1/6):


Basic Unix/Linux Commands Arguments (Part 2/6):


Basic Unix/Linux Commands Arguments (Part 3/6):


Basic Unix/Linux Commands Arguments (Part 4/6):



Basic Unix/Linux Commands Arguments (Part 5/6):


Basic Unix/Linux Commands Arguments (Part 6/6):

YouTube video on basic Unix/Linux commands

Basic Unix/Linux Commands (Part 1/4):


Basic Unix/Linux Commands (Part 2/4):


Basic Unix/Linux Commands (Part 3/4):


Basic Unix/Linux Commands (Part 4/4):

Some YouTube videos on pSeries and AIX

How to boot a pSeries/LPAR to AIX from Service Processor:


Tour of the new p695 pSeries Server:


Creation of LPAR via HMC:


Introduction to HMCv7:


Dynamically changing LPARs:


Installing AIX from CDROM:


Installing AIX from NIM:


Ethernet Options with Virtual Ethernet:


Setting up Virtual Ethernet via the Virtual I/O Server: