
Kernel 3.8 Released, how to Compile in Redhat, CentOS and Fedora.

Sunday, 29 June 2008

RAID Concept in RHEL / CentOS (Redundant Array of Independent Disks)

The basic idea behind RAID is to combine multiple small, inexpensive disk drives into an array to accomplish performance or redundancy goals not attainable with one large and expensive drive. This array of drives appears to the computer as a single logical storage unit or drive.
What is RAID?

RAID allows information to be spread across several disks. RAID uses techniques such as disk striping (RAID Level 0), disk mirroring (RAID Level 1), and disk striping with parity (RAID Level 5) to achieve redundancy, lower latency, increased bandwidth, and maximized ability to recover from hard disk crashes.
  
RAID breaks the data down into consistently sized chunks (commonly 32 KB or 64 KB, although other values are acceptable) and distributes those chunks across the drives in the array. Each chunk is written to a hard drive in the RAID array according to the RAID level employed. When the data is read, the process is reversed, giving the illusion that the multiple drives in the array are actually one large drive.
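The chunk size can be chosen when a Software RAID array is created. As a minimal sketch (the device names /dev/sdb1 and /dev/sdc1 are hypothetical), mdadm accepts a -c/--chunk option given in kilobytes:

 # mdadm -C /dev/md0 -l0 -n2 -c64 /dev/sdb1 /dev/sdc1

Here -c64 requests 64 KB chunks; if the option is omitted, mdadm falls back to its built-in default.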

Who Should Use RAID?

System Administrators and others who manage large amounts of data would benefit from using RAID technology. Primary reasons to deploy RAID include:
    * Enhanced speed
    * Increased storage capacity using a single virtual disk
    * Minimized impact of a disk failure
Hardware RAID versus Software RAID

There are two possible RAID approaches: Hardware RAID and Software RAID.

Hardware RAID:
The hardware-based array manages the RAID subsystem independently from the host. It presents a single disk per RAID array to the host.
A Hardware RAID device connects to the SCSI controller and presents the RAID arrays as a single SCSI drive. An external RAID system moves all RAID handling "intelligence" into a controller located in the external disk subsystem. The whole subsystem is connected to the host via a normal SCSI controller and appears to the host as a single disk.
RAID controller cards function like a SCSI controller to the operating system and handle all the actual drive communications. The user plugs the drives into the RAID controller (just like a normal SCSI controller) and then adds them to the RAID controller's configuration; the operating system won't know the difference.
      
Software RAID:
Software RAID implements the various RAID levels in the kernel disk (block device) code. It offers the cheapest possible solution, as expensive disk controller cards or hot-swap chassis are not required. Software RAID also works with cheaper IDE disks as well as SCSI disks. With today's faster CPUs, Software RAID performance can rival, and sometimes exceed, that of Hardware RAID.

The Linux kernel contains an MD (multiple device) driver that allows the RAID solution to be completely hardware independent. The performance of a software-based array depends on the server CPU performance and load (a quick way to check that the driver is available is shown after the feature list below).

Key features of Software RAID include:
    
    * Threaded rebuild process
    * Kernel-based configuration
    * Portability of arrays between Linux machines without reconstruction
    * Backgrounded array reconstruction using idle system resources
    * Hot-swappable drive support
    * Automatic CPU detection to take advantage of certain CPU optimizations
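To confirm that the MD driver is actually available on a machine, a quick check (a minimal sketch; the exact module names vary by kernel build) is:

 # cat /proc/mdstat
 # lsmod | grep raid

If /proc/mdstat exists, the MD driver is loaded; the lsmod output lists any RAID personality modules (such as raid1 or raid456) currently in use.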

RAID Levels and Linear Support

RAID supports various configurations, including levels 0, 1, 4, 5, and linear. These RAID types are defined as follows:
      
    * Level 0 — RAID level 0, often called "striping," is a performance-oriented striped data mapping technique. The data being written to the array is broken down into strips and written across the member disks of the array, allowing high I/O performance at low inherent cost, but providing no redundancy. The storage capacity of a level 0 array is equal to the total capacity of the member disks in a Hardware RAID or the total capacity of the member partitions in a Software RAID.
                   
    * Level 1 — RAID level 1, or "mirroring," has been used longer than any other form of RAID. Level 1 provides redundancy by writing identical data to each member disk of the array, leaving a "mirrored" copy on each disk. Mirroring remains popular due to its simplicity and high level of data availability. Level 1 operates with two or more disks that may use parallel access for high data-transfer rates when reading, but that more commonly operate independently to provide high I/O transaction rates. Level 1 provides very good data reliability and improves performance for read-intensive applications, but at a relatively high cost. The storage capacity of a level 1 array is equal to the capacity of one of the mirrored hard disks in a Hardware RAID or one of the mirrored partitions in a Software RAID.
                  
    * Level 4 — Level 4 uses parity concentrated on a single disk drive to protect data. It is better suited to transaction I/O than to large file transfers. Because the dedicated parity disk represents an inherent bottleneck, level 4 is seldom used without accompanying technologies such as write-back caching. Although RAID level 4 is an option in some RAID partitioning schemes, it is not an option allowed in Red Hat Enterprise Linux RAID installations. The storage capacity of Hardware RAID level 4 is equal to the capacity of the member disks, minus the capacity of one member disk. The storage capacity of Software RAID level 4 is equal to the capacity of the member partitions, minus the size of one of the partitions if they are of equal size.
              
    * Level 5 — This is the most common type of RAID. By distributing parity across some or all of an array's member disk drives, RAID level 5 eliminates the write bottleneck inherent in level 4. The only performance bottleneck is the parity calculation process; with modern CPUs and Software RAID, that is usually not a big problem. As with level 4, the result is asymmetrical performance, with reads substantially outperforming writes. Level 5 is often used with write-back caching to reduce the asymmetry. The storage capacity of Hardware RAID level 5 is equal to the capacity of the member disks, minus the capacity of one member disk. The storage capacity of Software RAID level 5 is equal to the capacity of the member partitions, minus the size of one of the partitions if they are of equal size. For example, four 250 GB disks in a level 5 array yield roughly 750 GB of usable space.
       
          
    * RAID 0 (striping without parity)
    * RAID 1 (disk mirroring)
    * RAID 4 (dedicated parity)
    * RAID 5 (disk striping with parity)

   RAID 0:
   Minimum: 2 hard disks
   Maximum: 32 hard disks
   Data is written alternately and evenly to two or more disks
   Read and write speeds are fast
   No fault tolerance

   RAID 1:
   Minimum: 2 hard disks
   Maximum: 2 hard disks
   Data is written simultaneously to two volumes on two different disks
   Read speed is fast; write speed is slower
   Fault tolerance available
   50% overhead

   RAID 4:
   Minimum: 3 hard disks
   Maximum: 32 hard disks
   Data is written alternately and evenly to two or more disks, and parity is written to one dedicated disk
   Read and write speeds are fast
   Fault tolerance available

   RAID 5:
   Minimum: 3 hard disks
   Maximum: 32 hard disks
   Data is written alternately and evenly to two or more disks, and parity is distributed across all disks
   Read and write speeds are fast
   Fault tolerance available
   Also known as striping with parity; creation examples for each level follow below
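For reference, the summary above maps onto mdadm creation commands as follows. This is a minimal sketch, separate from the walkthrough below: the partitions /dev/sdb1, /dev/sdc1 and /dev/sdd1 are hypothetical and must already be typed as Linux raid autodetect (fd).

 # mdadm -C /dev/md0 -l0 -n2 /dev/sdb1 /dev/sdc1
 # mdadm -C /dev/md0 -l1 -n2 /dev/sdb1 /dev/sdc1
 # mdadm -C /dev/md0 -l4 -n3 /dev/sdb1 /dev/sdc1 /dev/sdd1
 # mdadm -C /dev/md0 -l5 -n3 /dev/sdb1 /dev/sdc1 /dev/sdd1

The -l flag selects the RAID level and -n the number of member devices; note that levels 4 and 5 need at least three members.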
Steps to create a RAID:

Create multiple partitions (repeat for each member; this walkthrough uses /dev/sda9, /dev/sda10 and /dev/sda11):

 # fdisk /dev/sda
   n        (create a new partition)
   +500M    (partition size)
   t        (change the partition type)
   fd       (Linux raid autodetect)
   w        (write the partition table and exit)
 # partprobe


To combine the RAID partitions into a single array (here RAID level 5 across three partitions):

 # mdadm -C /dev/md0 -l5 -n3 /dev/sda{9,10,11}


To display the RAID device details:
 # mdadm -D /dev/md0

To format the array (mke2fs -j creates an ext3 file system):

 # mke2fs -j /dev/md0

Create a mount point:

 # mkdir /raid

To mount the RAID device:

 # mount /dev/md0 /raid
 # cd /raid
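To mount the array automatically at boot, an entry can be added to /etc/fstab. A minimal sketch, assuming the ext3 file system created with mke2fs -j above:

 # echo "/dev/md0  /raid  ext3  defaults  0 0" >> /etc/fstab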

To mark a partition as faulty:

 # mdadm -f /dev/md0 /dev/sda10

To remove the faulty partition from the array:

 # mdadm -r /dev/md0 /dev/sda10


To add a partition back into the RAID array (here the repaired /dev/sda10 is re-added; /dev/sda11 is already a member):

 # mdadm -a /dev/md0 /dev/sda10

To watch the data recovery:

 # cat /proc/mdstat
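To refresh the rebuild status continuously instead of re-running cat, the standard watch utility can be used:

 # watch -n1 cat /proc/mdstat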

To stop the RAID:

 # mdadm -S /dev/md0

To activate (assemble) the RAID:

 # mdadm -A /dev/md0 /dev/sda{9,10,11}
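So that the array is assembled automatically at boot, its definition can be recorded in the mdadm configuration file. A minimal sketch, assuming the RHEL/CentOS location of the file (other distributions may use /etc/mdadm/mdadm.conf):

 # mdadm -D --scan >> /etc/mdadm.conf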

Friday, 9 May 2008

Cacti on Redhat, CentOS, Fedora (Network Monitoring and Graphing Tool)

Cacti is a complete network graphing solution designed to harness the power of RRDTool's data storage and graphing functionality. Cacti provides a fast poller, advanced graph templating, multiple data acquisition methods, and user management features out of the box. All of this is wrapped in an intuitive, easy-to-use interface that makes sense for everything from LAN-sized installations up to complex networks with hundreds of devices.

Wednesday, 6 February 2008

How to mount a partition with an NTFS file system and read/write access

1. Introduction

The purpose of this article is to provide the reader with a step-by-step guide to mounting a partition with the NTFS file system on the Linux operating system. This article consists of two parts:
mount an NTFS file system with read-only access
mount an NTFS file system with read/write access

2. Mount an NTFS file system with read-only access

2.1. NTFS kernel support

The majority of current Linux distributions support the NTFS file system out of the box. To be more specific, NTFS support is a feature of the Linux kernel rather than of any particular distribution. First, verify that the NTFS module is available on the system (the kernel version in the module path differs per system, so $(uname -r) is used to fill it in):
#  ls /lib/modules/$(uname -r)/kernel/fs/ | grep ntfs 

The NTFS module is present. Let's identify the NTFS partition.
2.2. Identifying partition with NTFS file system

One simple way to identify an NTFS partition is:
#  fdisk -l | grep NTFS 
There it is: /dev/sdb1
2.3. Mount NTFS partition

First, create a mount point:
#  mkdir /mnt/ntfs 
Then simply use the mount command to mount it:
#  mount -t ntfs /dev/sdb1 /mnt/ntfs 
Now we can access the NTFS partition and its files with read-only access.
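Before remounting the same partition with read/write support as described in the next section, unmount it first:

#  umount /mnt/ntfs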

3. Mount an NTFS file system with read/write access

Mounting an NTFS file system with read/write access permissions is a bit more complicated. It involves the installation of additional software such as fuse and ntfs-3g. In both cases you would normally use your package management tool, such as yum, apt-get or synaptic, to install the ntfs-3g and fuse packages from your standard distribution repository. Here we take the other path, which consists of manually compiling and installing fuse and ntfs-3g from source code.
3.1. Install additional software

3.1.1. Fuse Install

Download the source code from: http://fuse.sourceforge.net/
#  wget http://easynews.dl.sourceforge.net/sourceforge/fuse/fuse-2.7.1.tar.gz 
Extract the source file:
#  tar xzf fuse-2.7.1.tar.gz 
Compile and install:
#  cd fuse-2.7.1
#  ./configure --exec-prefix=/ && make && make install 
3.1.2. ntfs-3g install

Download the source code from: http://www.ntfs-3g.org/index.html#download
#  wget http://www.ntfs-3g.org/ntfs-3g-1.1120.tgz 
Extract the source file:
#  tar xzf ntfs-3g-1.1120.tgz 
NOTE: Make sure that you have the pkg-config package installed, otherwise you will get this error message:
checking for pkg-config... no
checking for FUSE_MODULE... configure: error: FUSE >= 2.6.0 was not found. Either it's not fully 
installed (e.g. fuse, fuse-utils, libfuse, libfuse2, libfuse-dev, etc packages) or files from an old
version are still present. See FUSE at http://fuse.sf.net/ 
Compile and install:
#  cd ntfs-3g-1.1120
#  ./configure && make && make install 
3.2. Mount the NTFS partition with read/write access

#  mount -t ntfs-3g /dev/sdb1 /mnt/ntfs/ 
NOTE: ntfs-3g recommends at least kernel version 2.6.20. On older kernels the mount still works, but it prints a warning:
#  mount -t ntfs-3g /dev/sdb1 /mnt/ntfs/
WARNING: Deficient Linux kernel detected. Some driver features are
         not available (swap file on NTFS, boot from NTFS by LILO), and
         unmount is not safe unless it's made sure the ntfs-3g process
         naturally terminates after calling 'umount'. If you wish this
         message to disappear then you should upgrade to at least kernel
         version 2.6.20, or request help from your distribution to fix
         the kernel problem. The below web page has more information:
         http://ntfs-3g.org/support.html#fuse26 
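To mount the NTFS partition automatically at boot, a line like the following can be added to /etc/fstab. A minimal sketch, reusing the /dev/sdb1 device and /mnt/ntfs mount point from the examples above:

/dev/sdb1  /mnt/ntfs  ntfs-3g  defaults  0 0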

Saturday, 29 December 2007

Configure Nagios on RHEL / CentOS 5.1

Nagios is a powerful monitoring system that enables organizations to identify and resolve IT infrastructure problems before they affect critical business processes. Designed with scalability and flexibility in mind, Nagios gives you the peace of mind that comes from knowing your organization's business processes won't be affected by unknown outages. It provides instant awareness of your organization's mission-critical IT infrastructure, letting you detect and repair problems and mitigate future issues before they affect end users and customers.

Tuesday, 20 November 2007

How to Lock / Unlock (Enable / Disable) a Linux User Account

Before you remove an account from a system, it is a good idea to lock it for a week to make sure that no one is using it.
To lock an account, you can use the following command:
# passwd -l username (where username is the login id)
This option locks the specified account and is available to root only. The locking is performed by rendering the encrypted password an invalid string (by prefixing the encrypted string with an !).
After that, if someone tries to log in using this account, the system will return:
# su - username
This account is currently not available.
To unlock the same account:
The following command re-enables the account by changing the password back to its previous value, i.e. the value from before the -l option was used:
# passwd -u username
This removes the '!' in front of the encrypted password.
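To verify the lock status at any point, a minimal sketch (username is a placeholder) is:

# passwd -S username
# grep '^username:' /etc/shadow

passwd -S prints the account status (LK while the account is locked), and the /etc/shadow entry shows the leading '!' on the encrypted password until the account is unlocked.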