
Kernel 3.8 Released: How to Compile It on Red Hat, CentOS and Fedora

Sunday, 28 December 2008

Basic MySQL performance tuning using Query Cache

The query cache stores the text of a SELECT statement together with the corresponding result that was sent to the client. If an identical statement is received later, the server retrieves the results from the query cache rather than parsing and executing the statement again. The query cache is shared among sessions, so a result set generated by one client can be sent in response to the same query issued by another client.
The query cache can be useful in an environment where you have tables that do not change very often and for which the server receives many identical queries. This is a typical situation for many Web servers that generate many dynamic pages based on database content.
NOTE: The query cache always contains current and reliable data. Any insert, update, delete, or other modification to a table causes any relevant entries in the query cache to be flushed.
Query Cache Configuration:
To set the size of the query cache, set the query_cache_size system variable.
For the query cache to actually hold any query results, its size must be set to a value large enough to cover the cache's own overhead, for example:
                   mysql> SET GLOBAL query_cache_size = 1000000;
If the query cache size is greater than 0, the query_cache_type variable influences how it works. This variable can be set to the following values:
0 or OFF prevents caching or retrieval of cached results.
1 or ON enables caching, except for statements that begin with SELECT SQL_NO_CACHE.
2 or DEMAND causes caching of only those statements that begin with SELECT SQL_CACHE.
                   mysql> SET SESSION query_cache_type = ON;
Using SELECT statements with the query cache:
SQL_CACHE: The query result is cached if it is cacheable and the value of the query_cache_type system variable is ON or DEMAND.
SQL_NO_CACHE: The query result is not cached.
Examples:
                  SELECT SQL_CACHE id, name FROM customer;
                  SELECT SQL_NO_CACHE id, name FROM customer;
Setting the GLOBAL query_cache_type value determines query cache behavior for all clients that connect after the change is made. Individual clients can control cache behavior for their own connection by setting the SESSION query_cache_type value.
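To make these settings persistent across restarts and to check whether the cache is actually being used, you can put the variables in my.cnf and watch the Qcache status counters. A minimal sketch (the 64M size is only an illustrative value):

                   [mysqld]
                   query_cache_type = 1
                   query_cache_size = 64M

                   mysql> SHOW VARIABLES LIKE 'query_cache%';
                   mysql> SHOW STATUS LIKE 'Qcache%';

A high Qcache_hits count relative to Qcache_inserts and Com_select suggests the cache is paying off for your workload.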

Sunday, 23 November 2008

How to Import / Export (Backup / Restore) MySQL Database

It is important to back up your databases so that you can recover your data and be up and running again in case problems occur. MySQL offers a variety of backup strategies from which you can choose the methods that best suit the requirements for your installation.
Export / Backup MySQL database: 
The mysqldump client can be used to dump a database or a collection of databases for backup or for transferring the data to another SQL server (not necessarily a MySQL server). The dump contains SQL statements to create the table and/or populate the table.
If you are doing a backup on the server and your tables are all MyISAM tables, you could consider using mysqlhotcopy instead, since it can accomplish faster backups and faster restores.
Here is the simplest way to export a database to a .sql file:
      # mysqldump -u USER -p DATABASE > FILENAME.sql
USER is the MySQL admin user
DATABASE is the name of the database that needs to be exported
FILENAME.sql is the name of the file where your data will be exported
When you issue this command you will be prompted for the MySQL admin password. Enter the password and press Enter. The directory from which you issued the command will now contain the FILENAME.sql file, which you should then copy to a secure drive.
You can dump all databases by doing:
      # mysqldump -u root -p --all-databases > all_dbs.sql
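Dumps of large databases compress well, so it is common to pipe mysqldump straight into gzip. A hedged example (database and file names are only illustrative):
      # mysqldump -u root -p DATABASE | gzip > FILENAME.sql.gz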
Import/Restore MySQL database:
Below is the simple command with which you can restore / import a previously exported MySQL database file (.sql):
     # mysql -u USER -p DATABASE < FILENAME.sql
USER is the MySQL admin user
DATABASE is the name of the database into which the data needs to be imported / restored
FILENAME.sql is the dump that was exported.
You will be prompted for the MySQL administrator password.
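If the backup was compressed as shown above, you can stream it back through gunzip. And when restoring from an --all-databases dump, the mysql client's --one-database option limits the restore to a single database. A sketch, with illustrative names:
     # gunzip < FILENAME.sql.gz | mysql -u USER -p DATABASE
     # mysql -u root -p --one-database DATABASE < all_dbs.sql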

Friday, 19 September 2008

iSCSI setup on Red Hat / CentOS 5.x

So you have huge volumes of storage on your iSCSI SAN storage array and you want to use the volumes on Red Hat Enterprise Linux 5.x or CentOS 5.x.
First install the Open-iSCSI initiator utils:

# yum -y install iscsi-initiator-utils

Edit /etc/iscsi/iscsid.conf and set your username and password, but only if you use CHAP authentication with your iSCSI target.
Most likely, you will have to allow access to the iSCSI volume on the array, so log into your NAS admin interface and authorize your Linux host either by username, IP, or initiator name. You can find your Linux host's initiator name in: 
/etc/iscsi/initiatorname.iscsi
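For example, simply cat that file to see the initiator name you need to authorize on the array (the IQN will of course differ on your host):

# cat /etc/iscsi/initiatorname.iscsi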

Set iscsi to start on boot, and start it now:

# chkconfig iscsid on ; service iscsid start
# chkconfig iscsi on ; service iscsi start

Use iscsiadm to discover your iSCSI targets, replacing the IP with your own portal IP:

# iscsiadm -m discovery -t st -p 192.168.1.123

Once discovery tells you the target names, log into the one you want to work with:

# iscsiadm -m node -T iqn.2123-01.com:blah:blah:blah -p 192.168.1.123 -l

If you want to automatically login at boot:

# iscsiadm -m node -T iqn.2123-01.com:blah:blah:blah -p 192.168.1.123 -o update -n node.startup -v automatic

Now the iSCSI volume should be detected by your system as a block device. You can check what device it was detected as by tailing your log.

# tail -n 50 /var/log/messages

Let's assume that the block device was detected as /dev/sdd. We will have to partition and format our filesystem at this point. You can either use a straight Linux partition, or you can use Linux Volume Management (LVM). I prefer LVM because it allows for more flexibility, including easy volume growth. You must use LVM or a GPT partition label if your device will be over 2TB, since that is the limit of traditional MBR partition tables.
For LVM, we will initialize the block device as a physical volume, create a volume group, create a logical volume, and format it as ext3. Note that with LVM, you do not use fdisk/parted or create the sdd1 partition:

# pvcreate /dev/sdd
# vgcreate SANVolGroup01 /dev/sdd
# lvcreate --extents 100%VG --name SANLogVol01 SANVolGroup01
# mkfs -t ext3 -m 1 -L mysan1 -O dir_index,filetype,has_journal,sparse_super /dev/SANVolGroup01/SANLogVol01
# mount /dev/SANVolGroup01/SANLogVol01 /mnt/san01
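One of the LVM benefits mentioned above is easy growth. After the SAN volume has been enlarged on the array, the resize on the Linux side looks roughly like this (a hedged sketch; online ext3 resizing assumes a reasonably recent kernel and e2fsprogs):

# pvresize /dev/sdd
# lvextend -l +100%FREE /dev/SANVolGroup01/SANLogVol01
# resize2fs /dev/SANVolGroup01/SANLogVol01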

For a simple Linux partition, instead of LVM, we will create a new partition on the block device and then format it as ext3:

# parted -s -- /dev/sdd mklabel gpt
# parted -s -- /dev/sdd mkpart primary ext3 1 -1
# mkfs -t ext3 -m 1 -L mysan1 -O  dir_index,filetype,has_journal,sparse_super /dev/sdd1
# mount /dev/sdd1 /mnt/san01

iSCSI fstab entries require the "_netdev" option so that mounting is not attempted until networking is up. Mounting by label is also a good idea, since devices may be detected in a different order at each boot.
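As a hedged example, an /etc/fstab entry for the LVM setup or for the label created above might look like one of these (pick whichever matches your setup, and adjust names and options to your environment):

/dev/SANVolGroup01/SANLogVol01  /mnt/san01  ext3  _netdev,defaults  0 0
LABEL=mysan1                    /mnt/san01  ext3  _netdev,defaults  0 0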
 More info at
 http://www.open-iscsi.org/docs/README

Saturday, 23 August 2008

How to Change MySQL Storage Engines

MySQL 5.0 and higher offers nine storage engines, and more are likely to be added in the future. The most commonly used are MyISAM, InnoDB, and Berkeley DB (BDB). Each storage engine offers special features and advantages. You can even use a different engine for each table in your database, though a mixed-engine database may be harder to manage. It is usually better to keep all tables in a database on the same storage engine, while using different engines for different databases as needed.
To determine which storage engines your server supports, run the SHOW ENGINES; statement. The value in the Support column indicates whether an engine can be used. A value of YES, NO, or DEFAULT indicates that the engine is available, not available, or available and currently set as the default storage engine, respectively.
MyISAM: 
The default MySQL storage engine and the one that is used the most in Web, data warehousing, and other application environments. MyISAM is supported in all MySQL configurations, and is the default storage engine unless you have configured MySQL to use a different one by default. MyISAM is designed with the idea that your database is queried far more often than it is updated, and as a result it performs very fast read operations.
InnoDB: 
A transaction-safe (ACID compliant) storage engine for MySQL that has commit, roll-back, and crash-recovery capabilities to protect user data. InnoDB row-level locking (without escalation to coarser granularity locks) and Oracle-style consistent non-locking reads increase multi-user concurrency and performance.
Converting from one engine type to another:
When you create a new table, you can specify which storage engine to use by adding an ENGINE or TYPE table option to the CREATE TABLE statement:
                             > CREATE TABLE t (i INT) ENGINE = INNODB;
                             > CREATE TABLE t (i INT) TYPE = MYISAM;
To convert a table from one storage engine to another, use an ALTER TABLE statement that indicates the new engine:
                             > ALTER TABLE t ENGINE = MYISAM;
                             > ALTER TABLE t TYPE = INNODB;
The above statements change the storage engine of the table.
NOTE: Don't forget that converting large tables takes a lot of system resources.
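To confirm which engine a table is currently using, you can check the information_schema or the table status (the database and table names here are only illustrative):
                             > SELECT TABLE_NAME, ENGINE FROM information_schema.TABLES WHERE TABLE_SCHEMA = 'mydb';
                             > SHOW TABLE STATUS LIKE 't';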

Sunday, 29 June 2008

RAID Concept in RHEL / CentOS (Redundant Array of Independent Disks)

The basic idea behind RAID is to combine multiple small, inexpensive disk drives into an array to accomplish performance or redundancy goals not attainable with one large and expensive drive. This array of drives appears to the computer as a single logical storage unit or drive.
What is RAID?
RAID allows information to be spread across several disks. RAID uses techniques such as disk striping (RAID Level 0), disk mirroring (RAID Level 1), and disk striping with parity (RAID Level 5) to achieve redundancy, lower latency, increased bandwidth, and maximized ability to recover from hard disk crashes.

RAID consistently distributes data across each drive in the array. RAID breaks the data down into consistently-sized chunks (commonly 32K or 64K, although other values are acceptable). Each chunk is then written to a hard drive in the RAID array according to the RAID level employed. When the data is read, the process is reversed, giving the illusion that the multiple drives in the array are actually one large drive.

Who Should Use RAID?
System Administrators and others who manage large amounts of data would benefit from using RAID technology. Primary reasons to deploy RAID include:
    * Enhances speed
    * Increases storage capacity using a single virtual disk
    * Minimizes the impact of disk failure
Hardware RAID versus Software RAID

There are two possible RAID approaches: Hardware RAID and Software RAID.

Hardware RAID:
The hardware-based array manages the RAID subsystem independently from the host. It presents a single disk per RAID array to the host.
A Hardware RAID device connects to the SCSI controller and presents the RAID arrays as a single SCSI drive. An external RAID system moves all RAID handling "intelligence" into a controller located in the external disk subsystem. The whole subsystem is connected to the host via a normal SCSI controller and appears to the host as a single disk.
RAID controller cards function like a SCSI controller to the operating system and handle all the actual drive communications. The user plugs the drives into the RAID controller (just like a normal SCSI controller) and then adds them to the RAID controller's configuration, and the operating system won't know the difference.
      
Software RAID:
Software RAID implements the various RAID levels in the kernel disk (block device) code. It offers the cheapest possible solution, as expensive disk controller cards or hot-swap chassis are not required. Software RAID also works with cheaper IDE disks as well as SCSI disks. With today's faster CPUs, Software RAID can outperform Hardware RAID.

The Linux kernel contains an MD driver that allows the RAID solution to be completely hardware independent. The performance of a software-based array depends on the server CPU performance and load.
Key features of Software RAID include:
    * Threaded rebuild process
    * Kernel-based configuration
    * Portability of arrays between Linux machines without reconstruction
    * Backgrounded array reconstruction using idle system resources
    * Hot-swappable drive support
    * Automatic CPU detection to take advantage of certain CPU optimizations

RAID Levels and Linear Support

RAID supports various configurations, including levels 0, 1, 4, 5, and linear. These RAID types are defined as follows:
      
* Level 0 — RAID level 0, often called "striping," is a performance-oriented striped data mapping technique. This means the data being written to the array is broken down into strips and written across the member disks of the array, allowing high I/O performance at low inherent cost but providing no redundancy. The storage capacity of a level 0 array is equal to the total capacity of the member disks in a Hardware RAID or the total capacity of the member partitions in a Software RAID.
                   
* Level 1 — RAID level 1, or "mirroring," has been used longer than any other form of RAID. Level 1 provides redundancy by writing identical data to each member disk of the array, leaving a "mirrored" copy on each disk. Mirroring remains popular due to its simplicity and high level of data availability. Level 1 operates with two or more disks that may use parallel access for high data-transfer rates when reading, but more commonly operate independently to provide high I/O transaction rates. Level 1 provides very good data reliability and improves performance for read-intensive applications, but at a relatively high cost. The storage capacity of a level 1 array is equal to the capacity of one of the mirrored hard disks in a Hardware RAID or one of the mirrored partitions in a Software RAID.
                  
* Level 4 — Level 4 uses parity concentrated on a single disk drive to protect data. It is better suited to transaction I/O rather than large file transfers. Because the dedicated parity disk represents an inherent bottleneck, level 4 is seldom used without accompanying technologies such as write-back caching. Although RAID level 4 is an option in some RAID partitioning schemes, it is not an option allowed in Red Hat Enterprise Linux RAID installations. The storage capacity of Hardware RAID level 4 is equal to the capacity of the member disks, minus the capacity of one member disk. The storage capacity of Software RAID level 4 is equal to the capacity of the member partitions, minus the size of one of the partitions if they are of equal size.
              
* Level 5 — This is the most common type of RAID. By distributing parity across some or all of an array's member disk drives, RAID level 5 eliminates the write bottleneck inherent in level 4. The only performance bottleneck is the parity calculation process. With modern CPUs and Software RAID, that usually is not a very big problem. As with level 4, the result is asymmetrical performance, with reads substantially outperforming writes. Level 5 is often used with write-back caching to reduce the asymmetry. The storage capacity of Hardware RAID level 5 is equal to the capacity of the member disks, minus the capacity of one member disk. The storage capacity of Software RAID level 5 is equal to the capacity of the member partitions, minus the size of one of the partitions if they are of equal size.
       
          
In summary:
* RAID 0 (striping without parity)
* RAID 1 (disk mirroring)
* RAID 4 (dedicated parity)
* RAID 5 (striping with parity)

RAID 0:
minimum - 2 hard disks
maximum - 32 hard disks
Data is written alternately and evenly to two or more disks
Read & write speed is fast
Fault tolerance is not available

RAID 1:
minimum - 2 hard disks
maximum - 2 hard disks
Data is written simultaneously to two volumes on two different disks
Read speed is fast; write speed is slower
Fault tolerance is available
50% overhead

RAID 4:
minimum - 3 hard disks
maximum - 32 hard disks
Data is written alternately and evenly to two or more disks, and parity is written to one dedicated disk
Read & write speed is fast
Fault tolerance is available

RAID 5:
minimum - 3 hard disks
maximum - 32 hard disks
Data is written alternately and evenly across the disks, and parity is distributed across all disks
Read & write speed is fast
Fault tolerance is available
Also known as striping with parity
Steps to create a RAID array:
Create multiple partitions (partition type fd, Linux raid autodetect):


#  fdisk /dev/sda
   n        (create a new partition)
   +500M    (partition size)
   t        (change the partition type)
   fd       (set type to Linux raid autodetect)
   w        (write the partition table and exit)
 #  partprobe


to combine the raid partitions into a single array

 #  mdadm -C /dev/md0 -n3 /dev/sda{9,10,11} -l5


to display raid device
 # mdadm -D /dev/md0

to format

 # mke2fs -j /dev/md0

create a mount point

 # mkdir /raid

to mount the raid device

 # mount /dev/md0 /raid
 # cd /raid

to make a partition faulty

 # mdadm -f /dev/md0 /dev/sda10

to remove a faulty partition from the array

 # mdadm -r /dev/md0 /dev/sda10


to add a new partition to the raid array

 # mdadm -a /dev/md0 /dev/sda11

to watch data recovery

 # cat /proc/mdstat

to stop raid

 # mdadm -S /dev/md0

to activate (assemble) the raid array

 # mdadm -A /dev/md0 /dev/sda{9,10,11}
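The array built above is not guaranteed to come back automatically after a reboot unless it is recorded. A hedged sketch for making it persistent on Red Hat / CentOS (the fstab line reuses the /raid mount point from above):

 # mdadm --detail --scan >> /etc/mdadm.conf
 # echo "/dev/md0  /raid  ext3  defaults  0 0" >> /etc/fstab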

Friday, 9 May 2008

Cacti on Red Hat, CentOS, Fedora (Network Monitoring / Graphing Tool)

Cacti is a complete network graphing solution designed to harness the power of RRDTool's data storage and graphing functionality. Cacti provides a fast poller, advanced graph templating, multiple data acquisition methods, and user management features out of the box. All of this is wrapped in an intuitive, easy to use interface that makes sense for LAN-sized installations up to complex networks with hundreds of devices.
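As a hedged sketch of getting started on Red Hat / CentOS, the cacti package is normally pulled from the EPEL repository together with its web and database dependencies (these package and service names assume EPEL is already enabled):

# yum -y install cacti mysql-server httpd
# chkconfig mysqld on ; service mysqld start
# chkconfig httpd on ; service httpd start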

Wednesday, 6 February 2008

How to mount a partition with the NTFS file system and read / write access

1. Introduction

The purpose of this article is to provide the reader with a step-by-step guide on how to mount a partition with the NTFS file system on the Linux operating system. This article consists of two parts:
mount NTFS file system with read only access
mount NTFS file system with read / write access

2. Mount NTFS file system with read only access

2.1. NTFS kernel support

The majority of current Linux distributions support the NTFS file system out of the box. To be more specific, NTFS support is a feature of the Linux kernel modules rather than of the distributions themselves. First, verify that the NTFS module is present on the system:
#  ls /lib/modules/2.6.18-53-686/kernel/fs/ | grep ntfs 

The NTFS module is present. Let's identify the NTFS partition.
2.2. Identifying partition with NTFS file system

One simple way to identify an NTFS partition is:
#  fdisk -l | grep NTFS 
There it is: /dev/sdb1
2.3. Mount NTFS partition

First create a mount point:
#  mkdir /mnt/ntfs 
Then simply use mount command to mount it:
#  mount -t ntfs /dev/sdb1 /mnt/ntfs 
Now we can access the NTFS partition and its files, but only with read-only access.

3. Mount NTFS file system with read write access

Mounting an NTFS file system with read / write access permissions is a bit more complicated. It involves installing additional software such as fuse and ntfs-3g. In both cases you can probably use your package management tool, such as yum, apt-get, synaptic, etc., and install the ntfs-3g and fuse packages from your standard distribution repository. Here we take the other path, which consists of manually compiling and installing fuse and ntfs-3g from source code.
3.1. Install additional software

3.1.1. Fuse Install

Download source code from: http://fuse.sourceforge.net/
#  wget http://easynews.dl.sourceforge.net/sourceforge/fuse/fuse-2.7.1.tar.gz 
Compile and install fuse source code:
Extract source file:
#  tar xzf fuse-2.7.1.tar.gz 
Compile and install
#  cd fuse-2.7.1
#  ./configure --exec-prefix=/; make; make install 
3.1.2. ntfs-3g install

Download source code from: http://www.ntfs-3g.org/index.html#download
#  wget http://www.ntfs-3g.org/ntfs-3g-1.1120.tgz 
Extract source file:
#  tar xzf ntfs-3g-1.1120.tgz 
Compile and install ntfs-3g source code
NOTE: Make sure that you have pkg-config package installed, otherwise you get this error message:
checking for pkg-config... no
checking for FUSE_MODULE... configure: error: FUSE >= 2.6.0 was not found. Either it's not fully 
installed (e.g. fuse, fuse-utils, libfuse, libfuse2, libfuse-dev, etc packages) or files from an old
version are still present. See FUSE at http://fuse.sf.net/ 
#  cd ntfs-3g-1.1120
#  ./configure; make; make install 
3.2. Mount ntfs partition with read write access

#  mount -t ntfs-3g /dev/sdb1 /mnt/ntfs/ 
NOTE: ntfs-3g recommends at least kernel version 2.6.20; on older kernels you will see a warning like the following:
#  mount -t ntfs-3g /dev/sdb1 /mnt/ntfs/
WARNING: Deficient Linux kernel detected. Some driver features are
         not available (swap file on NTFS, boot from NTFS by LILO), and
         unmount is not safe unless it's made sure the ntfs-3g process
         naturally terminates after calling 'umount'. If you wish this
         message to disappear then you should upgrade to at least kernel
         version 2.6.20, or request help from your distribution to fix
         the kernel problem. The below web page has more information:
         http://ntfs-3g.org/support.html#fuse26
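To have the partition mounted automatically at boot, a hedged /etc/fstab entry using the device and mount point from above could be:

/dev/sdb1  /mnt/ntfs  ntfs-3g  defaults  0 0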