Monday, April 9, 2012
Logrotate in Linux server
Logs are important and useful for security and administration. When a log grows too large, you compress it, back it up for some months or years, and start a new log file. Logrotate automates all of this: it can compress the old log file and store it for as long as you want, run daily, weekly, monthly, or on whatever schedule you choose, and when a rotated log expires (after, say, 2 years) it can mail it to you or delete it. Old logs can also be stored in an alternative directory.
You set up log rotation by configuring /etc/logrotate.conf or by adding a new file under /etc/logrotate.d/.
For example, a logrotate file for Apache looks like this:
File: /etc/logrotate.d/httpd
/usr/local/apache/logs/*log {
    missingok
    notifempty
    sharedscripts
    postrotate
        /bin/kill -USR1 `cat /var/run/httpd.pid 2>/dev/null` 2> /dev/null || true
    endscript
}
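For a fuller picture of the options mentioned above (schedule, retention, compression, mailing, and an alternative directory), here is a minimal sketch of a policy; the log path, retention count, olddir, and mail address are placeholders, not values taken from a real setup:
File: /etc/logrotate.d/myapp
# example policy for a hypothetical /var/log/myapp.log
/var/log/myapp.log {
    # rotate once a week; daily or monthly also work
    weekly
    # keep 12 rotated logs before the oldest is removed (or mailed)
    rotate 12
    # gzip rotated logs
    compress
    missingok
    notifempty
    # store rotated logs in an alternative directory
    olddir /var/log/old
    # mail a log that is about to be removed
    mail admin@example.com
}
After editing a policy you can dry-run it with logrotate -d /etc/logrotate.conf to see what would happen without touching any files.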
Suggestions to debug Segmentation Fault errors
A segmentation fault usually raises signal #11 (SIGSEGV), which is defined in the header file signal.h. The default action for a program upon receiving SIGSEGV is abnormal termination. This ends the process, but it may generate a core file (also known as a core dump) to aid debugging, or perform some other platform-dependent action. A core dump is the recorded state of the working memory of a computer program at a specific time, generally when the program has terminated abnormally.
A segmentation fault can also occur under the following circumstances:
a) A buggy program or command, which can only be fixed by applying a patch.
b) Accessing an array beyond its end in a C program.
c) Inside a chrooted jail, when a critical shared library, config file, or /dev/ entry is missing.
d) Faulty hardware, bad memory, or a buggy driver.
e) Overheating can also cause this problem, so maintain the suggested operating environment for all computer equipment.
To debug this kind of error, try one or all of the following techniques (a short core-dump walkthrough follows this list):
Use gdb to track down the exact source of the problem.
Make sure the correct hardware is installed and configured.
Always apply all patches and keep the system updated.
Make sure all dependencies are installed inside the jail.
Turn on core dumping for supported services such as Apache.
Use strace, which is a useful diagnostic, instructional, and debugging tool.
Google the error and find out if there is a known solution.
Fix your C program for logical errors such as pointer, null pointer, and array handling.
Analyze the core dump file generated by your system using gdb.
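As a quick illustration of the last two points, here is a minimal sketch of turning on core dumps for an interactive session and opening the resulting dump in gdb; myprog is a placeholder for whatever binary is crashing:
$ ulimit -c unlimited
$ ./myprog
$ gdb ./myprog core
Once inside gdb, the bt command prints the backtrace showing where the crash happened. The core file name may vary (for example core.<pid>), depending on the kernel.core_pattern setting.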
Fix corrupted RPM database on CentOS 5 / Redhat enterprise Linux 5 / Fedora 7
If the rpm or yum command hangs during operations or you see error messages, it usually means your RPM database is corrupted. The RPM database lives in /var/lib/rpm/; delete the stale lock files and rebuild the database:
Command to rebuild rpm database
rm -f /var/lib/rpm/__db*
rpm --rebuilddb
Rebuilding corrupted RPM database
One of our clients reported that he was getting an error and that the RPM database was corrupted. He is using Red Hat Linux.
Sometimes it is possible to fix RPM database errors. I am surprised that many admins do not make a backup of the RPM DB (/var/lib/rpm).
Anyway, if you have ever messed up your RPM database, here is a quick guide to fix it (you must have the rpmdb tools installed):
Take the system to single-user mode to avoid further damage and to make the backup/restore process easy:
# init 1
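Since the post above points out that few admins back up /var/lib/rpm, here is a quick, hedged sketch of backing up the database files before trying either method below (the archive name is just an example):
# tar -czf /root/rpmdb-backup-$(date +%Y%m%d).tar.gz /var/lib/rpm
If a rebuild goes wrong, this tarball lets you restore the original files and start over.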
Method # 1
Remove /var/lib/rpm/__db* files to avoid stale locks:
# cd /var/lib/rpm
# rm __db*
Rebuild RPM database:
# rpm --rebuilddb
# rpmdb_verify Packages
Method # 2
If you are still getting errors, then try your luck with the following commands:
# mv Packages Packages-BAKUP
# db_dump Packages-BAKUP | db_load Packages
# rpm -qa
# rpm --rebuilddb
Linux: TMOUT To Automatically Log Users Out
How do I automatically log out my shell users in Linux after a certain number of minutes of inactivity?
Linux bash shell allows you to define the TMOUT environment variable. Set TMOUT to automatically log users out after a period of inactivity. The value is defined in seconds. For example,
export TMOUT=120
The above command will implement a 2 minute idle time-out for the default /bin/bash shell. You can edit your ~/.bash_profile or /etc/profile file as follows to define a 5 minute idle time out:
# set a 5 min timeout policy for bash shell
TMOUT=300
readonly TMOUT
export TMOUT
Save and close the file. The readonly command makes variables and functions read-only, i.e. the user cannot change the value of the TMOUT variable.
How Do I Disable TMOUT?
To disable auto-logout, just set the TMOUT to zero or unset it as follows:
$ export TMOUT=0
or
$ unset TMOUT
Please note that a readonly variable cannot be unset in the running shell; it can only be disabled by editing /etc/profile (as root) or ~/.bash_profile.
How to Add a new yum repository to install software under CentOS / Redhat Linux
CentOS / Fedora Core / RHEL 5 uses yum for software management. Yum allows you to add a new repository as a source to install binary software.
Understanding yum repository
The yum repository configuration lives in the /etc/yum.conf file. Additional configuration files are also read from the directories set by the reposdir option (the default is /etc/yum.repos.d and /etc/yum/repos.d).
RPMforge repository
A repository usually carries extra, useful packages, and RPMforge is one such repository. You can easily configure the RPMforge repository for RHEL 5 just by running the following single RPM command:
# rpm -Uhv http://apt.sw.be/packages/rpmforge-release/rpmforge-release-0.3.6-1.el5.rf.i386.rpm
For 64 bit RHEL 5 Linux, enter:
# rpm -Uhv http://apt.sw.be/packages/rpmforge-release/rpmforge-release-0.3.6-1.el5.rf.x86_64.rpm
Now you can install software from RPMforge.
How do I install 3rd party repository manually?
Let us say you would like to install a 3rd party repository from foo.nixcraft.com. Create a file called foo.repo (yum only picks up files ending in .repo from this directory):
# cd /etc/yum.repos.d
# vi foo.repo
Append following code:
[foo]
name=Foo for RHEL/ CentOS $releasever - $basearch
baseurl=http://foo.nixcraft.com/centos/$releasever/$basearch/
enabled=1
gpgcheck=1
gpgkey=http://foo.nixcraft.com/RPM-GPG-KEY.txt
Save and close the file.
Where,
• [foo] : The repository section name; every repository gets its own [section] here, while the [main] section in /etc/yum.conf must exist for yum to do anything.
• name=Foo for RHEL/ CentOS $releasever - $basearch : A human readable string describing the repository name
• baseurl=http://foo.nixcraft.com/centos/$releasever/$basearch/ : Must be a URL to the directory where the yum repository’s ‘repodata’ directory lives
• enabled=1 : Enabled or disabled repo. To disable the repository temporarily, set the enabled to 0
• gpgcheck=1 : Security feature, use GPG key
• gpgkey=http://foo.nixcraft.com/RPM-GPG-KEY.txt : GPG key file location
Also you need to import the gpg key for the repository as follows:
# rpm --import http://foo.nixcraft.com/RPM-GPG-KEY.txt
Now you are ready to install software from foo repository. For further information refer to yum.conf man page:
$ man yum.conf
$ man yum
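To confirm that yum sees the new repository and to install from it, something like the following should work (foo and somepackage are placeholders from the example above):
# yum repolist
# yum --enablerepo=foo install somepackage
The --enablerepo option is only needed if you created the repository with enabled=0.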
Hopefully this will help you to configure a repository as and when required.
ext4 Linux File System
The ext4 filesystem has more features and generally better performance than ext3, which is showing its age in the Linux filesystem world.
Features include:
* Delayed allocation & mballoc allocator for better on-disk allocation
* Sub-second timestamps
* Space preallocation
* Journal checksumming
* Large (>2T) file support
* Large (>16T) filesystem support
* Defragmentation support
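As a quick, hedged illustration (the device /dev/sdb1 and mount point /data are placeholders; this assumes the ext4 userspace tools providing mkfs.ext4 are installed), creating and mounting an ext4 filesystem looks like this:
# mkfs.ext4 /dev/sdb1
# mkdir -p /data
# mount -t ext4 /dev/sdb1 /data
# df -T /data
The df -T output should list the filesystem type as ext4.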
PORT FORWARDING with IPTABLES in LINUX
These are the iptables rules required for forwarding port xxx.xxx.xxx.xxx:8888 to 192.168.0.2:80:
/sbin/iptables -t nat -A PREROUTING -p tcp -i eth0 -d xxx.xxx.xxx.xxx --dport 8888 -j DNAT --to 192.168.0.2:80
/sbin/iptables -A FORWARD -p tcp -i eth0 -d 192.168.0.2 --dport 80 -j ACCEPT
# iptables -t nat -L
Here RDP traffic to 75.144.218.185:13389 is forwarded to 192.168.1.5 on port 3389 (3389 is the standard RDP port). Add the rules to /etc/sysconfig/iptables:
-A PREROUTING -d 75.144.218.185 -i eth1 -p tcp -m tcp --dport 13389 -j DNAT --to-destination 192.168.1.5:3389
-A PREROUTING -d 75.144.218.185 -i eth1 -p tcp -m tcp --dport 80 -j DNAT --to-destination 192.168.1.5:8
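Note that DNAT-based port forwarding only works if the kernel is allowed to forward packets at all; enabling IPv4 forwarding is a standard prerequisite:
# echo 1 > /proc/sys/net/ipv4/ip_forward
To make it permanent, set net.ipv4.ip_forward = 1 in /etc/sysctl.conf and reload the settings with sysctl -p.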
Howto disable the iptables firewall in Linux
Task: Disable / Turn off Linux Firewall (Red hat/CentOS/Fedora Core)
Type the following two commands (you must login as the root user):
# /etc/init.d/iptables save
# /etc/init.d/iptables stop
Task: Enable / Turn on Linux Firewall (Red hat/CentOS/Fedora Core)
Type the following command to turn on iptables firewall:
# /etc/init.d/iptables start
Other Linux distributions
If you are using another Linux distribution such as Debian, Ubuntu, or SUSE Linux, try the following generic procedure.
Save firewall rules
# iptables-save > /root/firewall.rules
OR
$ sudo iptables-save > /root/firewall.rules
Now type the following commands (login as root):
# iptables -F
# iptables -X
# iptables -t nat -F
# iptables -t nat -X
# iptables -t mangle -F
# iptables -t mangle -X
# iptables -P INPUT ACCEPT
# iptables -P FORWARD ACCEPT
# iptables -P OUTPUT ACCEPT
To restore or turn on firewall type the following command:
# iptables-restore < /root/firewall.rules
Quit from shell without saving into history
There are many instances when we want to quit from the shell without saving any commands in history. We might have run some rookie command by mistake and don't want to disclose it to others.
kill -9 $$ will do the needful, as $$ expands to the PID of the current shell.
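An alternative sketch that achieves the same thing without killing the shell: unset the HISTFILE variable so bash has nowhere to write the history when you exit normally.
$ unset HISTFILE
$ exit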
How to know the status of all the running services
There are many commands like netstat -plant and ps aux, but when you want to know all the services currently present on your RHEL box, the service --status-all command is very handy. It shows the status of every service on your box.
Local port range sysctl tuning for high bandwidth Linux servers
Most Linux distributions specify a local port range of 16384 to 65536, and this may be too small for very high-bandwidth, busy boxes, say SMTP, hosting, POP3/IMAP, and proxy servers.
You can adjust this setting by editing /etc/sysctl.conf file and replacing the default:
net.ipv4.ip_local_port_range = 16384 65536
with
net.ipv4.ip_local_port_range = 1024 65536
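To check the range currently in effect and to apply the change from /etc/sysctl.conf without a reboot:
# sysctl net.ipv4.ip_local_port_range
# sysctl -p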
Enabling Root User Login On VSFTP
As you know, FTP servers normally won't allow logins as the root user or as other local system users (for example daemon, bin, sys, nobody, etc.) for security reasons and to protect the FTP server from brute-force scanner attacks. If you still want to enable root login on vsFTPd for some reason, here is a short tutorial that shows you how.
SSH into your server as root and then look for the files ftpusers, vsftpd.users, or user_list (on CentOS they should be under /etc/vsftpd or directly under /etc). Edit the files in your favorite editor and remove "root" from the list of users. Now edit the /etc/vsftpd.conf file and enable/uncomment the following two lines:
# vi /etc/vsftpd.conf
local_enable=YES
userlist_file=/etc/vsftpd/vsftpd.users (if it exists)
Restart the vsftpd server to load the new configuration.
# /etc/init.d/vsftpd restart
Now try logging in as root via FTP and see how it goes.
Clearing dmesg logs
What is dmesg?
The main purpose of dmesg is to display kernel messages. dmesg can provide helpful information in case of hardware problems or problems with loading a module into the kernel. In addition, with dmesg, you can determine what hardware is installed on your server. During every boot, Linux checks your hardware and logs information about it. You can view these logs using the command /bin/dmesg.
Clearing the kernel ring buffer
If you want, you can back up the logs using dmesg > filename before clearing them. Just execute the following command to clear the ring buffer and start logging afresh (make sure you are logged in as root).
# dmesg -c
Execute the command dmesg to make sure the logs are cleared. Check man dmesg for more help.
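Putting the backup and the clear together, a small sketch (the file name is just an example):
# dmesg > /root/dmesg-$(date +%Y%m%d).log
# dmesg -c
The first command saves a timestamped copy of the current ring buffer; the second clears it.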
Disabling USB ports
If you administer small or large workstations running Linux desktops and want to disable the USB ports for security, so that no one can copy data via a pen drive, try the following steps to disable the USB port(s).
Edit grub.conf and add the nousb option to the kernel line (you need to log in as root).
# vi /boot/grub/grub.conf
Then append the option to the kernel line of the kernel version you boot:
kernel /vmlinuz rhgb quiet nousb
Save and exit the file, then reboot the system; the USB ports will be disabled at boot time.
Disable CTRL+ALT+Del keys
Open /etc/inittab file, enter:
# vi /etc/inittab
Search for line that read as follows:
ca:12345:ctrlaltdel:/sbin/shutdown -t1 -a -r now
And remove the line or comment out the above line by putting a hash mark (#) in front of it:
# ca:12345:ctrlaltdel:/sbin/shutdown -t1 -a -r now
Save the file and exit to the shell prompt. Reboot the system for the change to take effect, or type the command:
# init q
Linux File System
Use the 'ext3' file system in Linux.
- It is an enhanced version of ext2
- With journaling capability - a high level of data integrity (in the event of an unclean shutdown)
- It does not need to check disks after an unclean shutdown and reboot (which is time consuming)
- Faster writes - ext3 journaling optimizes hard drive head motion
# mke2fs -j -b 2048 -i 4096 /dev/sda
mke2fs 1.32 (09-Nov-2002)
/dev/sda is entire device, not just one partition!
Proceed anyway? (y,n) y
Filesystem label=
OS type: Linux
Block size=2048 (log=1)
Fragment size=2048 (log=1)
13107200 inodes, 26214400 blocks
1310720 blocks (5.00%) reserved for the super user
First data block=0
1600 block groups
16384 blocks per group, 16384 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
16384, 49152, 81920, 114688, 147456, 409600, 442368, 802816, 1327104,
2048000, 3981312, 5619712, 10240000, 11943936
Writing inode tables: done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 28 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
Use 'noatime' File System Mount Option
Use the 'noatime' option in the file system boot-up configuration file 'fstab'. Edit the fstab file under /etc. This option works best if external storage is used, for example, a SAN:
# more /etc/fstab
LABEL=/ / ext3 defaults 1 1
none /dev/pts devpts gid=5,mode=620 0 0
none /proc proc defaults 0 0
none /dev/shm tmpfs defaults 0 0
/dev/sdc2 swap swap defaults 0 0
/dev/cdrom /mnt/cdrom udf,iso9660 noauto,owner,kudzu,ro 0 0
/dev/fd0 /mnt/floppy auto noauto,owner,kudzu 0 0
/dev/sda /database ext3 defaults,noatime 1 2
/dev/sdb /logs ext3 defaults,noatime 1 2
/dev/sdc /multimediafiles ext3 defaults,noatime 1 2
Tune the Elevator Algorithm in Linux Kernel for Disk I/O
After choosing the file system, there are several kernel and mounting options that can affect it. One such kernel setting is the elevator algorithm. Tuning the elevator algorithm helps the system balance the need for low latency with the need to collect enough data to efficiently organize batches of read and write requests to the disk. The elevator algorithm can be adjusted with the following command:
# elvtune -r 1024 -w 2048 /dev/sda
/dev/sda elevator ID 2
read_latency: 1024
write_latency: 2048
max_bomb_segments: 6
The parameters are: read latency (-r), write latency (-w) and the device affected.
Red Hat recommends using a read latency half the size of the write latency (as shown).
As usual, to make this setting permanent, add the 'elvtune' command to the
/etc/rc.d/rc.local script.
Others
Disable Unnecessary Daemons (They Take up Memory and CPU)
There are daemons (background services) running on every server that are probably not needed. Disabling these daemons frees memory, decreases startup time, and decreases the number of processes that the CPU has to handle. A side benefit to this is increased security of the server because fewer daemons mean fewer exploitable processes.
Some example Linux daemons that run by default (and can usually be disabled) are listed in the table below; a short loop after the table shows one way to turn them all off. Use the command:
#/sbin/chkconfig --levels 2345 sendmail off
#/sbin/chkconfig sendmail off
Daemon Description
apmd Advanced power management daemon
autofs Automatically mounts file systems on demand (i.e.: mounts a CD-ROM automatically)
cups Common UNIX Printing System
hpoj HP OfficeJet support
isdn ISDN modem support
netfs Used in support of exporting NFS shares
nfslock Used for file locking with NFS
pcmcia PCMCIA support on a server
rhnsd Red Hat Network update service for checking for updates and security errata
sendmail Mail Transport Agent
xfs Font server for X Windows
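As a hedged convenience, the daemons from the table above can be switched off in one go; review the list first, since your server may actually need some of them (printing, NFS, PCMCIA, and so on):
# for svc in apmd autofs cups hpoj isdn netfs nfslock pcmcia rhnsd sendmail xfs; do /sbin/chkconfig $svc off; done
Without an explicit level argument, chkconfig turns the service off in runlevels 2 through 5.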
Shutdown GUI
Normally, there is no need for a GUI on a Linux server. All administration tasks can be achieved at the command line, by redirecting the X display, or through a Web browser interface. Modify the 'inittab' file to set the boot runlevel to 3:
To set the initial runlevel of a machine at boot to 3 instead of 5, modify the /etc/inittab file.
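With a sysv-style init this is the initdefault entry; change the 5 to a 3 so the machine boots to text mode:
id:3:initdefault: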
Disable the Ctrl-Alt-Delete shutdown keys in Linux
On a production system it is recommended that you disable the [Ctrl]-[Alt]-[Delete] shutdown. It is configured in the /etc/inittab file (used by the sysv-compatible init process). The inittab file describes which processes are started at bootup and during normal operation. You need to open this file and remove (or comment out) the ctrlaltdel entry.
The ctrlaltdel entry specifies the process that will be executed when init receives the SIGINT signal. SIGINT is the symbolic name for the signal sent when a user wishes to interrupt a process, for example to reboot or shut down the system using [Ctrl]-[Alt]-[Del]. It means that someone on the system console has pressed the CTRL-ALT-DEL key combination; typically one wants to execute some sort of shutdown, either to get into single-user mode or to reboot the machine.
Linux Tuning Parameters
Kernel
To successfully run enterprise applications, such as a database server, on your Linux distribution, you may be required to update some of the default kernel parameter settings. For example, in the 2.4.x series kernel, the message queue parameter msgmni has a default value that allows only a limited number of simultaneous connections to a database, and the shared memory limit shmmax is only 33,554,432 bytes by default on Red Hat Linux. Here are some values recommended by the IBM DB2 Support Web site for database servers to run optimally:
- kernel.shmmax=268435456 for 32-bit
- kernel.shmmax=1073741824 for 64-bit
- kernel.msgmni=1024
- fs.file-max=8192
- kernel.sem="250 32000 32 1024"
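A hedged sketch of making these recommendations permanent: append them to /etc/sysctl.conf (the 32-bit shmmax value is used here purely as an example) and reload with sysctl -p.
kernel.shmmax = 268435456
kernel.msgmni = 1024
fs.file-max = 8192
kernel.sem = 250 32000 32 1024
The per-parameter sections below show how to change the same values on the fly through /proc.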
Shared Memory
To view current settings, run command:
# more /proc/sys/kernel/shmmax
To set it to a new value for this running session, which takes effect immediately, run command:
# echo 268435456 > /proc/sys/kernel/shmmax
To set it to a new value permanently (so it survives reboots), modify the sysctl.conf file:
...
kernel.shmmax = 268435456
...
Semaphores
To view current settings, run command:
# more /proc/sys/kernel/sem
250 32000 32 1024
To set it to a new value for this running session, which takes effect immediately, run command:
# echo 500 512000 64 2048 > /proc/sys/kernel/sem
Parameters meaning:
SEMMSL - semaphores per ID
SEMMNS - (SEMMNI*SEMMSL) max semaphores in system
SEMOPM - max operations per semop call
SEMMNI - max semaphore identifiers
ulimits
To view current settings, run command:
# ulimit -a
To set it to a new value for this running session, which takes effect immediately, run command:
# ulimit -n 8800
# ulimit -n -1 // for unlimited; recommended if server isn't shared
Alternatively, if you want the changes to survive reboot, do the following:
- Exit all shell sessions for the user you want to change limits on.
- As root, edit the file /etc/security/limits.conf and add these two lines toward the end:
user1 soft nofile 16000
user1 hard nofile 20000
** the two lines above change the maximum number of file handles (nofile) to the new settings.
- Save the file.
- Login as the user1 again. The new changes will be in effect.
Message queues
To view current settings, run command:
# more /proc/sys/kernel/msgmni
# more /proc/sys/kernel/msgmax
To set it to a new value for this running session, which takes effect immediately, run command:
# echo 2048 > /proc/sys/kernel/msgmni
# echo 64000 > /proc/sys/kernel/msgmax
Network
Gigabit-based network interfaces have many performance-related parameters inside of their device driver such as CPU affinity. Also, the TCP protocol can be tuned to increase network throughput for connection-hungry applications.
Tune TCP
To view current TCP settings, run command:
# sysctl net.ipv4.tcp_keepalive_time
net.ipv4.tcp_keepalive_time = 7200 // 2 hours
where net.ipv4.tcp_keepalive_time is a TCP tuning parameter.
To set a TCP parameter to a value, run command:
# sysctl -w net.ipv4.tcp_keepalive_time=1800
A list of recommended TCP parameters, values, and their meanings:
Tuning Parameter Tuning Value Description of impact
------------------------------------------------------------------------------
net.ipv4.tcp_tw_reuse 1 Reuse sockets in the time-wait state
net.ipv4.tcp_tw_recycle 1 Reuse sockets in the time-wait state
---
net.core.wmem_max 8388608 Increase the maximum write buffer queue size
---
net.core.rmem_max 8388608 Increase the maximum read buffer queue size
---
net.ipv4.tcp_rmem 4096 87380 8388608 Set the minimum, initial, and maximum sizes for the read buffer. Note that this maximum should be less than or equal to the value set in net.core.rmem_max.
---
net.ipv4.tcp_wmem 4096 87380 8388608 Set the minimum, initial, and maximum sizes for the write buffer. Note that this maximum should be less than or equal to the value set in net.core.wmem_max.
---
timeout_timewait echo 30 > /proc/sys/net/ipv4/tcp_fin_timeout
Determines the time that must elapse before TCP/IP can release a closed connection and reuse its resources. This interval between closure and release is known as the TIME_WAIT state or twice the maximum segment lifetime (2MSL) state. During this time, reopening the connection to the client and server cost less than establishing a new connection. By reducing the value of this entry, TCP/IP can release closed connections faster, providing more resources for new connections. Adjust this parameter if the running application requires rapid release, the creation of new connections, and a low throughput due to many connections sitting in the TIME_WAIT state.
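Collecting the table above into a form you could drop into /etc/sysctl.conf (a sketch only; the values simply mirror the table, should be tested before use on a production box, and are applied with sysctl -p):
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 1
net.core.wmem_max = 8388608
net.core.rmem_max = 8388608
net.ipv4.tcp_rmem = 4096 87380 8388608
net.ipv4.tcp_wmem = 4096 87380 8388608
net.ipv4.tcp_fin_timeout = 30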
Disk I/O
Immutable Files in Linux
Recently I came across a situation: I was trying to delete a configuration file in Linux and got the error "rm: cannot remove `path/filename': Operation not permitted". I was logged in as root, but even so I was neither able to change the contents of the file nor able to delete it. I checked the ownership and permissions on the file and found that the file was owned by the root user with permissions 644, which are the default permissions when you create a new file.
[root@vcsnode1 ~]# ls -l /etc/configfile
-rw-r--r-- 1 root root 0 Jan 26 08:45 /etc/configfile
[root@vcsnode1 ~]#
After a little troubleshooting, I found that the immutable flag was set on the file.
What is the immutable flag:
The immutable flag is an additional file attribute that can be set on a file so that nobody is able to delete or tamper with it. It is very useful to set this flag on production servers where changes to configuration files are rare. This attribute can be set on a Linux second extended file system only.
Who can set the immutable flag on a file:
Either the root user or any process with the CAP_LINUX_IMMUTABLE capability can set or clear this attribute.
How to check whether the immutable flag is set on a file
The lsattr command can be used to check whether the immutable flag is set on a file.
Syntax : lsattr filename
Example :
[root@vcsnode1 ~]# lsattr /etc/configfile
----i-------- /etc/configfile
[root@vcsnode1 ~]#
How to Set/Unset the Immutable Flag
The immutable flag can be set or unset using the chattr command.
To set the flag, use the + sign with chattr; to unset it, use the - sign.
Syntax : chattr +i filename (set) or chattr -i filename (unset)
Example
[root@vcsnode1 ~]# chattr +i /etc/configfile
[root@vcsnode1 ~]# lsattr /etc/configfile
----i-------- /etc/configfile
[root@vcsnode1 ~]# chattr -i /etc/configfile
[root@vcsnode1 ~]# lsattr /etc/configfile
------------- /etc/configfile
[root@vcsnode1 ~]#
There are many other file attributes which can be set on a file on a Linux second extended file system. A few of them are mentioned below:
1. append only (a) : A file with this attribute can be opened in append mode only. One has to be root, or a process with the CAP_LINUX_IMMUTABLE capability, to set or unset this flag.
2. compressed (c) : A file with this attribute is kept in a compressed state on disk by the kernel; a read from this file returns uncompressed data.
3. no dump (d) : A file with this attribute set is not a candidate for backup when the dump program runs.
4. data journalling (j) : A file with this attribute set has all of its data written to the journal before being written to the file itself, if the file system is mounted with the ordered or writeback journaling options. If the file system is mounted with the "journal" option, this flag has no effect, as that option already provides the same behaviour for every file on the file system.
5. secure deletion (s) : When a file with this attribute set is deleted, all of its data blocks are zeroed and written back to the disk.
All the above attributes can be set or unset using the chattr command.
Syntax : chattr +flag filename or chattr -flag filename.
To set an attribute, use the "+" sign with the chattr command followed by the flag shown above in "()".
To unset an attribute, use the "-" sign with the chattr command followed by the flag shown above in "()".
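For example, to protect a log file from truncation while still allowing new entries to be appended, a quick sketch using the append-only attribute (the path is just an example):
[root@vcsnode1 ~]# chattr +a /var/log/myapp.log
[root@vcsnode1 ~]# lsattr /var/log/myapp.log
[root@vcsnode1 ~]# chattr -a /var/log/myapp.log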
References : Man page for lsattr and chattr
Apache 2.2 Installation on Linux step by step
Download and Install Apache 2.2
Download apache from the Apache 2.2 download page.
Look for the section with the phrase "best available version", such as "Apache HTTP Server (httpd) 2.2.x is the best available version". At the time of writing this tutorial, Apache 2.2.16 was the official best available version.
Click on the link for "httpd-2.2.16.tar.gz" and download the archive. Once the file is downloaded, you need to copy it to the Linux server; there are many tools available to ftp the file.
Once the file is on the Linux server (for example in /usr/local/install), proceed as follows.
1. Use the following command to extract the tar file.
cd /usr/local/install
tar -xzf httpd-2.2.16.tar.gz
A directory named "httpd-2.2.16" will be created.
2. Now, let's execute the configuration script:
cd /usr/local/install/httpd-2.2.16
./configure --prefix=/usr/local/install/apache --enable-mods-shared=all --enable-proxy --enable-expires --enable-vhost-alias
3. The following step will compile Apache based upon the configuration defined:
make
4. The following step will install the Apache build:
make install
5. Use the following commands to control the Apache Web Server.
/usr/local/install/apache/bin/apachectl -k stop
/usr/local/install/apache/bin/apachectl -k start
6. Open a web browser and try the URL http://host:80/.
You should see "It works!"
This means the Apache web server installation was successful.
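A couple of extra commands that are handy right after the build (the prefix /usr/local/install/apache matches the configure step above):
/usr/local/install/apache/bin/httpd -v
/usr/local/install/apache/bin/apachectl configtest
/usr/local/install/apache/bin/apachectl -k graceful
The first prints the compiled version, the second checks the configuration syntax, and the third reloads the configuration without dropping existing connections.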
Unix/Linux FAQs
Q: How to find out if the operating system is 32-bit or 64-bit ?
For Solaris, use the command
isainfo -v
If you see output like
32-bit sparc applications
that means your O.S. is 32-bit only,
but if you see output like
64-bit sparcv9 applications
32-bit sparc applications
the above means your O.S. is 64-bit and can support both 32-bit and 64-bit applications.
Q: How to find out whether a service is listening on a particular port ?
netstat -an | grep {port no}
For example, if you know that OID runs on port 389, then to check whether the OID service is listening, use
netstat -an | grep 389
Q: How to delete files older than N days ? (Useful for deleting log, trace, and tmp files.)
find . -name '*.*' -mtime +N -exec rm {} \; (This command deletes files older than N days in that directory; it is always good to use it when you are in the applcsf/ tmp, out, or log directory.)
Q: How to list files modified in last N days
find . -mtime -N -exec ls -lt {} \;
So to find files modified in last 3 days
find . -mtime -3 -exec ls -lt {} \;
Q: How to sort files based on file size ? (Useful for finding large files in a log directory to delete when the disk is full.)
ls -l | sort -nrk 5 | more
Q: How to find files changed in the last N days (Solaris)
find . -mtime -N -print
Q: How to extract a cpio file ?
cpio -idmv < file_name (Don't forget to use the < sign before the file name.)
Q: How to find CPU & memory details on Linux ?
cat /proc/cpuinfo (CPU)
cat /proc/meminfo (memory)
Q: How to find the process ID (PID) associated with a port ?
This command is useful if a service is running on a particular port (389, 1521, ...) and it is a runaway process that you wish to terminate with the kill command:
lsof | grep {port no.} (lsof should be installed and in the path)
Q: How to change a particular pattern in a file ?
Open the file using vi or any other editor, go into escape mode (by pressing Escape) and use :1,$s/old_pattern/new_pattern/gc (g changes globally, c asks for confirmation before each change)
Q: How to find a pattern in some file in a directory ?
grep pattern file_name (to find the pattern in a particular file)
grep pattern * (in all files in that directory)
If you know how to find a pattern in the files in a directory recursively, please answer that as a comment.
Q: How to create a symbolic link to a file ?
ln -s pointing_to symbolic_name
e.g. If you want to create a symbolic link from a -> b
ln -s b a
(Condition: you should have file b in that directory & there should not be any file with the name a)
Q: How to set up a cron job (cron is used to schedule jobs in Unix at the O.S. level) ?
crontab -l (list current jobs in cron)
crontab -e (edit current jobs in cron)
_1_ _2_ _3_ _4_ _5_ executable_or_job
Where
1 - Minutes (0-59)
2 - Hours (0-23)
3 - Day of month (1-31)
4 - Month (1-12)
5 - Day of week (0-6), 0 -> Sunday, 1 -> Monday
e.g. 0 3 * * 6 means run the job at 3 AM every Saturday
Apache Http Status Code
Like any other web server, Apache records its activity in a log file, where it logs every request it processes and the error messages or abnormal conditions encountered during request processing.
A user can look at the HTTP status code for information about the activity in the log files. In this section we list the HTTP status codes along with what each code means; a small log-analysis example follows the list.
Status Code Information
100 Continue
101 Switching Protocols
200 OK
201 Created
202 Accepted
203 Non-Authoritative Information
204 No Content
205 Reset Content
206 Partial Content
300 Multiple Choices
301 Moved Permanently
302 Found
303 See Other
304 Not Modified
305 Use Proxy
307 Temporary Redirect
400 Bad Request
401 Unauthorized
402 Payment Required
403 Forbidden
404 Not Found
405 Method Not Allowed
406 Not Acceptable
407 Proxy Authentication Required
408 Request Timeout
409 Conflict
410 Gone
411 Length Required
412 Precondition Failed
413 Request Entity Too Large
414 Request-URI Too Long
415 Unsupported Media Type
416 Requested Range Not Satisfiable
417 Expectation Failed
500 Internal Server Error
501 Not Implemented
502 Bad Gateway
503 Service Unavailable
504 Gateway Timeout
505 HTTP Version Not Supported
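Once you know what the codes mean, a quick way to see which status codes your Apache server is actually returning is to tally them from the access log. This assumes the common/combined log format, where the status code is the 9th field, and the log path used earlier in this post; adjust both to your setup:
awk '{print $9}' /usr/local/apache/logs/access_log | sort | uniq -c | sort -rn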
How to setup url or website monitoring in nagios server
First, create a configuration directory for the new rules. You could also add the rules to localhost.cfg, but I recommend creating a separate directory and putting the files there.
#mkdir /etc/nagios/monitor_websites
and cd into this directory.
Then create the file host.cfg in this directory to define the URLs to monitor.
#vi host.cfg
Suppose I want to monitor three sites
www.abc.com, www.xyz.com, www.pqr.com
Configure host.cfg as below.
#vi host.cfg
define host{
host_name abc.com
alias abc
address www.abc.com
use generic-host
}
define host{
host_name xyz.com
alias xyz
address www.xyz.com
use generic-host
}
define host{
host_name pqr.com
alias pqr
address www.pqr.com
use generic-host
}
#Defining group of urls - you should add this if you want to set up an HTTP check service.
define hostgroup {
hostgroup_name monitor_websites
alias monitor_urls
members abc.com,xyz.com,pqr.com
}
:wq #save it
Now create the file services.cfg to define the service (check_http).
#vi services.cfg
## Hostgroups services ##
define service {
hostgroup_name monitor_websites
service_description HTTP
check_command check_http
use generic-service
notification_interval 0
}
Now set the ownership of the configuration directory and its files (run this from /etc/nagios):
#chown -R nagios:nagios monitor_websites
List and check.
[root@mail nagios]# ll monitor_websites
total 16
-rw-r--r-- 1 nagios nagios 669 Apr 25 23:13 host.cfg
-rw-r--r-- 1 nagios nagios 253 Apr 25 23:15 services.cfg
[root@mail nagios]#
Now give the configuration directory path in main nagios configuration file.
#vi /etc/nagios/nagios.cfg
cfg_dir=/etc/nagios/monitor_websites
:wq
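Before restarting, it is worth validating the configuration. Nagios ships a built-in syntax check (the binary may be named nagios or nagios3 depending on your distribution):
#nagios -v /etc/nagios/nagios.cfg
If it reports errors, fix them before restarting the service.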
Now restart the nagios service.
#service nagios restart
That's it. Check the Nagios web interface; you are done.
Configuring A High Availability Cluster (Heartbeat) On RHEL/CentOS
This section shows how you can set up a two node, high-availability HTTP cluster with heartbeat on CentOS. Both nodes use the Apache web server to serve the same content.
Pre-Configuration Requirements
1. Assign the hostname cluster1 to the primary node, with IP address 192.168.1.4 on eth0.
2. Assign the hostname cluster2 to the slave node, with IP address 192.168.1.5 on eth0.
Note: on cluster1
# uname -n
must return cluster1
On cluster2
# uname -n
must return cluster2
192.168.1.6 is the virtual IP address that will be used for our Apache webserver (i.e., Apache will listen on that address).
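Both nodes should also be able to resolve each other's hostname. If you are not relying on DNS, one simple approach (my addition, not part of the original steps) is to add entries like these to /etc/hosts on both nodes:
192.168.1.4 cluster1
192.168.1.5 cluster2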
Configuration
Step 1. Download and install the heartbeat package. In our case we are using CentOS so we will install heartbeat with yum:
# yum install heartbeat
or download these packages:
heartbeat-2.08
heartbeat-pils-2.08
heartbeat-stonith-2.08
Step 2. Now we have to configure heartbeat on our two node cluster. We will deal with three files. These are:
authkeys
ha.cf
haresources
Step 3. Now moving to our configuration. But there is one more thing to do, that is to copy these files to the /etc/ha.d directory. In our case we copy these files as given below:
cp /usr/share/doc/heartbeat-2.1.2/authkeys /etc/ha.d/
cp /usr/share/doc/heartbeat-2.1.2/ha.cf /etc/ha.d/
cp /usr/share/doc/heartbeat-2.1.2/haresources /etc/ha.d/
Step 4. Now let's start configuring heartbeat. First we will deal with the authkeys file; we will use authentication method 2 (sha1). For this, make the changes in the authkeys file shown below.
vi /etc/ha.d/authkeys
Then add the following lines:
auth 2
2 sha1 test-ha
Change the permission of the authkeys file:
chmod 600 /etc/ha.d/authkeys
Step 5. Moving to our second file (ha.cf) which is the most important. So edit the ha.cf file with vi:
vi /etc/ha.d/ha.cf
Add the following lines in the ha.cf file:
logfile /var/log/ha-log
logfacility local0
keepalive 2
deadtime 30
initdead 120
bcast eth0
udpport 694
auto_failback on
node cluster1
node cluster2
Note: cluster1 and cluster2 are the hostnames returned by
# uname -n
Step 6. The final piece of our configuration is to edit the haresources file. This file lists the resources we want to make highly available. In our case that is the web server (httpd):
# vi /etc/ha.d/haresources
Add the following line:
cluster1 192.168.1.6 httpd
Step 7. Copy the /etc/ha.d/ directory from cluster1 to cluster2:
# scp -r /etc/ha.d/ root@cluster2:/etc/
Step 8. As we want httpd to be highly available, let's configure httpd:
# vi /etc/httpd/conf/httpd.conf
Add this line in httpd.conf:
Listen 192.168.1.6:80
Step 9. Copy the /etc/httpd/conf/httpd.conf file to cluster2:
# scp /etc/httpd/conf/httpd.conf root@cluster2:/etc/httpd/conf/
Step 10. Create the file index.html on both nodes (cluster1 & cluster2):
On cluster1:
echo "Cluster1 apache Web server Test Page " > /var/www/html/index.html
On Cluster2:
echo "cluster2 Apache test server Test page too" > /var/www/html/index.html
Step 11. Now start heartbeat on the primary cluster1 and slave cluster2:
# /etc/init.d/heartbeat start
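To confirm that heartbeat has brought up the virtual IP on the active node, you can inspect the interface (the alias label may vary with the heartbeat version):
# ip addr show eth0 | grep 192.168.1.6
or, with the older tools:
# ifconfig eth0:0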
Step 12. Open web-browser and type in the URL:
http://192.168.1.6
It will show “Cluster1 apache Web server Test Page”
Step 13. Now stop the heartbeat daemon on cluster1:
# /etc/init.d/heartbeat stop
In your browser type in the URL http://192.168.1.6 and press enter.
It will show “cluster2 Apache test server Test page too”
Step 14. We don't need to create a virtual network interface and assign the IP address (192.168.1.6) to it ourselves. Heartbeat does this for us and starts the service (httpd) itself, so don't worry about it.
Don't use the IP addresses 192.168.1.4 and 192.168.1.5 for services. These addresses are used by heartbeat for communication between cluster1 and cluster2. If either of them is used for services/resources, it will disturb heartbeat and the cluster will not work.
6 Stages of Linux Boot Process
Press the power button on your system, and after a few moments you see the Linux login prompt.
Have you ever wondered what happens behind the scenes from the time you press the power button until the Linux login prompt appears?
The following are the 6 high level stages of a typical Linux boot process.
1. BIOS
BIOS stands for Basic Input/Output System
Performs some system integrity checks
Searches, loads, and executes the boot loader program.
It looks for the boot loader on a floppy, CD-ROM, or hard drive. You can press a key (typically F12 or F2, but it depends on your system) during BIOS startup to change the boot sequence.
Once the boot loader program is detected and loaded into the memory, BIOS gives the control to it.
So, in simple terms BIOS loads and executes the MBR boot loader.
2. MBR
MBR stands for Master Boot Record.
It is located in the first sector of the bootable disk, typically /dev/hda or /dev/sda.
The MBR is 512 bytes in size and has three components: 1) primary boot loader code in the first 446 bytes, 2) the partition table in the next 64 bytes, 3) the MBR validation check (magic number) in the last 2 bytes.
It contains information about GRUB (or LILO in old systems).
So, in simple terms MBR loads and executes the GRUB boot loader.
3. GRUB
GRUB stands for Grand Unified Bootloader.
If you have multiple kernel images installed on your system, you can choose which one to be executed.
GRUB displays a splash screen and waits for a few seconds; if you don't enter anything, it loads the default kernel image as specified in the GRUB configuration file.
GRUB has knowledge of the filesystem (the older Linux loader LILO did not understand filesystems).
The GRUB configuration file is /boot/grub/grub.conf (/etc/grub.conf is a link to it). The following is a sample grub.conf from CentOS.
#boot=/dev/sda
default=0
timeout=5
splashimage=(hd0,0)/boot/grub/splash.xpm.gz
hiddenmenu
title CentOS (2.6.18-194.el5PAE)
root (hd0,0)
kernel /boot/vmlinuz-2.6.18-194.el5PAE ro root=LABEL=/
initrd /boot/initrd-2.6.18-194.el5PAE.img
As you can see above, it specifies a kernel and an initrd image.
So, in simple terms, GRUB just loads and executes the kernel and initrd images.
4. Kernel
Mounts the root file system as specified in the “root=” in grub.conf
Kernel executes the /sbin/init program
Since init was the 1st program to be executed by Linux Kernel, it has the process id (PID) of 1. Do a ‘ps -ef | grep init’ and check the pid.
initrd stands for Initial RAM Disk.
initrd is used by kernel as temporary root file system until kernel is booted and the real root file system is mounted. It also contains necessary drivers compiled inside, which helps it to access the hard drive partitions, and other hardware.
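If you are curious what the initrd actually contains, you can list it. On CentOS 5-era systems the initrd image is a gzip-compressed cpio archive (verify this on your own system before relying on it):
# zcat /boot/initrd-2.6.18-194.el5PAE.img | cpio -t | head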
5. Init
Looks at the /etc/inittab file to decide the Linux run level.
Following are the available run levels
0 – halt
1 – Single user mode
2 – Multiuser, without NFS
3 – Full multiuser mode
4 – unused
5 – X11
6 – reboot
Init identifies the default run level from /etc/inittab and uses that to load all the appropriate programs.
Execute 'grep initdefault /etc/inittab' on your system to identify the default run level.
If you want to get into trouble, you can set the default run level to 0 or 6. Since you know what 0 and 6 means, probably you might not do that.
Typically you would set the default run level to either 3 or 5.
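For example, on a server that boots into run level 3, the initdefault line in /etc/inittab typically looks like this:
id:3:initdefault: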
6. Runlevel programs
When the Linux system is booting up, you might see various services getting started. For example, it might say “starting sendmail …. OK”. Those are the runlevel programs, executed from the run level directory as defined by your run level.
Depending on your default init level setting, the system will execute the programs from one of the following directories.
Run level 0 – /etc/rc.d/rc0.d/
Run level 1 – /etc/rc.d/rc1.d/
Run level 2 – /etc/rc.d/rc2.d/
Run level 3 – /etc/rc.d/rc3.d/
Run level 4 – /etc/rc.d/rc4.d/
Run level 5 – /etc/rc.d/rc5.d/
Run level 6 – /etc/rc.d/rc6.d/
Please note that there are also symbolic links for these directories directly under /etc. So, /etc/rc0.d is linked to /etc/rc.d/rc0.d.
Under the /etc/rc.d/rc*.d/ directories you will see programs that start with S and K.
Programs starting with S are used during startup. S for startup.
Programs starting with K are used during shutdown. K for kill.
There are numbers right next to S and K in the program names. Those are the sequence number in which the programs should be started or killed.
For example, S12syslog starts the syslog daemon and has the sequence number 12. S80sendmail starts the sendmail daemon and has the sequence number 80. So, the syslog program will be started before sendmail.
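You can see this ordering for yourself by listing the directory for your run level:
# ls /etc/rc.d/rc3.d/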
There you have it. That is what happens during the Linux boot process.
Install Tomcat 6 on linux step by step
Download and Install JAVA
Download the JDK from the Sun Developer Network (SDN) download center. Here I have used j2sdk-1_4_2_18-linux-i586-rpm.bin, which installs the JDK from RPMs and sets JAVA_HOME automatically.
#chmod +x j2sdk-1_4_2_09-linux-i586.bin
#./j2sdk-1_4_2_09-linux-i586.bin
Now check that Java is installed on the server using the command java -version:
[root@vps907 ~]# java -version
java version "1.6.0_07"
Java(TM) SE Runtime Environment (build 1.6.0_07-b06)
Java HotSpot(TM) Client VM (build 10.0-b23, mixed mode, sharing)
Download Tomcat
#cd /usr/local/
#wget <URL of the apache-tomcat-6.0.18.tar.gz download>
#tar -zxvf apache-tomcat-6.0.18.tar.gz
Create Symlink for the Tomcat Folder
#ln -s /usr/local/apache-tomcat-6.0.18 /usr/local/apache/tomcat
Install Tomcat
#cd apache-tomcat-6.0.18
#cd bin
#tar xvfz jsvc.tar.gz
#cd jsvc-src
#chmod +x configure
#./configure
#make
#cp jsvc ..
#cd ..
Start Tomcat
Use the following script to start the Tomcat service on the server:
#/usr/local/apache/tomcat/bin/startup.sh
Running Tomcat as a non-root user
For security reasons, always run Tomcat as a non-root user, e.g. tomcat. The following are the steps to run Tomcat as a non-root user.
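If the tomcat user does not already exist on the system, create it first (a minimal sketch; adapt the options to your own policy):
#useradd tomcat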
#chown tomcat.tomcat /usr/local/apache-tomcat-6.0.18 -R
Now Tomcat can be stopped and started as the tomcat user using the following commands:
#su -l tomcat -c /usr/local/apache/tomcat/bin/startup.sh
#su -l tomcat -c /usr/local/apache/tomcat/bin/shutdown.sh
Test the Tomcat installation
Open a browser and go to http://xx.xx.xx.xx:8080, where xx.xx.xx.xx is your server IP. If you see the default Tomcat page, Tomcat has been installed properly on the server.
Script to start, stop and restart Tomcat
The installation steps above do not create a tomcat service, so you cannot yet restart Tomcat with "service tomcat restart". To enable that, create a new file named tomcat in /etc/init.d and copy the following contents into it.
#vi /etc/init.d/tomcat
#!/bin/bash
#
# Startup script for Tomcat
#
# chkconfig: 345 84 16
# description: Tomcat jakarta JSP server
#Necessary environment variables
export CATALINA_HOME="/usr/local/apache/tomcat"
if [ ! -f $CATALINA_HOME/bin/catalina.sh ]
then
echo "Tomcat not available..."
exit
fi
start() {
echo -n -e '\E[0;0m'"\033[1;32mStarting Tomcat: \033[0m \n"
su -l tomcat -c $CATALINA_HOME/bin/startup.sh
echo
touch /var/lock/subsys/tomcatd
sleep 3
}
stop() {
echo -n -e '\E[0;0m'"\033[1;31mShutting down Tomcat: \033[m \n"
su -l tomcat -c $CATALINA_HOME/bin/shutdown.sh
rm -f /var/lock/subsys/tomcatd
echo
}
status() {
ps ax --width=1000 | grep "[o]rg.apache.catalina.startup.Bootstrap start" | awk '{printf $1 " "}' | wc | awk '{print $2}' > /tmp/tomcat_process_count.txt
read line < /tmp/tomcat_process_count.txt
if [ $line -gt 0 ]; then
echo -n "tomcatd ( pid "
ps ax --width=1000 | grep "[o]rg.apache.catalina.startup.Bootstrap start" | awk '{printf $1 " "}'
echo -n ") is running..."
echo
else
echo "Tomcat is stopped"
fi
}
case "$1" in
start)
start
;;
stop)
stop
;;
restart)
stop
sleep 3
start
;;
status)
status
;;
*)
echo "Usage: tomcat {start|stop|restart|status}"
exit 1
esac
Save and exit the file. Now make the script executable:
#chmod 755 /etc/init.d/tomcat
Add and enable the tomcat service for its run levels:
#chkconfig --add tomcat
#chkconfig tomcat on
Now you can manage the tomcat service using the following commands:
#service tomcat restart <<< To restart Tomcat
#service tomcat stop <<< To stop Tomcat
#service tomcat start <<< To start Tomcat
#service tomcat status <<< To check the status of Tomcat
Add a username/password for the Tomcat manager:
To add a new user to the Tomcat manager, add a role entry and a user entry to /install/apache-tomcat-5.5.29/conf/tomcat-users.xml.
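The exact entries depend on your Tomcat version and the access you want to grant; as a sketch (the username and password here are placeholders), the two lines usually added inside the <tomcat-users> element look like this:
<role rolename="manager"/>
<user username="admin" password="changeme" roles="manager"/>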
Restart the Tomcat server using:
service tomcat restart
Change the speed and duplex settings of an Ethernet card.
Task: Change the speed and duplex settings
Set the eth0 negotiated speed with mii-tool
Disable autonegotiation and force the MII to 100baseTx-FD, 100baseTx-HD, 10baseT-FD, or 10baseT-HD:
# mii-tool -F 100baseTx-HD
# mii-tool -F 10baseT-HD
Set the eth0 negotiated speed with ethtool:
# ethtool -s eth0 speed 100 duplex full
# ethtool -s eth0 speed 10 duplex half
To make these settings permanent, create a shell script and call it from /etc/rc.local (Red Hat); on Debian, place the script in /etc/init.d/ and run update-rc.d to register it.
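To verify the current link settings before or after changing them, you can query the interface directly:
# ethtool eth0
# mii-tool -v eth0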
Difference between Xen PV, Xen KVM and HVM?
Xen supported virtualization types
Xen supports running two different types of guests. Xen guests are often called domUs (unprivileged domains). Both guest types (PV, HVM) can be used at the same time on a single Xen system.
Xen Paravirtualization (PV)
Paravirtualization is an efficient and lightweight virtualization technique introduced by Xen and later adopted by other virtualization solutions. Paravirtualization doesn't require virtualization extensions from the host CPU. However, paravirtualized guests require a special kernel that is ported to run natively on Xen, so the guests are aware of the hypervisor and can run efficiently without emulation or virtual emulated hardware. Xen PV guest kernels exist for Linux, NetBSD, FreeBSD, OpenSolaris, and Novell NetWare operating systems.
PV guests don't have any kind of virtual emulated hardware, but a graphical console is still possible using the guest pvfb (paravirtual framebuffer). The PV guest graphical console can be viewed with a VNC client or Red Hat's virt-viewer. There is a separate VNC server in dom0 for each guest's PVFB.
Upstream kernel.org Linux kernels since Linux 2.6.24 include Xen PV guest (domU) support based on the Linux pvops framework, so every upstream Linux kernel can be automatically used as Xen PV guest kernel without any additional patches or modifications.
See XenParavirtOps wiki page for more information about Linux pvops Xen support.
Xen Full virtualization (HVM)
Fully virtualized (HVM, Hardware Virtual Machine) guests require CPU virtualization extensions on the host CPU (Intel VT, AMD-V). Xen uses a modified version of QEMU to emulate full PC hardware for HVM guests, including the BIOS, IDE disk controller, VGA graphics adapter, USB controller, network adapter, etc. CPU virtualization extensions are used to boost the performance of the emulation. Fully virtualized guests don't require a special kernel, so, for example, Windows operating systems can be used as Xen HVM guests. Fully virtualized guests are usually slower than paravirtualized guests because of the required emulation.
To boost performance, fully virtualized HVM guests can use special paravirtual device drivers to bypass the emulation for disk and network I/O. Xen Windows HVM guests can use the open-source GPLPV drivers. See the XenLinuxPVonHVMdrivers wiki page for more information about Xen PV-on-HVM drivers for Linux HVM guests.
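Whether a host can run HVM guests at all depends on the CPU. A quick way to check for the required hardware virtualization extensions on a Linux host is:
egrep '(vmx|svm)' /proc/cpuinfo
If this prints nothing, the CPU does not expose Intel VT (vmx) or AMD-V (svm), and only PV guests will be possible on that host.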
KVM is not Xen at all; it is a different technology. KVM is a native Linux kernel module rather than an additional kernel like Xen, which arguably makes KVM the cleaner design. The downside is that KVM is newer than Xen, so it may still lack some features.
-------------------------------------------------------------------------------------------------------------------------------
Read the following for more clarity.
For full virtualization, an entire hardware system needs to be reproduced in software. Every action and nuance of the original hardware has to be carried over to the virtual system. Since this is such a large undertaking, and some system manufacturers take steps to discourage it, full virtualization is somewhat rare. It is much more common to find partial virtualization, where all the necessary system pieces are present but the physical hardware handles much of the low-level calculations and functions.