Linux: create your own GnuPG private and public key

The GNU Privacy Guard (GnuPG or GPG) is a free encryption and signing tool, and a free-software replacement for the PGP suite of cryptographic software. GnuPG encrypts messages using asymmetric keypairs individually generated by GnuPG users. The resulting public keys can be exchanged with other users in a variety of ways, such as Internet key servers, but they must always be exchanged carefully to prevent identity spoofing through corrupted public key ↔ owner correspondences. It is also possible to add a cryptographic digital signature to a message, so that the message's integrity and sender can be verified, provided the corresponding public key has not been tampered with.

How do I create my own GnuPG private and public key?

  1. Login to your shell account
  2. Use gpg command to create the keys
  3. $ gpg --gen-key


    gpg (GnuPG) 1.4.1; Copyright (C) 2005 Free Software Foundation, Inc.
    This program comes with ABSOLUTELY NO WARRANTY.
    This is free software, and you are welcome to redistribute it
    under certain conditions. See the file COPYING for details.
    gpg: directory `/home/sheron/.gnupg' created
    gpg: new configuration file `/home/sheron/.gnupg/gpg.conf' created
    gpg: WARNING: options in `/home/sheron/.gnupg/gpg.conf' are not yet active during this run
    gpg: keyring `/home/sheron/.gnupg/secring.gpg' created
    gpg: keyring `/home/sheron/.gnupg/pubring.gpg' created
    Please select what kind of key you want:
       (1) DSA and Elgamal (default)
       (2) DSA (sign only)
       (5) RSA (sign only)
    Your selection? Press [Enter] Key
    DSA keypair will have 1024 bits.
    ELG-E keys may be between 1024 and 4096 bits long.
    What keysize do you want? (2048) Press [Enter] Key
    Requested keysize is 2048 bits
    Please specify how long the key should be valid.
             0 = key does not expire
          <n>  = key expires in n days
          <n>w = key expires in n weeks
          <n>m = key expires in n months
          <n>y = key expires in n years
    Key is valid for? (0) Press [Enter] Key
    Key does not expire at all
    Is this correct? (y/N) y
    You need a user ID to identify your key; the software constructs the user ID
    from the Real Name, Comment and Email Address in this form:
        "Heinrich Heine (Der Dichter) <heinrichh@duesseldorf.de>"
    Real name: sheron
    Email address: Press [Enter] Key
    Comment: Press [Enter] Key
    You selected this USER-ID:
        "sheron"
    Change (N)ame, (C)omment, (E)mail or (O)kay/(Q)uit? O
    You need a Passphrase to protect your secret key.
    Enter passphrase: [Enter password twice]
    We need to generate a lot of random bytes. It is a good idea to perform
    some other action (type on the keyboard, move the mouse, utilize the
    disks) during the prime generation; this gives the random number
    generator a better chance to gain enough entropy.
    gpg: /home/sheron/.gnupg/trustdb.gpg: trustdb created
    gpg: key 8E19F126 marked as ultimately trusted
    public and secret key created and signed.
    gpg: checking the trustdb
    gpg: 3 marginal(s) needed, 1 complete(s) needed, PGP trust model
    gpg: depth: 0  valid:   1  signed:   0  trust: 0-, 0q, 0n, 0m, 0f, 1u
    pub   1024D/8E19F126 2007-02-10
          Key fingerprint = A7AF E25D 3E8D 6946 37CC  8CCE 12C4 8DC1 8E19 F126
    uid                  sheron 
    sub   2048g/032824B9 2007-02-10
  4. Now that the keys have been generated, you can list your public keys using:
  5. $ gpg --list-keys


    $ gpg --list-keys


    pub   1024D/CA7A8402 2007-02-10
    uid    sheron
    sub   2048g/0A7B4F93 2007-02-10

    Let us try to understand the line

    pub 1024D/CA7A8402 2007-02-10:

    pub : Public key
    1024D : The key size in bits (1024) and type (D = DSA)
    CA7A8402 : The key ID
    2007-02-10 : The date of key creation
    sheron : The user's real name
    <> : The email address (empty here, since none was entered)

    Most important is the key ID i.e. CA7A8402.

    Make sure you use a strong passphrase to protect your keys, not an easy one.

  6. To list your secret keys, type the command:
  7. $ gpg --list-secret-keys


    sec   1024D/CA7A8402 2007-02-10
    uid                 sheron
    ssb   2048g/0A7B4F93 2007-02-10
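With the keypair in place, the usual next step is to share the public key. A minimal sketch, using the key ID CA7A8402 from the listing above; the keyserver hostname and output file name are just examples:

```shell
# Export the public key in ASCII-armored form (safe to paste into email):
gpg --armor --export CA7A8402 > mykey.asc

# Or publish it to a public keyserver (hostname is an example):
gpg --keyserver subkeys.pgp.net --send-keys CA7A8402

# Anyone who receives the file can import it into their own keyring:
gpg --import mykey.asc
```

Exchange the key fingerprint over a trusted channel (in person, over the phone) so the recipient can verify they imported the right key.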

The Beginner’s Guide to Linux Disk Utilities

Knowing how to check the condition of your hard disk is useful for determining when to replace it. In today’s article, we will show you some Linux disk utilities you can use to diagnose the health of your hard disk.


S.M.A.R.T System

Most modern ATA and SCSI hard disks have a Self-Monitoring, Analysis, and Reporting Technology (SMART) system. SMART hard disks internally monitor their own health and performance.

The SMART system assesses the condition of your hard disk based on the throughput of the drive, the seek error rate of the magnetic heads, and other attributes that your hard disk manufacturer built into the drive.

Most implementations of SMART systems allow users to perform self-tests to monitor the performance and reliability of their hard disks. The simplest way to perform a SMART system test with Ubuntu is using the ‘Disk Utility’ under the ‘System’ > ‘Administration’ menu.

The disk utility lets you see the model, serial number, firmware, and the overall health assessment of the hard disk, as well as whether a SMART system is enabled on the hard disk.

The ‘SMART data’ button lets you see the SMART features of your hard disk.

The ‘Run Self-test’ button lets you initiate a short, extended, or conveyance self-test on the hard disk.

When you execute these tests, you’ll see a progress meter showing how far along the test is and the estimated time of completion.

The ‘Attributes’ section lets you see the error counts and self-test information.
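The same checks are available from the command line through smartctl, part of the smartmontools package, which is handy on headless servers. A sketch, assuming your disk is /dev/sda:

```shell
# Overall health verdict (PASSED / FAILED):
sudo smartctl -H /dev/sda

# Full SMART attribute table (reallocated sectors, seek error rate, ...):
sudo smartctl -A /dev/sda

# Kick off the same short self-test the Disk Utility GUI offers:
sudo smartctl -t short /dev/sda

# ...and read the results once it finishes:
sudo smartctl -l selftest /dev/sda
```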

File System Check

There are some other tools, besides the Disk Utility GUI, that we can use to diagnose the health of our hard disk. The File System Check tool (fsck), which comes only as a command-line tool, is one of the tools we often use to check the condition of our hard disk.

You can use the ‘Check Filesystem’ feature of the ‘Disk Utility’ to perform the same check, if you are not a command-line geek like us.

Of course, there are some situations where we have to use the command-line tool to check our file system: for example, when we are using a headless system, when our Linux box fails to boot, or when we simply want to show off our command-line kung fu to our friends.

At first, the fsck command-line tool looks like something that only a computer geek can handle, but you will find that it is a very easy tool to use. There is one thing to note before you run fsck: you need to unmount the file system with the ‘umount’ command first. Fixing a mounted file system with fsck could end up creating more damage than the original problem.

sudo umount /dev/sdb

The FSCK command is pretty straightforward:

sudo fsck -t ext4 /dev/sdb 

This command checks an ext4 file system (/dev/sdb) for inconsistencies. You should replace /dev/sdb with your own partition. You can run the ‘fdisk’ command to list your system’s partitions:

sudo fdisk -l

Scheduled File System Checks

If you’re using Ubuntu, you will notice that from time to time Ubuntu runs an fsck session when you boot your system. If you find this scheduled check annoying, you can re-schedule the scan using the ‘tune2fs’ command. Here’s how it typically looks:

The mount count parameter tells us that Ubuntu scans our hard disk after every 33 mounts.

We can configure the mount count using the ‘-c’ option:

sudo tune2fs -c 35 /dev/sda1

This command reconfigures Ubuntu to scan our hard disk after every 35 mounts when the system boots.

Note: replace ‘/dev/sda1’ with your own partition
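Before changing anything, you can inspect the current schedule: tune2fs -l dumps the filesystem's bookkeeping fields. A sketch, again assuming /dev/sda1:

```shell
# Show the current mount count and the threshold that triggers a check:
sudo tune2fs -l /dev/sda1 | grep -i 'mount count'

# Checks can also be scheduled by time instead of mount count,
# e.g. at most once per month:
sudo tune2fs -i 1m /dev/sda1
```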

Bad Blocks

A bad sector is a sector on a computer’s disk drive that cannot be used due to permanent damage (or an OS inability to successfully access it), such as physical damage to the disk surface.

There are two ways to detect bad sectors in Linux: you can use the Disk Utility GUI, or if you are a command line geek like us, you can use the badblocks command to check your hard disk for bad sectors:

sudo badblocks -v /dev/sdb1

badblocks will report any bad sectors it finds on our hard disk.

zainul@zainul-laptop:~$ sudo badblocks -v /dev/sdb1
Checking blocks 0 to 97683200
Checking for bad blocks (read-only test): 3134528 done, 3:27 elapsed
3134560 done, 8:33 elapsed
3134561 done, 10:15 elapsed
3134562 done, 11:57 elapsed
3134563 done, 13:39 elapsed
Pass completed, 5 bad blocks found.

You have two options when you see bad blocks. You can either look for a new hard disk, or mark these bad blocks as unusable hard disk sectors. This involves two steps:

First we have to write the location of the bad sectors into a flat file.

sudo badblocks /dev/sdb > /home/zainul/bad-blocks

After that, we need to feed the flat file into the FSCK command to mark these bad sectors as ‘unusable’ sectors.

sudo fsck -l bad-blocks /dev/sdb
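On ext2/3/4 filesystems, e2fsck can combine both steps: its -c flag runs badblocks itself and adds whatever it finds to the filesystem's bad-block inode. A sketch, assuming the same unmounted /dev/sdb:

```shell
# Read-only badblocks scan, marking found blocks as unusable:
sudo e2fsck -c /dev/sdb

# -cc performs a slower, non-destructive read-write test instead:
sudo e2fsck -cc /dev/sdb
```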

fsck, badblocks, and Disk Utility are some of the disk utilities that we often use to scan our hard disks. Do share with your fellow readers if you know of other Linux disk utilities for scanning hard disks.

Searching the filesystem from the command line

There are several commands available on the command line to locate files and folders on the file system. This article reviews three of them: whereis, locate, and find.

1) whereis
This command searches for the binary, source, and manual page files for a command.

$ whereis  whereis
whereis: /usr/bin/whereis /usr/share/man/man1/whereis.1.gz

2) locate: locate uses a database created by updatedb to locate files efficiently. It works great, assuming your database is updated often enough to be reasonably up to date. Most boxes using locate run updatedb periodically from cron. On my Ubuntu box, I got a long list of files when I tried to locate the locate command itself. RTFM: man locate.

$ locate locate

3) find: find is perhaps one of the most powerful commands there is. However, find is slow compared to locate, as it recursively searches the paths supplied to it.

The syntax of find is specified like this:

find path-list expression

It may look rather cryptic. Even though the man page lists only three parts
for the command as above, for simplicity we can imagine that the find syntax
has four fields:

    1       2                 3               4
    find    starting point    which files     action on result

You can formulate your find command based on the above table. For example,
if you want to find all avi files in a folder named movies:

    1       2         3                4
    find    movies    -name "*.avi"    -print

$ find movies -name "*.avi" -print
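To try the four-field pattern without touching real data, here is a self-contained sketch that builds a throwaway movies folder first (all file names are made up for the demo):

```shell
# Build a scratch directory containing a mix of file types.
tmp=$(mktemp -d)
mkdir "$tmp/movies"
touch "$tmp/movies/clip1.avi" "$tmp/movies/clip2.avi" "$tmp/movies/notes.txt"

# find <starting point> <which files> <action on result>:
find "$tmp/movies" -name "*.avi" -print

# Clean up the scratch directory.
rm -r "$tmp"
```

The -print action is the default in modern find, but spelling it out keeps all four fields visible.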

Here are some examples you can try

a) to find all directories on the system whose permissions are 777:

$        find / \( -type d -a -perm -777 \)     -print

b) find all core files in home directories and remove them

$         find /home -name core -exec rm {}     \;


c) find all files owned by a particular user, no matter whose home directory they are in:

$       find /home -user <username>      -print

d) find all files that have been modified (or had their modification time changed) in the last 30 days:

$      find / -mtime -30 -print

e) find all tmp files older than 30 days and remove them:

$ find /dirpath \( -name \*.tmp -a -mtime     +30 \) -exec rm {} \;

The man page of find has several other options that you can try.

How to password-protect GRUB

Password-protecting the bootloader is one method you may employ to enhance the physical security profile of your computer. GRUB, the GRand Unified Bootloader, is the default bootloader on virtually all Linux distributions, but on a significant number, the installer does not have support for setting a GRUB password. This article presents the steps involved in password-enabling GRUB on a running system.

Before we go through the steps involved in setting a password for GRUB, it’s best to understand why this is even necessary. Principally, we password-enable GRUB to:

  1. Prevent Access To Single User Mode — If an attacker can boot into single user mode, he becomes the root user.
  2. Prevent Access To the GRUB Console — If the machine uses GRUB as its boot loader, an attacker can use the GRUB editor interface to change its configuration or to gather information using the cat command.

If your distribution’s installer has support for setting a GRUB password, the process involved should be similar to the one shown in the image below, which was taken from a similar Fedora 13 tutorial. Just check “Use a boot loader password” and the installer will prompt for a password.

Fig: Specifying the boot loader password

If your distribution’s installer does not have support for setting a password for GRUB, you can still do it after installation. The process involved in this exercise is the same across distributions. However, for this article, an installation of Fedora 13 was used. Here are the steps involved:

  1. From a shell terminal, run the grub-md5-crypt command. The password that’s requested will be the one that’ll be used to protect GRUB. It should not be the same as that of any user account on the system, certainly not the same as the root password. Note the md5 hash generated. You will need it in the next step.

    Fig: Generate the md5 hash for password-protecting GRUB

  2. Edit /etc/grub.conf as shown in the image. Just add another line below the “timeout” line with password --md5 followed by the md5 hash generated in step 1. Save the file. Reboot and try to access other features of GRUB by pressing the “p” key. Did it work?


    Fig: Edit grub.conf
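After the edit, the top of /etc/grub.conf ends up looking something like the fragment below. The md5 hash shown is a made-up placeholder, not a real one, and the surrounding lines are typical Fedora defaults:

```shell
# /etc/grub.conf (legacy GRUB) -- excerpt
default=0
timeout=5
# Line added in step 2; the hash is whatever grub-md5-crypt printed:
password --md5 $1$placeholder$0123456789abcdefghijk.
splashimage=(hd0,0)/grub/splash.xpm.gz
```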

Complete this simple process, and you will have taken a small but significant step towards enhancing the physical security profile of your computer.

Reclaim Deleted Files and Repair Filesystems on Linux

Linux is as solid an operating system as you’ll ever use — but that doesn’t mean that the hardware you’re running it on is equally solid. Hard drives are as prone to errors as are file systems. And no matter how stable an OS is, it can’t prevent you from accidentally deleting files and/or folders. But don’t despair: Linux is equipped with a number of tools that can help you repair filesystem errors and reclaim deleted files.

Which tools? To start, e2fsck, scalpel, and lsof will get you the farthest. Let’s take a look at how each of these can be used to help your file systems be free of errors and your files be freed from accidental deletion.

Checking Ext2/Ext3/Ext4 Filesystems with e2fsck

The e2fsck utility takes after the original UNIX fsck utility, but is used to check the Ext2/Ext3/Ext4 family of filesystems. It’s used to check, and repair, filesystems that have been shut down uncleanly or otherwise developed errors.

One problem most users face is that the e2fsck tool can only work on unmounted partitions. This can cause a problem if the file system you need to check is also the one you are working on. Many suggest switching your running system to run level 1 with the command (run as the administrative user):

init 1

However, I recommend you take this one step further and use a live distribution like Knoppix or Puppy Linux, or your distribution’s live CD if it has one. By booting into a live distribution, your disks will not be mounted and can safely be checked for errors. If, however, you do not want to use a live distribution, you will need to make sure you switch to run level 1 and then unmount the partition you want to check. Say, for instance, you want to check partition /dev/sdb1. To do this you would first switch to run level 1 (command shown above) and then run the command:

umount /dev/sdb1

With the target partition unmounted, you are ready to begin running the check. To do this enter the command:

e2fsck -y /dev/sdb1

The -y option assumes the answer is “yes” for all of the questions the command will present to you. Depending upon the size of your drive and the number of errors on it, this repair can take quite some time. Once the repair is complete, you can always run the command again to check whether any errors were missed. When the drive comes up clean you can reboot into your normal system (if you used a live CD to run e2fsck, remember to remove the live disk upon reboot) or remount the unmounted partition.

Recover Deleted Files

Now let’s take a look at the process of recovering deleted files. The reason this is even possible is that a file is actually just a link to an inode on disk, and this inode contains all of the information for the file. When you delete a file you really only break the link to the inode, so the file simply can no longer be found by name. The inode itself will remain on your disk… but only temporarily. As long as some process has that deleted file open, the inode is not made available for reuse. So, this method has a time limit, and a fairly short one at that.

The key to this recovery is the /proc directory. Every process on your system has a directory within /proc, named after its process ID. If you run the command ls /proc you will see a number of directories with numeric names, as well as directories and files whose names should look familiar. The most important ones are the numerically named directories: those numbers are the process IDs (PIDs) of running applications. You can always use the ps command to find the PID of the application you are looking for.

Once you have located the correct process in /proc you can then grab the data from the correct directory and save it again. File recovered. Let’s take a look at the whole process. I will demonstrate this with a fairly simplistic example which you can expand upon fairly easily.

Let’s create a file (say it’s a Bash script or configuration file) called test_file. Create that file with the command:

echo "this is my test file" > ~/test_file

Now you have a file called “test_file” that contains the single line “this is my test file”. Let’s delete that file and recover it. To do this we will view the contents of the file with the less command and then suspend that process so the data is held open. Here are the steps:

Step 1: Let’s view the contents of that file with the command less ~/test_file.

Step 2: With that file open in your terminal window, hit the key combination Ctrl-z to suspend the process.

Step 3: Let’s make sure our test file still exists. If you issue the command ls -l ~/test_file you will still see your file there. So far so good.

Step 4: At the same command prompt issue the command rm ~/test_file to delete the file.

Step 5: Check to see if the file is there with the command ls ~/test_file. You should not see that file listed now. Because the command we used to view the file (less ~/test_file) was suspended and still holds the file open, the data has been preserved. Let’s recover it.

Step 6: Issue the command lsof | grep test_file. This command will take some time to run, but will eventually print the needed information, which will look similar to:

less 14675 zombie 4r REG 8,1 21 5127399 /home/zombie/test_file (deleted)

What we need is the PID of the file (Which is in the second column — in this example it is 14675) and the file descriptor (Which is in the fourth column — in this example it’s the number 4).

Step 7: Time for the actual recovery. With the information you have now you can recover that file with the command:

cp /proc/14675/fd/4 ~/recovered_file

Step 8: Let’s verify that the contents of the file are intact. Issue the command:

less ~/recovered_file

You should see that the contents are, in fact, the same. The contents of ~/recovered_file should be identical to that of the original ~/test_file. If so, you have successfully recovered a deleted file from your Linux system.
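The same trick can be scripted without less at all: any open file descriptor keeps the inode alive, and the shell's own entry under /proc exposes it. A minimal sketch, using a throwaway temp directory:

```shell
# Create a scratch file, then hold it open on descriptor 3.
tmp=$(mktemp -d)
echo "this is my test file" > "$tmp/test_file"
exec 3< "$tmp/test_file"

# Delete the name; the inode survives because fd 3 is still open.
rm "$tmp/test_file"

# Recover the contents straight out of /proc ($$ is this shell's PID).
cp "/proc/$$/fd/3" "$tmp/recovered_file"
exec 3<&-    # release the descriptor

cat "$tmp/recovered_file"    # -> this is my test file
rm -r "$tmp"
```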

Recovery with Scalpel

There is actually an easier way to recover specific file types on a Linux system using the tool Scalpel. Here are the steps for installing and recovering using this simple tool.

Step 1: Installation. I will demonstrate on an Ubuntu 10.10 desktop. To install scalpel, open up a terminal window and issue the command:

sudo apt-get install scalpel

You will need to enter your sudo password and accept any dependencies (if necessary).

Step 2: Edit the config file. Issue the command:

sudo nano /etc/scalpel/scalpel.conf

Now take a look through the file. You will find lines that correspond to file types (such as .pdf, .doc, .png, etc). When you see a file type you want to attempt to recover just uncomment that line (remove the “#” character). When you have finished editing this file, save and close it.

Step 3: Recover your files. The first thing you must do is create a directory that will hold your recovered files. Let’s use the target folder ~/RECOVERED. With that folder in place, issue the command:

sudo scalpel /dev/sdX -o ~/RECOVERED

(Where X is the exact partition you want to scan).

Step 4: Wait. This process will take quite some time. Don’t even bother to watch it run or you’ll be watching for an hour or so (depending upon the size of your drive and how many file types you are attempting to recover.)

Step 5: Check the recovery folder. Once the command has completed, check the ~/RECOVERED folder (there should be sub-folders created on a per-file-type basis) to see if scalpel has recovered your files. If so, congratulations. If not, you can always try to run the command again (or on a different partition).

Final Thoughts

No one wants to have to deal with a corrupted file system or lost files. And although Linux has plenty of tools to help you when this occurs, nothing is perfect. Sometimes a file system or files can be too far gone to recover. But with a little care and intelligent use, you should be able to avoid having to use these tools altogether.

Strace – A very powerful troubleshooting tool

Many times I have come across seemingly hopeless situations where a program, when compiled and installed on GNU/Linux, just fails to run. In such situations, after I have tried every trick in the book, like searching the net and posting questions to Linux forums, and still failed to resolve the problem, I turn to the last resort, which is to trace the output of the misbehaving program. Tracing the output of a program throws up a lot of data that is not usually available when the program is run normally. And in many instances, sifting through this volume of data has proved fruitful in pinpointing the cause of the error.
For tracing the system calls of a program, we have a very good tool in strace. What is unique about strace is that, when it is run in conjunction with a program, it outputs all the calls made to the kernel by the program. In many cases, a program may fail because it is unable to open a file or because of insufficient memory. And tracing the output of the program will clearly show the cause of either problem.

The use of strace is quite simple and takes the following form:

$ strace <name of the program>

For example, I can run a trace on ‘ls’ as follows:

$ strace ls

And this will output a great amount of data on to the screen. If it is hard to keep track of the scrolling mass of data, then there is an option to write the output of strace to a file instead which is done using the -o option. For example,

$ strace -o strace_ls_output.txt ls
… will write all the tracing output of ‘ls’ to the ‘strace_ls_output.txt’ file. Now all that remains is to open the file in a text editor and analyze the output to get the necessary clues.
It is common to find a lot of system calls in the strace output, the most common being open(), write(), read(), close() and so on. But the calls are not limited to these four; you will find many others too.
For example, if you look in the strace output of ls, you will find the following line:
open("/lib/", O_RDONLY)  = 3
This means that some aspect of ls requires a shared library in the /lib folder. And if the library is missing or in a different path, then that aspect of ls which depends on it will fail to function. Here the return value of 3 (a valid file descriptor) signifies that the library was opened successfully.

Here I will share my experience in using strace to solve a particular problem I faced. I had installed all the multimedia codecs including the libdvdcss which allowed me to play encrypted DVDs in Ubuntu Linux which I use on a daily basis. But after installing all the necessary codecs, when I tried playing a DVD movie, totem gave me an error saying that it was unable to play the movie (see the picture below). But since I knew that I had already installed libdvdcss on my machine, I was at a loss what to do.

Fig: Totem showing error saying that it cannot find libdvdcss

Then I ran strace on totem as follows:

$ strace -o strace.totem totem
… and then opened the file strace.totem in a text editor and searched for the string libdvdcss. And not surprisingly, I came across the lines of output shown in the listing below.
# Output of strace on totem
open("/etc/ld.so.cache", O_RDONLY)      = 26
fstat64(26, {st_mode=S_IFREG|0644, st_size=58317, ...}) = 0
old_mmap(NULL, 58317, PROT_READ, MAP_PRIVATE, 26, 0) = 0xb645e000
access("/etc/ld.so.nohwcap", F_OK)      = -1 ENOENT (No such file or directory)
open("/lib/tls/i686/cmov/libdvdcss.so.2", O_RDONLY) = -1 ENOENT (No such file or directory)
stat64("/lib/tls/i686/cmov", {st_mode=S_IFDIR|0755, st_size=1560, ...}) = 0
stat64("/lib/i486-linux-gnu", 0xbfab4770) = -1 ENOENT (No such file or directory)
munmap(0xb645e000, 58317)               = 0
open("/usr/lib/xine/plugins/1.1.1/", O_RDONLY) = 26
read(26, "\177ELF\1\1\1\3\3\1\320\27"..., 512) = 512
fstat64(26, {st_mode=S_IFREG|0644, st_size=40412, ...}) = 0
In the above listing, which I have truncated for clarity, the failing open() line clearly shows that totem is trying to find the libdvdcss library in, among other places, the ‘/lib/tls/i686/cmov/’ directory, and the return value of -1 (ENOENT) shows that it has failed to find it. So I realized that for totem to correctly play the encrypted DVD, it has to find the libdvdcss library in the path it is searching.

Then I used the find command to locate the library and copied it to the directory /lib/tls/i686/cmov/. Once I accomplished this, I tried playing the DVD again in totem, and it started playing without a hitch.

Fig: Totem playing an encrypted DVD Movie
Just to make sure, I took another trace of totem, and it showed that the error was rectified, as shown by the successful open() line in the output below.
# Output of the second strace on totem
open("/etc/ld.so.cache", O_RDONLY)      = 26
fstat64(26, {st_mode=S_IFREG|0644, st_size=58317, ...}) = 0
old_mmap(NULL, 58317, PROT_READ, MAP_PRIVATE, 26, 0) = 0xb644d000
close(26)                               = 0
access("/etc/ld.so.nohwcap", F_OK)      = -1 ENOENT (No such file or directory)
open("/lib/tls/i686/cmov/libdvdcss.so.2", O_RDONLY) = 26
stat64("/lib/tls/i686/sse2", 0xbffa4020) = -1 ENOENT (No such file or directory)
munmap(0xb645e000, 58317)               = 0
open("/usr/lib/xine/plugins/1.1.1/", O_RDONLY) = 26
read(26, "\177ELF\1\1\1\3\3\1\360\20"..., 512) = 512
fstat64(26, {st_mode=S_IFREG|0644, st_size=28736, ...}) = 0
Opening the man page of strace, one will find scores of options. For example, if you use the -t option, strace will prefix each line of the trace with the time of day. One can even specify which system calls to trace using the -e option. For example, to trace only the open() and close() system calls, one can use the command:

$ strace -o strace.totem -e trace=open,close totem
The ubiquitous strace should not be confused with DTrace, which ships with Sun Solaris. strace is a single tool that takes care of one small task: tracing a single program. Sun’s DTrace toolkit is much more powerful and consists of a collection of scripts which can track, tune, and aid the user in troubleshooting one’s system in real time. Moreover, dtrace is a scripting language with a close resemblance to C/C++ and awk. Put another way, the strace tool in GNU/Linux provides only one of the many functions provided by DTrace in Sun Solaris. That being said, strace plays an important part in helping the user troubleshoot programs by providing a view of the system calls that a program makes to the Linux kernel.

pgrep command for listing PIDs

pgrep is a command-line utility that looks through the currently running processes and lists on stdout the process IDs that match the selection criteria. All the criteria given as parameters to pgrep have to match.

$ pgrep fi
— List the PIDs of processes whose names match “fi”.
$ pgrep -l fi
— Same as above, but the process names are listed as well.
$ pgrep -vl fi
— List all processes whose names do not match “fi”.
$ pgrep -xl fi
— List all processes whose names exactly match “fi”.
$ pgrep -f sbin
— List all processes running from some sbin folder; the full command line is checked, not just the process name.
$ pgrep -c fi
— Show the count of matching processes.
$ pgrep -d, fi
— List the matching process IDs separated by commas.
$ pgrep -t tty1
— List the processes controlled by terminal tty1.
$ pgrep -u root,jo
— List all processes owned by root or jo.
$ pgrep -u jo gnome
— List processes whose names match “gnome” and that are owned by jo.
$ pgrep -n fi
— Show only the newest matching process.
$ pgrep -o fi
— Show only the oldest matching process.
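As a quick self-contained check, you can start a background process with a distinctive command line and watch pgrep find it (the 300-second sleep is just a throwaway target):

```shell
# Start a throwaway background process to search for.
sleep 300 &
target=$!

# -f matches against the full command line, so "sleep 300" works:
pgrep -f "sleep 300"

# Count the matches (at least 1 while the sleep is running):
pgrep -c -f "sleep 300"

# Clean up the demo process.
kill "$target"
```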