Wednesday, January 13, 2010

Deleting duplicate mails in Evolution

Found a useful blog post on the web describing how to delete duplicate mails in Evolution.
You need to install the evolution-dev package (sudo apt-get install evolution-dev) before building the Evolution plugin, otherwise it will not configure.
Tried and tested on Ubuntu 8.10.

Introduction to "Kernel Panic" aka "OOPS"

My two cents on Linux kernel (2.6) crashes, based on my limited experience and my understanding of loosely coupled online materials and the kernel docs.

Here I'm trying to track down the reason(s) for the crash of a test kernel module named ERROR_MODULE.

=== Decoding the Kernel Panic message or OOPS dump ===

Unable to handle kernel NULL pointer dereference at virtual address 00000000

/* All addresses below PAGE_SIZE are trapped by the CPU's MMU as probable NULL pointer dereferences. The PAGE_SIZE is the size of a virtual page on a CPU. Obtaining the value to which a pointer refers is called dereferencing the pointer. A null-pointer dereference takes place when a pointer with a value of NULL is used as though it pointed to a valid memory area */

printing eip: /* Instruction pointer */

c01cc141 /* Address of instruction the CPU was executing at the time it died */

*pde = 00000000

/* Page Directory Entry: points to a page table and also contains bit flags about the relevant region of memory. On the Intel x86 architecture, starting from the 386, the PDE is a 32-bit entry in the PDT that contains a pointer to the page table describing a 4MB region of the process address space. The offset of that 4MB region within the address space is determined by the offset of the PDE within the PDT. Bits 0-11 of a PDE contain the same bit flags that a Page Table Entry contains; the remaining bits, 12-31, hold the physical address of the page table for that PDE, which is valid or not according to the bit flags. Here *pde = 00000000 means no page table is present. */

OOPS: 0002 [#1]

/* The OOPS error code can be decoded on x86 as:
 * bit 0 == 0 means no page found, 1 means protection fault
 * bit 1 == 0 means read, 1 means write
 * bit 2 == 0 means kernel-mode, 1 means user-mode
 * [#n] means this is the nth OOPS message, because one OOPS can lead to further OOPSes */

Modules linked in: ERROR_MODULE pcmcia ... ... ...   /* List of modules loaded at the time of the oops */

CPU:   0      /* The CPU the oops occurred on; '0' on a single-processor machine */

EIP: 0060:[<c01cc141>] Tainted: PF VLI
/* EIP shows the code segment (0060) and the instruction address.
 * Tainted flags:
 * P --> a proprietary module has been loaded
 * F --> a module was force loaded */

EFLAGS: 00010296 (2.6.x)
/* The CPU flags register; (2.6.x) is the version of the kernel that produced the oops. The flag layout is architecture dependent. */

EIP is at memset+0x11/0x30
/* Format: function + offset/total_length [module_name]. The crash occurred 0x11 bytes into memset, whose body is 0x30 bytes long. */

eax: defc1c04   ebx: 0000005c   ecx: defc1d4c ... ... ...
/* General purpose CPU register dump */

Process pccardd (pid: 3407, threadinfo=defc0000 task=de958a60)
/* The process context the kernel was executing in */

Stack: 0000000a ce3a0000 00000000 e01fd66a defc1c04 ... ... ...

/* Finally, the system gives you a stack dump: the contents of the kernel stack at the time of the crash. The stack holds local variables, saved hardware registers, and the return addresses that record the sequence of routines called in the program, as well as state saved for interrupts until they can be serviced. */

Call Trace: [e01fd66a] ERROR_FUNCTION+0x631/0xa4f [ERROR_MODULE]
[c0114ccf] scheduler_tick+0x1df/0x460
[c01262eb] __rcu_process_callbacks+0x4b/0xf0
... ... ...

Code: a5 f6 c2 01 74 01 a4 89 ... ... ...

/* The call trace, shown together with some of the machine code at the death point, is basically a list of the functions the process was in at the moment of the OOPS. The raw numeric values are almost completely useless on their own, because they depend on your particular kernel build; only somebody with access to the corresponding symbol map for that kernel can identify the actual names of the functions. */

Bad EIP value
/* "Bad EIP value" means the instruction pointer itself points at an invalid address, so the kernel cannot even dump the code around it. This can result from jumping through a corrupt function pointer, or from a hardware issue, e.g. a faulty PCI add-on controller. */

Finally, the kernel either panics or continues running with compromised reliability, since it can no longer detect or use the affected piece of hardware correctly through the base kernel modules.

=== OOPS tracing in the source code ===

1. The software used is GDB.

2. Steps taken while compiling the kernel-module source (ERROR_FILE.c etc.):
(a) Compile the modules to be debugged with the -g option, which adds debugging information to the compiled modules.
(b) In particular, use -g while compiling the source file containing the faulting function (decoded from the stack dump), i.e. ERROR_FUNCTION().
(c) Do not strip the modules to be debugged.
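As a sketch, steps (a)-(c) can be captured in a Kbuild makefile for the test module. The file and module names below are the placeholders used in this document, and EXTRA_CFLAGS is the 2.6-era way to pass extra compiler flags to Kbuild:

```make
# Hypothetical Kbuild makefile for the test module (names are placeholders)
obj-m := ERROR_MODULE.o
ERROR_MODULE-objs := ERROR_FILE.o

# Steps (a)/(b): add debugging information to every object in the module
EXTRA_CFLAGS += -g
```

Step (c) then amounts to not running strip on the resulting objects; a typical build invocation against the running kernel's source tree is `make -C /lib/modules/$(uname -r)/build M=$PWD modules`.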

3. Reproduce the problem and get the stack dump:
In the call trace section, find the last call on the stack for the function and module that caused the failure. Since it is a stack dump, the topmost function is the one we are looking for.
NOTE: Skip kernel-defined functions such as memset, because this document is limited to tracking the problem down to the user-defined test module only.
 So we get: [e01fd66a] ERROR_FUNCTION+0x631/0xa4f [ERROR_MODULE]

4. Now run the following GDB commands.

GNU gdb Red Hat Linux (6.1post-1.20040607.43.0.1rh)
Copyright 2004 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are welcome to change it and/or distribute copies of it under certain conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB. Type "show warranty" for details.
This GDB was configured as "i386-redhat-linux-gnu"...Using host libthread_db library "/lib/tls/".

/* Now we will load the symbol table from the object file in which ERROR_FUNCTION is defined, i.e. the one built from ERROR_FILE.c, so load the object file ERROR_FILE.o for symbols. */

(gdb) add-symbol-file ERROR_FILE.o
add symbol table from file "ERROR_FILE.o" at
(y or n) y
Reading symbols from ERROR_FILE.o...done.

(gdb) disassemble ERROR_FUNCTION
Dump of assembler code for function ERROR_FUNCTION:
0x0000094d <ERROR_FUNCTION+0>:      inc       %esp
0x0000094e <ERROR_FUNCTION+1>:      and       $0x24,%al
0x00000950 <ERROR_FUNCTION+3>:      add       0x28(%esp),%eax
... ... ...

/* Now we can map the instruction addresses in the assembly back to the C code. The starting address of ERROR_FUNCTION is 0x0000094d. From step 3, we have the offset of the faulting instruction within ERROR_FUNCTION, i.e. 0x631. Adding the two gives the address of the instruction causing the crash:
0x0000094d + 0x00000631 = 0x00000f7e
Since we now have the instruction's address, we can list the corresponding source. */

(gdb) list *0x00000f7e
... ... ... ...
RESULT: /* You will get the code segment, along with the line number of the operation corresponding to offset 0xf7e, that is causing the ERROR_MODULE to crash. */