When TI’s CDCE62002 fails to lock

I really banged my head on this one: I was sure I had set up all the registers correctly, and still I got complete garbage at the output. Or, as some investigation showed, everything worked OK, except that the PLL didn’t seem to do anything: The VCO was stuck at its lowest possible frequency (which depended on whether I had selected the higher- or lower-frequency VCO).

At first I thought that there was something wrong with my reference clock. But it was OK.

Only after a while did I realize that the PLL needs to be recalibrated after the registers are set. Unlike the CDCE906, the CDCE62002 wasn’t intended to be programmed after powerup. The by-design use is to program the EEPROM once, and then just power the device up. That way, the correct values are loaded from the EEPROM into RAM, after which calibration takes place with the correct parameters.

Solution: Power down the device by clearing bit 7 in register #2, write the desired values into registers #0 and #1, and then power up again by setting bit 7 (and bit 8, regardless) in register #2. This way the device wakes up as if the registers were loaded from EEPROM, and runs its calibration routine correctly.

What I still don’t understand is why I have to do this twice. Unless I repeat the ritual above some 100 ms after the first time, the VCO now seems to get stuck at its highest frequency. Repeating it within microseconds doesn’t help.

I’ve written a Verilog module to handle this. Basically, send_data should be asserted during one clock cycle, and the parameter inputs should be held steady for some 256 clock cycles afterwards. In my own use of this module, those inputs are tied to constant values.

This is not an example of best Verilog coding techniques, but since I didn’t care about either slice count or timing here, I went for the quickest solution, even if it’s a bit dirty. And it works.

Note that the module’s clock frequency should not exceed 40 MHz: the SPI clock is derived by dividing it by two, and the maximal SPI clock allowed by the spec is 20 MHz. And again, for this to really work, send_data has to be asserted twice, with some 100 ms or so between assertions. I’ll check with TI about this.
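
In the module below, these two register #2 writes appear as word2 = 28'h000_0100 (power down, bit 7 clear) and word3 = 28'h000_0180 (power up, bits 7 and 8 set), so the bit-7/bit-8 recipe above maps directly to the constants at the bottom of the listing.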

module cdce62002
  (
   input          clk, // Maximum 40 MHz
   input 	  reset, // Active high

   output 	  busy,

   input          send_data,
   output reg     spi_clk, spi_le, spi_mosi,
   input          spi_miso,  // Never used

   // The names below match those used in pages 22-24 of the datasheet

   input  INBUFSELX,
   input  INBUFSELY,
   input  REFSEL,
   input  AUXSEL,
   input  ACDCSEL,
   input  TERMSEL,
   input [3:0]  REFDIVIDE,
   input [1:0]  LOCKW,
   input [3:0]  OUT0DIVRSEL,
   input [3:0]  OUT1DIVRSEL,
   input   HIPERFORMANCE,
   input   OUTBUFSEL0X,
   input   OUTBUFSEL0Y,
   input   OUTBUFSEL1X,
   input   OUTBUFSEL1Y,

   input  SELVCO,
   input [7:0]  SELINDIV,
   input [1:0]  SELPRESC,
   input [7:0]  SELFBDIV,
   input [2:0]  SELBPDIV,
   input [3:0]  LFRCSEL
   );

   reg [7:0] 	  out_pointer;
   reg 		  active;

   wire [255:0]   data_out;
   wire [255:0]   le_out;
   wire [27:0] 	  word0, word1, word2, word3;
   wire [27:0] 	  ones = 28'hfff_ffff;

   // synthesis attribute IOB of spi_clk is true;
   // synthesis attribute IOB of spi_le is true;
   // synthesis attribute IOB of spi_mosi is true;

   // synthesis attribute init of out_pointer is 0 ;
   // synthesis attribute init of active is 0 ;
   // synthesis attribute init of spi_le is 1;

   assign 	  busy = (out_pointer != 0);

   // "active" is necessary because we don't rely on getting a proper
   // reset signal, and out_pointer is subject to munching by the
   // synthesizer, which may result in nasty things during wakeup

   always @(posedge clk or posedge reset)
     if (reset)
       begin
	  out_pointer <= 0;
	  active <= 0;
       end
     else if (send_data)
       begin
	  out_pointer <= 1;
	  active <= 1;
       end
     else if ((spi_clk) && busy)
       out_pointer <= out_pointer + 1;

   always @(posedge clk)
     begin
	if (spi_clk)
	  begin
	     spi_mosi <= data_out[out_pointer];
	     spi_le <= !(le_out[out_pointer] && active);
	  end
	spi_clk <= !spi_clk;
     end 

   assign data_out = { word3, 4'd2, 2'd0, // To register #2 again.
		       64'd0, // Dwell a bit in power down
		       word1, 4'd1, 2'd0,
		       word0, 4'd0, 2'd0,
		       word2, 4'd2, 4'd0
		       };

   assign le_out = { ones[27:0], ones[3:0], 2'd0,
		     64'd0, // Dwell a bit in power down
		     ones[27:0], ones[3:0], 2'd0,
		     ones[27:0], ones[3:0], 2'd0,
		     ones[27:0], ones[3:0], 4'd0 };

   assign word0[0] = INBUFSELX;
   assign word0[1] = INBUFSELY;
   assign word0[2] = REFSEL;
   assign word0[3] = AUXSEL;
   assign word0[4] = ACDCSEL;
   assign word0[5] = TERMSEL;
   assign word0[9:6] = REFDIVIDE;
   assign word0[10] = 0; // TI trashed external feedback
   assign word0[12:11] = 0; // TI's test bits
   assign word0[14:13] = LOCKW;
   assign word0[18:15] = OUT0DIVRSEL;
   assign word0[22:19] = OUT1DIVRSEL;
   assign word0[23] = HIPERFORMANCE;
   assign word0[24] = OUTBUFSEL0X;
   assign word0[25] = OUTBUFSEL0Y;
   assign word0[26] = OUTBUFSEL1X;
   assign word0[27] = OUTBUFSEL1Y;

   assign word1[0] = SELVCO;
   assign word1[8:1] = SELINDIV;
   assign word1[10:9] = SELPRESC;
   assign word1[18:11] = SELFBDIV;
   assign word1[21:19] = SELBPDIV;
   assign word1[25:22] = LFRCSEL;
   assign word1[27:26] = 2'b10; // Read only bits   

   // word2 and word3 are both sent to register #2 in order to
   // restart the PLL calibration after registers are set.

   assign word2 = 28'h000_0100; // Power down
   assign word3 = 28'h000_0180; // Exit powerdown
endmodule

Fedora 12: Creating an initramfs image for self-compiled kernel

So I compiled the kernel I downloaded from kernel.org like I’ve always done, but the system wouldn’t boot, and it had good reasons not to: My root filesystem is both encrypted and RAID-5'ed, which requires, at the very least, a password to be entered. That job has to be done by some script which runs before my root filesystem is mounted. So obviously, a clever environment is necessary at that stage.

Before trying to fiddle with the existing image, I figured there must be a script creating that image. And indeed there is. It’s called dracut, and it seems to be what Fedora uses for its distribution kernels.

So, as root (otherwise an internal ldconfig call fails, not that I know what effect that has), go:

# dracut initramfs-2.6.35.4-ELI1.img 2.6.35.4-ELI1

This created a 92 MByte compressed image file. Viewing the image (uncompressed and unpacked into files), it turns out that 308 MB out of the 324 MB this image occupies go to kernel modules. Nice, but way too much. It also makes the stage between leaving GRUB and being prompted for the password take something like two minutes (!), during which a blank screen is shown. But eventually the system booted up and ran normally.

So this initramfs is definitely sufficient, but it carries too much junk along the way. Solution: Use the -H flag, so that only the modules necessary for this particular computer are included. This is maybe dangerous in case of a hardware change, but it reduced the image size to 16 MBytes, which is slightly larger than the distribution initramfs (12 MBytes). Which I couldn’t care less about, in particular since the RAM is freed when the real root file system is mounted.

I ran dracut again while the target kernel was running. I don’t know if this has any significance.

# dracut -H initramfs-2.6.35.4-ELI1.img 2.6.35.4-ELI1

Dissection

And as a final note, I’d just mention, that if you want to know what’s inside that image, there’s always

$ lsinitrd initramfs.img

or open the image: Get yourself to a directory which you don’t care about filling with junk, and go:

$ zcat initramfs-2.6.35.4-ELI1.img | cpio -i -d -H newc --no-absolute-filenames

To build an image back, go to the root of the directory tree to pack, and run:

$ find . -print0 | cpio --null -ov --format=newc | gzip -9 > ../initramfs.img

(note the verbose flag, so all files are printed out)

This may not be necessary, though, as recent versions of dracut support injecting custom files and other tweaks.
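
For example, something along these lines should inject a directory of extra files into the image (the source directory and its in-image target path are made up by me, and I haven’t checked these flags against the dracut that ships with Fedora 12, so consult dracut’s man page first):

# dracut -H --include /usr/local/initramfs-extras /extras initramfs-2.6.35.4-ELI1.img 2.6.35.4-ELI1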

Setting up an encrypted ext4 disk image with dm_crypt

This script turns the file given as its argument into an encrypted, ext4-formatted disk image.
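
The backing file itself has to exist before the script is run. Something like this should do (the path matches the example below; the 1 GB size is an arbitrary choice of mine):

$ dd if=/dev/zero of=/storage/diskimages/thefile bs=1M count=1024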

After running the script, you can do something like:

losetup /dev/loop0 /storage/diskimages/thefile
cryptsetup luksOpen /dev/loop0 myfakedisk
mount /dev/mapper/myfakedisk /path/to/mountpoint

And then close with

umount /path/to/mountpoint
cryptsetup luksClose myfakedisk
losetup -d /dev/loop0

The operations above and the script below must be run as root. This means that you can mess things up heavily, including wiping your disk, if you don’t know what you’re doing, or because of a mistake of mine. Be sure you’ve proofread the script below, and that you know what you’re doing. Don’t blame me, even if I got the script wrong.

If you ever think about modifying this script, please note that the most dangerous scenario is the script failing to bind the image file to the loop device (say, because that device is already bound to something else), yet carrying on anyhow. In that case it will really wipe important data without any warning. Note the first “if” statement. That’s what guards against this.

#!/bin/bash

# Usage (as root!): make_enc_ext4.sh imagefile

myloop=`losetup -f`
mymapper=temporary_$$

if losetup $myloop $1 ; then
  echo Using loop device $myloop
  echo ALL DATA IN $1 WILL BE LOST  

  if ! cryptsetup luksFormat $myloop ; then
    echo Did not set up LUKS on image
    losetup -d $myloop
    exit 1;
  fi

  echo Now mapping the encrypted loop device. Enter the same passphrase

  if ! cryptsetup luksOpen $myloop $mymapper ; then
    echo Failed to map the image. Probably you entered the passphrase
    echo wrong. Just run this script again.
    losetup -d $myloop
    exit 1;
  fi

  echo $myloop is now mapped to $mymapper

  if ! mkfs.ext4 /dev/mapper/$mymapper ; then
    echo Failed to create an ext4 filesystem on the image

    cryptsetup luksClose $mymapper
    losetup -d $myloop
    exit 1;
  fi

  if ! tune2fs -c 0 -i 0 /dev/mapper/$mymapper ; then
    echo Failed to cancel automatic fsck on the disk
  fi

  cryptsetup luksClose $mymapper

  echo Done. You should now be able to do something like
  echo losetup $myloop $1
  echo cryptsetup luksOpen $myloop myfakedisk
  echo mount /dev/mapper/myfakedisk /path/to/mountpoint
  echo Then close with
  echo cryptsetup luksClose myfakedisk
  echo losetup -d $myloop
else
  echo Failed to set up loop device for file \"$1\"
  exit 1;
fi

losetup -d $myloop

Cinelerra: When YUV4MPEG fails with mencoder

Somewhere in the region of version r31061-4.4.3, a bug in mplayer and mencoder broke caching when the input arrives through a pipe. This causes issues when trying to play streams, or when rendering an edited video in Cinelerra in YUV4MPEG-to-pipe mode.

The common behavior during rendering is that mencoder quits immediately or very soon, because it thinks end-of-file has been reached, and shortly afterwards Cinelerra gets a SIGPIPE telling it nobody is listening to its output pipe. In the error log one gets

YUVStream::write_frame(utint8_t**): write_frame() failed: system error (failed/write)

A simple way to check whether your mplayer/mencoder build is to blame is to try

$ mplayer < clip.avi

and then

$ cat clip.avi | mplayer - -cache 8192

If the first works, and the second doesn’t, you have the buggy mplayer.

This issue was solved on May 26th 2010, so just upgrade your mplayer/mencoder suite. For example, with version r31628-4.4.4 all works fine again.

Gnome workaround: Downloading a MOV file from Canon 500D

One of the things I love about fancy GUI interfaces is that they work as long as things are easy, and always fail at the critical moments.

Downloading a 4 GB video clip from my Canon 500D to a Fedora 12 machine, using the File Manager (nautilus?), was no different. As usual, when I plugged in the camera, I got the nice camera icon on the desktop. I browsed my way to the right folder and copied the images to my disk just by dragging and dropping. How easy, how sweet. Too bad it didn’t work for the video clip.

Solution: Good old command-line utilities. That’s the way it always ends.

First unmount the Camera from the desktop (right-click the icon, pick Unmount). Otherwise, you get

[eli@desk videotests]$ gphoto2 -L
*** Error ***
An error occurred in the io-library ('Could not lock the device'): Camera is already in use.
*** Error (-60: 'Could not lock the device') ***

Then, in the command line window, let’s list the files available for download:

[eli@desk videotests]$ gphoto2 -L 

There is no file in folder '/'.
There is no file in folder '/store_00020001'.
There is no file in folder '/store_00020001/DCIM'.
There are 378 files in folder '/store_00020001/DCIM/100CANON'.
#1     IMG_6335.JPG               rd  3123 KB 4752x3168 image/jpeg
#2     IMG_6336.JPG               rd  3896 KB 4752x3168 image/jpeg
#3     IMG_6337.JPG               rd  3809 KB 4752x3168 image/jpeg
#4     IMG_6338.JPG               rd  3863 KB 4752x3168 image/jpeg
#5     IMG_6339.JPG               rd  2815 KB 4752x3168 image/jpeg
...
#372   MVI_6729.MOV               rd 67651 KB video/quicktime
#373   MVI_6730.MOV               rd 126006 KB video/quicktime
#374   MVI_6731.MOV               rd 81930 KB video/quicktime
#375   MVI_6732.MOV               rd 101169 KB video/quicktime
#376   MVI_6733.MOV               rd 105895 KB video/quicktime
#377   MVI_6734.MOV               rd 92356 KB video/quicktime
#378   MVI_6739.MOV               rd 4181560 KB video/quicktime
There is no file in folder '/store_00020001/MISC'.

It’s the last file, number 378, that I want. So:

[eli@desk videotests]$ gphoto2 -p 378
Downloading 'MVI_6739.MOV' from folder '/store_00020001/DCIM/100CANON'...
Saving file as MVI_6739.MOV
[eli@desk videotests]$ ls -lh
total 4.0G
-rw-rw-r--. 1 eli eli 4.0G 2010-09-06 12:30 MVI_6739.MOV

Which took some 9 minutes (for a 20-minute 1280x720 clip).

And while we’re at it, here’s the command I used to convert it to a DivX that Cinelerra likes to work with (give or take MPEG4 glitches here and there):

[eli@desk videotests]$ ffmpeg -i MVI_6739.MOV -acodec pcm_s16le -b 5000k -vcodec mpeg4 -vtag XVID was_4gb.avi

I know, I lost some quality there, but my Cinelerra version still doesn’t handle the sowt audio codec thing well. And the target file was 820 MB instead of the original 4 GB.

Or, if you really want to work seriously, and the size of the file doesn’t matter, convert it to the safest choice, MJPEG:

[eli@desk videotests]$ ffmpeg -i MVI_6739.MOV -acodec pcm_s16le -b 50000k -vcodec mjpeg -vtag MJPG mjpeg.avi

UNISIM and command-line simulation with the Xilinx simulator

I simulate models outside Xilinx’s IDE (known as ISE), since the simulation is textual anyhow. Besides, regression tests are only meaningful if the simulation settings are repeated exactly, which is hard to guarantee when stray mouse clicks may change things without our full awareness.

Anyhow, my problem was that I instantiated a Xilinx synthesis primitive within one of my modules (a block RAM to be precise) and for some reason, the tools didn’t like it. Here’s my little war story. Spoiler: I won.

This is the original makefile:

SIMNAME=simulation
PLDIRECTORY=../src/PLverilog/

PLSOURCES=bits2alpha trajectory modulator transmitter dualrom67
SOURCES=glbl test_tx $(addprefix $(PLDIRECTORY), $(PLSOURCES))

TOPLEVEL=test_tx 

VERILOGS=$(addsuffix .v, $(SOURCES))
#LIBS=$(addsuffix _lib, $(SOURCES))
#LIBINARG=$(foreach source, $(SOURCES), -lib $(source)_lib)

all:	clean
	vlogcomp $(VERILOGS)
	fuse -top $(TOPLEVEL) -top glbl -o $(SIMNAME).exe
	$(SIMNAME).exe -tclbatch simcommands.tcl

clean:
	rm -f `find . -name "*~"`
	rm -rf isim isim.tmp_save isimwavedata.xwv
	rm -f isim.log $(SIMNAME).exe simulate_dofile.lo*
	rm -f out.*

Running a compilation, all Verilog sources compile properly, but when it’s fuse’s turn (the linker, I suppose) I got:

fuse -top test_tx  -top glbl -o simulation.exe
Release 9.2.03i - ISE Simulator Fuse J.39

Copyright (c) 1995-2007 Xilinx, Inc.  All rights reserved.

ERROR:HDLParsers:3482 - Could not resolve instantiated unit RAMB16_S18_S18 in
   Verilog module work/dualrom67 in any library

ERROR:Simulator:198 - Failed when handling dependencies for module test_tx
make: *** [all] Error 2

And yes, I did instantiate a block RAM in one of the modules, RAMB16_S18_S18 explicitly. Somehow I got the idea that I needed to use the unisim library, so I added the “-lib unisim” option to fuse, and got this instead:

fuse -lib unisim -top test_tx  -top glbl -o simulation.exe
Release 9.2.03i - ISE Simulator Fuse J.39

Copyright (c) 1995-2007 Xilinx, Inc.  All rights reserved.

ERROR:Simulator:170 - unisim/VPKG is not compiled properly. Please recompile
   unisim/VPKG in file "" without -incremental option.

ERROR:Simulator:198 - Failed when handling dependencies for module test_tx
make: *** [all] Error 2

What now? There is a VPKG module installed (namely {ISE install directory}/vhdl/src/unisims/unisim_VPKG.vhd), but it’s in VHDL. I could compile that one. But I found it much cooler to copy {ISE install directory}/verilog/src/unisims/RAMB16_S18_S18.v into my home directory, and add RAMB16_S18_S18 to the SOURCES in the makefile above (and drop the -lib unisim, of course).
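
In concrete terms, the change boils down to something like this (a sketch; I’m assuming the copied .v file ends up where vlogcomp will pick it up, e.g. in the directory the makefile is run from):

$ cp {ISE install directory}/verilog/src/unisims/RAMB16_S18_S18.v .

and then, in the makefile:

SOURCES=glbl test_tx RAMB16_S18_S18 $(addprefix $(PLDIRECTORY), $(PLSOURCES))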

And that did the job.

Lesson learned: Don’t listen to the recommendations in error messages (as if that was new). Just copy the model you need.

GIMP Curves: Cleaning up old settings

Each and every time you use the Curves function in GIMP 2.6, it saves that setting and labels it with the time it was used. The same color curve can then be used again, just by picking that time in the Presets drop-down menu within the Curves dialog box. This is a great feature, since it’s common to want to repeat a good curve setting, even if it wasn’t clear how good it was when it was made.

Anyhow, there’s a little problem: The list gets very large after a while. Presets saved by name will most likely appear last, making them effectively unavailable.

The presets are stored as LISP code in a file called ~/.gimp-2.6/tool-options/gimp-curves-tool.settings (the tilde means “your home directory”). This is great news, because editing or clearing this file (possibly deleting it altogether) allows you to clean up the list.

But even better, it looks like these LISP expressions can be copied into a script, to repeat a Curves operation. I’ve discussed GIMP scripts here, so you may want to give it a try. I haven’t tried to adapt these curve settings into a script yet, because I haven’t had the need. If you’re successful with this, please leave a comment below.

Canonizing PCAD netlist files

OK, so the board designer just sent me updated schematics of the design. Are there any changes? Comparing the schematics themselves is hopeless. So I’ll compare the PCAD netlist files (those with a .NET extension). I mean, they are simple text files, after all.

The problem is that Orcad feels free to change the order in which the nets appear in the file, and also the order of the pins connected to each net. So using diff to compare the two files gives a lot of false positives.

Solution: Sort the net descriptions, and the connections within each net, in both files. Diffing the outputs then yields the true changes.

The Perl script is below. Even though the output looks OK to me, I wouldn’t think of using it for PCB manufacturing. But I suppose it’s pretty safe to say that whatever turns up in a diff of the canonized files sums up the changes made.

#!/usr/bin/perl
use warnings;
use strict;

our @connlist;

local $/; # Slurp mode

my $netlist = <>;

my ($parts, $nets) = ($netlist =~ /(.*?)^(nets[ \t\n\r]*.*)/msi);

$nets =~ s/^([a-zA-Z_0-9]+)[ \t\n\r]*=[ \t\n\r]*(.*?);[ \t\r\n]*/canonize($1, $2)/gmse;

print ($parts, $nets);
print "%\n%Canonized nets below\n%\n";
print sort @connlist;

sub canonize {
  my ($net, $connections) = @_;

  my $out = $net;

  $out .= ' = ';

  my @conns = sort ($connections =~ /([a-zA-Z0-9]+\/[a-zA-Z0-9]+)/g);

  # Sanity check. Remove everything recognized and whitespaces.
  # We should be left with nothing.
  $connections =~ s/[a-zA-Z0-9]+\/[a-zA-Z0-9]+//g;
  my @bads = ($connections =~ /([^ \n\r\t]+)/g);

  foreach my $bad (@bads) {
    warn("Ignored token $bad for net \"$net\".\n");
  }

  $out .= join(' ', splice (@conns, 0, 5));
  while (@conns) {
    $out .= "\n       ".join(' ', splice (@conns, 0, 5));
  }
  $out .= ";\n";

  push @connlist, $out;

  return '';
}

To use it, go something like:

$ canonize_netlist.pl REV01.NET > canon01.net
$ canonize_netlist.pl REV02.NET > canon02.net
$ diff canon01.net canon02.net | less

Or use one of the GUI-oriented diff applications (I use WinMerge on Windows, for example).

Mangling win32 executables with a hex editor

This is a short note about how to make small manipulations in executables or DLLs in order to get rid of malware-like behaviour: for example, when some application pops up a dialog box which I’d like to eliminate. It can also be the final step in cracking (which is highly recommended as an educational experience).

Keep in mind that getting something useful done with this technique requires a very good understanding of assembly language, and how high-level languages are translated into machine code.

The general idea is to hook up a debugger (Microsoft Visual Studio’s will do, using Tools > Debug Processes), and try to get a breakpoint exactly where the bad thing happens. Then, after verifying that there is a clear relation between that point in the code and the undesired behavior, use the debugger to skip it a few times, in order to be sure that this is the fix. Tracing the API calls can be very helpful in finding the crucial point, as I’ve explained in another post. But if the offending behavior involves some message box (even if the popup only announces the issue), odds are that the critical point can be found by looking at the call stack when attaching the debugger to the process, with the popup still open. Look for the frame where the caller is the application itself (or the related DLL/OCX).

Lastly, the code must be changed in the original file. Fortunately, this can be done with a hex editor, since no CRC check is performed on executables.

One thing to bear in mind is that the code is possibly relocated when loaded into memory. During this relocation, absolute addresses in the machine code are adjusted to point at the right place. This is why an opcode which contains an absolute address can’t just be changed to NOPs: The loader will do bad things there.

A crucial step is to match the memory viewed in the debugger with a position in the file. First we need to know where the EXE, DLL or OCX is mapped in memory. The Dumper application from the WinAPIOverride32 suite gives the mapped addresses for each component of a process, for example. The file is typically mapped more or less linearly into memory, with the first byte going to the first mapped address. An application such as the PE Viewer (sources can be downloaded from here) can be helpful in getting a closer look at the Portable Executable data structure, but this is usually not necessary.

Once the hex data in the file matches what we see in the debugger, we’re left with pinpointing the position in the hex editor and making the little change. There are a few classic, simple tricks for manipulating machine code:

  • Put a RET opcode at the beginning of the subroutine which does the bad thing. RET’s opcode is 0xC3. Eliminating the call itself is problematic, since the loader may fiddle with the addresses.
  • Put NOPs where some offending operation takes place. NOP’s opcode is 0x90. Overwrite only code which contains no absolute addresses.
  • Insert JMPs (opcode 0xEB). This is cool in particular when there is some kind of JNE or JE branching between desired and undesired behavior, or just to skip some piece of code. This is a two-byte instruction, in which the second byte is a signed offset telling how far to jump. Offset zero means the jump lands on the next instruction, i.e. it behaves like a NOP. See the worked example right after this list.
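
For example (made-up bytes and a made-up label, not taken from any particular binary): suppose the disassembly around the critical point reads

75 1C      jnz  skip_the_nag

Patching the first byte from 0x75 to 0xEB turns the conditional jump into an unconditional one, so the nag code is always skipped. Alternatively, patching both bytes to 90 90 (two NOPs) makes execution always fall through into it. Neither patch touches an absolute address, so relocation is not an issue.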

When the mangling is done, I suggest opening the application with the debugger again, to see that the disassembly makes sense and that the change is correct. Keeping the breakpoint set is also good for catching the critical event again, and seeing that the program flows correctly from there on.

Tracing API calls on Windows

Linux has ltrace. Windows has…? I was looking for applications to trace DLL calls, so I could tell why a certain application goes wrong. The classic way is to get hints from library calls. Or system calls. Or both.

In the beginning, I was put off by the idea that most trackers only support the basic system DLLs (kernel32 and friends), but I soon found out that one gets loads of information about what the application is up to through them alone.

I found a forum discussion about such tools, which led me to the WinAPIOverride32 application (“the tracker” henceforth), which is released under the GPL. It doesn’t come with a fancy installer, so you have to run WinAPIOverride32.exe yourself and read the help file, WinAPIOverride.chm. Having looked at a handful of commercial and non-commercial alternatives, I have to say that it’s excellent. The only reason I looked further is that it didn’t run on Windows 2000, which is an understandable drawback, but still a problem for me.

The documentation is pretty straightforward about what to do, but here’s a short summary anyhow:

You can pick your target application by running it from the tracker, by hooking onto a live process by picking its process number, or, even better, by dragging an icon over the target process’ window. Then click the green “play” button at the upper left. If you chose to launch the process from the tracker, you set up the modules immediately after the launch, and then resume operation by confirming a dialog box. As a side feature, this allows starting a process just to halt it immediately, letting a debugger attach to it, and then resuming. A bit ugly, but effective (with Microsoft Visual Studio, for example).

Module filters set which calls are included or excluded, depending on the module calling them. One problem with including modules such as kernel32 is that several calls are made while the hooks are being installed (by the tracker itself), so the log explodes with calls even while the target application is paused. This is solved by using the default exclusion list (NotHookedModuleList.txt). Just be sure to have “Use list” checked, and both Apply for Monitoring and Apply for Overriding set. Or hell breaks loose.

At this point, the idea is to select which API calls are monitored. Click the torch. There’s a list of monitor files, which basically contain the names of the DLLs to be hooked and the function prototypes. One can pinpoint which functions to monitor or not, but the general idea is that the monitor files are sorted according to the subjects which the calls cover (registry, I/O, etc.).

Choosing kernel32 will give a huge amount of data, which reveals more or less what the target application was doing. Monitoring “reg” is also useful, as it reveals registry access. Specific other DLLs can be helpful as well.

When huge amounts of data come out, the log keeps running even after monitoring has been stopped. If this goes on for a while, a dialog box opens, saying that unhooking seems to be taking too long, and asking whether it should keep waiting. Answering “no” cuts the log right there, possibly causing the target application to crash. Whether to chop it right there or not is an open question: the critical event is already over, yes, but we’re not sure whether it has been logged.

To make things easier, just minimize the tracker’s window, since it’s mainly the display of the log which slows things down. Quit the target application, wait for the popup announcing that the process has unloaded, and then look at the log.

A very nice thing is that it’s possible to create monitor files automatically with the MonitoringFileBuilder.exe application, also in the bundle. Pick an application and create a monitor file for all its special DLLs, or pick a DLL to monitor its calls. The only problem with these files is that since the information about the function prototypes is missing, parsing the arguments is impossible.

It’s possible to write the call logs to an XML file or just a simple text file, of course. The only annoying thing is that the output format is 16-bit Unicode. Notepad takes this easily, but simple text editors don’t.

In short, it’s a real workhorse. And it just happened to help me solve a problem I had. This is the classic case where free software written by the person who uses it takes first prize over several applications written to be sold.

I should also mention Dumper.exe, which attaches to any process and lets you not only dump and modify the process’ memory on the fly, but also see which DLL is mapped where in memory (useful when reading in-memory raw assembly with a debugger). It also displays the call stack for each thread, which is sometimes more instructive than Microsoft’s debugger (is that really a surprise?).

Since I also had a brief look at other tools, I wrote down my impressions below. They may not be so accurate.

Other things I had a look at

There’s SpyStudio API monitor, which I had a look at (it isn’t free software, by the way: it’s free as in free beer, but not as in freedom). Its main drawback is that it logs only specific functions, and doesn’t appear to allow easily applying hooks to a massive number of functions. In other words, one needs to know in advance which functions are relevant, which isn’t useful when one wants to find out what an application is doing.

I also had a look at API monitor, which was completely useless to me, since it doesn’t allow adding command line arguments when launching a new process. Not to mention that their trial version completely stinks (buy me! buy me!): everything useful was blocked in it. I wonder if the real version is better. That was an application I gladly uninstalled.

API sniffers with names such as Kerberos and KaKeeware Application Monitor seem to include trojan horses, according to AVG. Didn’t take the risk.

Rohitab API Monitor v1.5 (which I picked, since v2 is marked Alpha) wouldn’t let me start a new process, and since I was stupid enough to monitor all calls on all processes, this brought a nasty computer crash (when I killed the tracker). After a correspondence with the author, it turns out that the version is 10 years old, and it is possible to start a process with arguments. Then I tried v2. I would summarize it like this: It indeed looks very pretty, and seems to have a zillion features, but somehow I didn’t manage to get the simplest things done with it. Since I don’t want to rely on contacting the author for clarifications all the time, I don’t see it as an option.

Auto Debug appears pretty promising. It’s not free in any way, though, and even though it caught all kernel32 calls, and has neat dissection capabilities, I couldn’t see how to create a simple text output of the whole log. Maybe because I used the application in trial mode.

The Generic Tracker looks very neat, and it’s a command line application, which makes me like it even more. I didn’t try it though, because it allows tracking only four functions (as it’s based upon break points). But it looks useful for several other things as well.