
The problem of finding and handling duplicate files has been with us for a long time. Since the end of the year 1999, the de facto answer to “how can I find and delete duplicate files?” for Linux and BSD users has been a program called ‘fdupes’ by Adrian Lopez. This venerable staple of system administrators is extremely handy when you’re trying to eliminate redundant data to reclaim some disk space, clean up a code base full of copy-pasted files, or delete photos you’ve accidentally copied from your digital camera to your computer more than once. I’ve been quite grateful to have it around–particularly when dealing with customer data recovery scenarios where every possible copy of a file is recovered and the final set ultimately contains thousands of unnecessary duplicates.

Unfortunately, development on Adrian’s fdupes had, for all practical purposes, ground to a halt. From June 2014 to July 2015, the only significant functional changes to the code were modifications to make it compile on Mac OS X. The code’s stagnant nature has definitely shown itself in real-world tests; in February 2015, Eliseo Papa published “What is the fastest way to find duplicate pictures?”, an article that benchmarks 15 duplicate file finders (including an early version of my fork, which we’ll ignore for the moment), places the original fdupes dead last in operational speed, and shows it to be heavily CPU-bound rather than I/O-bound. In fact, Eliseo’s tests say that fdupes takes a minimum of 11 times longer to run than 13 of the other duplicate file finders in the benchmark!

As a heavy user of the program on fairly large data sets, I had noticed the poor performance of the software and became curious as to why it was so slow for a tool that should simply be comparing pairs of files. After inspecting the code base, I found a number of huge performance killers:

  1. Tons of time was wasted waiting on progress to print to the terminal
  2. Many performance-boosting C features weren’t used (static, inline, etc)
  3. A couple of one-line functions were very “hot,” adding heavy call overhead
  4. Using MD5 for file hashes was slower than other hash functions
  5. Storing MD5 hashes as printable strings instead of binary data made every hash check a slow string comparison (see the sketch after this list)
  6. A “secure” hash like MD5 isn’t needed; matches get checked byte-for-byte
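
To make points 5 and 6 concrete, here is a minimal sketch (not the actual fdupes code, and with illustrative names and values) of the difference between comparing hashes stored as printable hex strings and comparing them as raw 64-bit integers:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Hash stored as an ASCII hex string (an MD5 digest is 32 hex chars plus NUL). */
static int hashes_match_string(const char *a, const char *b)
{
    return strcmp(a, b) == 0;   /* walks up to 33 bytes per comparison */
}

/* Hash stored as a raw 64-bit integer. */
static int hashes_match_u64(uint64_t a, uint64_t b)
{
    return a == b;              /* a single compare on 64-bit machines */
}

int main(void)
{
    const char *h1 = "9e107d9d372bb6826bd81d3542a419d6";
    const char *h2 = "9e107d9d372bb6826bd81d3542a419d6";
    uint64_t n1 = 0x9e107d9d372bb682ULL, n2 = 0x9e107d9d372bb682ULL;

    printf("string match: %d, 64-bit match: %d\n",
           hashes_match_string(h1, h2), hashes_match_u64(n1, n2));
    return 0;
}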


In December 2014, I submitted a pull request to the fdupes repository which solved these problems. Nothing from the pull request was discussed on GitHub and none of the fixes were incorporated into fdupes. I emailed Adrian to discuss my changes with him directly and there was some interest in certain changes, but in the end nothing was changed and my emails became one-way.

It seemed that fdupes development was doomed to stagnation.

In the venerable tradition of open source software, I forked it and gave my new development tree a new name to differentiate it from Adrian’s code: fdupes-jody. I solved the six big problems outlined above with these changes:

  1. Rather than printing a progress indication for every file examined, I added a delay counter to drastically reduce terminal printing (a rough sketch of the idea appears after this list). This was a much bigger deal when using SSH.
  2. I switched the code and compilation process to use C99 and added relevant keywords to improve overall performance.
  3. The “hot” one-line functions were changed to #define macros, chopping their function call overhead in half.
  4. (Also covers 5 and 6) I wrote my own hash function (appropriately named ‘jody_hash’) and replaced all of the MD5 code with it, resulting in a benchmarked speed boost of approximately 17%. The resulting hashes are passed around as a 64-bit unsigned integer, not an ASCII string, which (on 64-bit machines) reduces hash comparisons to a single compare instruction.
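
As a rough illustration of change 1, here is a minimal, self-contained sketch of the throttled-progress idea; the function name and interval are made up for this example and are not the actual fdupes-jody code:

#include <stdio.h>

#define DELAY_INTERVAL 256   /* hypothetical: print at most once per 256 files */

/* Print a progress line only every DELAY_INTERVAL calls instead of every call. */
static void update_progress(unsigned long files_scanned)
{
    static unsigned long delay = 0;

    if (++delay < DELAY_INTERVAL) return;   /* skip most updates */
    delay = 0;
    fprintf(stderr, "\rProgress: %lu files scanned", files_scanned);
}

int main(void)
{
    for (unsigned long i = 1; i <= 100000; i++)
        update_progress(i);
    fputc('\n', stderr);
    return 0;
}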


After making all of these changes in the fork and enjoying the massive performance boost they brought about, I felt motivated to continue looking for potential improvements. I didn’t realize at the time that a simple need to eliminate duplicate files more quickly would morph into me spending the next half-year ruthlessly digging through the code for ways to make things better. Between the initial pull request that led to the fork and Eliseo Papa’s article, I managed to get a lot done.


At this point, Eliseo published his February 19 article on the fastest way to find duplicates. I did not discover the article until July 8 of the same year (at which time fdupes-jody was at least three versions ahead of the one being tested), so I was initially disappointed with where fdupes-jody stood in the benchmarks relative to some of the other tested programs. Even so, the early fdupes-jody code (version 1.51-jody2) was absolutely stomping the original fdupes.

1.5 months into development, fdupes-jody was 19 times faster than the fdupes code it was forked from.

Nothing will make your programming efforts feel more validated than seeing something like that from a total stranger.

Between the publication of the article and my discovery of it, I had continued to make heavy improvements.


When I found Eliseo’s article from February, I sent him an email inviting him to try out fdupes-jody again:

I have benchmarked fdupes-jody 1.51-jody4 from March 27 against fdupes-jody 1.51-jody6, the current code in the Git repo. The target is a post-compilation directory for linux-3.19.5 with 63,490 files and 664 duplicates in 152 sets. A “dry run” was performed first to ensure all files were cached in memory and to remove variance due to disk I/O. The benchmark results were as follows:

$ ./ -nrq /usr/src/linux-3.19.5/
Installed fdupes:
real    0m1.532s
user    0m0.257s
sys     0m1.273s

Built fdupes:
real    0m0.581s
user    0m0.247s
sys     0m0.327s

Five sequential runs were consistently close (about ± 0.020s) to these times.

In half a year of casual spare-time coding, I had made fdupes 32 times faster.

There’s probably not a lot more performance to be squeezed out of fdupes-jody today. Most of my work on the code has settled into adding new features and improving Windows support. In particular, Windows has supported hard-linked files for a long time, and I’ve taken full advantage of that support. I’ve also made the progress indicator much more informative. At this point, I consider the majority of my efforts complete. fdupes-jody has even been included as an available program in Arch Linux.
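
For the curious, hard linking on Windows goes through the Win32 CreateHardLink() call. The snippet below is only a minimal standalone demonstration of that API with made-up file names; it is not the duplicate-linking code from fdupes-jody:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Make "copy.txt" an NTFS hard link to the existing "original.txt". */
    if (!CreateHardLink("copy.txt", "original.txt", NULL)) {
        fprintf(stderr, "CreateHardLink failed: error %lu\n", GetLastError());
        return 1;
    }
    puts("Hard link created.");
    return 0;
}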

The efforts undertaken in fdupes-jody have paid off for other projects as well. Improving jody_hash has been a fantastic help since I also use it in other programs such as winregfs and imagepile. I can see the potential for using the string_table allocator in other projects that don’t need to free() string memory until the program exits. Most importantly, working on fdupes-jody has improved my programming skills tremendously, and I have learned far more than I could have imagined from improving such a seemingly simple file management tool.
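
To give a sense of what the string_table idea looks like, here is a minimal sketch of an allocator that copies strings into large blocks and frees everything in one shot when the program is done; the names and block size are illustrative, not the actual fdupes-jody implementation:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define BLOCK_SIZE 65536   /* hypothetical block size */

struct block {
    struct block *next;
    size_t used;
    char data[BLOCK_SIZE];
};

static struct block *head = NULL;

/* Copy a string into the current block, allocating a new block when needed. */
static char *string_table_add(const char *s)
{
    size_t len = strlen(s) + 1;

    if (len > BLOCK_SIZE) return NULL;   /* oversized strings not handled here */
    if (head == NULL || head->used + len > BLOCK_SIZE) {
        struct block *b = malloc(sizeof(struct block));
        if (b == NULL) return NULL;
        b->next = head;
        b->used = 0;
        head = b;
    }
    char *dest = head->data + head->used;
    memcpy(dest, s, len);
    head->used += len;
    return dest;
}

/* Free every block at once; individual strings are never freed. */
static void string_table_destroy(void)
{
    while (head != NULL) {
        struct block *next = head->next;
        free(head);
        head = next;
    }
}

int main(void)
{
    char *a = string_table_add("/usr/src/linux/Makefile");
    char *b = string_table_add("/usr/src/linux/Kconfig");
    printf("%s\n%s\n", a, b);
    string_table_destroy();
    return 0;
}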

If you’d like to use fdupes-jody, feel free to download one of my binary releases for Linux, Windows, and Mac OS X. You can find them here.

By default, every version of Windows since XP creates thumbnail database files that store small versions of every picture in every folder you browse into with Windows Explorer. These files are used to speed up thumbnail views in folders, but they have some serious disadvantages:

  1. They are created automatically without ever asking you if you want to use them.
  2. Deleting an image file doesn’t necessarily delete it from the thumbnail database. The only way to delete the thumbnail is to delete the database (and hope you deleted the correct one…and that it’s not stored in more than one database!)
  3. They consume extra disk space; each database is relatively small, but they add up across the thousands of folders you browse.
  4. The XP-style “Thumbs.db” files (which Vista/7/8 also create when browsing network shares) and the Windows Media Center “ehthumbs_vista.db” files are marked as hidden, but if you make an archive (such as a ZIP file) or otherwise copy the folder into a container that doesn’t support hidden attributes, the database not only increases the size of the container, it also gets un-hidden!
  5. If you write software, these files can interfere with version control systems. They may also update the timestamp on the folder they’re in, causing some programs to think the data in the folder has changed when it really hasn’t.
  6. If you value your privacy (particularly if you handle any sort of sensitive information), these files leave information behind that can be used to compromise that privacy, especially in the hands of anyone with even a casual understanding of forensic analysis, be it a private investigator hired by your spouse or the authorities (police, FBI, NSA, CIA, take your pick).

To shut them off completely, you’ll need to change a few registry values that aren’t exposed through the normal control panels (and aren’t exposed through any built-in interface at all on editions below Pro, Enterprise, or Ultimate). Fortunately, someone has already created the necessary .reg files to turn the local thumbnail caches on or off in one shot. The registry file data was posted by Brink to SevenForums. The files at that page will disable or enable this feature locally. They will also stop (or allow) Windows Vista and higher from creating “Thumbs.db” files on all of your network drives and shares.

If you want to delete all of the “Thumbs.db” style files on a machine that has more than a couple of them, open a command prompt (Windows key + R, type “cmd” and hit Enter) and run the following commands (yes, the colon after the “a” is supposed to be followed by a space):

cd \

del /s /a: Thumbs.db

del /s /a: ehthumbs_vista.db

This will enter every directory on the system drive and delete all of the thumbnail database files it finds. You may see some errors while this runs; that is normal. If you have more drives that need to be cleaned, type the drive letter followed by a colon (such as “E:”) and hit Enter, then repeat the two commands above.

The centralized thumbnail databases for Vista and up are harder to find. You can open the folder quickly by going to Start, copy-pasting this into the search box with CTRL+V, and hitting Enter:

%LOCALAPPDATA%\Microsoft\Windows\Explorer

Close all other Explorer windows that you have open to unlock as many of the files as possible. Delete everything that you see with the word “thumb” at the beginning. Some files may not be deletable; if you really want to get rid of them, you can start a command prompt, start Task Manager, use it to kill all “explorer.exe” processes, then delete the files manually using the command prompt:

cd %LOCALAPPDATA%\Microsoft\Windows\Explorer

del thumb*

rd /s thumbcachetodelete

When you’re done, either type “explorer” in the command prompt, or in Task Manager go to File > New Task (Run)… and type “explorer”. This will restart your Explorer shell so you can continue using Windows normally.

I decided this month that it was time to look at replacing my AMD Phenom II X4 965 BE chip with something that could transcode high-definition video faster. In the end, I chose the AMD FX-9590 CPU. Arguments against the AMD FX-9590 on forums such as Tom’s Hardware and AnandTech include “power efficiency is too low/TDP is too high,” “Intel has higher/better instructions per clock (IPC),” and “Intel’s i7 performs so much better.” Notably, the price of obtaining that superior Intel performance was almost completely ignored in these discussions. Consider that the AMD FX-9590 retails for around $260 and the Intel Core i7-4770K it is often compared to costs $335; that $75 difference is enough cash to buy a cheap motherboard or a 120GB SSD, and it also represents a 29% price increase over the FX-9590. Does the i7-4770K really perform 29% better than the FX-9590? The short answer is “no.” The long exception to that otherwise straightforward answer is “unless you spend all of your time calculating Julia and Mandelbrot sets and the digits of pi.”

Over two years ago, I wrote an article about how AMD CPUs beat Intel CPUs hands down when you factor in the price you pay compared to the performance you get. Most of the arguments I received against my assertion attacked the single-figure synthetic benchmark (PassMark) I used to establish a value for CPU performance. This is understandable; synthetic benchmarks that boil down to “One Number To Rule Them All” don’t help you decide if a CPU is good for your specific workload. This time, I’ve sought out a more in-depth benchmark data set, which can be seen here. I compiled some of the relevant figures (excluding most of the gaming benchmarks) into a spreadsheet along with the Newegg retail price of each CPU as of 2014-10-23, used a dash of math to convert “lower is better” scores into an arbitrary “higher is better” value, and applied fixed multipliers per benchmark so they all fit into one huge graph, which can be downloaded here: cpu_performance_comparison.xls
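
As a rough illustration of the spreadsheet math (the numbers and the multiplier below are made up, not the actual benchmark figures), converting a “lower is better” score and scaling it by price looks something like this:

#include <stdio.h>

int main(void)
{
    const double multiplier = 1000.0;     /* arbitrary per-benchmark scale factor */
    const double encode_seconds = 95.0;   /* hypothetical "lower is better" result */
    const double price_usd = 260.0;       /* retail price of the CPU */

    /* Invert so that a faster (lower) time yields a higher value... */
    double higher_is_better = multiplier / encode_seconds;
    /* ...then divide by price to get performance per dollar. */
    double price_scaled = higher_is_better / price_usd;

    printf("value: %.3f, value per dollar: %.5f\n", higher_is_better, price_scaled);
    return 0;
}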

And now, ladies and gentlemen, the moment you’ve been waiting for: a graph of a wide variety of CPU benchmarks, scaled by the price you pay for each CPU (click to expand the image.)

(Graph: amd_fx-9590_vs_intel_core_i7) CPUs in each bar series are ordered by retail price in ascending order. The FX-9590 is in yellow on the left of each series, and Intel only has a CPU that beats the AMD offering in 4 out of 17 price-scaled benchmarks, most of which are synthetic and don’t represent typical real-world workloads.

AMD wins again.

Update: In case you needed more proof that the FX-9590 is the best encoding chip, someone sent me a few links to more x264 benchmarks: 1 2 3

