
If you have Avast Antivirus and Mozilla Thunderbird, there is a known incompatibility between the two. Thunderbird creates temporary HTML files to send HTML email, and Avast “grabs” them for scanning as soon as they’re created, which locks Thunderbird out of its own temporary file and triggers an error message about a file that cannot be attached.

You will need to exclude the “nsemail” files generated by Thunderbird from being touched by Avast. To do this, open Avast, go to Settings, go to Active Protection, and change the settings for File System Shield.

Pick the “Exclusions” option on the left side. Go to the bottom of the list and there will be a blank space to type a new exclusion into. Paste or type this string in there:

*\nsemail*.*

Click OK and close Avast. Thunderbird can now send messages without being cut off by Avast. Send a test email to yourself to be sure it works.

I get it. You want to be “interesting” or “artsy” or “experimental” with your filmmaking. You think that doing something differently will help your work stand out. This is sometimes true; there are many films that make use of unusual shots or otherwise “break the rules” of cinematography and stand out because of it. Rules in any art form are more properly thought of as guidelines, and skillful bending of the rules with a good cinematographic reason to do so can be used to spectacular effect.

The problem is not that you’re trying to be different and to make your film more interesting. The problem is that you don’t understand what you’re doing and you’re doing it so very horribly wrong.

I don’t watch as many films as I probably should, but two recent watches of mine (“Dear White People” and “Comet”) have violated the basic rules of film composition in a way that both aggravates and concerns me, because the last thing I want to see is this nonsense becoming a common trend in new independent films.

What cardinal sin did these films commit? What has incensed this blog poster so much that something simply had to be said?

Intentionally disrespecting the rule of lead room.

For those who don’t know, lead room is the space in a shot where the “energy” of the shot is directed. If a ball is rolling, a person is talking, a car is moving, or even if someone is simply looking out of a window, a basic rule of composition is that some empty space should be seen in that direction. While both of these films have plenty of well-framed shots, there are an intolerable number of intentionally poorly-framed shots too, shots which detract from the movie significantly. All too often, one character will be talking to another character, but instead of showing the “right-facing” character in the left half of the frame, they place them in the right half (or even the right third) of the frame instead, leaving no lead room and tons of dead space behind the character.

Perhaps the directors felt like these choices would increase viewer interest in the film. Perhaps they were experimenting to see if intentionally doing things wrong would be well-received. Perhaps they’re just being edgy for the sake of being edgy (because it’s totally hipster to break the rules, you guys, didn’t you know that?).

Regardless of the motivations for breaking fundamental rules of composition, every last one of these shots is visually annoying and amateurish. Directors, take note: trying to be edgy and hipster with your cinematography is the easiest way to make your movie suck.

Stop breaking the rules. Give your actors head room and lead room, and avoid creating dead space. If you break these rules without an extremely important reason for doing so, you’ll hurt your otherwise interesting and enjoyable film, guaranteed. Your film should be interesting enough to speak for itself without edgy, rebellious cinematography. Your film should stand out because of your camera work, not in spite of it.

I enjoyed watching the two films mentioned, but the frequent and intentional use of crappy framing was persistently distracting and (in the case of “Dear White People”) almost drove me to shut the film off and do something else.

The problem of finding and handling duplicate files has been with us for a long time. Since late 1999, the de facto answer to “how can I find and delete duplicate files?” for Linux and BSD users has been a program called ‘fdupes’ by Adrian Lopez. This venerable staple of system administrators is extremely handy when you’re trying to eliminate redundant data to reclaim disk space, clean up a code base full of copy-pasted files, or delete photos you’ve accidentally copied from your digital camera to your computer more than once. I’ve been quite grateful to have it around, particularly in customer data recovery scenarios where every possible copy of a file is recovered and the final set contains thousands of unnecessary duplicates.

Unfortunately, development on Adrian’s fdupes had, for all practical purposes, ground to a halt. From June 2014 to July 2015, the only significant functional change to the code was a modification that let it compile on Mac OS X. The code’s stagnation has shown itself in real-world tests: in February 2015, Eliseo Papa published “What is the fastest way to find duplicate pictures?”, which benchmarks 15 duplicate file finders (including an early version of my fork, which we’ll ignore for the moment), places the original fdupes dead last in speed, and shows it to be heavily CPU-bound rather than I/O-bound. In fact, Eliseo’s tests show that fdupes takes at least 11 times longer to run than 13 of the other duplicate file finders in the benchmark!

As a heavy user of the program on fairly large data sets, I had noticed the poor performance of the software and became curious as to why it was so slow for a tool that should simply be comparing pairs of files. After inspecting the code base, I found a number of huge performance killers:

  1. Tons of time was wasted waiting on progress to print to the terminal
  2. Many performance-boosting C features weren’t used (static, inline, etc)
  3. A couple of one-line functions were very “hot,” adding heavy call overhead
  4. Using MD5 for file hashes was slower than other hash functions
  5. Storing MD5 hashes as strings instead of binary data was inefficient (see the sketch after this list)
  6. A “secure” hash like MD5 isn’t needed; matches get checked byte-for-byte
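
To make items 4 through 6 concrete, here is a minimal sketch (not the actual fdupes code) of the difference: an MD5 digest stored as a hex string needs a byte-by-byte strcmp() over 30-plus characters for every comparison, while a 64-bit hash stored as an integer compares in a single instruction on a 64-bit machine.

/* Illustrative sketch only; the real code is more involved. */
#include <stdint.h>
#include <string.h>

/* Hashes kept as hex strings: each comparison walks up to 33 bytes. */
static int hashes_match_string(const char *md5_a, const char *md5_b)
{
    return strcmp(md5_a, md5_b) == 0;
}

/* Hashes kept as 64-bit integers: a single compare instruction. */
static int hashes_match_int(uint64_t hash_a, uint64_t hash_b)
{
    return hash_a == hash_b;
}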

I submitted a pull request to the fdupes repository that solved these problems in December 2014. Nothing from the pull request was discussed on GitHub and none of the fixes were incorporated into fdupes. I emailed Adrian to discuss the changes with him directly, and while there was some interest in a few of them, in the end nothing was changed and my emails went unanswered.

It seemed that fdupes development was doomed to stagnation.

In the venerable tradition of open source software, I forked it and gave my new development tree a different name to distinguish it from Adrian’s code: jdupes. I solved the six big problems outlined above with these changes:

  1. Rather than printing progress indication for every file examined, I added a delay counter to drastically reduce terminal printing. This was a much bigger deal when using SSH.
  2. I switched the code and compilation process to use C99 and added relevant keywords to improve overall performance.
  3. The “hot” one-line functions were converted to #define macros, chopping their function call overhead in half (sketched just after this list).
  4. (Also covers 5 and 6) I wrote my own hash function (appropriately named ‘jody_hash’) and replaced all of the MD5 code with it, resulting in a benchmarked speed boost of approximately 17%. The resulting hashes are passed around as a 64-bit unsigned integer, not an ASCII string, which (on 64-bit machines) reduces hash comparisons to a single compare instruction.
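
Two of those changes are simple enough to sketch. The code below is illustrative rather than the real jdupes code, with simplified names: the #define shows a hot one-line helper turned into a macro so each call site pays no function call, and the delay counter only lets the progress indicator print once every DELAY_COUNT files rather than on every file.

/* Illustrative sketch only; names and structures are simplified. */
#include <stdio.h>
#include <stdint.h>

struct file_info { uintmax_t size; };  /* stand-in for the real file record */

/* A hot one-line helper as a macro: the comparison is pasted inline
 * at each call site instead of going through a function call. */
#define IS_SAME_SIZE(a, b) ((a)->size == (b)->size)

/* Throttle progress output so the terminal (or a slow SSH session)
 * is only written to once every DELAY_COUNT files. */
#define DELAY_COUNT 256

static void update_progress(uintmax_t files_scanned)
{
    static unsigned int delay = 0;

    if (++delay < DELAY_COUNT) return;  /* skip most updates */
    delay = 0;
    fprintf(stderr, "\rScanned %ju files", files_scanned);
}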

After making all of these changes in the fork and enjoying the massive performance boost they brought, I felt motivated to keep looking for potential improvements. I didn’t realize at the time that a simple need to eliminate duplicate files more quickly would morph into spending the next half-year ruthlessly digging through the code for ways to make things better. Between the initial pull request that led to the fork and Eliseo Papa’s article, I managed to get a lot done.

At this point, Eliseo published his February 19 article on the fastest way to find duplicates. I did not discover the article until July 8 of the same year, by which time jdupes was at least three versions ahead of the one he tested. I was initially disappointed with where jdupes stood in the benchmarks relative to some of the other programs, but even that early jdupes code (version 1.51-jody2) was much faster than the original fdupes at the same job.

1.5 months into development, jdupes was 19 times faster in a third-party test than the code it was forked from.

Nothing will make your programming efforts feel more validated than seeing something like that from a total stranger.

Between the article being published and my finding it, I had continued to make heavy improvements.

When I found Eliseo’s article from February, I sent him an email inviting him to try out jdupes again:

I have benchmarked jdupes 1.51-jody4 from March 27 against jdupes 1.51-jody6, the current code in the Git repo. The target is a post-compilation directory for linux-3.19.5 containing 63,490 files, with 664 duplicates in 152 sets. A “dry run” was performed first to ensure all files were cached in memory and to remove variance due to disk I/O. The results were as follows:

$ ./compare_fdupes.sh -nrq /usr/src/linux-3.19.5/
Installed fdupes:
real 0m1.532s
user 0m0.257s
sys 0m1.273s

Built fdupes:
real 0m0.581s
user 0m0.247s
sys 0m0.327s

Five sequential runs were consistently close (about ± 0.020s) to these times.

In half a year of casual spare-time coding, I had made fdupes 32 times faster.

There’s probably not a lot more performance to be squeezed out of jdupes today. Most of my work on the code has settled into adding new features and improving Windows support. In particular, Windows has supported hard-linked files for a long time, and jdupes now takes full advantage of that support. I’ve also made the progress indicator much more informative. At this point, I consider the majority of my efforts complete. jdupes has even been picked up as an available package in Arch Linux.
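
For reference, hard links on Windows are created through the Win32 CreateHardLink() call. Here is a minimal sketch of that mechanism, not jdupes’s actual linking code (which handles many more edge cases):

/* Minimal sketch: create a hard link on Windows with the Win32 API.
 * CreateHardLink() takes the new link name first, the existing file second. */
#include <windows.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s existing_file new_link\n", argv[0]);
        return 1;
    }
    if (!CreateHardLink(argv[2], argv[1], NULL)) {
        fprintf(stderr, "CreateHardLink failed (error %lu)\n", GetLastError());
        return 1;
    }
    return 0;
}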

The work on jdupes has benefited other projects as well. Improving jody_hash has been a fantastic help, since I also use it in other programs such as winregfs and imagepile, and I can see the string_table allocator being useful in other projects that don’t need to free() string memory until the program exits. Most importantly, working on jdupes has improved my programming skills tremendously; I have learned far more than I imagined could come from improving such a seemingly simple file management tool.
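
For the curious, the idea behind that allocator can be sketched in a few lines. The names below are hypothetical, not the actual jdupes string_table API: strings are copied into large chunks, individual strings are never freed, and all memory is reclaimed at once when the process exits.

/* Sketch of an arena-style string allocator; hypothetical names, not the
 * real jdupes string_table code. Stored strings live until the process exits. */
#include <stdlib.h>
#include <string.h>

#define CHUNK_SIZE (64 * 1024)

struct str_chunk {
    struct str_chunk *next;
    size_t used;
    char data[CHUNK_SIZE];
};

static struct str_chunk *chunk_list = NULL;

/* Copy a string into the arena and return a pointer that remains valid
 * for the life of the program; no per-string free() is ever required. */
char *string_store(const char *s)
{
    size_t len = strlen(s) + 1;

    if (len > CHUNK_SIZE) return NULL;  /* oversized strings not handled here */
    if (chunk_list == NULL || chunk_list->used + len > CHUNK_SIZE) {
        struct str_chunk *c = malloc(sizeof(struct str_chunk));
        if (c == NULL) return NULL;
        c->next = chunk_list;
        c->used = 0;
        chunk_list = c;
    }
    char *p = chunk_list->data + chunk_list->used;
    memcpy(p, s, len);
    chunk_list->used += len;
    return p;
}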

If you’d like to use jdupes, feel free to download one of my binary releases for Linux, Windows, and Mac OS X. You can find them here.
