A friend and colleague, Jeremy Self, with whom I've been working for the past few weeks on a project that will probably be released shortly, told me about some interesting results from profiling he was doing on a lazily evaluated data structure:
"Did you know a try/except is faster than a hasattr? I was a little surprised by that."
I didn't know this, and I was very surprised by it, because it seemed contrary to everything I knew about the speed of try/except in Python. But I also knew that if Jeremy told me something, it was because he had profiled it. I still couldn't quite believe his results, so I went and had a look myself. I quickly cooked up a hasatt function and a tryexc function, each attempting to look up an attribute that did exist on the object:
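The two functions looked roughly like this; the class and attribute name here are stand-ins of mine, not the objects from Jeremy's project:

```python
class Thing:
    """A minimal stand-in object carrying one attribute."""
    x = 1

obj = Thing()

def hasatt():
    # Ask first with hasattr, then touch the attribute.
    if hasattr(obj, 'x'):
        return obj.x
    return None

def tryexc():
    # Just reach for the attribute and catch the failure if it isn't there.
    try:
        return obj.x
    except AttributeError:
        return None
```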
>>> timeit hasatt()
1000000 loops, best of 3: 610 ns per loop
>>> timeit tryexc()
1000000 loops, best of 3: 550 ns per loop
Interesting results! He seemed to be correct. But I knew that this was only the easy half of the test for the try/except code. When the attribute actually exists, no exception is thrown, and the mechanics of generating, throwing, and catching the exception are where I expected try/except to be slow. I rewrote my functions to look for an attribute I knew wouldn't be on the object, and tried again:
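The rewritten functions looked something like this; again, the object is a stand-in of mine, with no attribute named y:

```python
class Thing:
    """A minimal stand-in object; note it has no attribute 'y'."""
    x = 1

obj = Thing()

def hasatt():
    # hasattr returns False on every call: the attribute never exists.
    if hasattr(obj, 'y'):
        return obj.y
    return None

def tryexc():
    # The lookup fails every time, so an AttributeError must be generated,
    # raised, and caught on each call -- the expensive path.
    try:
        return obj.y
    except AttributeError:
        return None
```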
>>> timeit hasatt()
1000000 loops, best of 3: 1.61 us per loop
>>> timeit tryexc()
100000 loops, best of 3: 3.14 us per loop
So, the secret is revealed: try/except is roughly twice as slow when the attribute isn't present.
What is the lesson of this story? There are two. The first, and more important, is that Jeremy was right. He profiled his code against the typical data it would be receiving and noticed that try/except was faster, and it will indeed be faster when the vast majority of objects (say, 95%) have the attribute you're looking for.
The second lesson is that you need to be careful about drawing generalizations when you're doing simple profiling. Think about the conditions of your tests and how the data you're using might impact the speed of the routines you are profiling!
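To make that second lesson concrete, here is a minimal sketch of how one might compare the two approaches across different hit rates. All the names here are my own; the idea is simply that the winner depends on how often the attribute is actually present:

```python
import timeit

class WithAttr:
    x = 1

class WithoutAttr:
    pass

def make_data(hit_rate, n=1000):
    # Build a dataset where hit_rate of the objects carry the attribute.
    cutoff = int(n * hit_rate)
    return [WithAttr() if i < cutoff else WithoutAttr() for i in range(n)]

def run_hasattr(data):
    for obj in data:
        if hasattr(obj, 'x'):
            obj.x

def run_tryexc(data):
    for obj in data:
        try:
            obj.x
        except AttributeError:
            pass

for rate in (1.0, 0.5, 0.0):
    data = make_data(rate)
    t_has = min(timeit.repeat(lambda: run_hasattr(data), number=100, repeat=3))
    t_try = min(timeit.repeat(lambda: run_tryexc(data), number=100, repeat=3))
    print(f"hit rate {rate:.0%}: hasattr {t_has:.4f}s  try/except {t_try:.4f}s")
```

Running something like this against data shaped like your real workload tells you far more than either single-case microbenchmark above.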