Yesterday, I was chatting on Slack with fellow RC members about object-oriented programming and the Python language, when Paul Gowder brought up a prank he had written. It's supposed to create a security hole and suppress errors, so that it becomes impossible to find bugs.
I started analyzing it line by line, with the help of Paul, Leo Torres and Sean Martin.
`class foo(str)` declares `foo` to be a subclass of `str`, which means `foo` will do everything `str` does, plus anything else you add to it.
Line 4: Here, we add the `__call__` functionality to `foo`.
Line 6: We are calling `exec` on `self`, which interprets the string as executable Python code.
Line 7: The `except Exception` part would suppress any errors we might get.
Line 10: Adding `str = foo` replaces the standard implementation of `str` with our new `foo`. This line is important for the code to be malware, because everything created with `str()` will actually be a `foo()`, which means that your strings created with `str()` are now callable. If they're called, they're executed as Python code.
Line 18: If we do `evil()`, we end up running `'print "EVIL"'`, which is interpreted as the Python code `print "EVIL"`, which then just prints `EVIL`.
In a nutshell, anything that gets converted to a string with `str()` is turned into a function that you can call. One little typo, entering the name of a string variable rather than a function, and you've just executed whatever random code is contained in the string.
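To make that typo scenario concrete, here is a minimal sketch (the class is repeated so the snippet is self-contained, and the variable `username` and its contents are made up for illustration):

```python
class foo(str):
    def __call__(self):
        try:
            exec(self)           # the string's contents run as code
        except Exception:
            pass                 # ...and any resulting error vanishes

str = foo                        # every str() now produces a callable

username = str("alice")          # looks like a perfectly normal string
# Typo: we meant to call some function, but typed the variable name instead.
username()                       # execs the bare name "alice": NameError,
                                 # silently swallowed by except Exception
```

No crash, no traceback, nothing: the mistake leaves no trace at all.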
The stack trace you get won’t give you any obvious indication that what you called was a string. It’ll just throw errors related to whatever it is that you put in the string. Or, if there’s something that will actually run in the string, then it’ll just execute, and God only knows what happens then.
If our application reads a value with `username = input()`, and the user inputs not a username but malicious code, this code ends up being run. Also, the `except Exception` part would suppress any errors we get from calling nonsense that way.
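A sketch of that injection scenario, with the user's "input" hard-coded so the snippet is self-contained (the payload and the `pwned` list are hypothetical stand-ins for whatever an attacker could actually reach):

```python
class foo(str):
    def __call__(self):
        try:
            exec(self)
        except Exception:
            pass

str = foo

pwned = []                                # stands in for attacker-reachable state
# Instead of input(), hard-code what a malicious user might type:
username = str('pwned.append("gotcha")')
username()                                # the "username" runs as code
print(pwned)                              # shows the payload really executed
```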
It kind of felt like music composition.