23 February 2014

Violent Python

Python is widely used in many fields, including maths, physics, engineering, scripting, web programming and, of course, security. Its power as a glue between many tools and programming languages makes it a perfect option for pentesting.

"Violent Python" scratches the surface of Python in the world of security tools programming. It's a correct book, actually a correct cookbook. Correct because, although the example programs are short and simple, they show Python in action in many security fields: geolocation, obfuscation, exploit development, network analysis and forgery, web scraping and a long etcetera.

The problem is that the book is just correct, because the example programs are not very pythonic. Although the code is simple and clear, Python offers smarter ways to do those things. Besides, the example programs are unambitious and don't go further than mere curiosities. In my opinion, the examples could have been more spectacular, and many more fields in security could have been covered.

I don't regret having bought "Violent Python", but maybe I'm a bit disappointed because the book is geared to people at an earlier point than me in the learning journey into security engineering. For those people this book is a fun and direct approach to security tools development.

15 February 2014

Testing your python code with unittest

When you are programming small applications, the development cycle tends to be code->manual_test->code->manual_test. The problem with this method is that, as your project grows in complexity, you have to spend more and more time testing it to be sure your latest changes don't have collateral effects in any part of your application. It is usual to forget to test things, or to believe they are OK after the latest changes, only to find that a part of your application that ran fine at the beginning of development was broken by a change some cycles ago without you realizing it.

Actually, manual testing is error-prone and inefficient, so when your project becomes complex you should automate your testing. One of the most widely used libraries for automated testing is unittest, present in Python since version 2.1. This library lets you prepare small scripts to test the behavior of your program's components.

If you want to use unittest to check your code, you'd better follow the TDD (Test-Driven Development) methodology. This method makes you write the test cases first: scripts that check a particular section of your code. These test cases are very useful because they force you to define the desired behavior and interfaces of your new functions. Once the tests are defined, and only then, you write your code keeping in mind that your target is to pass the tests. When the code is finished you put it to the test: if it passes you can enter your next development cycle (define tests, write code, execute tests), and if your code fails you fix it and try again until it succeeds.
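As a toy illustration of this cycle (slugify() is an invented example function, not from any real project), the test case below would be written first, pinning down the desired behavior, and the implementation would follow only to make it pass:

```python
import unittest

# In TDD this test is written first: it defines the desired behavior and
# interface of slugify() before a single line of it exists.
class TestSlugify(unittest.TestCase):

    def test_spaces_become_dashes(self):
        self.assertEqual(slugify("Violent Python"), "violent-python")

    def test_result_is_lowercase(self):
        self.assertEqual(slugify("UNITTEST"), "unittest")

# Only then is the code written, with passing the tests as its target.
def slugify(title):
    """Turn a title into a lowercase, dash-separated slug."""
    return "-".join(title.lower().split())

# Run the test case programmatically to close the cycle.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestSlugify)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

If the implementation broke one of the promised behaviors, the run would report a failure and we would fix the code and try again.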

I know that at first glance this method seems unnecessarily complex. What a developer wants is to code his application, not to spend time coding tests. That's why many developers hate this technique. But after you give it a try you really love it, because it gives you great confidence in your code. Once your tests are defined you only need to run them after a code change to be sure that the change didn't break anything in a remote spot of your code. Besides, if you are working on a project with collaborators, tests are a great way to be sure that a contribution really works.

Tests can check whatever we want in our application: its modules, its functions and classes, the GUI, etc. For example, if we were testing a web application we could combine unittest with Selenium to simulate a browser surfing our web, while if we were testing a Qt-based GUI we would use QTest.

When working with unittest we should keep in mind that our main building block will be test cases. A test case should be focused on testing a single scenario. In Python a test case is a class that inherits from unittest.TestCase. Test cases have this general structure:

    import unittest

    class TestPartOfCode(unittest.TestCase):

        def setUp(self):
            <test initialization>

        def test_something(self):
            <code to test something>
            self.assert... # All the asserts you need to be sure the correctness condition is met.

        def test_something_else(self):
            <code to test something else>
            self.assert... # All the asserts you need to be sure the correctness condition is met.

        def tearDown(self):
            <test shutdown>



You can make a test case executable by itself just by appending this at the end:

    if __name__ == '__main__':
        unittest.main()

If you don't do that you have to call your test case externally.

When unittest is run, it searches for all subclasses of unittest.TestCase and then executes every method of those subclasses whose name starts with "test". There are special methods like setUp() and tearDown(): setUp() is run prior to each test to prepare the test context, while tearDown() is run afterwards to remove that context.

Usually you don't have just one test case; you have a lot of them instead, to test every feature in your program. There are many approaches: in GUI applications you could have a test case for each window, and the methods of that test case would check every control in that window. Another good rule of thumb is to group together in a test case all the tests that share the same setUp() and tearDown() logic.

So you tend to have many test cases, and it is more efficient to load them externally to run them in batch mode. I think it is a good practice to keep your tests in a different folder than your code, for example in a "tests" folder inside your project's one. I usually place an empty "__init__.py" file inside that folder to make it a package. Let's suppose that is our case; to load and run the test cases you need a script to discover them (I usually call it "run_tests.py"):

    import sys
    import unittest

    def run_functional_tests(pattern=None):
        print("Running tests...")
        if pattern is None:
            tests = unittest.defaultTestLoader.discover("tests")
        else:
            pattern_with_globs = "%s*" % (pattern,)
            tests = unittest.defaultTestLoader.discover("tests", pattern=pattern_with_globs)
        runner = unittest.TextTestRunner()
        runner.run(tests)

    if __name__ == "__main__":
        if len(sys.argv) == 1:
            run_functional_tests()
        else:
            run_functional_tests(pattern=sys.argv[1])


This script is usually placed at the root of your project folder, at the same level as the tests directory. If it is called with no arguments it just enters the tests folder and loads every test case found inside whose filename starts with "test". If you call it with an argument, it uses it as a kind of filter to load only those test cases placed in Python files whose name starts with the given argument. This way you can run only a subset of your test cases.

With unittest you can test console and web applications, and even GUI ones. The latter are harder to test because access to GUI widgets depends on each implementation and the related tools provided by it. For instance, Qt's creators offer the QTest module to be used with unittest. This module lets you simulate mouse and key clicks.

So we could use a console or web example to detail how to use unittest, but as QTest tutorials (with PyQt) are so scarce, I want to contribute one of my own. That's why in this article we are going to develop test cases to check a PyQt GUI application. As the example's base we are going to use pyQTMake's source code. You'd better get the whole source code using Mercurial, as I explained in one of my previous articles. To clone the source code and set it to the version we are going to use, type the following in your Ubuntu console:
dante@Camelot:~$ hg clone https://borjalopezm@bitbucket.org/borjalopezm/pyqtmake/ example
requesting all changes
adding changesets
adding manifests
adding file changes
added 10 changesets with 120 changes to 74 files
updating to branch default
67 files updated, 0 files merged, 0 files removed, 0 files unresolved
dante@Camelot:~/Desarrollos$ cd example
dante@Camelot:~/Desarrollos/example$ hg update 9
0 files updated, 0 files merged, 0 files removed, 0 files unresolved
dante@Camelot:~/Desarrollos/example$

Ok, now that you have the source code, we are going to assess pyqtmake.py's code. Focus on the function "connections":

def connections(MainWin):
    ## TODO: This signals are connected using old way. I must change it to new way
    MainWin.connect(MainWin.ui.action_About,  SIGNAL("triggered()"),  MainWin.onAboutAction)
    MainWin.connect(MainWin.ui.actionLanguajes,  SIGNAL("triggered()"),  MainWin.onLanguagesAction)
    MainWin.connect(MainWin.ui.actionOpen,  SIGNAL("triggered()"),  MainWin.onOpenAction)
    MainWin.connect(MainWin.ui.actionPaths_to_compilers,  SIGNAL("triggered()"),  MainWin.onPathsToCompilersAction)
    MainWin.connect(MainWin.ui.actionPyQTmake_Help,  SIGNAL("triggered()"),  MainWin.onHelpAction)
    MainWin.connect(MainWin.ui.actionQuit,  SIGNAL("triggered()"),  MainWin.close)
    MainWin.connect(MainWin.ui.actionSave,  SIGNAL("triggered()"),  MainWin.onSaveAction)
    MainWin.connect(MainWin.ui.actionSave_as,  SIGNAL("triggered()"),  MainWin.onSaveAsAction)
    return MainWin

It looks like this bunch of code could be improved using the new style of PyQt signal connection. The point is that we don't want to break anything, so we are going to develop some test cases to be sure our new code performs like the old one.

These connections allow MainWin to reply to mouse clicks on widgets by opening the appropriate windows. Our tests should check that these windows are still opened correctly after our changes in the code.

The complete code for these tests is in test_main_window.py file inside tests folder.

To check our application, our tests first have to start it. Unittest has two main methods to prepare the context for our tests: setUp() and setUpClass(). The first method, setUp(), is run before every test in our test case, whereas setUpClass() is run only once, when the whole test case is created.

In this particular test case we are going to use setUp() to create the application every time we test one of its components:

    def setUp(self):
        # Initialization
        self.app, self.configuration = run_tests.init_application()
        # Main Window creation.
        self.MainWin = MainWindow()
        # SLOTS
        self.MainWin = pyqtmake.connections(self.MainWin)
        #EXECUTION
        self.MainWin.show()
        QTest.qWaitForWindowShown(self.MainWin)
        # self.app.exec_() # Don't call exec or your qtest commands won't reach
                           # widgets.

The QTest.qWaitForWindowShown() method stops execution until the awaited window is really active. If we didn't use it we could call for widgets that don't exist yet.

Our first test is going to be really simple:

    def test_on_about_action(self):
        """Push "About" menu option to check if correct window opened."""
        QTest.keyClick(self.MainWin, "h", Qt.AltModifier)
        QTest.keyClick(self.MainWin.ui.menu_Help, 'a', Qt.AltModifier)
        QTest.qWaitForWindowShown(self.MainWin.About_Window)
        self.assertIsInstance(self.MainWin.About_Window, AboutWindow)

QTest.keyClick() sends a key click to the specified widget. It can be used with key modifiers; in this case Qt.AltModifier means that we are simulating that the key is pressed at the same time as the Alt key. Why am I using a key simulation? Can't QTest simulate mouse clicks? Yes, it can. The problem is that QTest.mouseClick() can only interact with widgets, and menu items are not widgets (in Qt) but menu actions instead, so the only way to call them is to use their keyboard shortcuts (at least as far as I know).

The key call in every test is the "assert..." stuff. This family of methods checks that a specific condition is met: if so, the test is declared successful; if not, it is declared failed. There is a third exit state for a test, error, but this one only means that our test didn't run as expected and broke at some point.

In our example, self.assertIsInstance() checks, as its name suggests, that the About_Window attribute of MainWin actually is an instance of AboutWindow. If you study the tested slot, MainWin.onAboutAction(), this only happens when the called window is correctly opened, which is what we are testing.

Unittest offers a huge list of assert variants, among them: assertEqual() and assertNotEqual(), assertTrue() and assertFalse(), assertIs() and assertIsNone(), assertIn() and assertNotIn(), assertIsInstance(), assertAlmostEqual(), assertGreater() and assertLess(), and assertRaises().

Nevertheless, notice that only a small subset of them is included in older versions of Python; many of the variants appeared in Python 2.7.
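A quick sketch exercising some of these variants (the test case is invented for illustration):

```python
import unittest

class TestAssertVariants(unittest.TestCase):

    def test_collections(self):
        self.assertIn("b", ["a", "b", "c"])         # membership check
        self.assertNotIn("z", ["a", "b", "c"])

    def test_types_and_identity(self):
        self.assertIsInstance(3.14, float)          # isinstance() check
        self.assertIsNone(None)

    def test_numbers(self):
        self.assertAlmostEqual(0.1 + 0.2, 0.3)      # rounds the difference to 7 places
        self.assertGreater(10, 2)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestAssertVariants)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```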

If you want to test that your code raises exceptions as expected, you can use assertRaises().


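A sketch of assertRaises() in both of its forms (the context-manager form needs Python 2.7 or later; the test case is invented for illustration):

```python
import unittest

class TestExceptions(unittest.TestCase):

    def test_division_by_zero_classic(self):
        # Classic form: pass the callable and its arguments separately.
        self.assertRaises(ZeroDivisionError, lambda: 1 / 0)

    def test_division_by_zero_context(self):
        # Context-manager form (Python 2.7+): the tested code goes inside.
        with self.assertRaises(ZeroDivisionError):
            1 / 0

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestExceptions)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```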
At this point, if you run "run_tests.py" the test will be successful. TDD says you have to develop tests that fail at first, but here we are not developing code from scratch; we are modifying already working code, so it's not wrong to get a successful test here, as it confirms our test is correct.

To start modifying our code to include "new style" slot connections, we should comment out all the connections we want to change. To simplify our example we are going to modify only the first connection:

def connections(MainWin):
    ## TODO: This signals are connected using old way. I must change it to new way
    #MainWin.connect(MainWin.ui.action_About,  SIGNAL("triggered()"),  MainWin.onAboutAction)
    MainWin.connect(MainWin.ui.actionLanguajes,  SIGNAL("triggered()"),  MainWin.onLanguagesAction)
    MainWin.connect(MainWin.ui.actionOpen,  SIGNAL("triggered()"),  MainWin.onOpenAction)
    MainWin.connect(MainWin.ui.actionPaths_to_compilers,  SIGNAL("triggered()"),  MainWin.onPathsToCompilersAction)
    MainWin.connect(MainWin.ui.actionPyQTmake_Help,  SIGNAL("triggered()"),  MainWin.onHelpAction)
    MainWin.connect(MainWin.ui.actionQuit,  SIGNAL("triggered()"),  MainWin.close)
    MainWin.connect(MainWin.ui.actionSave,  SIGNAL("triggered()"),  MainWin.onSaveAction)
    MainWin.connect(MainWin.ui.actionSave_as,  SIGNAL("triggered()"),  MainWin.onSaveAsAction)
    return MainWin

Here is where "run_tests.py" fails, so we are at the correct point for TDD. From here, we have to develop code to make our test pass again.

def connections(MainWin):
    ## TODO: This signals are connected using old way. I must change it to new way
    #MainWin.connect(MainWin.ui.action_About,  SIGNAL("triggered()"),  MainWin.onAboutAction)
    MainWin.ui.action_About.triggered.connect(MainWin.onAboutAction)
    MainWin.connect(MainWin.ui.actionLanguajes,  SIGNAL("triggered()"),  MainWin.onLanguagesAction)
    MainWin.connect(MainWin.ui.actionOpen,  SIGNAL("triggered()"),  MainWin.onOpenAction)
    MainWin.connect(MainWin.ui.actionPaths_to_compilers,  SIGNAL("triggered()"),  MainWin.onPathsToCompilersAction)
    MainWin.connect(MainWin.ui.actionPyQTmake_Help,  SIGNAL("triggered()"),  MainWin.onHelpAction)
    MainWin.connect(MainWin.ui.actionQuit,  SIGNAL("triggered()"),  MainWin.close)
    MainWin.connect(MainWin.ui.actionSave,  SIGNAL("triggered()"),  MainWin.onSaveAction)
    MainWin.connect(MainWin.ui.actionSave_as,  SIGNAL("triggered()"),  MainWin.onSaveAsAction)
    return MainWin

With this modification our test will pass again, which is a signal that our code works. You can verify it manually if you want.

Once your test is finished you usually want your test windows closed, so your test case's tearDown() should be:

def tearDown(self):
    #EXIT
    if hasattr(self.MainWin, "About_Window"):
        self.MainWin.About_Window.close()
    self.MainWin.close()
    self.app.exit()

To test more aspects of your code, you only have to add more "test" methods to your unittest.TestCase subclasses.

With all of this you are ready to equip yourself with a good bunch of tests to guide you through your development.