This article was published 6 years ago. If you're into internet archeology, read on - but at your own risk. Honza doesn't necessarily agree with its content anymore.
Many folks use unittest when writing tests in Python. It's a module in Python's standard library which lets people quickly jump into the XUnit way of writing tests, full of test classes, `setUp`/`tearDown` methods, and `self.assertEqual`-style assertions.
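To illustrate what that style looks like, here's a minimal sketch of a unittest-style test (the function under test and the numbers are made up for illustration):

```python
import unittest


def add(a, b):
    return a + b


class TestAdd(unittest.TestCase):
    def setUp(self):
        # per-test preparation lives in XUnit-style methods
        self.a = 2

    def test_add(self):
        # assertions go through dedicated assert* methods
        self.assertEqual(add(self.a, 3), 5)


if __name__ == "__main__":
    unittest.main()
```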
While unittest is able to find and run tests, once the test suite grows larger, people often reach for an alternative third-party test runner, for instance nose. The point of having a dedicated test runner is to find and run tests in a smart way - e.g. only those which previously failed.
Thanks to the fact that nose runs the tests, it allows you to leave the XUnit style behind and offers some extra tricks to make your tests better and simpler. Usually people don't use them though, because they don't know about them. In general, however, if you want to run a particular subset of your tests or control the output of a test run, the test runner always has a smart command line option to do exactly that. That's very convenient primarily because it provides:
- Convention (my colleague doesn't need to learn which options are available in a project),
- comfort and developer experience (the option is there for me even if I only need it three times in my life).
And then there's pytest, a framework which is probably capable of everything nose can do, but adds even more, often crazy, ways to simplify the writing of tests. Although idiomatic Python is supposed to be explicit and avoid magic, the popularity of pytest has proved that in the case of tests, magic is acceptable and useful. It turns out it can make tests more readable and declarative.
Thanks to the fact that one of the people behind PyPy (an interpreter of Python written in Python) is part of the team working on pytest, the magic reaches pretty deep levels of the language. For example, Python has the `assert` keyword, which is by default quite primitive: it evaluates an expression and raises a generic assertion error if it's false, without telling you much about what went wrong. Basically it works the same way as the `assert` function in Node.js. However, pytest hooks into how test modules get imported and enhances the asserts so that they recognize what they're comparing. This way even the basic `assert` statement can provide very nice output. And I don't need a cheatsheet to find the right `assert.whateverComparisonFunction`; I write plain Python code, using just built-in operators and types.
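A minimal sketch of what that looks like in practice (the data is made up for illustration): when the plain assert below fails, pytest prints both dictionaries and points out where they differ, with no special assertion helpers needed.

```python
def test_user_roles():
    expected = {"alice": "admin", "bob": "editor"}
    actual = {"alice": "admin", "bob": "viewer"}
    # a plain assert with a built-in operator; on failure pytest
    # shows both values and highlights the differing item
    assert actual == expected
```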
I also like the fact that frameworks usually have sections about testing in their documentation and give me a hand (guides, tips, tools, classes to inherit from) so I can test my application in simpler and faster ways, without writing redundantly detailed boilerplate code. See Flask, Django, Scrapy (links to dedicated chapters in the respective docs).
At the end of the day, I feel very comfortable writing tests in Python. I can build a website with a microframework, and even such a stripped-down tool still gives me the means to simplify writing tests. Often a test is just a few lines long, which saves my time, my code reviewer's time, and the time of everyone else reading the test suite in the future. I can easily run any subset of tests I need, e.g. just those which didn't pass.
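As a rough sketch of what "a few lines long" means here, this is roughly what a test of a tiny Flask app can look like using Flask's built-in test client (the app and the route are made up for illustration):

```python
from flask import Flask

app = Flask(__name__)


@app.route("/")
def hello():
    return "Hello!"


def test_hello():
    # Flask ships a test client, so no running server is needed
    response = app.test_client().get("/")
    assert response.status_code == 200
    assert b"Hello!" in response.data
```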
Compare that with what Mocha offers me:
- It runs all the tests, and if I add `.only` to a test in my code, it runs just that particular test,
- it allows glob patterns to select which tests I want to run,
- it allows me to grep through the descriptions of tests (if the grep matches more descriptions, all of them get run),
- a wannabe-BDD approach consisting of `describe` and `it` blocks,
- tons of boilerplate code in `beforeEach` (because Mocha currently has no concept of shared behaviours...),
- if a test fails, it's sometimes hard to even figure out in which file the problem happened,
- docs are just a bit less terrible than those of Mongoose,
- it's able to print test results in colors, using the unicode character ✔️, which is very cool,
- it allows just one reporter at a time (good luck generating test coverage or anything else on top of the regular output).
Just to give you an idea, here are some of pytest's killer features:
- No XUnit classes, no faux BDD, just plain functions with simple asserts,
- informative output that helps to uncover what the hell actually happens,
- automatic capturing of `stdout` when running tests,
- the concept of fixtures for declaring the dependencies of individual tests,
- parametrization of tests (both illustrated in the sketch after this list),
- distributed testing,
- it's able to run any Python tests, non-Python tests, or documentation tests (doctests).
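Here's a minimal sketch of what fixtures and parametrization look like in practice (the fixture and the test logic are made up for illustration):

```python
import pytest


@pytest.fixture
def user():
    # a dependency of the test, created (and torn down) by pytest
    return {"name": "alice", "role": "viewer"}


@pytest.mark.parametrize("role,can_publish", [
    ("admin", True),
    ("editor", True),
    ("viewer", False),
])
def test_publishing_permissions(user, role, can_publish):
    # the same test body runs once per parameter set
    user["role"] = role
    assert (user["role"] in ("admin", "editor")) == can_publish
```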
But it's not just about maintainers. All right, there's no supply, but surprisingly, there's no demand either. From searching npm and GitHub issues I get the impression that nobody in the Node.js ecosystem has any ambition to use or ask for such features. It looks like most people writing Mocha tests probably don't even know these things can exist. The result being:
- they won't work on these features or contribute them,
- in larger test suites their tests uncontrollably grow into 1,000-line files full of boilerplate code, which are both hard to read and hard to run.
If they could spot the existence of parametrization or fixture management somewhere, maybe they would embrace the concepts and try to use them to write cleaner tests. But it's not there, so tests often just get copy-pasted, DRY-ed out with `forEach`, or with ad-hoc functions sharing assertions. I don't even know which of these approaches is worse. The former is readable but unmanageable, the latter is unreadable but at least seemingly more manageable.
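To make the contrast concrete, here's a rough Python rendering of both anti-patterns (the `can_publish` function and the roles are made up for illustration); the same trade-off applies to Mocha suites glued together with `forEach`:

```python
def can_publish(role):
    # a stand-in function under test, made up for illustration
    return role in ("admin", "editor")


# Anti-pattern 1: copy-pasted tests - readable, but unmanageable
def test_admin_can_publish():
    assert can_publish("admin") is True


def test_editor_can_publish():
    assert can_publish("editor") is True


def test_viewer_cannot_publish():
    assert can_publish("viewer") is False


# Anti-pattern 2: an ad-hoc helper sharing assertions inside one test -
# more DRY, but harder to read, and the first failing case hides the rest
def check_publishing(role, expected):
    assert can_publish(role) is expected


def test_publishing():
    for role, expected in [("admin", True), ("editor", True), ("viewer", False)]:
        check_publishing(role, expected)
```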
A person familiar with Node.js knows that searching npm packages is usually the ultimate answer to everything. But as I already indicated, because of the missing demand, this isn't quite true in this case. Even if the packages exist, they often have at most 10 commits and the last one is 2 years old. The authors have probably already moved on to writing Elm exclusively, so they don't bother with new versions... Moreover, it seems like Mocha-related packages usually just fix what's broken rather than add any cool new features that would make the developer experience better.
At the time of writing this article, mocha-clean is the Mocha extension with the most stars on GitHub. Well, among those which actually add something to Mocha - I've excluded integrations such as Mocha + React, Mocha + Mongo, Mocha + browser, etc. You know what mocha-clean does? It fixes stack traces.
What else would I like to have? Look at pytest-testmon, an extension which leverages code coverage to run only the tests which are worth running again. I.e. something that dramatically improves productivity. No more manual `.only` or tedious grepping.
And as for controlling which tests to run or what the output should look like: even if I found an npm package adding the command line options I want, it wouldn't be a standard implementation, a standard convention which a new colleague would automatically be familiar with.
In AVA's README I can't see anything which would even try to solve any of the problems I mentioned. AVA looks like Mocha with a different syntax, even fewer features (i.e. it will require a brand new `ava-*` ecosystem of extensions) and one and only one killer feature: asynchronous tests (Promises, async/await). Of course, that's something which isn't available in pytest...