On assertions outside of tests
Yeah, no big cliffhanger here, I vote for using assertions outside of tests. Of course, it depends on the context. What I mean is that while assertions usually belong in tests, on rare occasions they can and even should be a part of Page/Component/Action models or even utility functions.
The topic itself doesn’t seem to be settled yet. Some of the “canonical literature”, like the Selenium docs, advises against having assertions in models.
My problem with such a puritanical approach has always been that it’s illustrated using over-simplified examples. My life in test automation has never been as easy as the Selenium examples of POMs and tests. Whenever you introduce even the slightest complexity to a problem, it suddenly loses its distinct black and white colors, turning into 150 shades of gray.
“Models should not know about testing”. Right, but what if I need a custom assertion? I don’t want to create functions that return a boolean only to wrap them in expect(x).toBeTruthy(). This is weird syntactic sugar if you ask me. I do everything in one place and name it accordingly. If an assertion is model-related, it goes to the model. If it’s a common assertion, it goes to the base model or even to utility functions. This is how I end up having assertions outside of tests, kinda. Does this bring “test information” into models? I think so. Do I care? Do I suffer any significant penalty or struggle to maintain it because of this crossed line? No, not really.
Let’s also look at another type of situation. What if I know that some assertion correlates with a particular action, based on the involved data, in 100% of cases? For example, what if I fetch an email via API by providing a recipient email address and the type of email I need, and then check that the email has the correct recipient, sender, and subject? Should I really duplicate this sequence from one test case to another, knowing the two will never diverge?
// tests/email.spec.ts
test('Check that email is fetched correctly', async ({ emailPage }) => {
  await test.step('Fetch first email', async () => {
    const message = await fetchEmail('test1@example.com', 'Test email')
    expect(message.recipient).toBe('test1@example.com') // First parameter from fetchEmail function
    expect(message.sender).toBe('test@example.com') // Remains constant, most likely environment-specific
    expect(message.subject).toBe('Test email') // Second parameter from fetchEmail function
  })
  await test.step('Fetch second email', async () => {
    const message = await fetchEmail('test2@example.com', 'Another test email')
    expect(message.recipient).toBe('test2@example.com') // Pattern repeats...
    expect(message.sender).toBe('test@example.com')
    expect(message.subject).toBe('Another test email')
  })
})
I really like the idea that DRY is just another instrument, not a law. Sometimes you avoid DRYing your code to preserve readability and declarativeness. But the same is true from the other end. Sometimes you DRY your code for maintainability at the expense of being able to read everything transparently line by line in the tests. Personally, I add an expectation for the API request to be 200 OK to every API util function. Yes, you won’t read between the test lines that the request needs to be successful. But also you won’t get information exhaustion by having to look at expect(x).toBeOK() 200 times a day to the point of completely ignoring this line. In a way, doesn’t this also improve readability?
Sometimes I do something that I call a margin of safety. I add implicit extra checks that should always remain valid, and then… I forget about them. For example, when calling a function to open any page from its model, I might add an assertion for the page title to be visible and contain some particular text. This is wildly opinionated, I know, but it feels so right! I mean, at the cost of an extra 5-10ms on average per test, I reinforce stability and keep the test execution flow fluent. No title? Page was not opened. As I mentioned in my previous blog post, sometimes this fluency might require deviation from a manual test case. I mean, no real manual tester would deliberately move their eyes to the page title and assert its content every time they surf the pages. Humans register a redirect much faster from other visual responses on the page. This is why, in my opinion, such a margin of safety does not hurt anyone. It’s not a test detail hidden under the hood, because it was never in the original test, but it’s still an assertion written outside of a test.
But even after saying all of the above, my proportion of implicit external assertions to explicit internal assertions is still something like 10% to 90%. As I mentioned in the beginning, assertions usually go inside the tests, unless there is a specific reason not to. I’m just not that sympathetic to the “nevers” and “alwayses” that get thrown around quite commonly in similar discussions.