Attest: a refreshingly simple test framework for the age of AI
2026-05-01
Perfection is finally attained not when there is no longer anything to add, but when there is no longer anything to take away.
Terre des Hommes (1939) - Antoine de Saint-Exupéry
How could the world possibly need yet another test framework?
Anytime you see a space with a bunch of competing tools, it's a signal that there's a hard problem somewhere under the covers. Of course, I don't mean "hard" as in "NP-hard", but instead I mean a problem where an ergonomic solution does not yet exist for the general case.
Take build systems as a leading example. There is a vast overabundance of build tools for old languages like C++ and Java. Converting source code into executable artifacts is extremely complicated business, so it understandably took more than a few attempts to get right. (As a one-time Apache Ant user, I can confirm Ant was NOT it.)
Nowadays, we've mostly figured out the hard problem underlying build systems, and modern languages generally have one build tool that works well in the general case. (There will always be niches. For example, Cargo wouldn't be a good tool for the Linux kernel.)
Testing is a hard problem
Most modern build systems also have decent facilities for unit testing. This makes sense: unit tests are tightly coupled to the original source, after all.
But once you transcend unit tests to something like integration/acceptance/end-to-end/whatever-you-call-it tests, the program's problem domain starts to creep in, which makes a general test framework really hard to pull off. So build tools usually just drop you off at unit tests and call it a day.
So now you're exactly where I was: trying to find a testing framework for my mostly CLI-based applications. At one point I tried Bats, but whenever the test-writing occasion arose, I could never remember its fixture syntax and assertions.
And so I decided I would make yet another (!!) test framework where the underlying design principle is simplicity. Tests are easy to write, easy to read, quick to debug, and you can place them anywhere (even inline in other scripts if you want).
Attest
Tests specify how a program is supposed to behave. Shell scripts are apt for this kind of task because they are the right level of abstraction. Low-level languages like C++ compose programs out of constructs like loops, functions, and classes. Shell scripts compose programs out of other programs!
With attest, there are only three pieces of information you need to remember:
- Functions whose names start with test are tests.
- If any command in the function exits nonzero, the test fails.
- Each test runs in its own temporary directory.
That's it. There is no assertion library. There is no config file. There are no lifecycle hooks. Just regular ole shell functions:
testHello() {
    result=$(echo hello)
    [ "${result}" = "hello" ]
}

# A test that does some cleanup
testWorld() {
    sleep infinity &
    # Clean up that process when we're done
    trap "kill $!" EXIT
    # Now do something with the process
    cat /proc/$!/environ
}
Plop this in example.test (or inline it within a larger script) and run it:
attest example.test # one file
attest example.test/testHello # one test in a file
attest . # all tests in a directory
attest --parallel 1 . # run tests one-by-one instead of concurrently
We don't need a library of assertions or lifecycle hooks because you can already do all of that with regular, idiomatic shell. This means you can just source a test file into your shell and call the functions directly if you want.
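For instance, here is a minimal sketch of what a "fixture" looks like as plain shell. makeFixture is an illustrative helper for this post, not anything attest provides:

makeFixture() {
    mkdir -p data
    printf 'alpha\nbeta\n' > data/words.txt
}

testUsesFixture() {
    makeFixture                  # "setup" is just an explicit function call
    grep -q beta data/words.txt
    # No "teardown" needed: each test runs in its own temporary directory
}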
It's important that tests match what a human would type at a prompt to check the behavior by hand. Why? Because we're in the age of AI:
How does AI affect testing?
As with AI programming writ large, the limiting reagent in testing has shifted from writing tests to reading tests. Our pre-AI test frameworks are optimized for writing tests because writing used to be the unavoidable part of the process. Needless to say, I can now "write" a comprehensive test suite with a single-sentence prompt, and it will turn out pretty decent.
So test frameworks need to be amenable to code generation by AIs, but also readable/verifiable by humans. If you just run the tests the AI generated, they're probably going to pass, but that's not actually verification. You need to read each test, making sure it actually exercises what it purports to test. AIs can be sneaky about generating elaborate true == true tests.
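For example, here is the kind of hypothetical test to watch out for (myprog stands in for whatever program is under test):

testVersionSneaky() {
    result=$(myprog --version)
    # Compares the output to itself, so it passes no matter what myprog prints
    [ "$result" = "$result" ]
}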
Which one of these assertions is more readable?
# Bats-assert style
assert_equal "$(echo actual)" "expected"

# Attest style
[ "$(echo actual)" = "expected" ]
I would argue that the idiomatic [ or [[ style assertion is slightly more readable. The main argument for specialized assertions like assert_equal is that the framework can diff the inputs and tell you exactly why the contents are not equal. attest can do that without a special assertion because it parses the xtrace output of the test and renders common patterns.
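To get a feel for the mechanism, here's what bash's xtrace exposes for the attest-style assertion above (attest's actual rendering of these traces may differ):

$ bash -x -c '[ "$(echo actual)" = "expected" ]'
++ echo actual
+ '[' actual = expected ']'

Both sides of the comparison show up in the trace, which is enough information to render a diff.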
The main benefit to the idiomatic approach is you can copy/paste your tests into a shell and execute them as-is.
Skills
Here are some AI-generated tests for tac:
testBasicFile() {
    echo -e "line1\nline2\nline3" > input.txt
    result=$(tac input.txt)
    [ "$result" = "$(echo -e "line3\nline2\nline1")" ]
}

testEmptyFile() {
    touch empty.txt
    result=$(tac empty.txt)
    [ -z "$result" ]
}

testNonExistentFile() {
    ! tac nonexistent.txt 2>/dev/null
}
These tests are pretty readable, but to double-check, I can just copy the lines directly into a shell and verify each step.
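For example, stepping through testBasicFile by hand:

$ echo -e "line1\nline2\nline3" > input.txt
$ tac input.txt
line3
line2
line1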
To make sure the AI uses the correct style, you can use the skill that the attest skill command prints.
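For example, assuming your coding agent reads an AGENTS.md-style instructions file (the exact destination depends on your setup):

attest skill >> AGENTS.md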
Conclusion
Chances are, tests are not exactly the most enjoyable code in your project to write by hand. Hence why people don't write tests in the first place. Try writing/generating some attest tests. See how far you get before you need a manual.
Actually, never mind: attest is too simple to have a manual...