If you are writing JS/TS, I highly recommend you try writing your own custom linter rules. It's quite simple and it's a great way to establish "architecture assertions". Use this tool (set the transform to ESLint): https://astexplorer.net/
I feel that linting has a lot of potential and is currently underappreciated. You can create rules that are heavily semantic in a way that would be impossible, or extremely hard, with a type system. On top of that, you can give useful error messages that include links! A type system might let you express an invariant, but when it's broken, it can be very hard to understand and communicate what the problem is (let alone the solution!).
Finally, as the article touches on, linter errors can easily be ignored, which means you only have to get them, say, 80% right to add value. If an error appears somewhere you didn't intend, you just suppress it with a comment.
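To make this concrete, here is a minimal sketch of a custom ESLint rule of the "architecture assertion" kind described above. The rule name, the forbidden module name, and the migration URL are all made-up placeholders; the structure (meta plus a create function returning AST visitors) is the standard ESLint rule shape.

```javascript
// Hypothetical rule: forbid importing a legacy module, and point the
// developer at a migration guide in the error message.
// "old-http-client" and the URL are illustrative, not real.
const noLegacyHttpClient = {
  meta: {
    type: "problem",
    docs: { description: "Use the new HTTP client instead of old-http-client" },
    messages: {
      legacy:
        "Import from 'http-client' instead of 'old-http-client'. " +
        "Migration guide: https://example.com/http-migration",
    },
    schema: [],
  },
  create(context) {
    return {
      // Visit every `import ... from "..."` declaration in the file.
      ImportDeclaration(node) {
        if (node.source.value === "old-http-client") {
          context.report({ node, messageId: "legacy" });
        }
      },
    };
  },
};

// Expose it as a local plugin so an ESLint config can enable it.
module.exports = { rules: { "no-legacy-http-client": noLegacyHttpClient } };
```

Note how the error message itself carries the fix and a link, which is exactly the communication advantage over a type error.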
"informing without blocking can be useful to allow rapid progress, but if any phase ever decides to inform, some later phase must block on the same thing to ensure the problem doesn't stick around"
In Rust, #![deny(warnings)] is considered bad practice for the reasons explained in this article: it blocks progress too early.
Instead, setting RUSTFLAGS="-D warnings" in CI prevents warnings from being merged, without slowing down local development.
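A sketch of what that looks like as a CI step (the cargo invocation here is a typical one, not prescribed by the comment):

```shell
# CI only, not local dev builds: promote all rustc warnings to errors.
RUSTFLAGS="-D warnings" cargo build --all-targets
```

Local builds keep warnings as warnings, so the "inform early, block late" split falls out naturally.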
Another consideration is how expensive the check is to run. Maybe you don't want expensive-to-check invariants to be part of the developer's inner loop when writing code?
Incremental compilation helps, but fast performance in the non-incremental case also matters, or you might get something like Julia's "time to first plot" problem.
Small error here:
> You often find programmers recommending to configure C compilers with -Wall, which turns all warnings (which typically "inform") into errors ("block").
"-Wall" turns all warnings on . What turns them into errors is "-Werror".
In a project you distribute to other people, -Werror is a horrible idea, because different compiler versions will give different, sometimes wrong or spurious, warnings, so the build will randomly fail even though the code is completely correct.
Well, not "all" warnings, just the ones in the "all" set. It's a bit misleading.
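The distinction the two corrections above make can be seen directly on the command line (flags as documented for GCC/Clang; main.c is a placeholder file):

```shell
# -Wall enables a large (but not complete) set of warnings; the build
# still succeeds even if warnings are emitted.
gcc -Wall main.c

# -Werror is what promotes those warnings to hard errors and fails the build.
gcc -Wall -Werror main.c
```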
I worked on systems that had additional code controls on the build servers that could block builds. It is a bad idea. Let me give an example from the real world: a large monolith site. An error occurred that needed a hotfix. The hotfix did not build on the server, and it was not clear why, so it took maybe an hour to fix. It turned out the build didn't like how a comment was formatted, which added even more confusion since the build error did not even point to any real code. And that is how a comment cost ~50k dollars.
Maybe there could be a grace period instead: if you do not fix this within 30 days, new builds will be blocked.
I like this advice, especially that we want to block problems as late as possible whilst warning as early as possible (e.g. warning in an IDE, blocking with a commit hook).
I also agree with the linting approach, and do the same myself:
- Automatically enable as many high-confidence checks as we can (if we want low-confidence 'suggestions', they can be run manually)
- Make them block (eventually, e.g. via a commit hook or a failing test)
- Allow '# disable-foo' annotations (with accompanying comment) when needed (with code review judging whether it's an appropriate solution or not, just like anything else)
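In ESLint, that last point looks like an eslint-disable comment with a trailing justification for the reviewer. The rule name and reason below are made up for illustration:

```javascript
// Suppress one occurrence of a rule, with the reason inline so code
// review can judge whether the suppression is appropriate.
// eslint-disable-next-line no-unused-vars -- options kept for API compatibility
function legacyEntryPoint(options) {
  return "ok";
}
```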