How safe is Zig?

  • 177 points
  • 7 hours ago

  • Posted by @orf


@nwellnhof 5 hours

UBSan has a -fsanitize-minimal-runtime flag which is supposedly suitable for production:

https://clang.llvm.org/docs/UndefinedBehaviorSanitizer.html#...

So it seems that null-pointer dereferences and integer overflows can be checked at runtime in C. Besides, there should be production-ready C compilers that offer bounds checking.


@pjmlp 5 hours

This is why, for me, Zig is mostly a Modula-2 with C syntax where safety is concerned.

All the runtime tooling it offers has existed for C and C++ for at least 30 years, going back to stuff like Purify (1992).


@lmh 5 hours

Question for Zig experts:

Is it possible, in principle, to use comptime to obtain Rust-like safety? If this were available as a library, could it be extended to provide even stronger guarantees at compile time, as in a dependent type system used for formal verification?

Of course, this does not preclude a similar approach in Rust or C++ or other languages; but comptime's simplicity and generality seem like they might be beneficial here.


@dleslie 6 hours

And here is the table with Nim added; though potentially many GC'd languages would be similar to Nim:

https://uploads.peterme.net/nimsafe.html

Edit: noteworthy addendum: the ARC/ORC features have been released, so the footnote is now moot.


@ArrayBoundCheck 6 hours

I like Zig, but this is taking a page out of Rust's book and exaggerating the unsafety of C and C++.

Clang and GCC will both tell you at runtime if you go out of bounds, have an integer overflow, use memory after free, etc. You need to turn on the sanitizers. You can't have them all on at the same time, because the code would be unnecessarily slow (e.g. having ThreadSanitizer on in a single-threaded app is pointless).


@woodruffw 6 hours

This was a great read, with an important point: there's always a tradeoff to be made, and we can make it (e.g. never freeing memory to obtain temporal memory safety without static lifetime checking).

One thought:

> Never calling free (practical for many embedded programs, some command-line utilities, compilers etc)

This works well for compilers and embedded systems, but please don't do it in command-line tools that are meant to be scripted against! It would be very frustrating (and a violation of the pipeline spirit) to have a tool that works well for `N` independent lines of input but not for `N + 1` lines.


@avgcorrection 1 hour

A meta point to make here, but I don't quite understand the pushback that Rust has gotten. How often does a language come around that flat-out eliminates certain errors statically, and at the same time manages to stay in that low-level-capable pocket? And doesn't require a PhD (or heck, a scholarly stipend) to use? Honestly, that might be a once-in-a-lifetime kind of thing.

But not requiring a PhD (hyperbole) is not enough: it should be Simple as well.

But unfortunately Rust is (mamma mia) Complex and only pointy-haired Scala type architects are supposed to gravitate towards it.

But think of what the distinction between no-found-bugs (testing) and no-possible-bugs (for a certain class of bugs) buys you: you don't ever have to even think about those kinds of things, as long as you trust the compiler and the unsafe code that you rely on.

Again, I could understand it if someone thought that this safety was not worth it because people had to prove their code safe in some esoteric metalanguage, and if the alternatives were fantastic. But what are people willing to give up this safety for? A whole bunch of new languages which range from improved-C to high-level languages with low-level capabilities. And none of them seem to give some alternative iron-clad guarantee. In fact, one of their selling points is mere optionality: you can have some safety, and/or you can turn it off in release builds. So you get runtime checks which you might (culturally/technically) be encouraged to turn off exactly when you want your code to run out in the wild, where users give all sorts of unexpected input (not just your “asdfg” test input) and get your program into weird states that you didn't even have time to think of. (Of course, Rust does the same thing with certain non-memory-safety checks like integer overflow.)


@einpoklum 1 hour

One-liner summary: Zig has run-time protection against out-of-bounds heap access and integer overflow, and partial run-time protection against null pointer dereferencing and type mixup (via optionals and tagged unions); and nothing else.
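
(Not from the article, just a minimal sketch of what those runtime protections look like in practice, assuming a safe build mode such as Debug or ReleaseSafe. Each commented line trips a checked panic rather than undefined behavior; a real run stops at the first one.)

  fn readAt(buf: []const u8, i: usize) u8 {
      return buf[i]; // index 3 into a length-3 array: "index out of bounds" panic
  }

  fn addOne(x: u8) u8 {
      return x + 1; // 255 + 1 on a u8: "integer overflow" panic
  }

  fn deref(p: ?*const u8) u8 {
      return p.?.*; // unwrapping null: "attempt to use null value" panic
  }

  pub fn main() void {
      const buf = [_]u8{ 1, 2, 3 };
      _ = readAt(&buf, 3);
      _ = addOne(255);
      _ = deref(null);
  }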


@tptacek 5 hours

"Temporal" and "spatial" is a good way to break this down, but it might be helpful to know the subtext that, among the temporal vulnerabilities, UAF and, to an extent, type confusion are the big scary ones.

Race conditions are a big ugly can of worms whose exploitability could probably be the basis for a long, tedious debate.

When people talk about Zig being unsafe, they're mostly reacting to the fact that UAFs are still viable in it.


@dkersten 5 hours

I'm not sure I understand the value of an allocator that doesn't reuse allocations as a bug-prevention thing. Is it just for performance? (Since memory is never reused, allocation can simply be incrementing an offset by the size of the allocation.) Because beyond that, you can get the same benefit in C by simply never calling free on the memory you want to "protect" against use-after-free.
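
For what it's worth, here's a rough sketch (my own illustration, not anything from the article) of the never-reuse core being described: allocation is just an offset bump over a fixed region, and free is a no-op, so a freed address is never handed out again and a use-after-free can only read stale data rather than someone else's newer allocation.

  const std = @import("std");

  // Illustrative only; not std library code.
  const NeverReuse = struct {
      buf: []u8, // fixed backing region
      offset: usize = 0,

      fn alloc(self: *NeverReuse, n: usize) ?[]u8 {
          if (n > self.buf.len - self.offset) return null; // out of budget
          const result = self.buf[self.offset .. self.offset + n];
          self.offset += n; // bump; this address range is never handed out again
          return result;
      }

      fn free(self: *NeverReuse, memory: []u8) void {
          // Deliberately a no-op: use-after-free can only observe stale data.
          _ = self;
          _ = memory;
      }
  };

  test "a freed block is never handed out again" {
      var backing: [64]u8 = undefined;
      var a = NeverReuse{ .buf = &backing };
      const first = a.alloc(16).?;
      a.free(first);
      const second = a.alloc(16).?;
      try std.testing.expect(first.ptr != second.ptr);
  }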


@AndyKelley 3 hours

I have one trick up my sleeve for memory safety of locals. I'm looking forward to experimenting with it during an upcoming release cycle of Zig. However, this release cycle (0.10.0) is all about polishing the self-hosted compiler and shipping it. I'll be sure to make a blog post about it exploring the tradeoffs - it won't be a silver bullet - and I'm sure it will be a lively discussion. The idea is (1) escape analysis and (2) in safe builds, secretly heap-allocate possibly-escaped locals with a hardened allocator and then free the locals at the end of their declared scope.


@LAC-Tech 4 minutes

Safe enough. You can use `std.testing.allocator` and it will report leaks etc. in your test cases.

What Rust does sounds like a good idea in theory. In practice it rejects too many valid programs, over-complicates the language, and makes me feel like a circus animal being trained to jump through hoops. Zig's solution is hands down better for actually getting work done; plus, it's so dead simple to use arena allocation and fixed buffers that you're likely allocating a lot less in the first place.

Rust tries to make allocation implicit, leaving you confused when it detects an error. Zig makes memory management explicit but gives you amazing tools to deal with it - I have a much clearer mental model in my head of what goes on.

Full disclosure: I'm pretty bad at systems programming. Zig is the only language I've used where I didn't feel like memory management was a massive headache.
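
(For anyone who hasn't used it, a minimal sketch of the `std.testing.allocator` leak check mentioned above; the size is a made-up example. Running `zig test` on this passes, and deleting the `free` makes the test fail with a leak report.)

  const std = @import("std");

  test "std.testing.allocator reports leaks" {
      const buf = try std.testing.allocator.alloc(u8, 32);
      // Delete this line and `zig test` fails the test with a leak report.
      defer std.testing.allocator.free(buf);
  }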


@afdbcreid 5 hours

Can compilers really get away with never calling `free()`?

A simple compiler probably can. The most complex ones probably cannot (I don't want to imagine a Rust compiler that never frees memory: it has 7 layers of lowering (source code -> tokens -> AST -> HIR -> THIR -> MIR -> monomorphized MIR, excluding the final LLVM IR) and also allocates a lot while type-checking and borrow-checking).

What is most interesting to me is the average compiler. Does anybody have statistics on how much a typical compiler allocates and frees?


@verdagon 5 hours

A lot of embedded devices and safety-critical software don't even use a heap, and instead use pre-allocated chunks of memory whose size is calculated beforehand. It's memory safe, and it has much more deterministic execution time.

This is also a popular approach in games, especially ones with entity-component-system architectures.

I'm excited about Zig for these use cases especially; it can be a much easier approach, with much less complexity, than using a borrow checker.
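
(A minimal sketch of that pre-allocated style in Zig, with a made-up buffer size: everything comes out of one statically sized region via `std.heap.FixedBufferAllocator`, so there is no heap and an over-budget allocation fails with an explicit error instead of misbehaving later.)

  const std = @import("std");

  // All memory comes from this statically sized region; there is no heap.
  var backing: [64 * 1024]u8 = undefined;

  pub fn main() !void {
      var fba = std.heap.FixedBufferAllocator.init(&backing);
      const allocator = fba.allocator();

      // Exceeding the budget fails here with error.OutOfMemory instead of
      // fragmenting or getting killed at some unpredictable later point.
      const entities = try allocator.alloc(u32, 1024);
      defer allocator.free(entities);
  }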


@ajross 4 hours

> In practice, it doesn't seem that any level of testing is sufficient to prevent vulnerabilities due to memory safety in large programs. So I'm not covering tools like AddressSanitizer that are intended for testing and are not recommended for production use.

I closed the window right there. Digs like this (the "not recommended" bit is a link to a now-famous bomb thrown by Szabolcs on the oss-sec list, not to any kind of industry consensus piece) tell me that the author is grinding an axe and not taking the subject seriously.

Security is a spectrum. There are no silver bullets. It's OK to say something like "Rust is better than Zig+ASan because ..."; it's quite another thing to refuse to even engage with the comparison and pretend that hardening tools don't exist.

This is fundamentally a strawman: the author wants to argue against a crippled toolchain that is easier to beat, instead of the one that actually gets used in practice.


@anonymoushn 6 hours

I would like Zig to do more to protect users from dangling stack pointers somehow. I've mostly stopped writing such bugs myself, but I catch them frequently in code review, and I recently moved these lines out of main() into some subroutine:

  var fba = std.heap.FixedBufferAllocator.init(slice_for_fba);
  gpa = fba.allocator();

slice_for_fba is a heap-allocated byte slice. gpa is a global. fba was local to main(), which coincidentally made it live as long as gpa, but after the move it was local to some setup subroutine called by main(). gpa contains an internal pointer to fba, so later, when you try to allocate, you go through a pointer to whatever happens to be on that part of the stack instead of your FixedBufferAllocator, and you run into trouble pretty quickly.

Many of the dangling stack pointers I've caught in code review don't really look like the above. Instead, they're dangling pointers that are intended to be internal pointers, so they would be avoided if we had non-movable/non-copyable types. I'm not sure such types are worth the trouble otherwise, though. Personally, I've just stopped making structs that use internal pointers. In a typical case, instead of having an internal array and a slice into the array, a struct can have an internal heap-allocated slice and another slice into that slice. Like I said, I'd like these thorns to be less thorny somehow.
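
(To make the failure mode concrete, here is a self-contained sketch of the scenario described above, with made-up names: the FixedBufferAllocator lives in setup()'s stack frame, so the global allocator is left holding a dangling internal pointer once setup() returns.)

  const std = @import("std");

  var gpa: std.mem.Allocator = undefined; // global, as in the comment above

  fn setup(slice_for_fba: []u8) void {
      // BUG: `fba` lives in setup()'s stack frame.
      var fba = std.heap.FixedBufferAllocator.init(slice_for_fba);
      gpa = fba.allocator(); // gpa now holds an internal pointer to `fba`...
  } // ...which dangles as soon as setup() returns.

  pub fn main() !void {
      const slice_for_fba = try std.heap.page_allocator.alloc(u8, 4096);
      setup(slice_for_fba);
      // Undefined behavior: this call walks through the dangling pointer to a
      // dead stack frame instead of a live FixedBufferAllocator.
      _ = try gpa.alloc(u8, 8);
  }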


@belter 5 hours

1 year ago, 274 comments.

"How Safe Is Zig?": https://news.ycombinator.com/item?id=26537693
