Hacker News Re-Imagined

HATEOAS: An Alternative Explanation

  • 140 points
  • 1 day ago

  • @recursivedoubts

@jounker 2 hours

Replying to @recursivedoubts

Worst acronym ever. Hate-Oa’s. Sounds like a white supremacist breakfast cereal.


@gabesullice 15 hours

I like this piece because I think it does a good job explaining the purpose of HATEOAS. The author is right that JSON can obscure its usefulness and HTML is a more familiar way to demonstrate it.

However, the author neglects to mention or is unaware of link relations, which can document and prescribe actions to take on a link just as "action" and "method" prescribe how a browser should submit a form.

Link relations can fix the <form> problem of JSON as a hypermedia format by prescribing the actions a client must take in order to follow a link. This is what the "action" and "method" of the form element do and these could be defined as target attributes of a "deposit" link represented with JSON.

The author is correct about pure JSON, but JSON-based media types like application/hal+json, JSON-ld, and application/vnd.api+json are very workable. Using these allowed my team to implement a decoupled React application with a JSON API back end. We used link relations similar to the banking example and our team was able to add and remove deposit-like links and add new buttons for previously unplanned operations without updating any JavaScript.
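To make that concrete, a HAL-style document for the article's banking example might look like the one below, with a generic client dispatching on link relations rather than hard-coded URLs. This is only a sketch: the "method" target attribute and the link names are illustrative, not mandated by HAL or JSON:API.

```python
import json

# A hypothetical HAL-style response for the article's banking example.
# The "method" target attribute is illustrative -- HAL itself only
# standardizes "href" and a handful of other link object properties.
account = json.loads("""
{
  "account_number": 12345,
  "balance": {"currency": "usd", "value": 100.0},
  "_links": {
    "self":     {"href": "/accounts/12345"},
    "deposit":  {"href": "/accounts/12345/deposits",    "method": "POST"},
    "withdraw": {"href": "/accounts/12345/withdrawals", "method": "POST"}
  }
}
""")

def affordances(doc):
    """List the actions the server is currently offering on this resource."""
    return {rel: (link.get("method", "GET"), link["href"])
            for rel, link in doc.get("_links", {}).items()
            if rel != "self"}

# The client learns what it may do from the document itself: if the
# server stops emitting the "withdraw" link for an overdrawn account,
# that action simply disappears, without any JavaScript changes.
print(affordances(account))
```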

Full disclosure: I'm an editor of the JSON:API spec (i.e. application/vnd.api+json)


I'm a big believer that HATEOAS serves no actual purpose. Adding a layer of indirection is pointless - messages get bloated by URLs the client virtually never needs. For the vast majority of APIs, URL structures don't change that often and, when they do, it often indicates changes to content being transmitted, which would require business logic updates anyway.

For making APIs forward/backward compatible, both GraphQL and Protobufs are better options.


Hello HN.

In this article I try to explain HATEOAS in its original context, hypermedia/HTML, rather than using JSON.

JSON isn't a natural hypermedia and it makes HATEOAS (and the entire uniform interface of REST) difficult to understand and of frustratingly little benefit to end users.

I don't expect to change the language around REST-ful APIs, where darn near anything over HTTP is called "REST-ful", but I hope that this article will help people who have struggled with understanding what HATEOAS is or why on earth it might be an interesting and useful technology.

In its natural environment, HTML, it is much easier to understand.


@Osiris 1 day

This article seems to just be describing plain-old HTML. You make a request to a URL, get back HTML, render the HTML. The HTML has forms and links that you click and get back new HTML.

I don't see how this is an API at all. It's just (server-side rendered) HTML.
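For reference, the kind of exchange being described looks something like this (markup invented, modeled on the article's banking example):

```html
<!-- Hypothetical response for GET /accounts/12345 -->
<div>
  <h1>Account #12345</h1>
  <p>Balance: $100.00</p>
  <a href="/accounts/12345/transfers">transfers</a>
  <!-- The form is the "hypermedia control": its presence tells the
       client that a deposit is currently possible, and its action
       and method say exactly how to perform one. -->
  <form action="/accounts/12345/deposits" method="POST">
    <input name="amount" type="number">
    <button>Deposit</button>
  </form>
</div>
```

The disagreement in this thread is over whether that doubles as an API: the controls are simultaneously the UI and the in-band documentation of what the client may do next.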


@criddell 16 hours

The last two sentences are:

> While attempts have been made to impose more elaborate hypermedia controls on JSON APIs, broadly the industry has rejected this approach in favor of simpler RPC-style APIs that forego HATEOAS and other elements of the REST-ful architecture.

> This fact is strong evidence for the assertion that a natural hypermedia such as HTML is a practical necessity for building RESTful systems.

This all sounds about right to me and makes me ask what the problem actually is? Maybe an RPC-style API is a better choice for most systems?


@shrubble 1 day

As someone who has worked as part of a team on an Angular 9.x frontend with Spring Boot Java backend mediating between many other backend API based services... well, it's a huge mess, a resource hog in terms of disk, RAM and CPU, and far slower than it could be.

Development is slow and cumbersome and the browser uses 48MBytes+ on the page for what is ultimately a slightly complicated CRUD app.

The JSON APIs are secured by tokens and other mechanisms, so you can never query them directly without the Angular app.

We have 800MB or more in node_modules with Node taking 100s of MB and the Java backend uses over 1GB of RAM in dev mode just to start up.

It's an internal application available over the VPN. Even with high speed internet some API calls take 2 seconds to return and populate parts of the form in edit mode. A single unified backend server pumping out html would likely be both faster and easier to manage.


I will give the article credit for finally providing an example of how hyperlinks returned along with a result are used to inform a person of what to do: by returning a FORM.

I've read many an article about HATEOAS that doesn't mention exactly HOW these links are useful. Still, they seem useful only for human, manual operation. A bunch of links provides no indication of their purpose other than a textual description.


@kitd 20 hours

Lots of excellent points in the article, but the problem I have with it (and HATEOAS in general) is that, like NakedObjects before it, it expects the model of the resources and the actions on them to mirror precisely the model the user understands and the things they want to do. I.e., it takes no account of UX and design principles.

As a result, you end up with rather CRUDdy noun-oriented systems that don't really flow, with resources randomly thrown around by developers to try and make it make logical sense. A kind of accidental hypermedia complexity.

I love the abstract concept of HATEOAS, but it needs some overarching architectural abstractions to help designers and developers implement just the resources and links needed. A great research opportunity there IMHO.


@rektide 1 day

I 100% am onboard. I think this article still undersells how vital a shift it is, whether you are using a programming-oriented representation JSON, or the web's de-facto and de-jure representation of information HTML (un-discussed but key to me is RDFa and Microdata semantic-markup). This article is so good, draws a very clear picture, & yet it's still no-where enough in de-lineating just how key it is, how vital it is, that the actual web medium- HTML- be able to & be used for expressing information. It at least makes a solid go at trying to show how JSON is arbitrary, where-as HTML is definitive, declarative, expressive.

Back in DHTML days we talked about "data islands", which could be either JSON or XML/XHTML/HTML, either embedded in the page, powering the JS & webapp. Allowing the data & page realm to remain separate is a huge mistake. It overlooks the reverence & respect we ought/need to have for the core source of truth, for the real medium, HTML. Programmers need to take a knee & respect the vital, core, discoverable, honest truth of the web: HTML >>> JS. JS is supposed to just be a tool. As an agnostic/near-atheist, it still feels non-contradictory to me to say we have fallen away from god in our march towards JS-driven applicationization. We have served illegitimate interests.


@dfabulich 1 day

HTTP APIs have been a huge hit, but HATEOAS hasn't. Nobody does it right, nobody "gets it," everybody must have just misunderstood Roy Fielding's dissertation. If only we had a clearer, better explanation, like this one!

Unfortunately, this article demonstrates what the industry has known/shown for years: that HATEOAS is unsuitable as a technique for HTTP APIs.

Imagine writing a programmatic client against the bank account "API" documented in this article, where the way you'd find out that you can't make a withdrawal is not by checking a "status" in JSON and finding it's overdrawn, but by parsing the HTML and noticing that there's no action link (no `<form>` element!!) that happens to make a withdrawal.

Instead of a JSON parser, you'd now need an HTML parser (one of the most notoriously quirky formats in modern use), but not just that, you'd also need your client to read and understand that HTML.

For example, where's the error message explaining why you can't withdraw? Why, it's written in the page as user-visible text. Which text is user-visible? Uh oh, now you need to parse and execute the CSS on the page to figure that out, too.

You'd need a full-blown browser to do that--a "user agent" as the HTML specification likes to call it--and now, instead of using an API, you're screen-scraping the web site by automating a browser with Selenium or Puppeteer or what have you. What will the client do when the server decides to restructure the HTML? Which one of those <a> links or <form> elements represents withdrawing cash??
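Concretely, the two styles of "can I withdraw?" check look something like this (the JSON field names and the HTML are invented for illustration):

```python
from html.parser import HTMLParser
import json

# Style 1: RPC/JSON -- the client checks an agreed-upon field.
response_json = '{"balance": -25.0, "status": "overdrawn"}'
can_withdraw_json = json.loads(response_json)["status"] != "overdrawn"

# Style 2: HATEOAS/HTML -- the client must infer the same fact from the
# presence or absence of a <form> pointing at a withdrawal endpoint.
response_html = """
<div><h1>Account #12345</h1><p>Overdrawn! Balance: -$25.00</p>
<form action="/accounts/12345/deposits" method="POST"></form></div>
"""

class FormFinder(HTMLParser):
    """Collect the action URL of every <form> in the page."""
    def __init__(self):
        super().__init__()
        self.actions = []
    def handle_starttag(self, tag, attrs):
        if tag == "form":
            self.actions.append(dict(attrs).get("action", ""))

finder = FormFinder()
finder.feed(response_html)
can_withdraw_html = any("withdraw" in action for action in finder.actions)

print(can_withdraw_json, can_withdraw_html)
```

And this sketch is the easy case: it still breaks the moment the server renames the endpoint or restructures the page, which is exactly the screen-scraping fragility described above.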

It is possible to develop useful tools that automate full-blown websites, but it's much, much easier to develop useful clients when the responses are highly limited in the structure they're allowed to return, to couple the client to the server by committing, in advance, to how the API will work.

That's been Fielding's mistake from the very beginning: to think that hypertext provides useful techniques for developing programmatic interfaces. Hypertext is only useful because of its users. Users can read text, understand it, and decide what links to click on or what forms to submit. Programmatic API clients can't afford that luxury.

As the article demonstrates, if the client was intended to just be a program, we would never use HTML for any of these exchanges, and especially not use <a> links or <form> elements to convey to that program what is or is not possible.

The industry has rejected HATEOAS not because we were too stupid to understand how awesome it is; we've rejected it because we do understand how awesome it isn't.


@k__ 1 day

Shouldn't HTML in the examples be compared to something on the same abstraction level, like HAL?

HAL is to JSON what HTML is to SGML.

Nobody would argue that SGML isn't suited for the web because it is too general/low-level a language.


@juancampa 1 day

I looked at HATEOAS a few years ago and thought it would make sense in a latency-free universe. The reality is that waiting for one page to load to know what's next, and then waiting again, is a recipe for frustratingly slow interactions. Shouldn't we download the whole application "graph" schema upfront? How big could that be?


@the_arun 1 day

REST APIs are designed to think in a platform-centric way, and HATEOAS extends that same philosophy.

The HTML example in the article isn't quite that, though. The last example, which renders the content as HTML, assumes the client wants it for a POST call. What if I, as a client, am using it for a GET call? Having said that, you as a developer are building a HATEOAS form of the content in the user interface. Am I reading this right?


Two things (not exhaustive!) that a JSON (or any data interchange format) API can do to improve discoverability:

1. Provide OPTIONS responses for all resources, clearly and consistently documenting usage of that resource.

2. Provide relevant OpenAPI documentation for those OPTIONS requests, and an interactive UI for GET requests accepting text/html (obvious caveats aside).
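For example (a sketch: the Allow header is standard HTTP and the "service-desc" link relation comes from RFC 8631, but the URLs are hypothetical):

```
OPTIONS /accounts/12345 HTTP/1.1
Host: api.example.com

HTTP/1.1 204 No Content
Allow: GET, POST, OPTIONS
Link: </openapi.json>; rel="service-desc"
```

A client (or a developer with curl) can then discover both the permitted methods and where the machine-readable documentation lives, without any of it being embedded in every response body.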


IMO, the benefits of HATEOAS are very much overblown - in many circumstances you have full control over the version of both client and server, and all it is doing is adding an extra layer of indirection that allows extra complexity and ambiguity to sneak in. Instead of the client depending on a specific known API endpoint, it depends both on the endpoint you give it at runtime and the process that serves that endpoint to the client. And as a bonus, the client now "helpfully" obfuscates which endpoints it needs to render which pages - what you could figure out with a straightforward grep for something like "api/v1/books" now needs a deeper understanding of the application.

Like, the way in which the client accesses the "deposits" functionality is either fixed or can dynamically change. If it's fixed, the indirection serves no purpose - just have the client do the thing in the one way that you ever have the client do the thing. If it can dynamically change, the state space involved can and often will grow out of control in a hurry, and I strongly suggest pushing back on designs that require structural dynamism like this.

