I want to experiment with zoomable, pannable interfaces on the web. Basically, an interface like Figma, but for interacting with different types of structured data. Think: web-ish interfaces for web-ish data on an infinite pannable, zoomable canvas.
Why? I've become incredibly bored with how homogenized the web has become, everything a neat, perfect grid. Moreover, I want to explore what might be possible in building interfaces outside of these repeatable patterns and paradigms.
I'm aware of a few react libraries for rendering to canvas such as react-konva, react-ape, etc. Most of what I've found are plugins for React itself.
That might be the right way to think about the problem, but it might not be. I'm at the earliest exploratory stage of this, so it's worth questioning if React is the right abstraction for this in the first place.
Hence, is there something like React (but not React) for building these types of zoomable/pannable interfaces?
OTOH, maybe React is the right abstraction, in which case, I'd love to hear anyone's experiences building these types of interfaces.
I work for a Figma competitor doing R&D. Both our product and my testing setups use React. It's completely fine and workable, certainly to get started, but also not at all necessary.
Where you're going to end up spending a lot of time (depending on the user input gestures you intend to support) is: A) two-finger panning and pinch-to-zoom in non-Safari browsers (Safari's GestureEvent API makes this a LOT easier). B) Robust touch support when the number of touch points changes mid-gesture. C) Detecting the difference between a trackpad and a mousewheel (assuming you want to map the mousewheel to zoom).
React won't help with any of the above, but it also won't get in your way.
You'll also run into all sorts of fun rendering issues (browsers impose minimums and maximums on font-size if you're using HTML text) and performance issues (fullscreen canvas 2D? oof). If you find yourself headed towards WebGL, the usefulness of React becomes incredibly (though not completely) suspect.
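On point C) above, a heuristic sketch (the names here are mine, not from any library): Chromium and Firefox deliver trackpad pinches as wheel events with ctrlKey set, and physical mousewheels tend to report line-based deltas (deltaMode === 1) or coarse, axis-locked pixel deltas. None of this is guaranteed by spec, so treat it as a best-effort guess:

```typescript
// Classify a wheel event so a mousewheel can be mapped to zoom while
// trackpad two-finger scrolls stay as pans. Heuristic only.

type WheelGesture = "pinch-zoom" | "wheel-zoom" | "pan";

interface WheelLike {
  ctrlKey: boolean;
  deltaMode: number; // 0 = pixels, 1 = lines, 2 = pages
  deltaX: number;
  deltaY: number;
}

function classifyWheel(e: WheelLike): WheelGesture {
  // Trackpad pinch arrives as a ctrl+wheel event in Chromium/Firefox.
  if (e.ctrlKey) return "pinch-zoom";
  // Line/page deltas almost always come from a real mousewheel.
  if (e.deltaMode !== 0) return "wheel-zoom";
  // Coarse, axis-locked, integer pixel deltas also suggest a mousewheel.
  if (e.deltaX === 0 && Math.abs(e.deltaY) >= 100 && e.deltaY % 1 === 0) {
    return "wheel-zoom";
  }
  // Otherwise treat it as a two-finger trackpad pan.
  return "pan";
}
```

You'd call this from a non-passive `wheel` listener (and `preventDefault()` on the pinch case so the page doesn't zoom).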
I've been looking for something similar in SwiftUI. Can anyone suggest a good tutorial?
For another added dimension, there’s react-three-fiber (and similar projects for all the big web UI libraries), which renders JSX to a scene in ThreeJS (WebGL). React might be the right abstraction.
I’ve used Pixi.js together with React to build a canvas-based whiteboarding app similar to Figma or Miro.
Pixi for all the canvas work, React for the UI elements. It wasn’t very performant, but for an early-stage product knocked out in under a year I thought it was alright. We wasted a lot of energy trying to get Redux to work; it should be a lot easier now with hooks and context for managing state.
Depending on what you want to do, d3.js is another possibility for just canvas manipulation.
The main problem with canvas was having to do manually what HTML and CSS give you for free, especially text handling. There was no way to have text wrap automatically; we had to measure sentence length and manually insert line breaks where we thought the wrap should happen.
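The greedy word-wrap described above can be sketched like this. The measure function is injected so the same logic works with the real canvas measurement call, `ctx.measureText(text).width`; hyphenation and breaking of overlong single words are left out:

```typescript
// Greedy word-based wrapping sketch. `measure` returns the rendered width
// of a string; in a canvas app it would be (s) => ctx.measureText(s).width.
function wrapText(
  text: string,
  maxWidth: number,
  measure: (s: string) => number
): string[] {
  const words = text.split(/\s+/).filter(Boolean);
  const lines: string[] = [];
  let line = "";
  for (const word of words) {
    const candidate = line ? line + " " + word : word;
    if (line && measure(candidate) > maxWidth) {
      // The candidate overflows: commit the current line, start a new one.
      lines.push(line);
      line = word;
    } else {
      line = candidate;
    }
  }
  if (line) lines.push(line);
  return lines;
}
```

Each returned line then gets its own `fillText` call at an incremented y offset.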
At Whiteboards.io (https://whiteboards.io) we are simply using React, CSS, DOM, and various ways to exploit the browser performance.
This approach is NOT perfect, but it solves problems like accessibility, and it keeps us going.
The alternative approach, like Pixi.js, will work for you, but it effectively means building a browser inside a browser, a kind of Adobe Flash solution.
IMO it is better to piggyback on the browser's mechanics instead of building your own on top of the HTML5 Canvas. Fun fact: modern websites like Facebook or news front pages are far more complex and dynamic than pannable and zoomable canvas apps.
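A minimal sketch of this DOM approach: pan/zoom the whole board by setting one CSS transform on a container element and letting the browser render everything inside it. `Viewport`, `boardTransform`, and `zoomAt` are illustrative names, not from any library:

```typescript
// Pan/zoom state for a DOM-based board. With transform-origin at 0 0,
// screen = boardPoint * zoom + offset.
interface Viewport {
  offsetX: number; // screen-pixel translation of the board
  offsetY: number;
  zoom: number;    // scale factor
}

// Translate first, then scale, so offsets stay in screen pixels.
function boardTransform(v: Viewport): string {
  return `translate(${v.offsetX}px, ${v.offsetY}px) scale(${v.zoom})`;
}

// Zoom about a screen point so the board point under the cursor stays put.
function zoomAt(v: Viewport, screenX: number, screenY: number, factor: number): Viewport {
  return {
    zoom: v.zoom * factor,
    offsetX: screenX - (screenX - v.offsetX) * factor,
    offsetY: screenY - (screenY - v.offsetY) * factor,
  };
}

// In the browser you would apply it roughly like:
//   boardEl.style.transformOrigin = "0 0";
//   boardEl.style.transform = boardTransform(viewport);
```

The browser then handles hit-testing, text wrapping, and accessibility inside the transformed subtree for free, which is the whole point of the approach.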