
Ask HN: Reinforcement learning for single, lower-end graphics cards?

On one side, more and more hardware is being thrown in parallel to ingest and compute the astonishing amounts of data generated by realistic 3D simulators, especially for robotics, with big names like OpenAI now giving up on the field entirely (see https://news.ycombinator.com/item?id=27861201); on the other side, more recent simulators like Brax from Google (https://ai.googleblog.com/2021/07/speeding-up-reinforcement-learning-with.html) aim at “matching the performance of a large compute cluster with just a single TPU or GPU”. Where do we stand on the latter side of the equation, then? What is the state of the art with a single, lower-end GPU like my 2016 gaming laptop’s GTX 1070 8GB? What should we lower-end users read, learn, and test these days? Thanks.
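For a sense of how Brax-style simulators pull this off, here is a toy sketch in JAX (not Brax's actual API; the 1D point-mass dynamics and the 4096-environment batch are invented for illustration): jit-compile one cheap environment step, then vmap it across the whole batch so a single GPU advances every environment in lockstep.

    # Toy illustration of the Brax idea: batch the simulator itself on the GPU.
    # The dynamics below are a made-up 1D point mass, not a real physics engine.
    import jax
    import jax.numpy as jnp

    def step(state, action, dt=0.01):
        # state = (position, velocity); Euler-integrate one timestep
        pos, vel = state
        vel = vel + dt * action
        pos = pos + dt * vel
        return pos, vel

    # vmap turns the single-environment step into a 4096-environment step;
    # jit compiles the batched update into fused device code.
    batched_step = jax.jit(jax.vmap(step))

    key = jax.random.PRNGKey(0)
    pos = jnp.zeros(4096)
    vel = jnp.zeros(4096)
    actions = jax.random.normal(key, (4096,))
    pos, vel = batched_step((pos, vel), actions)
    print(pos.shape)  # (4096,)

Because the simulator and the network both live on the device, there is no per-step CPU-GPU copy, which seems to be where the "single GPU matches a cluster" claim comes from.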

@DrNuke · 5 points · 6 hours ago

@pepemysurprised · 5 hours ago · replying to @DrNuke

For many RL problems you don't really need a GPU, because the networks used are relatively simple compared to supervised learning. In other words, many RL problems are data-constrained: running the simulation (on CPU) is the bottleneck, not the network.
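To make that concrete, here is a minimal REINFORCE sketch on CartPole (assuming the classic pre-0.26 gym API, where reset() returns just the observation and step() returns four values; hyperparameters are illustrative). The policy is a two-layer MLP with only a couple hundred parameters, so a laptop CPU handles it comfortably and a GPU would buy essentially nothing.

    # Minimal REINFORCE on CartPole; the policy MLP is tiny, so CPU is plenty.
    # Assumes classic gym (pre-0.26): reset() -> obs, step() -> (obs, r, done, info).
    import gym
    import torch
    import torch.nn as nn

    env = gym.make("CartPole-v1")
    policy = nn.Sequential(nn.Linear(4, 32), nn.Tanh(), nn.Linear(32, 2))
    opt = torch.optim.Adam(policy.parameters(), lr=1e-2)

    for episode in range(500):
        obs, log_probs, rewards, done = env.reset(), [], [], False
        while not done:
            dist = torch.distributions.Categorical(
                logits=policy(torch.as_tensor(obs, dtype=torch.float32)))
            action = dist.sample()
            log_probs.append(dist.log_prob(action))
            obs, reward, done, _ = env.step(action.item())
            rewards.append(reward)
        # Discounted returns, normalized to stabilize the gradient
        returns, g = [], 0.0
        for r in reversed(rewards):
            g = r + 0.99 * g
            returns.insert(0, g)
        returns = torch.tensor(returns)
        returns = (returns - returns.mean()) / (returns.std() + 1e-8)
        loss = -(torch.stack(log_probs) * returns).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()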


