In this case a Cruise car was rear-ended by a car driven by a human (a driver trained by Cruise, no less), because the unsophisticated software makes the car brake hard. As explained in the article:
"...That said, the rear-endings demonstrate that the technology is far from perfect. Cruise cars follow road laws to a T, coming to full stops at stop signs and braking for yellow lights. But human drivers don’t—and Cruise cars will be self-driving among humans for decades to come..."
"...The fact that a driver Cruise trained to work with these vehicles still managed to rear-end one emphasizes exactly how flawed they are..."
"...In this paper we focus on the RISC-V cores where speculation is enabled and, as we show, where Spectre attacks are as effective as on x86. Even though RISC-V hardware mitigations were proposed in the past, they have not yet passed the prototype phase. Instead, we propose low-overhead software mitigations for Spectre-BTI, inspired from those used on the x86 architecture, and for Spectre-RSB, to our knowledge the first such mitigation to be proposed. We show that these mitigations work in practice and that they can be integrated in the LLVM toolchain. For transparency and reproducibility, all our programs and data are made publicly available online...."
"...In this paper, we study the capabilities of Copilot in two different programming tasks:
(1) generating (and reproducing) correct and efficient solutions for fundamental algorithmic problems,
(2) comparing Copilot's proposed solutions with those of human programmers on a set of programming tasks.
For the former, we assess the performance and functionality of Copilot in solving selected fundamental problems in computer science, like sorting and implementing basic data structures. In the latter, a dataset of programming problems with human-provided solutions is used...
...The results show that Copilot is capable of providing solutions for almost all fundamental algorithmic problems, however, some solutions are buggy and non-reproducible. Moreover, Copilot has some difficulties in combining multiple methods to generate a solution. Comparing Copilot to humans, our results show that the correct ratio of human solutions is greater than Copilot's correct ratio, while the buggy solutions generated by Copilot require less effort to be repaired...While Copilot shows limitations as an assistant for developers especially in advanced programming tasks, as highlighted in this study and previous ones, it can generate preliminary solutions for basic programming tasks...."