Hacker News Re-Imagined

I ****Ing Hate Science

3 hours ago

Created a post 42 points @martincmartin

@zabzonk 57 minutes

Replying to @martincmartin

If you search for "science" in this article, you will have great difficulty in figuring out what the title means, unless you are smarter than me, which is entirely possible.

@Helmut10001 16 minutes

Wow, thanks for the link to sci-hub.st - this is awesome! I can finally access Elsevier's "walled" content again.

@dogorman 35 minutes

> I’ve checked a few other papers and think I’m tentatively confident in this line of reasoning: certain bugs take more time to fix (and cause more damage) than others, and said bugs tend to be issues in the design. I haven’t vetted these papers to see if they don’t make major mistakes, though. It’s more time-consuming to do that when you’re trying to synthesize a stance based on lots of indirect pieces of evidence. You’re using a lot more papers but doing a lot less with each one, so the cost gets higher.

Empiricism and quantitative metrics have indispensable value; that much, I hope, is clear to everybody. But too often people forget (or hold in active contempt) the value of qualitative factors, which can only be judged subjectively. Such considerations are naturally harder to deal with than cold hard data, so it doesn't surprise me that people want to avoid them. But when you blind yourself to the qualitative and subjective, you do yourself a huge disservice. Just ask McNamara; he thought he could win the Vietnam War with quantitative metrics and utterly neglected difficult-to-quantify factors like public sentiment, both in Vietnam and in America. I see echoes of this in our industry today: we love to talk about empirical measures like the number of bugs, but subjective measures, like the severity of those bugs, receive less attention.

Many university programs try to address this by requiring engineering students to earn credits in the humanities as well. But I fear the value of this is often inadequately explained. Contempt for the humanities and scientism go hand in hand, and they are a worrying trend, particularly in the tech industry.

@SebastianFish 35 minutes

Testing different methods of development in terms of speed, cost, and quality is really hard. The most convincing approach to me would be a single-blind experiment: hire two software development teams and have them build to the same set of requirements in two different ways. But then it is hard to know whether you are really comparing the methods of software development or the quality of the teams. So two software teams isn't enough for a statistically valid inference; you need many. Given the cost of software development, you can see how this quickly becomes a very expensive experiment.
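The confound described above can be made concrete with a small simulation. This is only a sketch under invented assumptions: the "true" methodology effect, the team-to-team skill variance, and all the numbers are illustrative, not data from any study.

```python
import random
import statistics

# Illustrative sketch: team-to-team skill variance swamps a methodology
# effect when you hire only one team per method. All numbers are
# invented assumptions for demonstration.

random.seed(42)

METHOD_EFFECT = 5.0    # assumed true benefit of method B (arbitrary units)
TEAM_STDDEV = 20.0     # assumed team-to-team skill variation

def run_experiment(teams_per_method):
    """Return the observed difference in mean outcome between methods."""
    method_a = [random.gauss(100.0, TEAM_STDDEV)
                for _ in range(teams_per_method)]
    method_b = [random.gauss(100.0 + METHOD_EFFECT, TEAM_STDDEV)
                for _ in range(teams_per_method)]
    return statistics.mean(method_b) - statistics.mean(method_a)

# Replicate each experiment design many times to see how noisy its
# estimate of the method effect is.
single = [run_experiment(1) for _ in range(1000)]   # one team per method
many = [run_experiment(30) for _ in range(1000)]    # thirty teams per method

print(statistics.stdev(single))  # roughly 28: noise dwarfs the 5-unit effect
print(statistics.stdev(many))    # roughly 5: the effect becomes detectable
```

With one team per side, the spread of outcomes is several times the effect you are trying to measure, which is exactly why a two-team experiment cannot distinguish method from team quality.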

Last point: I think that merely writing a specification down to the level of detail at which it could be implemented using formal methods might be the biggest game changer. Agile stories rarely come close to covering all of the potential edge cases. If we had a process that required product owners to literally think through all possible failure modes (which is what formal methods force you to do) and write out how to handle them, the cost of writing specifications would go way up. Given the economics, I think we would end up with simpler specifications, which might be its own benefit.
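A tiny way to see what "think through all possible failure modes" means mechanically: enumerate every combination of conditions and check which ones the spec actually covers. The order-handling domain, its condition names, and the spec entries below are all hypothetical, chosen only to make the gap visible.

```python
from itertools import product

# Hypothetical domain: three independent conditions an order spec
# must handle. Names and values are invented for illustration.
PAYMENT = ["authorized", "declined", "timeout"]
STOCK = ["in_stock", "out_of_stock"]
ADDRESS = ["valid", "invalid"]

# The "spec" as written by a product owner: only the happy path and
# one obvious failure are covered.
spec = {
    ("authorized", "in_stock", "valid"): "ship",
    ("declined", "in_stock", "valid"): "reject",
}

# Exhaustive enumeration, the kernel of what model checkers automate:
# every combination not in the spec is an unhandled edge case.
unhandled = [case for case in product(PAYMENT, STOCK, ADDRESS)
             if case not in spec]
print(len(unhandled))  # 10 of the 12 combinations have no specified behavior
```

Even in this toy example, a two-line "story" leaves most of the state space unspecified, which is the gap the comment is pointing at.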

@Bayart 33 minutes

It's completely beside the point, but the capitalization of the I in « **Ing » in the title makes me unreasonably grumpy.

@sseagull 50 minutes

I generally agree with everything, but thought I would just use this as a springboard into a related topic:

> What’s the difference between a “bug” and “defect”? Well, one’s a bug and the other’s a defect, duh!

This kind of issue is common, but I'm not sure how to avoid it. Any group of more than one person will start to use its own lingo, which is often made up of words that are similar to those used "outside" but carry different connotations. This is true in science, medicine, software engineering, law, everywhere.

(I mean, why would I search for "bug"? Like, I'm searching for problems with computer code, not insects!)

This phenomenon unfortunately leads to misunderstandings with the general public, which in turn leads to mistrust. Part of it is on scientists (and lawyers...) to be clearer in their communication, but I think it is also on the public to recognize that, when reading scientific literature, they are not the intended audience and are missing a ton of context that is not explicitly stated.

Also:

>I’m sure this is slightly easier if you’re deeply embedded in academia

Also depends on the field. Chemistry has SciFinder, which, although very expensive for institutions, is very good. It is fairly specific to chemistry, though (and some overlapping fields).

His process for finding node papers and grinding through citations is pretty much how most scientists do it, though. And conferences.

@JohnFen 9 minutes

It sounds like the author doesn't hate science (which is a nonsensical thing to hate), but academia. Those are two different things.

@jeffreyrogers 28 minutes

I was reading this thesis the other day[0], which is on precision machine design. It got me comparing precision machine design to software engineering.

A big part of why we're able to design extremely precise machines (the author worked on a lathe used for machining optical parts for the National Ignition Facility) is that we can characterize exactly where errors will come from (e.g., the structure isn't rigid enough, temperature variation causes parts to expand by different amounts, parts aren't round or flat enough, etc.). Once we know which errors we need to control, and their rough order of importance, we can start improving the machine design to control them better.

In theory, something similar could be done in software engineering (formal methods are part of this, but not a full solution). Rather than an error budget, you'd have some sort of bug budget, where you tracked which sorts of bugs were caused by which sorts of constructs, and designed your program to minimize the chance of their being introduced. I've never heard of anyone except Dan Bernstein[1] actually doing anything approximating this, probably because the perceived level of effort is too high.

I actually don't think it would take that much effort, but it would require quite a bit of organization to track where bugs are introduced and what their root causes are. This is probably why Bernstein, an individual, is able to do this, while no large team (that I'm aware of) has done anything similar.
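The tracking itself really is mostly bookkeeping. A minimal sketch of a "bug budget" ledger, with entirely invented bug records and fix costs, might look like this: tally fixed bugs by root cause, weight each by fix cost, and rank the causes, the analogue of ranking error sources in a machine-design error budget.

```python
from collections import Counter

# Minimal "bug budget" sketch. Each record is (root cause, hours to fix).
# All records below are invented for illustration.
bugs = [
    ("race condition", 16), ("off-by-one", 1), ("null deref", 2),
    ("race condition", 24), ("design: missing state", 40),
    ("design: missing state", 32), ("off-by-one", 1),
]

# Sum fix cost per root cause.
cost_by_cause = Counter()
for cause, fix_hours in bugs:
    cost_by_cause[cause] += fix_hours

# Rank root causes by total cost: this ordering tells you which
# constructs to design against first, like an error budget does
# for machine error sources.
for cause, hours in cost_by_cause.most_common():
    print(f"{cause}: {hours}h")
```

In this invented dataset the design-level bugs dominate the budget even though they are a minority by count, which matches the intuition elsewhere in the thread that bug severity matters more than bug count.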

Of course, just like your toaster doesn't need to use precision machine design techniques (an over-engineered toaster is a bad toaster), most software doesn't need the effort/expense of a rigorous design process either, but some would benefit from it.

[0]: https://dspace.mit.edu/handle/1721.1/9414

[1]: https://cr.yp.to/djb.html

site design / logo © 2021 Box Piper